
Tag Archives: machine learning

AI not yet, but Machine Learning and Big Data are rapidly evolving


In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman opens with the sentence: “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard: “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course history is often a good predictor of what might work in the future and when, but according to Goldman, predictions in the entertainment business have failed miserably time and time again.

It is exactly the same with technology, and Artificial Intelligence (AI) has arguably fared worse than any other technology when it comes to predictions of when it will be available as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks: “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts: “it’s many decades away for full AI.”

Researchers are, however, starting to make considerable advances in soft AI. Yet, with the exception of fewer than 30 corporations, there is very little tangible evidence that this soft AI, or Deep Learning, is currently being used productively in the workplace.

Among the companies currently selling and/or using soft AI or Deep Learning to enhance their services are IBM (Watson), Google (Search and DeepMind), Microsoft (Azure and Cortana), Baidu (Search, led by Andrew Ng), Palantir Technologies, possibly Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon (for predictive analytics and other services), the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia, Mobileye and, to some extent, the AI-light powered collaborative robots from Rethink Robotics.

There are numerous examples of other companies developing AI and Deep Learning products, but fewer than a hundred early-adopter companies worldwide. Essentially, soft AI and Deep Learning solutions, such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies, and many more, are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.

Machine Learning

On the other hand, Machine Learning (ML), a subfield of AI that some call light AI, is starting to make inroads into organizations worldwide. There are even claims that: “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”

According to Intel, however: “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015).” It is highly probable that Google, Facebook, Salesforce, Microsoft and Amazon alone took up a large percentage of that 10 per cent.

ML technologies are nonetheless reaching everyday settings. Apple’s iBeacon, a location-awareness system, connects information from a user’s Apple profile to in-store systems and advertising boards, allowing for a ‘personalized’ shopping experience and the tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind have both shown how Machine Learning systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption and empower humans with new analytical capabilities.

The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently ML is beneficial for pattern and speech recognition and predictive analytics, and it is therefore very useful in search, data analytics and statistics – when there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically, by memory), like recognizing spoken words or faces in images.

Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain – which is one goal of AI.

Crossing the chasm – not without lots of data

If driverless vehicles can move around with ever fewer problems, it is not because AI has finally arrived. We do not yet have machines capable of human intelligence; we have machines that are very useful for dealing with big data and able to make decisions under uncertainty about the perception and interpretation of their environment. Today we have systems targeted at narrow tasks and domains, not the promised ‘general purpose’ AI, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.

Essentially there’s nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.

Nevertheless, companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and he is: “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”

The next wave – understanding information

Organizations that have lots of data know that information is always limited, incomplete and possibly noisy. ML algorithms can search the data and build a knowledge base that provides useful information – for example, ML algorithms can separate spam emails from genuine ones. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
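To make the spam example concrete, here is a minimal sketch using scikit-learn. The toy emails and labels are my own invention for illustration; the point is that the learning algorithm only ever sees the representation we choose for the data – here, a bag-of-words count matrix:

```python
# A minimal sketch of the spam-vs-genuine example, using scikit-learn.
# The tiny dataset is invented for illustration; real filters train on
# millions of messages, and the representation matters as much as the model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",         # spam
    "cheap pills limited offer",    # spam
    "meeting agenda for tuesday",   # genuine
    "lunch with the project team",  # genuine
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = genuine

# Representation: turn each email into a vector of word counts.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

# Learning: fit a simple probabilistic classifier to the examples.
model = MultinomialNB()
model.fit(features, labels)

test = vectorizer.transform(["free offer win now"])
print(model.predict(test))  # [1], i.e. classified as spam
```

Change the representation (say, to character counts) and the same learning algorithm performs very differently – which is exactly the dependence described above.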

Machine Learning algorithms often work on the principle most widely known as Occam’s razor. This principle states that among competing hypotheses that explain known observations equally well, one should choose the “simplest” one. In my opinion, this is why we should use machines only to augment human labor and not to replace it.
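As a rough illustration of Occam’s razor in model selection (my own sketch, with invented data and an arbitrary ‘clearly better’ threshold), one can prefer the lowest-degree polynomial unless a more complex one does substantially better on held-out data:

```python
# A sketch of Occam's razor in model selection: among polynomial models that
# explain held-out data about equally well, prefer the lowest-degree one.
# Data and the 10% improvement threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2 * x + 1 + rng.normal(0, 0.1, size=x.size)  # truly linear data plus noise

x_train, y_train = x[::2], y[::2]   # half the points for fitting
x_val, y_val = x[1::2], y[1::2]     # half for judging the hypotheses

best_degree, best_err = None, np.inf
for degree in range(1, 8):
    coeffs = np.polyfit(x_train, y_train, degree)
    err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    # Accept a more complex hypothesis only if it is clearly, not
    # marginally, better than the simpler one already chosen.
    if err < best_err * 0.9:
        best_degree, best_err = degree, err

print(best_degree)  # typically 1: the simplest adequate hypothesis
```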

Machine Learning and Big Data will greatly complement human ingenuity – a human-machine combination of statistical analysis, critical thinking, inference, persuasion and quantitative reasoning all wrapped up in one.

“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)

The key questions businesses and policy makers need to be concerned with as we enter the new era of Machine Learning and Big Data are:

1) Who owns the data?

2) How is it used?

3) How is it processed and stored?

Update 16th August 2016

There is a very insightful Quora answer by François Chollet, a deep learning researcher at Google, in which he confirms what I have been saying above:

“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”

Photo credit: this was a screen grab of a conference presentation. I no longer remember the presenter or the conference, but if I find it I will update the credit.

5 reads in robotics for elder care, artificial intelligence research and new jobs

How Cost Effective Is a Robotic Solution for Elder Care

Robots serving various tasks and purposes in the medical/health and social care sectors, beyond the traditional scope of surgical and rehabilitation robots, are poised to become one of the most important technological innovations of the 21st century. Nevertheless, unresolved issues for these platforms remain: patient safety, as the robots are necessarily quite powerful and rigid, and the cost-effectiveness of these solutions. (PDF)

Be more afraid of machine stupidity than of machine intelligence

“I would make a distinction between machine intelligence and machine decision-making.

We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.

Machine stupidity creates a tail risk. Machines can make many, many good decisions and then one day fail spectacularly on a tail event that did not appear in their training data. This is the difference between specific and general intelligence.” (Sendhil Mullainathan)

New research may lead to technology that helps the blind and robots navigate natural environments

Two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding. (NY Times)

Artificial Intelligence Can’t Replace Hard-Earned Knowledge – Yet

So until the androids take over, smart software and big data are merely very useful tools to help us work. Machines replace many kinds of repetitive work, from flying airplanes to sorting through medical symptoms. And to the extent that deeply smart humans can program potential problems into the software — even relatively rare ones — the system can react faster than a human. Some day robots may have deep smarts. For the present, we would settle for preserving the human variety and continuing to forge ever more productive partnerships with our silicon cousins. (Harvard Business Review)

Looking for a job in A.I.? A sneak peek at what it’s like working inside an A.I. lab

It’s a compelling time to be working in A.I. to impact a huge number of lives. Baidu Research – Have an Inside Look into Baidu’s Silicon Valley A.I. Lab with learning lunches. (Baidu A.I. Lab Video)

Reigniting the economy with computational thinking

I first saw psychologist Daniel Kahneman in 2001, the year before he won the Nobel Prize for Economics. Dan has since become known as the grandfather of Behavioral Economics and a big influence on computer programmers and researchers developing Artificial Intelligence, smartphones and cognitive computing.

In nine words Dan changed how I think. Those nine words are:

“We think much less than we think we think.” 

Dan’s most important achievements are in his research into human decision-making under uncertainty, showing how human decision-making behavior deviates systematically from the results predicted by standard economic theory, and how we use biases and heuristics – the mental shortcuts we take to make decisions and form opinions.

Kahneman has frequently acknowledged the influence of Herbert Simon, the famous American computer scientist and psychologist, one of the ‘founders’ of Artificial Intelligence and cognitive science, who won the Turing Award in 1975 and the Nobel Prize in Economics in 1978. Like Kahneman’s, Herb Simon’s Nobel Prize in economics resulted from his work on decision-making.

Both Kahneman and Simon taught us how easy it is to make mistakes and fool ourselves through the way we think – or rather don’t think.

So how do we change this and change our way of thinking to get better results?

To thrive in this new world of human and machine collaboration, and to get the best out of the advances in technology, requires a new way of thinking. Jeannette Wing, a Carnegie Mellon University professor who has also served as a corporate vice president at Microsoft Research and as assistant director of the US National Science Foundation’s computer science directorate, calls this ‘computational thinking.’

Professor Wing has a vision of computational thinking becoming a fundamental skill, ranking alongside reading, writing and arithmetic. The Singapore Government says: “computational thinking should be taught to all Singaporeans and made a national capability.”

Businesses such as Boeing, Google (which is committed to exposing everyone to this key 21st-century skill), Microsoft and many others are adopting computational thinking to improve their decision-making. According to Jeannette Wing:

Computational thinking can be understood as a fundamental analytical skill that everyone can use to solve problems, design systems, and understand human behavior.

Earlier this week Chris Giles, the Economics Editor of the Financial Times, tweeted a chart, based on data from the Office for National Statistics, showing the growth of demand in the job market for people with computational thinking skills and the decline of jobs in financial services:

Chris Giles tweet

But as Jeannette Wing states, Computational Thinking should be a process everyone learns, not just those involved in the computer sciences. As computation and information technology become more prevalent, individuals competent in computational thinking are better able to understand the ways in which technology can improve the choices and decisions we make.

Computational Thinking is not programming, nor is it about more computer power. You can’t just throw more petaflops at a problem and expect it to be solved.  Likewise, you can’t expect machine learning by itself to learn deeply if it isn’t coupled to human debate, reasoning and knowledge.

According to Google, specific Computational Thinking techniques include the following (a short sketch after the list shows them working together):

Problem decomposition — The ability to break down a task into minute details so that we can clearly explain a process to another person or to a computer, or even to just write notes for ourselves.

Pattern recognition — The ability to notice similarities or common differences that will help us make predictions or lead us to shortcuts. Pattern recognition is frequently the basis for solving problems and designing algorithms. According to some researchers: the secret of the human brain is pattern recognition.

Pattern generalization to define abstractions or models — The ability to filter out information that is not necessary to solve a certain type of problem and generalize the information that is necessary.

Algorithm design — The ability to develop a step-by-step strategy for solving a problem. An algorithm is a series of step-by-step instructions, designed to complete a certain task in a finite amount of time.

Data analysis and visualization
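As a toy illustration of these techniques working together (my own example, not Google’s), consider finding the most frequent words in a text. Decomposition splits the task into cleaning, counting and ranking; abstraction filters out case and punctuation, which are irrelevant here; pattern recognition is the counting of repeated words; and the whole strategy is expressed as a step-by-step algorithm:

```python
# A toy illustration of computational thinking, not a Google example.
from collections import Counter
import re

def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
    # Abstraction: drop case and punctuation, keeping only the words.
    words = re.findall(r"[a-z']+", text.lower())
    # Pattern recognition: count how often each word repeats.
    counts = Counter(words)
    # Algorithm design: rank and return the n most common words.
    return counts.most_common(n)

print(top_words("We think much less than we think we think."))
# [('we', 3), ('think', 3), ('much', 1)]
```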

Dan Kahneman’s work is particularly relevant to Computational Thinking as we deal with more and more data and more computing power. He teaches that we see patterns in random data; that we are even more prone to see patterns in random data when in possession of a theory predicting such patterns; that we overweight outcomes we can imagine easily; that even though we prefer more information to less we are hopeless at processing it; that we are hopeless at gauging correlations until they are very obvious.

Economists such as Ricardo Hausmann, a Professor of Economics at Harvard University, say:

One idea about which economists agree almost unanimously is that, beyond mineral wealth, the bulk of the huge income difference between rich and poor countries is attributable to neither capital nor education, but rather to ‘technology.’

But what is often missing from this discussion is a key component of technology – knowhow.

Computational Thinking puts that knowhow into the hands of those that choose to learn this important process. Knowhow is an ability to recognize patterns and respond with effective actions.

For those that want to improve their ability to understand and respond to the changing nature of technology, Computational Thinking can be a powerful way to bridge the gap between the problems of big data, robotics, artificial intelligence and cognitive assistants and improve practical decision making.

Tech companies’ competitive advantage – Bayes’ Rule and Behavioral Economics

The ‘system’ behind the Google robotic cars that have driven themselves for hundreds of thousands of miles on the streets of several US states, without being involved in an accident or violating any traffic law, is built upon the 18th-century math theorem known as Bayes’ rule. As they drive, the cars analyze enormous quantities of data fed to a central onboard computer from radar sensors, cameras and laser range-finders, and take the most optimal, efficient and cost-effective route.
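To make that concrete, here is a minimal sketch of the kind of recursive Bayesian update used for robot localization. The circular corridor, door map and sensor probabilities are my own toy example, not Google’s actual system:

```python
# A toy Bayes filter: a robot on a circular corridor of 10 cells keeps a
# probability distribution over where it is, and sharpens that belief as
# noisy sensor readings ("I see a door") and known movements arrive.
import numpy as np

doors = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 0])  # map: 1 = a door here
belief = np.full(10, 0.1)                          # prior: could be anywhere

def sense(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    """Bayes' rule: posterior is proportional to likelihood times prior."""
    likelihood = np.where(world == measurement, p_hit, p_miss)
    posterior = likelihood * belief
    return posterior / posterior.sum()  # normalize

def move(belief, steps=1):
    """Shift the belief to match the robot's (here, exact) motion."""
    return np.roll(belief, steps)

belief = sense(belief, doors, 1)  # robot sees a door
belief = move(belief, 1)          # robot moves one cell to the right
belief = sense(belief, doors, 1)  # sees a door again
print(belief.round(2))            # mass concentrates on cell 1: the only
                                  # cell consistent with door, step, door
```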

In 1996 Microsoft’s Bill Gates described the company’s competitive advantage as its ‘expertise in Bayesian networks,’ and in 1998 Microsoft patented a spam filter that relied on Bayes’ Theorem. Other tech companies quickly followed suit and adapted their systems and programming to include Bayes’ Theorem.

During World War II, Alan Turing used Bayes’ Theorem to help crack the Enigma code, potentially saving millions of lives, and is credited with helping the Allied forces to victory.

Artificial Intelligence was given a new lease of life in the early 1980s, when Professor Judea Pearl of UCLA’s Computer Science Department and Cognitive Systems Laboratory introduced Bayesian networks as a representational device. Pearl’s work showed that Bayesian networks constitute one of the most influential advances in Artificial Intelligence, with applications in a wide range of domains.

Bayes’ Theorem is based on the work of Thomas Bayes as a solution to a problem of inverse probability. It was presented in “An Essay towards solving a Problem in the Doctrine of Chances,” read to the Royal Society in 1763 after Bayes’ death (he died in 1761). Put simply, Bayes’ rule is a mathematical relationship between probabilities that allows them to be updated in the light of new information.
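In modern notation, the rule says that the probability of a hypothesis H after seeing evidence E is the prior probability of H reweighted by how well H explains E:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Here P(H) is the prior belief, P(E | H) is the likelihood of the evidence under the hypothesis, and P(H | E) is the updated (posterior) belief.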

Before the advent of increased computer power, Bayes’ Theorem was overlooked by most statisticians, scientists and industries. Today, thanks to Professor Pearl, Bayes’ Theorem is used in robotics, artificial intelligence, machine learning, reinforcement learning and big data mining. IBM’s Watson, perhaps the most well-known AI system, in all its intricacies ultimately relies on the deceptively simple concept of Bayes’ rule in negotiating the semantic complexities of natural language.

Bayes’ Theorem is frequently behind the technology of many of the multi-billion-dollar acquisitions we read about, and it is certainly a core piece of technology behind the billions in profits at leading tech companies, from Google’s search to LinkedIn and to Netflix’s and Amazon’s recommendation engines. It will play an even more important role in future developments in automation, robotics and big data.

Professor Pearl, through his work in the Cognitive Systems Laboratory, recognized the problems of human psychology in software development and representation. In 1984 he published a book simply called Heuristics (Intelligent Search Strategies for Computer Problem Solving).

Pearl’s book drew on research by the founders of Behavioral Economics, Daniel Kahneman and Amos Tversky, and particularly their work with Paul Slovic, Judgment under Uncertainty: Heuristics and Biases (Cambridge University Press, 1982), in which they confirmed their own reliance on Bayes’ Theorem:

Ch. 25, “Conservatism in human information processing”: “Probabilities quantify uncertainty. A probability, according to Bayesians like ourselves, is simply a number between zero and one that represents the extent to which a somewhat idealized person believes a statement to be true…. Since such probabilities describe the person who holds the opinion more than the event the opinion is about, they are called personal probabilities.” (Page 359)

Kahneman (Nobel Prize in Economics) and Tversky showed Bayesian methods more closely reflect how humans perceive their environment, respond to new information, and make decisions.  The theorem is a landmark of logical reasoning and the first serious triumph of statistical inference; Bayesian methods interpret probability as the degree of plausibility of a statement.

Kahneman and Tversky especially highlighted the heuristics and biases where Bayes’ rule can overcome our irrational decision-making, and this is why so many tech companies are seeking to train their engineers and programming staff in behavioral economics. We use the availability heuristic to assess probabilities rather than Bayesian equations. We all know that this gives way to all sorts of judgmental errors: a belief in the law of small numbers and a tendency towards hindsight bias. We know that we anchor on irrelevant information and that we take too much comfort in ever-more information that seems to confirm our beliefs.

The representativeness heuristic

Heuristics are described as “judgmental shortcuts that generally get us where we need to go – and quickly – but at the cost of occasionally sending us off course.”

When people rely on representativeness to make judgments, they are likely to judge wrongly, because the fact that something is more representative does not make it more likely. This heuristic is used because it is an easy computation (think Zipf’s law and human behavior – the principle of least effort). The problem is that people overestimate their ability to accurately predict the likelihood of an event. Thus it can result in neglect of relevant base rates (the base rate fallacy) and other cognitive biases, especially confirmation bias.

The base rate fallacy describes how people fail to take the base rate of an event into account when solving probability problems; it is a frequent error in thinking.
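A worked example, with invented numbers, shows how badly ignoring the base rate can mislead us. Suppose a disease affects 1 per cent of people, and a test catches 90 per cent of cases but also flags 9 per cent of healthy people:

```python
# The base rate fallacy, worked through with Bayes' rule.
# All numbers are invented for illustration.
p_disease = 0.01            # base rate: the number people tend to ignore
p_pos_given_disease = 0.90  # the test catches 90% of real cases
p_pos_given_healthy = 0.09  # ...but also flags 9% of healthy people

# Total probability of a positive test, sick or not.
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive

print(round(p_disease_given_pos, 2))  # 0.09, not the ~0.9 most people guess
```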

Confirmation bias

Confirmation bias is the tendency of people to favor information that confirms their beliefs or hypotheses. Essentially people are prone to misperceive new incoming information as supporting their current beliefs.

It has been found that experts reassess data selectively over time, depending on their prior hypotheses. Bayesian statisticians argue that Bayes’s theorem is a formally optimal rule for revising opinions in the light of evidence. Nevertheless, Bayesian techniques are, so far, rarely utilized by management researchers or business practitioners in the wider business world.

Eliezer Yudkowsky of the Machine Intelligence Research Institute has written a detailed introduction to Bayes’ Theorem using behavioral economics and machine learning examples, which I highly recommend.

Time to think Bayesian and Behavioral Economics

As the major tech companies are showing, Bayesian and Behavioral Economics methods are well suited to address the increasingly complex phenomena and problems faced by 21st-century researchers and organizations, where complex data abound and the validity of knowledge and methods is often seen as contextually driven and constructed.

Bayesian methods that treat probability as a measure of uncertainty may be a more natural approach to some high-impact management decisions, such as strategy formation, portfolio management, and decisions whether or not to enter risky markets.

If you are not thinking like a Bayesian, perhaps you should be. 

The intersection of behavioral economics and machine learning to understand Big Data

I am often asked which jobs will thrive as we move into the next phase of the robot revolution. My answer is that people will need to be multi-skilled. They will need critical thinking and design skills, they will need to be able to think statistically, and they will need a deep knowledge of human behavior.

One area that I see growing in demand is machine learning and data science. Increasingly, however, computer programmers and data scientists require dual expertise in both social science and computer science, adding competence in economics, sociology and psychology – collectively known as Behavioral Economics – to more traditionally recognized requirements like algorithms, interfaces, systems, machine learning and optimization.

This combined expertise in computer science and behavioral economics helps to bridge the gap between modeling human behavior, data mining and engineering web-scale systems. The Harvard School of Engineering and Applied Sciences says that: “an emerging area in both artificial intelligence and theoretical computer science, computational mechanism design lies at the interface of computer science, game theory, and economics.” Similarly, at the Yale School of Management we now find professors working on the intersection of behavioral economics and machine learning.

Many of the major tech companies are recognizing the benefits of combining these skill sets. Microsoft Research calls its internal machine learning and behavioral economics department Algorithmic Economics.

A recent paper by Hal Varian, Chief Economist at Google, titled “Big Data: New Tricks for Econometrics” (incidentally, Hal is the author of one of my favorite books, Information Rules) provides an extremely readable introduction to the intersection of machine learning, big data and behavioral economics.

Hal also offers a valuable piece of advice:

“I believe that these methods have a lot to offer and should be more widely known and used by economists. In fact, my standard advice to graduate students these days is ‘go to the computer science department and take a class in machine learning’.”

Michael Bailey, an economist at Facebook, writes on Quora:

I currently (Feb 2014) manage the economics research group on the Core Data Science team. We are a small group of engineer researchers (all PhDs) who study economics, business, and operations problems. As Eric Mayefsky mentioned, there are various folks with formal economics training spread across the company, usually in quantitative or product management roles.

The economics research group focuses on four research areas:

Core Economics – modeling supply and demand, operations research, pricing, forecasting, macroeconomics, econometrics, structural modeling.

Market Design – ad auctions, algorithmic game theory, mechanism design, simulation modeling, crowdsourcing.

Ads and Monetization – ads product and frontend research, advertiser experimentation, social advertising, new products and data, advertising effectiveness, marketing.

Behavioral Economics – user and advertiser behavior, economic networks, incentives, externalities, and decision making under risk and uncertainty.

I think a more interesting question is “what *could* an economist at Facebook do?” because there is a LOT of opportunity. There are incredibly important problems that only people who think carefully about causal analysis and model selection could tackle.  Facebook’s engineer to economist ratio is enormous. Software engineers are great at typical machine learning problems (given a set of parameters and data, make a prediction), but notoriously bad at answering questions out of sample or for which there’s no data. Economists spend a lot of time with observational data since we often don’t have the luxury of running experiments and we’ve honed our tools and techniques for that environment (instrumental variables for example). The most important strategic and business questions often rely on counterfactuals which require some sort of model (structural or otherwise) and that is where the economists step in.

In the following video (Machine Learning Meets Economics: Using Theory, Data, and Experiments to Design Markets) Stanford University’s Susan Athey discusses suggestions about research directions at the intersection of economics and machine learning.

I previously wrote on more of the crossover between Behavioral Economics, Machine Learning and Big Data and will continue to evolve this series of articles in the coming weeks.

If you like this post consider tipping with bitcoin – send to:  1DxpZv7Jq4zf7fhHtkeR4y6ggmqAaPbG6p

The economic impact of the robotic revolution

Whilst the word ‘robot’ generally conjures up visions of humanoids with superior intelligence, this science-fiction image tends to overlook the other type of robots: machines that carry out complicated motions and tasks, such as automated software processes [1], industrial robots, unmanned vehicles (driverless cars, drones) or even prosthetics. And it is principally these programmable machine robots that are among the robotic advances being acquired by major companies across the globe [2]. These are also the robotic technologies that are disrupting commercial production and employment, and they will likely continue to do so over the remainder of this decade.

Many economists and technophobes claim that automation and technological progress have broad implications for the shape of the production function, inequality, and macroeconomic dynamics. However, robotics is also adding hundreds of thousands of jobs to payrolls across the globe, and it may just be that people have not yet acclimatized to the new jobs and the skills required to do them.

Job displacement and skill gaps

In his magical science fiction classic, The Hitchhiker’s Guide to the Galaxy, Douglas Adams wrote about the ‘B’ Ark.

The ‘B’ Ark was one of three giant spaceships built to take people off the ‘doomed’ planet and relocate them on a new one. The inhabitants of the ‘B’ Ark included “tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives, management consultants, account executives, and countless others.”

These were essentially people displaced from the workplace by automation.

Douglas Adams explained that there were three space ships, each designated for a different type of person: “the idea was that into the first ship, the ‘A’ ship, would go all the brilliant leaders, the scientists, the great artists, you know, all the achievers; and into the third, or ‘C’ ship, would go all the people who did the actual work, who made things and did things; and then into the `B’ ship – that’s us – would go everyone else, the middlemen.”

We later discover that the planet was not in fact doomed, nor did the other two giant spaceships, the ‘A’ Ark and ‘C’ Ark, ever depart the planet.

MIT economist David Autor and his co-authors echo Adams’ point that technology is displacing the middle class, writing that automation has:

“Fostered a polarization of employment, with job growth concentrated in both the highest and lowest-paid occupations, while jobs in the middle have declined.”

This job polarization has in fact contributed significantly to income inequality.

Research by Lawrence Katz, Professor of Economics at Harvard, also shows the ‘hollowing out’ of middle-skilled jobs due to technological advances. A recent paper by Carl Frey and Michael Osborne of Oxford University concludes that 47 per cent of US jobs are at high risk from automation.

It’s not all doom and gloom for those with ‘middle skills’ and the MIT and Harvard researchers do allude to an increase in jobs and income for the ‘new artisans,’ a term coined by Professor Katz to refer to those who ‘virtuously combine technical and interpersonal tasks.’

Expanding upon this, Professor Autor expects that “a significant stratum of middle skill, non-college jobs combining specific vocational skills with foundational middle skills – literacy, numeracy, adaptability, problem-solving and common sense – will persist in coming decades.”

Those skills according to Autor will provide employment for:

“Licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides. These workers will adeptly combine technical skills with interpersonal interaction, flexibility and adaptability to offer services that are uniquely human.”

Skill-biased technological change is not a new phenomenon; Joseph Schumpeter termed it Creative Destruction. Writing at the time of the Great Depression in the 1930s, he said the prime cause of economic development was entrepreneurial spirit: “Without innovations, no entrepreneurs; without entrepreneurial achievement, no capitalist returns and no capitalist propulsion.”

Many smart people of that time believed that technology had reached its limits and capitalism had passed its peak. Schumpeter believed the exact opposite, and of course he was right. Technology changes, economic principles do not. As demand for one set of labor skills declines, demand for a new set of skills grows, often with better pay.

Why are big corporations buying robotic companies?

Major corporations, and creative destructors, such as Google, Amazon and Apple have made headlines recently with their acquisitions of robot and Deep Learning companies, their use of Machine Learning technology and their Artificial Intelligence aspirations.

What exactly do these corporations want with robots and Artificial Intelligence, and how does it impact society?

Machine learning

Andrew Ng, a Professor at Stanford University and Google fellow who teaches a popular Coursera (online free education) class in Machine Learning, says:

“In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.”

Machine learning technology helps a machine to learn and remember things, or to act ‘without being explicitly programmed.’ It is the science (or art) of building algorithms that can recognize patterns in data and improve as they learn. For example, a search engine may use your last search queries and current location to improve new search results.
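As a rough sketch of what ‘improving as it learns’ can look like (an invented example, not any company’s actual system), an online model can be updated one observation at a time as click feedback streams in:

```python
# An online learner updated one example at a time, as feedback arrives.
# Features and data are invented: each result is described by its relevance
# to the query and its proximity to the user, both scaled to 0..1.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)

stream = [
    ([[0.9, 0.8]], [1]),  # relevant and nearby -> clicked
    ([[0.2, 0.1]], [0]),  # irrelevant and far  -> ignored
    ([[0.8, 0.9]], [1]),
    ([[0.3, 0.2]], [0]),
]

for features, label in stream:
    # partial_fit updates the model incrementally, without retraining.
    model.partial_fit(np.array(features), label, classes=[0, 1])

# Estimated click probability for a new, relevant, nearby result.
print(model.predict_proba(np.array([[0.85, 0.85]]))[0, 1])
```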

Whilst machine learning is used extensively across companies such as Facebook, Google, LinkedIn, Netflix, Twitter, Apple, Adobe, Microsoft and many more, it’s not just the tech companies that are seeing the benefits. Machine learning technology is much in demand across industry with proven results at Wall Street investment banks, insurance companies and motor manufacturers such as Toyota and Tesla Motors.

Machine learning and health

IBM’s Watson is possibly the most famous example of a system using machine learning, thanks to its triumph on the popular TV game show Jeopardy!. Watson is now aiding researchers and medical practitioners, and is (or will soon be) the world’s best diagnostician for cancer-related ailments.

Having machines assist medical practitioners and researchers could significantly improve diagnoses and treatments for patients. Additionally these technologies will become more pervasive through wearable devices, such as Google Glass, Android phones, Apple’s iPhone or maybe a new Apple ‘iHealth’ gadget using its M7 motion sensing technology to monitor our health on the go.

I personally believe that significant improvements will be made in people’s health and wellbeing through technology advances: robotic treatments in hospitals, such as in the operating theater and prescription services; improved assistive devices and prosthetics for those with disabilities; and, on a very large scale, wearable technology. Machine learning and robotic technology will be central to this health revolution.

Machine learning is a game changer for those companies that implement its technologies successfully. Jobs for people with machine learning skills are, and will continue to be, much in demand in the coming decade, particularly in industries where ‘Big Data’ factors heavily.

Industrial robots

In March 2012, Amazon announced the $775 million cash acquisition of Kiva Systems, a maker of warehouse automation robots, and some nineteen months later, in October 2013, Amazon CEO Jeff Bezos noted that the company had “deployed 1,382 Kiva robots in three Fulfillment Centers.” Amazon has approximately 52 fulfillment centers spread across 8 countries, with at least another 12 announced to open in the next 9 months.

The rollout of Kiva robots across these fulfillment centers will have a significant strategic benefit for Amazon as it moves towards its goal of becoming the world’s largest retailer. So far this rollout has not reduced the number of employees at Amazon. In fact, Amazon continues to grow its workforce significantly: last year Amazon added 20,000 full-time employees to its US fulfillment centers alone, and this week it announced a further recruitment drive for an additional 2,500 full-time US fulfillment staff, with a 30 per cent pay premium over traditional retail jobs. At the end of December 2013 Amazon employed 117,300 full- and part-time employees globally (excluding contractors and temporary personnel). This is more than four times the 28,300 employees it reported on June 30th 2010, just three and a half years earlier – an increase of 89,000 jobs.

Kiva, together with the right qualified employees, provides Amazon the ability to cut its fulfillment costs, double its productivity, and increase its service levels.

Industrial robot manufacturers are reporting between 18 and 25 per cent growth in orders and revenue year on year. Whilst some jobs will be displaced by the increased rollout of robots in the manufacturing sector, many will also be created as robot manufacturers recruit to meet their growing demand. Furthermore, jobs that were previously sent offshore are now being brought back to developed countries (for example, Apple manufacturing its Mac Pro in America and spending approximately US$10.5 billion on assembly robotics and machinery).

Cognitive machine assistants

There has been recent press speculation that Google intends to enter the industrial robot market after its acquisition of 8 robot companies at the end of 2013. Reports indicate that Andy Rubin, former head of Google’s Android platform and new head of its robot development, met with Foxconn Chairman Terry Gou to discuss Foxconn’s robot initiatives (replacing 1 million employees with robots).

Whilst I think it highly unlikely that Google will become a manufacturer of industrial robots, I do think it could use Mr. Rubin’s experience of creating a telecom industry-standard platform to develop a standard industrial robot platform, which Google could lead and license to other industrial robot manufacturers. If Google does go this route, expect it to announce the collaboration – as it has done with Android development and the driverless car standard framework.

What does Google want with the robot companies it has acquired?

The immediate need is likely to be related to Google’s localization and mapping strategy. Google spends billions of dollars per year on its mapping program, and due to sophisticated new search technologies (especially mobile-related improvements in cognitive assistants, such as Google Now and Siri), Google must seek ways to stabilize the costs and ward off the threat of competition.

A big part of this requires that the search giant provide the best mapping experience and localization services. Remember that maps are not set in stone but are constantly evolving; biannual updates of Street View and other associated solutions add considerably to the costs; and Google also wants to map inside major buildings, such as shopping malls and airports – indeed, it has already started. Imagine the advertising opportunities and revenues available through improved localization!

Google knows search is becoming local, and whilst its algorithms are some of the most advanced, within the next three years it expects Google Now (its voice-activated ‘cognitive assistant’) to deliver far more useful and relevant data to consumers. Without significant improvements in localization, including traffic data (which Google can deliver through its Waze purchase and integration), Google’s search advertisement revenue will falter. I’ve written more extensively on this here, and recently Google’s Director of Research Peter Norvig mentioned “the global localization and mapping problem” when he was asked a question about Google’s interest in robots (video around the 50-minute mark). It makes more sense to send robots with high visualization and recording capabilities on mapping expeditions across the world than platoons of people carrying expensive and heavy equipment.

The real value produced by an information provider comes in locating, filtering, and communicating what is useful to the consumer. Google does that better than others, and its robot acquisitions – coupled with its machine learning and AI expertise – are designed to keep it at the forefront.

The fact that major corporations are buying into robotics, artificial intelligence and related technologies is helping not only to preserve but to increase their market share. Yes, jobs will be displaced, but many more will be created in the process. Great opportunities will be available to those with the skills to complement and work with the machines.

ENDNOTES

[1] Automation and robotics are often considered the same in many languages, and it is this automation, through advances in machine learning and artificial intelligence, that is driving rapid development in our workplaces.

[2] Of course these programmable machine robots, and their advanced technologies in software and hardware, will eventually lead to the successful development of humanoid robots.

Deep Learning creating jobs in Apps, wearable tech and robotics

At the core of early 21st-century technology, with its Internet connectivity and data, is Machine Learning: a sub-domain of what we call Artificial Intelligence, and one that is integral to advances in innovation.

A good definition of Artificial Intelligence (or maybe soft or logical AI), as provided by my friendly assistant, Google Now:

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Steve Jurvetson of DFJ said that he believes machine learning, a subset of AI, will be one of the most important tech trends of the next 3-5 years for innovation and economic growth. By leveraging big data to allow computers to develop evolving behaviors, machine learning is vastly improving pattern recognition, allowing for broad application such as improved facial and speech recognition in many industries, especially national security.

Computer scientists have made significant advances in Machine Learning and soft AI with a particular set of approaches called “deep learning.” Deep Learning algorithms have been extremely successful for applications such as image recognition, speech recognition, and to some extent for natural language processing.

Deep Learning is the application of algorithms and software programming through ‘neural networks’ to develop machines, computers and robots that can do a wide variety of things, including driving cars, working in factories, conversing with humans, translating speech, recognizing and analyzing images and data patterns, and diagnosing complex operational or procedural problems.

One aspect of Deep Learning algorithms (sometimes referred to simply as learning algorithms) that is receiving much attention at major organizations is the ability to learn from mostly unlabeled data, i.e. to work in a semi-supervised setting, where not all the examples come with complete and correct semantic labels. This was cleverly shown by Google with its ability to identify cats without labels on the photographs (Google builds a brain that can identify cats).
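A minimal sketch of this semi-supervised setting (my own illustrative choice of algorithm and dataset, not Google’s method) hides most of the labels in scikit-learn’s digits dataset and lets LabelSpreading propagate the few that remain:

```python
# Semi-supervised learning: most labels are hidden (marked -1) and the
# algorithm spreads the few known labels through the data's structure.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)

rng = np.random.default_rng(0)
y_partial = y.copy()
hidden = rng.random(len(y)) < 0.9  # hide 90% of the labels
y_partial[hidden] = -1             # -1 marks an unlabeled example

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)

# How well were the hidden labels recovered from the labeled 10%?
accuracy = (model.transduction_[hidden] == y[hidden]).mean()
print(f"recovered hidden labels with accuracy ~{accuracy:.2f}")
```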

As Professor Yann LeCun, now at Facebook, says:

The only way to build intelligent machines these days is to have them crunch lots of data — and build models of that data.

Sometimes it’s not who has the best algorithm that wins; it’s who has the most data.

Many Deep Learning scientists and academics are being recruited by Google, Facebook, Microsoft co-founder Paul Allen’s AI organization, Adobe, Amazon, Microsoft (see e.g. Bing) and IBM, to name a few.

Some of these recruits led one journalist and TV interviewer to quip: “The best minds of my generation are thinking about how to make people click ads.”

As witty (and sad) as that is, there is a degree of truth in it. However, deep learning has a far more significant impact, and many employers are seeking out people with deep learning capabilities.

Here are just a few examples of how deep learning is improving how we use computers, wearable tech and robots.

Google Glass – The New York Police Department is beta testing Google Glass programmed with Deep Learning. An officer wearing Glass will have access to a database for facial recognition and be able to record events in real time. With respect to clearing up misunderstandings between law enforcement agents and citizens, I see this as a very good move.

One of my favorite uses of Deep Learning can be seen in Amazon’s new Flow app. Flow recognizes items by their shape, size, color, box text and general appearance. Hold your iPhone up to a row of items at a store, or in your home, and within seconds of “seeing” them with the iPhone’s camera, every recognizable item is placed in a queue that can be added to your Amazon cart. You can use Flow to scan a row of competing products, then compare their prices and Amazon ratings once they land in your queue. Unsurprisingly, physical stores are not fans of this.

Deep Learning will be transformational in robotics. Nao, the companion robot created by Aldebaran Robotics, uses deep learning to improve its emotional intelligence, facial recognition and ability to communicate in multiple languages (see video below).

The real innovation challenge it seems will not be to apply deep learning to replace humans but to use it to create new ideas, products and industries that will continue to generate new jobs and opportunities.