
Monthly Archives: February 2014

Morgan Stanley — the Economic Benefits of Driverless Cars

In November 2013 Morgan Stanley published its Blue Paper report, “Autonomous Cars: Self-Driving the New Auto Industry Paradigm.” The authors predicted trillions of dollars in savings, but the announcement provided little data on where those savings would come from. However, in a research note on Tesla Motors, Inc. released yesterday (“TSLA’s New Path of Disruption”), Morgan Stanley provided an extract from the initial report that outlines how they arrived at the estimated annual savings of $1.3 trillion in the United States (and over $5.6 trillion globally).

Nearly every major auto manufacturer has initiated research and development of automated vehicle systems (semi-autonomous) and self-driving cars. In perhaps the most notable example, Google engineers have already logged hundreds of thousands of miles in vehicles modified with advanced automated driving technology.

Preparing for driverless cars and cars with advanced connectivity technology makes up a significant portion of the $100 billion the global auto industry spends on research and development.

The research and development spend reflects the auto industry’s inevitable shift towards self-driving cars, of which Morgan Stanley says:

“Are no longer just the realm of science fiction. They are real and will be on roads sooner than you think. Cars with basic autonomous capability are in showrooms today, semi-autonomous cars are coming in 12-18 months, and completely autonomous cars are set to be available before the end of the decade.”

The total savings of over $5.6 trillion annually are not envisioned for a couple of decades, as Morgan Stanley sees four phases in the adoption of self-driving vehicles. Phase 1 is already underway; Phase 2 will bring semi-autonomous cars; Phase 3, within 5 to 10 years, will see fully self-driving vehicles on the roads, but not in widespread use. The authors say Phase 4, which will have the biggest impact, arrives when 100 percent of vehicles on the roads are fully autonomous, and that this may take a couple of decades.

The authors do add: “However, Phase 4 could come sooner than we think if the government, the auto industry and other entities choose to accelerate adoption to access the full socioeconomic benefits of autonomous cars.”

Quantifying the Economic Benefits

The societal and economic benefits of autonomous vehicles include fewer crashes, less loss of life, increased mobility for the elderly, disabled and blind, and reduced fuel usage. The large potential savings, which they estimate at $1.3 trillion per year, should accelerate the adoption of self-driving vehicles.

They outline five key areas the cost savings will come from: $158 billion in fuel costs, $488 billion in reduced accident costs, $507 billion in increased productivity, $11 billion from reduced congestion, and a further $138 billion in productivity savings from less congestion.

The authors indicate the $1.3 trillion is a base-case estimate, with a bear-case scenario of $0.7 trillion in savings per annum in the United States and a bull-case scenario of $2.2 trillion per year.
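As a quick back-of-the-envelope check (my own arithmetic, not the report's), the five line items quoted above do sum to roughly the $1.3 trillion base case:

```python
# Morgan Stanley's five US cost-saving line items, in $ billions (as quoted above)
savings = {
    "fuel": 158,
    "accident_reduction": 488,
    "productivity": 507,
    "congestion_reduction": 11,
    "congestion_productivity": 138,
}

total_bn = sum(savings.values())
print(total_bn)  # 1302 -> roughly the $1.3 trillion base case
```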


The authors are careful to point out that this is a rough estimate that does not account for the one-time cost of implementing autonomous vehicles, offsetting losses, or investment implications. It also assumes 100% adoption of self-driving vehicles to achieve the indicated savings.

Fuel savings: $158 billion per year

Today’s cars, when using cruise control and driven smoothly, can deliver fuel-economy savings of 20 to 30 percent. Self-driving and autonomous vehicles will be more fuel efficient because they will effectively be on cruise control 100 percent of the time. This factor, along with improved aerodynamic styling, lighter-weight materials and other technological advances, leads the authors to conservatively predict:

An autonomous car can be 30% more efficient than an equivalent non-autonomous car… If we were to reduce the nation’s $535 bn gasoline bill by 30%, that would save us $158 bn.


Accident savings (including injuries and fatalities): $563 billion per year

The authors refer to various reports, such as the World Health Organization’s estimate of 1.24 million deaths globally due to vehicle accidents.

According to the US Census, there were 10.8 million motor vehicle accidents in the US in 2009 (the last year for which data is available).

According to the US DOT, these accidents resulted in over 2 million injuries and 32,000 deaths. Morgan Stanley indicates that human error has been the main factor in over 90 percent of these accidents.

Motor vehicle-related accidents cost the US a total of $625 billion per year. If 90% of accidents are caused by driver error, taking the driver out of the equation could theoretically reduce the cost of accidents by 90%, saving $563 bn (90% of $625 bn) per year.
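The accident arithmetic is the simplest of the five, and can be laid out explicitly (my own check on the figures quoted above):

```python
total_accident_cost_bn = 625   # annual US cost of motor-vehicle accidents, $ bn
driver_error_share = 0.90      # share of accidents attributed to human error

potential_saving_bn = total_accident_cost_bn * driver_error_share
print(potential_saving_bn)     # 562.5 -> the ~$563 bn quoted above
```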


Productivity gains: $422 bn per year

This is the area I consider most subjective. The authors claim that people will be more productive because they will be able to work in their cars en route to work, meetings, etc. The report does provide some pretty compelling statistics.


Congestion savings: $149 bn per year

Referring to a European Commission report that puts the cost of congestion at 1 percent of GDP, the authors believe there will be fewer cars on the road, due to pooling and better use of cars, which will reduce congestion and free us up to be more productive.

(Chart: fuel savings from vehicle traffic congestion avoidance)

In summary, the authors believe that full penetration of autonomous cars could result in social benefits such as saving lives, reducing frustration from traffic jams, and giving people more flexibility with commuting or leisure driving.

“These benefits also have significant potential economic implications. And the implications are truly significant – the $1.3 trillion of value potentially generated by autonomous cars amounts to over 8 percent of the entire US GDP, as well as 152 percent of the US Defense budget and 144 percent of all student loans outstanding.”

There is considerable uncertainty concerning autonomous vehicle benefits, costs and travel impacts; this Morgan Stanley research adds significantly to the debate as we move towards fully automated vehicles.

For the Morgan Stanley report, refer to the WSJ’s coverage (which notes the research report “uses the word ‘utopian’ 11 times.”)

 

 


The intersection of behavioral economics and machine learning to understand Big Data

I am often asked which jobs will thrive as we move into the next phase of the robot revolution. My answer is that people will need to be multi-skilled: they will need critical thinking and design skills, the ability to think statistically, and a deep knowledge of human behavior.

One area where I see growing demand is for people with machine learning and data science backgrounds. Increasingly, however, computer programmers and data scientists require dual expertise in both social science and computer science, adding competence in economics, sociology, and psychology (collectively known as Behavioral Economics) to more traditionally recognized requirements like algorithms, interfaces, systems, machine learning, and optimization.

This combined expertise in computer science and behavioral economics helps to bridge the gap between modeling human behavior, data mining and engineering web-scale systems. At Harvard School of Engineering and Applied Sciences they say that: “an emerging area in both artificial intelligence and theoretical computer science, computational mechanism design lies at the interface of computer science, game theory, and economics.” Similarly at Yale School of Management we now find professors working on the intersection of behavioral economics and machine learning.

Many of the major tech companies are recognizing the benefits of combining these skill sets. Microsoft Research calls its internal machine learning and behavioral economics department Algorithmic Economics.

A recent paper by Hal Varian, Chief Economist at Google, titled “Big Data: New Tricks for Econometrics” (incidentally, Hal is the author of one of my favorite books, Information Rules), provides an extremely readable introduction to the collaboration of machine learning, big data and behavioral economics.

Hal also offers a valuable piece of advice:

“I believe that these methods have a lot to offer and should be more widely known and used by economists. In fact, my standard advice to graduate students these days is ‘go to the computer science department and take a class in machine learning’.”

Michael Bailey, an Economist at Facebook, writes on Quora:

I currently (Feb 2014) manage the economics research group on the Core Data Science team. We are a small group of engineer researchers (all PhDs) who study economics, business, and operations problems. As Eric Mayefsky mentioned, there are various folks with formal economics training spread across the company, usually in quantitative or product management roles.

The economics research group focuses on four research areas:

Core Economics – modeling supply and demand, operations research, pricing, forecasting, macroeconomics, econometrics, structural modeling.

Market Design – ad auctions, algorithmic game theory, mechanism design, simulation modeling, crowdsourcing.

Ads and Monetization – ads product and frontend research, advertiser experimentation, social advertising, new products and data, advertising effectiveness, marketing.

Behavioral Economics – user and advertiser behavior, economic networks, incentives, externalities, and decision making under risk and uncertainty.

I think a more interesting question is “what *could* an economist at Facebook do?” because there is a LOT of opportunity. There are incredibly important problems that only people who think carefully about causal analysis and model selection could tackle.  Facebook’s engineer to economist ratio is enormous. Software engineers are great at typical machine learning problems (given a set of parameters and data, make a prediction), but notoriously bad at answering questions out of sample or for which there’s no data. Economists spend a lot of time with observational data since we often don’t have the luxury of running experiments and we’ve honed our tools and techniques for that environment (instrumental variables for example). The most important strategic and business questions often rely on counterfactuals which require some sort of model (structural or otherwise) and that is where the economists step in.
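Bailey's point about instrumental variables can be illustrated with a textbook simulation (entirely my own sketch, not Facebook code): when an unobserved confounder drives both the treatment and the outcome, naive regression is biased, but an instrument that moves the treatment without touching the confounder recovers the causal effect.

```python
import random

random.seed(0)
n = 100_000

# Simulated data with confounding: u affects both x and y, so a naive
# regression of y on x is biased. z is a valid instrument: it is
# correlated with x but independent of u.
u = [random.gauss(0, 1) for _ in range(n)]                    # unobserved confounder
z = [random.gauss(0, 1) for _ in range(n)]                    # instrument
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]    # treatment
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]  # true effect of x is 2.0

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)   # biased upward by the confounder (~3.0)
iv = cov(z, y) / cov(z, x)    # instrumental-variables (Wald) estimate (~2.0)
print(round(ols, 2), round(iv, 2))
```

The naive estimate overstates the effect by roughly 50 percent here, while the instrument recovers it; this is exactly the kind of counterfactual reasoning Bailey says machine learning alone handles badly.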

In the following video (Machine Learning Meets Economics: Using Theory, Data, and Experiments to Design Markets), Stanford University’s Susan Athey discusses research directions at the intersection of economics and machine learning.

I previously wrote more on the crossover between Behavioral Economics, Machine Learning and Big Data and will continue to evolve this series of articles in the coming weeks.

If you like this post consider tipping with bitcoin – send to:  1DxpZv7Jq4zf7fhHtkeR4y6ggmqAaPbG6p

The economic impact of the robotic revolution

Whilst the word ‘robot’ generally conjures up visions of humanoids with superior intelligence, this science-fiction image tends to overlook the other type of robots: machines that carry out complicated motions and tasks, such as automated software processes (1), industrial robots, unmanned vehicles (driverless cars, drones) or even prosthetics. And it is principally these programmable machine robots that are among the robotic advances being acquired by major companies across the globe (2). These are also the robotic technologies that are disrupting commercial production and employment, and will likely continue to do so over the remainder of this decade.

Many economists and technophobes claim that automation and technological progress have broad implications for the shape of the production function, inequality, and macroeconomic dynamics. However, robotics is also adding hundreds of thousands of jobs to payrolls across the globe, and it may just be that people have not yet acclimatized to the new jobs and the skills required to do them.

Job displacement and skill gaps

In his magical science fiction classic, The Hitchhiker’s Guide to the Galaxy, Douglas Adams wrote about the ‘B’ Ark.

The ‘B’ Ark was one of three giant spaceships built to take people off the ‘doomed’ planet and relocate them on a new one. The inhabitants of the ‘B’ Ark included “tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives, management consultants, account executives, and countless others.”

These were essentially people displaced from the workplace by automation.

Douglas Adams explained that there were three space ships, each designated for a different type of person: “the idea was that into the first ship, the ‘A’ ship, would go all the brilliant leaders, the scientists, the great artists, you know, all the achievers; and into the third, or ‘C’ ship, would go all the people who did the actual work, who made things and did things; and then into the `B’ ship – that’s us – would go everyone else, the middlemen.”

We later discover the planet was not in fact doomed, nor did the other two giant spaceships, ‘A’ Ark and ‘C’ Ark depart the planet.

MIT economist David Autor and his co-authors echo Adams’ point that technology is displacing the middle class, writing that automation has:

“Fostered a polarization of employment, with job growth concentrated in both the highest and lowest-paid occupations, while jobs in the middle have declined.”

This job polarization has in fact contributed significantly to income inequality.

Research by Lawrence Katz, Professor of Economics at Harvard, also shows the ‘hollowing out’ of middle-skilled jobs due to technological advances. A recent paper by Carl Frey and Michael Osborne of Oxford University concludes that 47 per cent of US jobs are at high risk from automation.

It’s not all doom and gloom for those with ‘middle skills’: the MIT and Harvard researchers do allude to an increase in jobs and income for the ‘new artisans,’ a term coined by Professor Katz for those who ‘virtuously combine technical and interpersonal tasks.’

Expanding upon this, Professor Autor expects that “a significant stratum of middle skill, non-college jobs combining specific vocational skills with foundational middle skills – literacy, numeracy, adaptability, problem-solving and common sense – will persist in coming decades.”

Those skills according to Autor will provide employment for:

“Licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides. These workers will adeptly combine technical skills with interpersonal interaction, flexibility and adaptability to offer services that are uniquely human.”

Skill-biased technological change is not a new phenomenon. Joseph Schumpeter termed it Creative Destruction. Writing at the time of the Great Depression in the 1930s, he said the prime cause of economic development was entrepreneurial spirit: “Without innovations, no entrepreneurs; without entrepreneurial achievement, no capitalist returns and no capitalist propulsion.”

Many smart people of that time believed that technology had reached its limits and capitalism had passed its peak. Schumpeter believed the exact opposite, and of course he was right. Technology changes, economic principles do not. As demand for one set of labor skills declines, demand for a new set of skills grows, often with better pay.

Why are big corporations buying robotic companies?

Major corporations and creative destructors such as Google, Amazon and Apple have made headlines recently with their acquisitions of robot and deep learning companies, their use of machine learning technology and their Artificial Intelligence aspirations.

What exactly do these corporations want with robots and Artificial Intelligence, and how does it impact society?

Machine learning

Andrew Ng, a Professor at Stanford University and Google Fellow, who teaches a popular free online Machine Learning class on Coursera, says:

“In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.”

Machine learning technology helps a machine learn and remember things, or act ‘without being explicitly programmed.’ It is the science (or art) of building algorithms that can recognize patterns in data and improve as they learn. For example, a search engine may use your recent search queries and current location to deliver more relevant results.
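The idea of learning a rule from data rather than programming it explicitly can be shown with a toy example (a minimal perceptron sketch of my own; purely illustrative, not from any of the sources above):

```python
# A toy perceptron: it learns a decision rule from labelled examples
# instead of being explicitly programmed with one.
def train_perceptron(samples, labels, epochs=20, lr=1):
    w = [0] * len(samples[0])
    b = 0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the logical AND function purely from examples
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y_true = [0, 0, 0, 1]
w, b = train_perceptron(X, y_true)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

Nothing in the code spells out the AND rule; the weights are adjusted from the examples until the predictions match, which is the essence of Ng's description.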

Whilst machine learning is used extensively across companies such as Facebook, Google, LinkedIn, Netflix, Twitter, Apple, Adobe, Microsoft and many more, it’s not just the tech companies that are seeing the benefits. Machine learning technology is much in demand across industry with proven results at Wall Street investment banks, insurance companies and motor manufacturers such as Toyota and Tesla Motors.

Machine learning and health

IBM’s Watson is possibly the most famous example of a system using machine learning, through its triumph on the popular TV game show Jeopardy!. Watson is now aiding researchers and medical practitioners, and may soon be among the world’s best diagnosticians for cancer-related ailments.

Having machines assist medical practitioners and researchers could significantly improve diagnoses and treatments for patients. Additionally these technologies will become more pervasive through wearable devices, such as Google Glass, Android phones, Apple’s iPhone or maybe a new Apple ‘iHealth’ gadget using its M7 motion sensing technology to monitor our health on the go.

I personally believe that significant improvements in people’s health and wellbeing will come through technology advances: robotic treatments in hospitals, such as in the operating theater and prescription services; improved assistive devices and prosthetics for those with disabilities; and, on a very large scale, wearable technology. Machine learning and robotic technology will be central to this health revolution.

Machine learning is a game changer for those companies that implement its technologies successfully. Jobs for people with machine learning skills are, and will continue to be, much in demand in the coming decade, particularly in industries where ‘Big Data’ factors heavily.

Industrial robots

In March 2012, Amazon announced the $775 million cash acquisition of Kiva Systems, a maker of warehouse automation robots. In October 2013, Amazon CEO Jeff Bezos noted that the company had “deployed 1,382 Kiva robots in three Fulfillment Centers.” Amazon has approximately 52 fulfillment centers spread across 8 countries, with at least another 12 announced to open in the next 9 months.

The rollout of Kiva robots across these fulfillment centers will have a significant strategic benefit to Amazon as it moves towards its goal of becoming the world’s largest retailer. So far this rollout has not reduced the number of employees at Amazon. In fact, Amazon continues to grow its headcount significantly: last year Amazon added 20,000 full-time employees to its US fulfillment centers alone, and this week it announced a further recruitment drive of an additional 2,500 full-time US fulfillment staff, indicating a 30 percent pay premium over traditional retail jobs. At the end of December 2013 Amazon employed 117,300 full- and part-time employees globally (excluding contractors and temporary personnel). This is more than four times the 28,300 employees it reported on June 30th 2010, just three and a half years ago, an increase of 89,000 jobs.
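The headcount figures above are internally consistent, as a quick check shows (my own arithmetic on the numbers quoted):

```python
employees_2013 = 117300   # end of December 2013
employees_2010 = 28300    # reported June 30, 2010

growth = employees_2013 - employees_2010
multiple = employees_2013 / employees_2010
print(growth)              # 89000 added jobs
print(round(multiple, 1))  # 4.1 -> "more than four times"
```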

Kiva, together with the right qualified employees, provides Amazon the ability to cut its fulfillment costs, double its productivity, and increase its service levels.

Industrial robot manufacturers are reporting between 18 percent and 25 percent growth in orders and revenue year on year. Whilst some jobs will be displaced by the increased rollout of robots in the manufacturing sector, many will also be created as robot manufacturers recruit to meet their growing demand. Furthermore, jobs that were previously sent offshore are now being brought back to developed countries (for example, Apple manufacturing its Mac Pro in America and spending approximately $10.5 billion on assembly robotics and machinery).

Cognitive machine assistants

There has been recent press speculation that Google intends to enter the industrial robot market after its acquisition of eight robot companies at the end of 2013. Reports indicate that Andy Rubin, former head of Google’s Android platform and new head of its robot development, met with Foxconn Chairman Terry Gou to discuss Foxconn’s robot initiatives (replacing 1 million employees with robots).

Whilst I think it highly unlikely that Google will become a manufacturer of industrial robots, I do think it could use Mr. Rubin’s experience of creating a telecom industry-standard platform to develop a standard industrial robot platform, which Google could lead and license to other industrial robot manufacturers. If Google does go this route, expect it to announce the collaboration, as it has done with Android development and the driverless car standard framework.

What does Google want with the robot companies it has acquired?

The immediate need is likely related to Google’s localization and mapping strategy. Google spends billions of dollars per year on its mapping program, and due to sophisticated new search technologies (especially mobile-related improvements in cognitive assistants such as Google Now and Siri), Google must seek ways to stabilize those costs and ward off the threat of competition.

A big part of this requires that the search giant provide the best mapping experience and localization services. Remember that maps are not set in stone but are constantly evolving; biannual updates of Street View and other associated solutions add considerably to the costs; and Google also wants to map the inside of major buildings, such as shopping malls and airports, and indeed it has already started. Imagine the advertising opportunities and revenues available through improved localization!

Google knows search is becoming local, and whilst its algorithms are some of the most advanced, within the next three years it expects Google Now (its voice-activated ‘cognitive assistant’) to deliver far more useful and relevant data to consumers. Without significant improvements in localization, including traffic data (which Google can deliver through its Waze purchase and integration), Google’s search advertising revenue will falter. I’ve written more extensively on this here; recently, Google’s Director of Research Peter Norvig mentioned “the global localization and mapping problem” when asked a question about Google’s interest in robots (video, around the 50-minute mark). It makes more sense to send robots with high-quality visualization and recording capabilities on mapping expeditions across the world than platoons of people carrying expensive and heavy equipment.

The real value produced by an information provider comes in locating, filtering, and communicating what is useful to the consumer. Google does that better than others, and its robot acquisitions – coupled with its machine learning and AI expertise – are designed to keep it at the forefront.

The fact that major corporations are buying into robotics, artificial intelligence and related technologies is helping them not only preserve but increase their market share. Yes, jobs will be displaced, but many more will be created in the process, and great opportunities will be available to those with the skills to complement and work with the machines.

ENDNOTES

(1) Automation and robotics are often considered the same in many languages, and it is this automation, through advances in machine learning and artificial intelligence, that is driving rapid development in our workplaces.

(2) Of course these programmable machine robots, and their advanced technologies in software and hardware, will eventually lead to the successful development of humanoid robots.

New report — Robotics the fastest growing industry in the world

“Robotics is the fastest growing industry in the world, poised to become the largest in the next decade.”

That’s the opening quote from a new report by Littler Mendelson, the world’s largest labor and employment law firm.

The report, titled “The Transformation of the Workplace Through Robotics, Artificial Intelligence (AI), and Automation,” focuses on how the robotics revolution will shape the employment and labor law landscape. It states that as robotic systems, AI, and 21st-century automation develop at an exponential pace, creating work environments and conditions unimagined half a century or more ago, employers and employees should know their rights and be active participants in the discussion about how labor laws will change.

Outlining what they mean by robotics systems the authors indicate:

A “robotic system” is a computer system that, using intelligent, networked devices, the Internet, big data, AI algorithms, and other advanced computing technology, is capable of automatically and continually:

“Sensing” what is going on in a changing physical or other environment;

“Thinking” by analyzing the data it collects from the environment it is monitoring (e.g. detecting occurrences, changes, and anomalies), identifying trends, and reaching conclusions; and

Autonomously “acting” by carrying out one or more physical (e.g. navigating through an environment, manipulating an object) or non-physical (e.g. alerting human operators, recommending potential responses, making decisions, initiating commands) functions.

Stated more simply, it is any computer system capable of sensing occurrences in a dynamic situation or environment, capturing and analyzing the relevant data, and subsequently reaching conclusions, providing recommendations, making decisions, and otherwise taking action, whether of a physical or non-physical nature.
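The report's sense-think-act definition carries no code, but the loop it describes can be sketched as follows (a purely illustrative skeleton; the class, method names and anomaly rule are my own, not the report's):

```python
# Illustrative skeleton of the "sense / think / act" loop the report describes.
class RoboticSystem:
    def __init__(self, threshold):
        self.threshold = threshold  # anomaly threshold (assumed parameter)
        self.history = []

    def sense(self, reading):
        """Collect a reading from the monitored environment."""
        self.history.append(reading)
        return reading

    def think(self):
        """Analyze collected data: flag the latest reading if it deviates
        from the running mean by more than the threshold."""
        mean = sum(self.history) / len(self.history)
        return abs(self.history[-1] - mean) > self.threshold

    def act(self, anomaly):
        """Carry out a non-physical action: alert a human operator."""
        return "ALERT: anomaly detected" if anomaly else "OK"

robot = RoboticSystem(threshold=5.0)
for reading in [20.0, 21.0, 19.0, 40.0]:   # the last reading is anomalous
    robot.sense(reading)
status = robot.act(robot.think())
print(status)  # ALERT: anomaly detected
```

A real robotic system would of course run this loop continuously and act physically as well; the skeleton only shows how the three stages connect.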

Displacement and creation of jobs

Historically, the infusion of new technologies into the workplace has greatly increased productivity and human employment. The authors write, however: “what is different now and over the next decade is the speed of change, the challenge of displaced workers to retrain and quickly adjust to the new economy, and the unprecedented demand for STEM-qualified job candidates.”

They believe that robotics is the next major innovation to transform the workplace, and will have as great — if not greater — impact on how employers operate than the Internet.

Providing solid guidance for employers and employees, the authors also offer encouragement:

Many existing jobs will be automated in the next 20 years. Several repetitive, low-skilled jobs are already being supplanted by technology. However, a number of studies have found that in the aggregate, the robotics industry is creating more jobs than robots replace. For example, the International Federation of Robotics (IFR) estimates that robotics directly created four to six million jobs through 2011, with the total rising to eight to 10 million if indirect jobs are counted. The IFR projects that 1.9 to 3.5 million jobs will be created in the next eight years.

Whilst the report is initially US-centric, I would encourage people, regardless of global location, not only to download and read it but also to provide feedback to the authors as they seek to help shape employment law, something that will have a significant impact on us all.

Picture: Sophie the HR robot

Google’s robot acquisitions likely cost less than $100 million


There is much speculation about Google’s intentions with its acquisition of eight robot companies in the fall of last year. What has been missing from this speculation is just how much Google spent on the eight companies.

The acquisitions of the robot companies appear to have been completed in the last quarter of 2013; they were announced in December.

At the end of the third quarter of 2013, Google stated in its financial reporting that it had completed twenty-one acquisitions with a total value of $1.338 billion. It confirmed that $969 million of this was spent on the purchase of mapping company Waze.

The remaining $369 million was spent on purchasing an additional twenty companies.

None of these were material, so the company does not have to account for them individually in its regulatory filings.

Some of these acquisitions can be found online. The largest was Channel Intelligence, for which Google reportedly paid $125 million. Then there are many smaller acquisitions: the file-sharing app Bump for $35 million; the motion-detection device maker Flutter for $40 million; and Wavii, which enhances Google’s natural language technology, for $35 million. Google boosted its cloud computing capabilities by bringing in Talaria Technologies for around $20 million, and its green energy capacity by adding Makani Power to the Google X team for around $30 million. It also brought in some great recruits with the purchase of DNNresearch, headed by machine learning pioneer Geoff Hinton, for around $5 million, and employees from the venture fund Hatter, again for around $5 million. Google also bought 217 patents from IBM during the first 9 months, although it’s not clear where these have been accounted for; they could be expensed in R&D.

This is a partial selection of the 20 non-material acquisitions in the first three quarters of 2013, accounting for $318 million of the $369 million expenditure, but it helps establish a baseline for the cost of the robot companies Google bought.

Yesterday Google posted its annual report, Form 10-K, on the Securities and Exchange Commission (SEC) website. In it, Google confirms that during the twelve months ended 31st December 2013 it spent a total of $489 million on acquisitions that were not material and therefore did not require specific details. It also accounted for the Waze acquisition at $969 million: “The (Waze) acquisition is expected to enhance our customers’ user experience by offering real time traffic information to meet users’ daily navigation needs.”

Now that we know Google spent $369 million on non-material companies in the first three quarters and $489 million during the whole twelve months, we can safely calculate that $120 million was spent in the fourth quarter on non-material acquisitions.

The acquisition of French company Flexycore, for a rumored $23 million, was announced in the fourth quarter (end of October). It’s not clear when the $40 million Flutter deal closed, as the media picked it up on 2nd October, but I’ll assume it closed in the third quarter.

Taking the Flexycore deal off the table indicates that Google spent an additional $97 million on acquisitions during the last quarter of 2013, the period in which they bought the eight robot companies.

It's likely that somewhere between $50 million and $90 million of the $1.458 billion Google spent on acquisitions in 2013 went on the purchase of those eight robot companies.
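The back-of-envelope arithmetic above can be laid out as a short calculation. All figures are in millions of dollars and come from the filings cited above; the $50–$90 million range for the robot companies is my estimate, not a filed number.

```python
# Figures from Google's 2013 SEC filings ($ millions)
non_material_full_year = 489   # 10-K: all non-material acquisitions in 2013
non_material_q1_q3 = 369       # quarterly filings: first three quarters
waze = 969                     # the one material acquisition, reported separately

# Q4 spend on non-material deals is simply the difference
q4_non_material = non_material_full_year - non_material_q1_q3

# Remove the one known Q4 deal (Flexycore, rumored price) to get the
# window within which the eight robot companies must fit
flexycore = 23
q4_remaining = q4_non_material - flexycore

# Total 2013 acquisition spend: non-material deals plus Waze
total_2013 = non_material_full_year + waze

print(q4_non_material, q4_remaining, total_2013)  # prints: 120 97 1458
```

The $1.458 billion headline number is therefore the $489 million of non-material deals plus the $969 million Waze purchase.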

It’s fascinating that these acquisitions have created such media speculation given the nominal size of the purchase costs.

Of course this is very good news for the robotics industry: it potentially increases the value of the sector in the eyes of investors, raises awareness of the advances being made in robotics, and brings an important debate about the future direction of technology and jobs into the public domain.

As I have said before, Google’s acquisition of robots and Artificial Intelligence technologies is anything but scary.

Photo: Andy Rubin, head of Google's robot program and former head of Android

Robot stocks outperforming market averages

By the early part of the 1800s steam was the driver of all engines, the enabler of industry. The word stood for power and force and all that was vigorous and modern. Formerly, water or wind drove the mills, and most of the world's work still depended on the strength of people, horses and other livestock. But hot steam, generated by burning coal and brought under control by ingenious inventors, had portability and versatility. It replaced muscles everywhere. Steam became the most powerful transmitter of energy known to humanity.

Fast-forward a little over 200 years and steam has been replaced by a more powerful and ‘intelligent’ force – robotic brawn and energy.

According to leading industry forecaster Research and Markets, revenue for the global industrial robotics market is expected to cross $37 billion by 2018. Robots are seen most in big industries like automotive, food, aerospace, and pharmaceutical, but with the launch of lower-priced products such as Baxter from Rethink Robotics in the US and industrial robots from Universal Robots in Denmark, they are beginning to be used in other sectors.

The $37 billion market for industrial robotics by 2018 may sound insignificant next to Bill Gates's prediction of a robot in every home and a $1 trillion global business by 2025. This is where Google's acquisition of Nest and other robotic manufacturers may earn it a big slice of the market, together with the Roomba from iRobot and other manufacturers.

In their 2013 "Hype Cycle for Emerging Technologies" report, Gartner Research Vice President Jackie Fenn describes the overriding theme of the year as the "evolving relationship between humans and machines."

Major manufacturers such as ABB (industrial), Boeing (unmanned air vehicles, or drones), Toyota (driverless cars) and many others are seeking to gain significant market share through robotic devices and technology, while others, such as Amazon and Apple, are investing significantly in robotics to improve their processes, reduce costs and increase profits.

The potential market for robots is starting to whet the appetite of investors; consider Adept Technology and iRobot. Adept's stock is up 498.42% over the last five years, and iRobot's is up 386.75%. Compare these to the Nasdaq Composite, up 154.88%, and the Dow Jones, up 88.73%, over the same period.

Chart: robot share analysis

Analyzing companies with significant use, design or manufacture of robotics could be a sensible approach for investors.

Deep Learning creating jobs in Apps, wearable tech and robotics

Machine Learning, a sub-domain of what we call Artificial Intelligence, sits at the core of early 21st-century technology; together with Internet connectivity and data, it is integral to advances in innovation.

A good definition of Artificial Intelligence (or perhaps soft or logical AI), as provided by my friendly assistant, Google Now:

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Steve Jurvetson of DFJ said he believes machine learning, a subset of AI, will be one of the most important tech trends for innovation and economic growth over the next 3-5 years. By leveraging big data to allow computers to develop evolving behaviors, machine learning is vastly improving pattern recognition, enabling broad applications such as improved facial and speech recognition in many industries, especially national security.

Computer scientists have made significant advances in Machine Learning and soft AI with a particular set of approaches called “deep learning.” Deep Learning algorithms have been extremely successful for applications such as image recognition, speech recognition, and to some extent for natural language processing.

Deep Learning is the application of algorithms and software, structured as 'neural networks', to develop machines, computers and robots that can do a wide variety of things, including driving cars, working in factories, conversing with humans, translating speeches, recognizing and analyzing images and data patterns, and diagnosing complex operational or procedural problems.

One aspect of Deep Learning algorithms (sometimes simply called learning algorithms) receiving much attention at major organizations is giving a machine, computer or robot the ability to learn from mostly unlabeled data, i.e. to work in a semi-supervised setting where not all the examples come with complete and correct semantic labels. This was cleverly demonstrated by Google, whose system learned to identify cats in photographs without labels (Google builds a brain that can identify cats).
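To make the unlabeled-data idea concrete, here is an illustrative sketch (not Google's actual system): a tiny autoencoder, a network trained only to reconstruct its unlabeled inputs, which learns a compressed internal representation in the process. All data and dimensions here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 unlabeled examples, 8 features each

W1 = rng.normal(scale=0.1, size=(8, 3))  # encoder weights: 8 features -> 3
W2 = rng.normal(scale=0.1, size=(3, 8))  # decoder weights: 3 -> 8
lr = 0.5

def reconstruction_loss(X, W1, W2):
    H = np.tanh(X @ W1)                  # learned hidden representation
    return np.mean((H @ W2 - X) ** 2)    # how well the network rebuilds the input

initial = reconstruction_loss(X, W1, W2)
for _ in range(500):                     # plain gradient descent, no labels needed
    H = np.tanh(X @ W1)
    G = 2 * (H @ W2 - X) / X.size        # gradient of the mean-squared error
    dW2 = H.T @ G                        # backpropagate to the decoder
    dW1 = X.T @ ((G @ W2.T) * (1 - H ** 2))  # and through tanh to the encoder
    W1 -= lr * dW1
    W2 -= lr * dW2

final = reconstruction_loss(X, W1, W2)
print(initial, final)                    # reconstruction error falls with training
```

The point of the sketch is that the loss drops without a single semantic label: the 3-unit hidden layer is forced to learn whatever structure exists in the data, which is the same principle, at toy scale, behind learning features from unlabeled images.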

As Professor Yann LeCun, now at Facebook, says:

The only way to build intelligent machines these days is to have them crunch lots of data — and build models of that data.

Sometimes it’s not who has the best algorithm that wins; it’s who has the most data.

Many Deep Learning scientists and academics are being recruited by Google, Facebook, Microsoft co-founder Paul Allen's AI organization, Adobe, Amazon, Microsoft (see e.g. Bing) and IBM, to name a few.

This wave of recruitment led one commentator to quip: "The best minds of my generation are thinking about how to make people click ads."

As witty (and sad) as that is, there is a degree of truth in it. However, deep learning has a far more significant impact, and many employers are seeking out people with deep learning skills.

Here are just a few examples of how deep learning is improving how we use computers, wearable tech and robots.

Google Glass – The New York Police Department is beta-testing Google Glass programmed with Deep Learning. An officer wearing Glass will have access to a facial-recognition database and be able to record events in real time. With respect to clearing up misunderstandings between law enforcement agents and citizens, I see this as a very good move.

One of my favorite uses of Deep Learning can be seen in Amazon's new Flow app. Flow recognizes items by their shape, size, color, box text, and general appearance. Hold your iPhone up to a row of items at a store, or in your home, and within seconds of "seeing" them with the iPhone's camera, every recognizable item is placed in a queue that can be added to your Amazon cart. You can use Flow to scan a row of competing products, then compare their prices and Amazon ratings once they land in your queue. Unsurprisingly, physical stores are not fans of this.

Deep Learning will be transformational in robotics. Nao, the companion robot created by Aldebaran Robotics, uses deep learning to improve its emotional intelligence, facial recognition and ability to communicate in multiple languages (see video below).

The real innovation challenge, it seems, will not be to apply deep learning to replace humans but to use it to create new ideas, products and industries that will continue to generate new jobs and opportunities.