In late December 2016, Rethink Robotics, a supplier of collaborative robots (co-bots), secured an additional US$18 million in investment. The new round, despite falling somewhat short of the US$33 million sought according to the company's SEC filing, included funding from the Swiss-headquartered private equity firm Adveq, as well as contributions from all previous investors, including Bezos Expeditions, CRV, Highland Capital Partners, Sigma Partners, DFJ, Two Sigma Ventures, GE Ventures and Goldman Sachs.
I think that Rethink’s Baxter and Sawyer robots are setting a new standard in advanced robotics for businesses of all sizes. The one downside is that Rethink subcontracts the manufacturing of its robots, which gives the company less control over delivery scheduling and has possibly considerably hindered its overall growth, cash flow and profitability. In a very hot growth market, this could explain the less than enthusiastic take-up by new investors, and the limited appetite among existing investors for considerably increasing their investment. However, in the coming months I would expect Rethink to secure the additional US$15 million it seeks, perhaps via Asian manufacturing partners; the region is becoming increasingly important for Rethink as it endeavors to capture a larger share of the co-bot market.
In addition to Rethink’s new investment, a very interesting relative newcomer to the industrial robotics scene, the Advanced Robotics Manufacturing (ARM) Institute, a U.S. national public-private partnership, has announced funding of US$250 million.
The U.S. Department of Defense awarded this public-private Manufacturing USA institute to American Robotics, a nonprofit venture led by Carnegie Mellon University with more than 230 partners in industry, academia, government and the nonprofit sector across the U.S. The institute will receive $80 million from the DOD and an additional $173 million from the partner organizations.
Based in Pittsburgh, ARM is led by a newly established national nonprofit called American Robotics, which was founded by Carnegie Mellon University and includes a national network of 231 stakeholders from industry, academia, local governments and nonprofits.
The mission of ARM is essentially four-pronged: 1) empower American workers to compete with low-wage workers abroad; 2) create and sustain new jobs to secure U.S. national prosperity; 3) lower the technical, operational, and economic barriers for small- and medium-sized enterprises as well as large companies to adopt robotics technologies; and 4) assert U.S. leadership in advanced manufacturing.
ARM’s 10-year goals include increasing worker productivity by 30 percent, creating 510,000 new manufacturing jobs in the U.S., ensuring that 30 percent of SMEs adopt robotics technology, and providing the ecosystem where major industrial robotics manufacturers will emerge.
These investments keep robotics on course to be one of the main investment areas for improving manufacturing productivity and indeed increasing jobs and corporate profitability.
The ARM investment sounds very similar to the EU’s public-private initiative announced in June 2014, albeit that is a €2.8 billion initiative with a less ambitious, but very worthy, target of adding 240,000 new jobs.
Photo: ARM Institute impact
A new report released by the White House indicates that accelerating Artificial Intelligence (AI) capabilities will enable automation of some tasks that have long required human labor. The report’s authors indicate that these transformations will open up new opportunities for individuals, the economy, and society, but they will also disrupt the current livelihoods of millions of Americans. At a minimum, some occupations such as drivers and cashiers are likely to face displacement from or a restructuring of their current jobs.
The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the economic effects of AI.
Although it is difficult to predict these economic effects precisely, the report suggests that policymakers should prepare for five primary economic effects:
- Positive contributions to aggregate productivity growth;
- Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;
- Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;
- Churning of the job market as some jobs disappear while others are created; and
- The loss of jobs for some workers in the short-run, and possibly longer depending on policy responses.
More generally, the report suggests three broad strategies for addressing the impacts of AI-driven automation across the whole U.S. economy:
- Invest in and develop AI for its many benefits;
- Educate and train Americans for jobs of the future; and
- Aid workers in the transition and empower workers to ensure broadly shared growth.
Key points from the report
The authors state that while it is unlikely machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, machines are expected to continue to reach and exceed human performance on more and more tasks.
AI should be welcomed for its potential economic benefits. However, there will be changes in the skills that workers need to succeed in the economy, as well as structural changes in the economy itself. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.
Today, it may be challenging to predict exactly which jobs will be most immediately affected by AI-driven automation. Because AI is not a single technology, but rather a collection of technologies that are applied to specific tasks, the effects of AI will be felt unevenly through the economy. Some tasks will be more easily automated than others, and some jobs will be affected more than others—both negatively and positively. Some jobs may be automated away, while for others, AI-driven automation will make many workers more productive and increase demand for certain skills. Finally, new jobs are likely to be directly created in areas such as the development and supervision of AI as well as indirectly created in a range of areas throughout the economy as higher incomes lead to expanded demand.
Strategy #1: Invest in and develop AI for its many benefits. If care is taken to responsibly maximize its development, AI will make important, positive contributions to aggregate productivity growth, and advances in AI technology hold incredible potential to help the United States stay on the cutting edge of innovation. Government has an important role to play in advancing the AI field by investing in research and development. Among the areas for advancement in AI are cyberdefense and the detection of fraudulent transactions and messages. In addition, the rapid growth of AI has also dramatically increased the need for people with relevant skills from all backgrounds to support and advance the field. Prioritizing diversity and inclusion in STEM fields and in the AI community specifically, in addition to other possible policy responses, is a key part in addressing potential barriers stemming from algorithmic bias. Competition from new and existing firms, and the development of sound pro-competition policies, will increasingly play an important role in the creation and adoption of new technologies and innovations related to AI.
Strategy #2: Educate and train Americans for jobs of the future. As AI changes the nature of work and the skills demanded by the labor market, American workers will need to be prepared with the education and training that can help them continue to succeed. Delivering this education and training will require significant investments. This starts with providing all children with access to high-quality early education so that all families can prepare their students for continued education, as well as investing in graduating all students from high school college- and career- ready, and ensuring that all Americans have access to affordable post-secondary education. Assisting U.S. workers in successfully navigating job transitions will also become increasingly important; this includes expanding the availability of job-driven training and opportunities for lifelong learning, as well as providing workers with improved guidance to navigate job transitions.
Strategy #3: Aid workers in the transition and empower workers to ensure broadly shared growth. Policymakers should ensure that workers and job seekers are both able to pursue the job opportunities for which they are best qualified and best positioned to ensure they receive an appropriate return for their work in the form of rising wages. This includes steps to modernize the social safety net, including exploring strengthening critical supports such as unemployment insurance, Medicaid, Supplemental Nutrition Assistance Program (SNAP), and Temporary Assistance for Needy Families (TANF), and putting in place new programs such as wage insurance and emergency aid for families in crisis. Worker empowerment also includes bolstering critical safeguards for workers and families in need, building a 21st century retirement system, and expanding healthcare access. Increasing wages, competition, and worker bargaining power, as well as modernizing tax policy and pursuing strategies to address differential geographic impact, will be important aspects of supporting workers and addressing concerns related to displacement amid shifts in the labor market.
Finally, if a significant proportion of Americans are affected in the short- and medium-term by AI-driven job displacements, US policymakers will need to consider more robust interventions, such as further strengthening the unemployment insurance system and countervailing job creation strategies, to smooth the transition.
I will add detailed comments and my thoughts as I digest the full report in the coming days.
The document focuses on autonomous vehicles, eldercare, manufacturing and more
The new U.S. Robotics Roadmap calls for better policy frameworks to safely integrate new technologies, such as self-driving cars and commercial drones, into everyday life.
The detailed document also advocates for increased research efforts in the field of human-robot interaction to develop intelligent machines that will empower people to stay in their homes as they age. It calls for increased education efforts in the STEM fields, from elementary school to adult learners.
The roadmap’s authors, more than 150 researchers from around the nation, also call for research to create more flexible robotics systems to accommodate the need for increased customization in manufacturing, for everything from cars to consumer electronics.
The goal of the U.S. Robotics Roadmap is to determine how researchers can make a difference and solve societal problems in the United States. The document provides an overview of robotics in a wide range of areas, from manufacturing to consumer services, healthcare, autonomous vehicles and defense. The roadmap’s authors make recommendations to ensure that the United States continues to lead the field of robotics, in terms of research innovation, technology and policy.
“We also want to make sure that research solves real-life problems and gets deployed,” said Henrik I. Christensen, a professor of computer science at the University of California San Diego and the document’s lead editor. “We need to make sure that we are making an impact on people’s lives.”
Unmanned vehicles and policy
The advances in the field of self-driving cars have far outpaced the predictions researchers made in the 2013 edition of the roadmap. But autonomous vehicles still have several obstacles to overcome, the researchers said. “It is important to recognize that human drivers have a performance of 100 million miles driven between fatal accidents,” Christensen said. “It is far from trivial to design autonomous systems that have a similar performance.”
Self-driving cars need to become more like industrial robots, which can run autonomously for three years without human intervention, he added. Also, the many methods and technologies used in the field of self-driving vehicles need to be resolved into a single standard. “Systems integration might not get a lot of press, but it is essential,” Christensen said.
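To put those two reliability figures on a common scale, here is a quick back-of-the-envelope calculation. This is my own illustration, not from the roadmap, and the 30 mph average driving speed is an assumption made purely for the conversion:

```python
# Back-of-the-envelope comparison (my own illustration, not from the roadmap):
# convert "100 million miles between fatal accidents" into the hours-of-autonomy
# language used for industrial robots.

MILES_BETWEEN_FATAL_ACCIDENTS = 100_000_000  # human-driver benchmark cited above
ASSUMED_AVG_SPEED_MPH = 30                   # assumption: mixed city/highway driving

hours_between_fatal_accidents = MILES_BETWEEN_FATAL_ACCIDENTS / ASSUMED_AVG_SPEED_MPH

# Industrial robots are said above to run ~3 years without human intervention.
HOURS_PER_YEAR = 24 * 365
industrial_robot_hours = 3 * HOURS_PER_YEAR

print(f"Human benchmark: ~{hours_between_fatal_accidents:,.0f} hours per fatal accident")
print(f"Industrial robot autonomy: ~{industrial_robot_hours:,} hours")
print(f"Ratio: ~{hours_between_fatal_accidents / industrial_robot_hours:.0f}x")
```

Under that assumption, matching human drivers means roughly 3.3 million operating hours per fatal failure, over a hundred times longer than the three-year autonomous run quoted for industrial robots, which is one way to see why Christensen calls the design problem far from trivial.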
Finally, local, state and federal agencies need to formulate policies and regulations that ensure these cars can share the road safely with vehicles driven by people. Regulations and policies also need to be put in place for unmanned aerial vehicles, better known as drones or UAVs. When this is done, UAVs could revolutionize the way we ship goods by air, monitor the environment—and much more. They could help first responders during natural disasters and terrorist attacks.
Researchers also need to get better at controlling swarms of UAVs and robots. “Currently, it takes a small group of people to run complex UAVs. This ratio needs to be inverted so that one person can control a small group of UAVs and other autonomous robots. Human-robot interactions should resemble the relationship between an orchestra conductor and musicians,” Christensen said. “Individual players need to be smart enough to take cues from the conductor and play on their own.”
Health care and home companion robots
A major wave of companion robots is about to enter the market, as the population of developed countries ages. For example, 50 percent of the Japanese population is over 50 years old. “We need to help the elderly stay in their homes,” Christensen said. “And robots can help us get there.”
To reach this goal, robots will need to have a better understanding of their surroundings and become more reliable. Existing systems are equipped with basic navigation methods. But long-term autonomy with little or no human intervention needs to be the goal. In addition, robotic home companions will need to be able to perform a wider range of tasks.
It is also essential that robots be easy enough to control so that they can be used by everyone. That means that home care robots, for example, need user interfaces that are no more complicated than a TV remote.
“This needs to be a moon shot for robotics research,” Christensen said.
In recent years, the need to customize products such as cars has increased dramatically. For example, a high-end vehicle can feature millions of different options, from the color of its seats to the configuration of its electronics. As a result, manufacturers have turned to increasingly sophisticated technology to drive assembly lines. This in turn has brought many factories back to the United States. In the past six years, the U.S. manufacturing sector has added 600,000 jobs. “Tremendous growth in robotics doesn’t have to mean job losses,” Christensen said.
But this expansion of robotic systems in industry must overcome two major obstacles, the roadmap states. Researchers need to develop user interfaces that will allow workers to operate robotic systems with little or no training. In other words, user interfaces need to become more like video games, Christensen said.
Also, robots’ manipulation skills need to improve dramatically, to match at least the dexterity of a young child. Right now, the most advanced robots have the grasping abilities of a one-year-old, Christensen said.
An Industrial Internet and the Internet of Things
For all applications, the core challenge is flexible integration of robotic systems with human operators and collaborators. Researchers envision an environment where physical systems are linked wirelessly via smart sensors and smart chips, within an industrial Internet of Things. This will make it easier for robots to navigate their environment and work with people. At the same time it is important to design these systems to be secure so that they cannot be hijacked or used in cyber attacks.
Amazon is at the forefront of this movement and owns 40 percent of the application programming interfaces, or APIs, related to IoT, which are open source, Christensen said. “This is going to create a whole new economy,” he said.
Robotic systems will dramatically change everyday life both in the home and at work in coming years. As a result, the public and the workforce need to be trained to interact with these systems. Training needs to happen at all levels, from kindergarten to 12th grade and in trade schools before college. But most education efforts need to be focused on kindergarten through 12th grade. Too many young people are dropping out of high school and will be left behind by this new economy based on robotics and the Internet of Things, Christensen said.
“We need to empower people to use robots,” he said. “We need to realize that most of the interfaces we design today for robotic systems aren’t easy to use.”
A shared robotics infrastructure
Researchers also are making a call to build a common, shared research infrastructure for robotics in the United States. The research network would expand existing sites, with a focus on testing autonomous driving, medical and health care robotics, micro- and nanorobotics, agriculture robotics, UAVs and underwater robotics. Each site would need about $3 million to be revamped into a shared facility.
In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman’s opening sentence is – “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard, “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course history is often a good predictor of what might work in the future and when, but according to Goldman time and time again predictions have failed miserably in the entertainment business.
It is exactly the same with technology, and Artificial Intelligence (AI), probably more than any other technology, has fared the worst when it comes to predictions of when it will be available as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks: “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts: “it’s many decades away for full AI.”
Researchers are, however, starting to make considerable advances in soft AI, although outside of fewer than 30 corporations there is very little tangible evidence that this soft AI, or Deep Learning, is currently being used productively in the workplace.
Some of the companies currently selling and/or using soft AI or Deep Learning to enhance their services include: IBM (Watson), Google (Search and DeepMind), Microsoft (Azure and Cortana), Baidu (Search, led by Andrew Ng), Palantir Technologies, perhaps Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon (predictive analytics and other services), the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia and Mobileye, and to some extent the AI-light powered collaborative robots from Rethink Robotics.
There are numerous examples of other companies developing AI and Deep Learning products, but fewer than a hundred early-adopter companies worldwide. Essentially, soft AI and Deep Learning solutions, such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies, and many more, are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.
On the other hand, Machine Learning (ML), a subfield of AI which some call light AI, is starting to make inroads into organizations worldwide. There are even claims that: “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”
According to Intel, though: “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015).” It is highly probable that Google, Facebook, Salesforce, Microsoft and Amazon alone accounted for a large percentage of that 10 percent.
Some ML technologies are already in everyday use. Location-awareness systems such as Apple’s iBeacon connect information from a user’s Apple profile to in-store systems and advertising boards, allowing for a ‘personalized’ shopping experience and tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind’s Machine Learning have both shown how their systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption and empower humans with new analytical capabilities.
The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently ML is beneficial for pattern and speech recognition and predictive analytics. It is therefore very beneficial in search, data analytics and statistics – when there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically by memory) like recognizing spoken words or faces in images.
Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain, which is one goal of AI.
Crossing the chasm – not without lots of data
If driverless vehicles can move around with fewer and fewer problems, it is not because AI has finally arrived, nor because we have machines capable of human intelligence. It is because we have machines that are very useful for dealing with big data and able to make decisions under uncertainty in the perception and interpretation of their environment, but we are not quite there yet! Today we have systems targeted at narrow tasks and domains, not the promised ‘general purpose’ AI, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.
Essentially, there is nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.
Nevertheless companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and he is: “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”
The next wave – understanding information
Organizations that have lots of data know that information is always limited, incomplete and possibly noisy. ML algorithms are capable of searching the data and building a knowledge base to provide useful information; for example, ML algorithms can separate spam emails from genuine emails. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
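As a concrete illustration of learning from data, here is a minimal sketch of the spam-filter example: a toy naive Bayes classifier in pure Python. This is my own illustrative code, not the implementation of any product mentioned here, and the tiny training set is fabricated for demonstration:

```python
# A minimal sketch (my own toy example, not any vendor's code) of how an ML
# algorithm can learn to separate spam from genuine email: a naive Bayes
# classifier over word counts, with add-one (Laplace) smoothing.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns word counts, priors, vocabulary."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    scores = {}
    for label in counts:
        # log prior + sum of smoothed log likelihoods for each word
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

docs = [
    ("win money now", "spam"),
    ("cheap pills win prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
model = train(docs)
print(classify("win a cash prize now", *model))       # → spam
print(classify("agenda for the team meeting", *model))  # → ham
```

Note how the point about representation shows up even here: the classifier only sees a bag of word counts, so how the email is represented (which features are extracted) matters as much as the learning rule itself.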
Machine Learning algorithms often work on the principle most widely known as Occam’s razor, which states that among competing hypotheses that explain known observations equally well, one should choose the “simplest” one. In my opinion, this is why we should use machines only to augment human labor and not to replace it.
Machine Learning and Big Data will greatly complement human ingenuity: a human-machine combination of statistical analysis, critical thinking, inference, persuasion and quantitative reasoning all wrapped up in one.
“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)
The key questions businesses and policymakers need to be concerned with as we enter the new era of Machine Learning and Big Data are:
1) Who owns the data?
2) How is it used?
3) How is it processed and stored?
Update 16th August 2016
There is a very insightful Quora answer by François Chollet:
“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”
Photo credit, this was a screen grab of a conference presentation, now I do not remember the presenter or conference but if I find it I will update the credit!
Two of the current leading researchers in labor economics studying the impact of machines and automation on jobs have released a new National Bureau of Economic Research (NBER) working paper, The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.
The authors, Daron Acemoglu and Pascual Restrepo, are far from a robotics equivalent of Statler and Waldorf, the Muppets who heckle from the balcony, unless you count their heckling of the many who have overstated, without factual support, the argument that robots will take all the jobs:
Similar claims have been made, but have not always come true, about previous waves of new technologies… Contrary to the increasingly widespread concerns, our model raises the possibility that rapid automation need not signal the demise of labor, but might simply be a prelude to a phase of new technologies favoring labor.
In The Race Between Machine and Man, the researchers set out to build a conceptual framework which shows which tasks previously performed by labor are automated, while at the same time more ‘complex versions of existing tasks’ and new jobs or positions, in which labor has a comparative advantage, are created.
The authors make several key observations: as ‘low-skilled workers’ are automated out of jobs, the creation of new complex tasks increases wages, employment and the overall share of labor. As jobs are eroded, new jobs or positions are created which require higher skills in the short term:
“Automation always reduces the share of labor in national income and employment, and may even reduce wages. Conversely, the creation of new complex tasks always increases wages, employment and the share of labor.”
They show, through their analysis, that for each decade since 1980, employment growth has been faster in occupations with greater skill requirements:
During the last 30 years, new tasks and new job titles account for a large fraction of U.S. employment growth.
In 2000, about 70% of the workers employed as computer software developers (an occupation employing one million people in the US at the time) held new job titles. Similarly, in 1990 a radiology technician and in 1980 a management analyst were new job titles.
Looking at the potential mismatch between new technologies and the skills needed the authors crucially show that these new highly skilled jobs reflect a significant number of the total employment growth over the period measured as shown in Figure 1:
From 1980 to 2007, total employment in the U.S. grew by 17.5%. About half (8.84%) of this growth is explained by the additional employment growth in occupations with new job titles.
Unfortunately, we have known for some time that labor markets are “Pareto efficient”; that is, no one could be made better off without making someone else worse off. Thus Acemoglu and Restrepo point to research showing that when wages are high for low-skill workers, this encourages automation. This automation then leads to promotion or new jobs and higher wages for those with ‘high skills.’
Because new tasks are more complex, the creation may favor high-skill workers. The natural assumption that high-skill workers have a comparative advantage in new complex tasks receives support from the data.
The data shows that those classified as high skilled tend to have more years of schooling.
For instance, the left panel of Figure 7 shows that in each decade since 1980, occupations with more new job titles had higher skill requirements in terms of the average years of schooling among employees at the start of each decade (relative to the rest of the economy).
However, it is not all bad news for low-skilled workers: the right panel of the same figure also shows a pattern of “mean reversion,” whereby average years of schooling in these occupations decline in each subsequent decade, most likely reflecting the fact that new job titles become more open to lower-skilled workers over time.
Our estimates indicate that, although occupations with more new job titles tend to hire more skilled workers initially, this pattern slowly reverts over time. Figure 7 shows that, at the time of their introduction, occupations with 10 percentage points more new job titles hire workers with 0.35 more years of schooling. But our estimates in Column 6 of Table B2 show that this initial difference in the skill requirements of workers slowly vanishes over time. 30 years after their introduction, occupations with 10 percentage points more new job titles hire workers with 0.0411 fewer years of education than the workers hired initially.
Essentially low-skill workers gain relative to capital in the medium run from the creation of new tasks.
Overall the study shows what many have said before, there is a skills gap when new technologies are introduced and those with the wherewithal to invest in learning new skills, either through extra education, on the job training, or self-learning are the ones who will be in high demand as new technologies are implemented.
Frank Levy, an economist and professor at MIT and Harvard who works on technology’s impact on jobs and living standards, has written to allay the sensationalized fears stemming from the overhyped study by Frey and Osborne. Levy indicates:
- The General Proposition – Computers will be subsuming an increasing share of current occupations – is unassailable.
- The Paper (Frey and Osborne study) is a set of guesses with lots of padding to increase the appearance of “scientific precision.”
- The authors’ understanding of computer technology appears to be average for economists (= poor for computer scientists). By my personal guess, they are overestimating what current technology can do.
Researchers at the OECD analyzed the Frey and Osborne study and conducted their own research on tasks and jobs and concluded that: “automation was unlikely to destroy large numbers of jobs.”
I have also been quite critical of the Frey and Osborne study, based on my understanding of technological advances, which they claim to be far more advanced than they are:
We argue that it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition.
With the exception of three bottlenecks, namely “perception and manipulation,” “creative intelligence,” and “social intelligence.”
Frey and Osborne divided the tasks involved in jobs along two dimensions: cognitive vs. manual and non-routine vs. routine. They then identified three aspects (bottlenecks) of a job making it less likely that a computer would be able to replicate the tasks of that job: First, “perception and manipulation” in unpredictable tasks such as handling emergencies, performing medical treatment, and the like. Second, “creative intelligence” such as cooking, drawing, or any other task involving creative values relying on novel combinations of inspiration; Third, “social intelligence”, or the real-time recognition of human emotion.
Race with the machines
Now a new research paper, released in July 2016 by researchers at the Centre for European Economic Research, indicates that technology has in fact had the opposite impact and is a net creator, not destroyer, of jobs (at least in 27 European countries, and I suspect the same is true for other regions).
The paper, Racing With or Against the Machine? Evidence from Europe, by Terry Gregory, Anna Salomons, and Ulrich Zierahn (Gregory and Zierahn were also two of the OECD paper's authors), looked at the impact of routine-replacing technology on jobs and concluded:
Overall, we find that the net effect of routine-replacing technological change (RRTC) on labor demand has been positive. In particular, our baseline estimates indicate that RRTC has increased labor demand by up to 11.6 million jobs across Europe – a non-negligible effect when compared to a total employment growth of 23 million jobs across these countries over the period considered. Importantly, this does not result from the absence of significant replacement of labor by capital. To the contrary, by performing a decomposition rooted in our theoretical model, we show that RRTC has in fact decreased labor demand by 9.6 million jobs as capital replaces labor in production. However, this has been overcompensated by product demand and spillover effects which have together increased labor demand by some 21 million jobs. As such, fears of technological change destroying jobs may be overstated: at least for European countries over the period considered, we can conclude that labor has been racing with rather than against the machine in spite of these substitution effects.
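The decomposition quoted above is simple arithmetic, and it is worth checking that the quoted figures are internally consistent. A minimal sketch, using only the numbers from the paper's own summary (the small gap to the "up to 11.6 million" headline reflects the rounding in "some 21 million"):

```python
# Back-of-the-envelope check of the RRTC decomposition quoted above
# (all figures in millions of jobs, from Gregory, Salomons & Zierahn).
substitution_effect = -9.6    # labor replaced by capital in production
demand_and_spillover = 21.0   # product demand + spillover effects ("some 21 million")

net_effect = substitution_effect + demand_and_spillover
print(net_effect)  # 11.4, consistent with the "up to 11.6 million" headline
```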
My research into companies using robots has also shown, through factual evidence, that those companies have created significantly more jobs than have been lost to technological change. Similarly, a detailed analysis of the impact of robotic systems on employment in the EU, prepared by Fraunhofer for the European Commission's Directorate-General for Communications Networks, Content & Technology, found that:
European manufacturing companies do not generally substitute human workforce capital by capital investments in robot technology. On the contrary, it seems that the robots’ positive effects on productivity and total sales are a leverage to stimulate employment growth.
So if robots are not job killers what is the real problem?
We need to fill the skills gap
I have argued before that we have a skills problem. Jobs all over the world are not being filled because of lack of skilled personnel to fill them.
New and emerging technologies both excite and worry. Robotics and Artificial Intelligence (AI) are certainly a minefield for both exuberance and fear.
By definition, there is a knowledge and skills gap during the emerging stages of any new technology, and robotics and AI are no exception: researchers and engineers are still learning about these technologies and their applications. In the meantime, hopes, fears and hype naturally and irresistibly fill this vacuum of information.
Depending on whom you ask, robots and AI will either help solve the world's problems or, as a devil of our own making, scorch the earth and fulfill a prophecy of Armageddon.
On the other hand, especially with respect to AI, what these technologies will most likely do, if and only if adopted by major corporations and governments, is foster technological and institutional betterment at a frenetic pace: improving health care, tackling climate problems, helping those with sight problems, and getting much-needed aid distributed more equitably.
We need education and training fitted to a different labour market, with more focus on creativity, flexibility and social skills. We need more moonshots from governments and industry, as so well described by Mariana Mazzucato in her book The Entrepreneurial State: Debunking Public vs. Private Sector Myths.
Machines are there to augment human intelligence and ingenuity and to improve our environment and workplace. We need to stop fearing the machines, learn how to better integrate them into our processes, set aside the fears and improve productivity. We are not going to stop technological progress; if we embrace it, we are better prepared to gain from it.
Goldman Sachs (“GS”) has released a series of research reports in 2016 centered on The Factory of the Future.
The series, which they call 'Profiles in Innovation', examines six technologies GS believes are driving the transition, from "cobots" to 3D printing to virtual and augmented reality to the Internet of Things, and how these technologies could yield more than US$500 billion in cost savings.
As part of the GS team's investigations they hosted a Factory of the Future field trip for investors at the Automatica trade fair in Munich, Germany on June 25, 2016. They subsequently provided a synopsis of their key observations.
Here are the top takeaways from GS’s field trip to Automatica related to Robots.
Universal Robots (“UR”)
- Universal Robots' cobots have a payback of 6 months, with overall installation costs at <2x the cost of the robot vs. >3x for traditional robots. The cheapest UR cobot costs just €20k.
- Universal Robots believes its sales network, brand and open-source strategy will be important to lock in customers and outgrow the cobot market.
- Amidst its own impressive growth, Universal Robots is preparing for tougher competition.
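The 6-month payback figure above can be illustrated with simple arithmetic. A hedged sketch: the robot price and the <2x installed-cost multiple come from UR's figures quoted above, but the monthly saving is a hypothetical assumption chosen purely for illustration, not a number UR has published:

```python
# Sketch of how a ~6-month cobot payback might be computed.
# robot_price and the 2x installed-cost multiple are from UR's figures;
# monthly_saving_eur is a HYPOTHETICAL labor/throughput saving for illustration.
robot_price_eur = 20_000                 # cheapest UR cobot (per the text)
installed_cost = 2 * robot_price_eur     # "<2x cost of robots", taken at 2x
monthly_saving_eur = 6_700               # assumed saving per month (hypothetical)

payback_months = installed_cost / monthly_saving_eur
print(round(payback_months, 1))  # 6.0 months
```

At those assumed savings, even the full 2x installed cost is recovered in roughly half a year, which is why cobots are attractive to small manufacturers with modest capital budgets.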
Universal Robots, Teradyne's market-leading collaborative robots business, hosted a booth tour. Key takeaways were:
- With the cobot market growing >50% pa in recent years, Teradyne (owner of UR) is targeting $90 million to $100 million in revenues for Universal Robots for 2016. UR believes this fast growth is unlikely to hit capacity constraints, as its current Denmark-based manufacturing set-up can generate $500 million in revenues without the need for significant factory capital expenditure.
- The customer base for Universal Robots consists largely of SMEs in a wide range of end markets. As a result, its route to market and ease of use are key to achieving rapid organic growth. It uses distributors (which pick up servicing margin in return for broad dissemination) and a user-friendly set-up that eliminates the need for third-party engineers to program the robot.
- Universal Robots believes its technology is 2-3 years ahead of competitors' (15 other booths at the fair were using UR cobots); however, it is aware that competition is increasing significantly. Leveraging Teradyne's balance sheet, UR believes that acting quickly and using its open-source platform (meaning a wide range of components are easy to develop, described as an "App store" approach) is key to dominating this quickly evolving market.
- Cobot competition is picking up: Yaskawa has entered the race, and Fiat Chrysler's Comau is pioneering solutions to concerns about speed.
- Yaskawa demonstrated five of its new product launches, underpinning GS's growth expectations and mix improvement as it increases its appeal in general industry.
Yaskawa hosted a booth tour and interview with its EU operations management. The company exhibited several new products:
- 10 kg payload collaborative robots
- 7-axis robots with the newest spot-welding gun, and smaller, low-payload robots ideal for general industry.
- Motologix software, which bridges machine communication between controllers and PLCs (programmable logic controllers), based on PLC technology from VIPA (an acquired German company).
Goldman Sachs, who said they came away with a great deal of confidence in Yaskawa’s product mix, also offered the following key takeaways:
- Looking at collaborative robots specifically, GS believes the company has a strong position as one of the "Big Four" robotics companies. GS believes pricing is reasonable at €38,000 for 10 kg weight handling, with sensors implemented in all axes and an easy teaching system. Given that many start-ups were introducing cobots of, in the GS team's opinion, inferior quality and yet similar pricing (€20-40,000 per unit), GS felt Yaskawa is well positioned to capture the growth of the cobot market.
- Yaskawa sold 25,000 robots in 2015, from which GS estimates that Yaskawa has circa 10% market share (note these will be mainly premium robots), bringing Yaskawa's total installed base to 350k.
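The share estimate above implies a total market size, which can be checked with simple arithmetic using only the figures quoted from GS (integer math keeps the division exact):

```python
# Sanity check on GS's quoted market-share estimate (unit shipments, 2015).
yaskawa_units_2015 = 25_000    # "Yaskawa sold 25,000 robots in 2015"
market_share_pct = 10          # GS estimate: "circa 10%"

implied_market_units = yaskawa_units_2015 * 100 // market_share_pct
print(implied_market_units)  # 250000 industrial robots shipped industry-wide
```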
Other general observations by the Goldman Sachs team
- Despite the absence of a major global robotics player, the US (where robotics is growing at double-digit rates) is still at the forefront of automation, developing the embedded technologies required.
- Beware of the buzzwords, most notably AI and cloud robotics: the Association for Advancing Automation thinks it might take decades to get commercializable AI products.
- Machine vision is a >$2 billion market, despite a current downturn, according to the Association for Advancing Automation.
- Flexibility and efficiency are crucial in leading auto factories: BMW produces a car in 44 hours, with no two cars built on a given day likely to be identical.
- The average age of workers in BMW’s Welt factory is rising (43 vs. 40 a few years ago) as new technologies, such as exoskeletons, are increasing the longevity of employees.
Check out Goldman Sachs' briefings and video for additional information.