
Category Archives: Artificial Intelligence

Self-Healing Graphene Holds Promise for Artificial Skin in Future Robots

With the first ever documented observation of the self-healing phenomenon of graphene, researchers hint at future applications for its use in artificial skin.

Graphene is, in simple terms, a single-atom-thick sheet of pure carbon and currently the world’s strongest material. It is about one million times thinner than paper, so thin that it is effectively two-dimensional. Notwithstanding its hefty price, graphene has quickly become one of the most promising nanomaterials due to its unique properties and versatile prospective applications.

The paper published in Open Physics describes an extraordinary, previously undocumented self-healing property of graphene that could lead to the development of flexible sensors that mimic the self-healing properties of human skin.

The largest organ in the human body, skin is known for its fascinating self-healing ability – but until now, emulating this mechanism has proved too much of a challenge because man-made materials lack this aptitude. Subjected to repeated stretching, bending and incidental scratches, artificial skin used in robots is extremely susceptible to ruptures and fissures. The study offers a novel solution in which a sub-nano sensor uses graphene to sense a crack as soon as it starts to nucleate, and, surprisingly, even after the crack has spread a certain distance. According to the authors, this technology could quickly become viable for use in the next generation of electronics.

According to Dr. Swati Ghosh Acharyya, one of the researchers:

We wanted to observe the self-healing behavior of both pristine and defected single layer graphene and its application in sub-nano sensors for crack spotting by using molecular dynamic simulation. We were able to document the self-healing of cracks in graphene without the presence of any external stimulus and at room temperature.

The results revealed that self-healing occurred by spontaneous recombination of the dangling bonds whenever the crack was within the limit of the critical crack opening displacement.

The researchers subjected single-layer graphene containing various defects, such as pre-existing holes and differently oriented pre-existing cracks, to uniaxial tensile loading until fracture. Interestingly enough, once the load was relaxed, the graphene started to heal, and the self-healing continued irrespective of the nature of the pre-existing defects in the sheet. No matter the length of the crack, the authors say, they all healed, provided the critical crack opening distance lay within 0.3–0.5 nm, for both the pristine sheet and the sheet with pre-existing defects.
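As a rough, purely illustrative sketch (not the authors' molecular dynamics code), the Python snippet below applies the healing criterion reported in the paper: once the load is relaxed, a crack is flagged as self-healing when its opening displacement lies within the critical value, which the study places in the 0.3–0.5 nm range. The crack openings used here are hypothetical.

```python
# Toy illustration of the reported healing criterion (hypothetical data).
# A crack is expected to heal by spontaneous recombination of dangling bonds
# once the load is relaxed, provided its opening displacement does not exceed
# the critical crack opening displacement (reported as roughly 0.3-0.5 nm).

CRITICAL_COD_RANGE_NM = (0.3, 0.5)  # critical range reported in the paper

def will_self_heal(opening_nm: float, critical_nm: float = 0.35) -> bool:
    """Return True if a crack with this opening displacement (nm) is expected to heal."""
    if not (CRITICAL_COD_RANGE_NM[0] <= critical_nm <= CRITICAL_COD_RANGE_NM[1]):
        raise ValueError("critical displacement assumed to lie in the reported range")
    return opening_nm <= critical_nm

# Hypothetical crack openings measured after relaxing the tensile load:
for opening_nm in (0.12, 0.28, 0.47, 0.90):
    print(f"opening {opening_nm:.2f} nm -> heals: {will_self_heal(opening_nm)}")
```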

Simulating self-healing in artificial skin would open the way to a variety of everyday applications, ranging from sensors through mobile devices to ultracapacitors. In the case of the latter, graphene-based devices would benefit from graphene’s large surface area, increasing electrical power by storing electrons on the graphene sheets. Such supercapacitors would reportedly have as much electrical storage capacity as lithium-ion batteries but could be recharged in minutes instead of hours.

The original article is fully open access and available on De Gruyter Online.


AI not yet, but Machine Learning and Big Data are rapidly evolving

Solve problems

In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman opens with the sentence “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard: “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course, history is often a good predictor of what might work in the future and when, but according to Goldman, predictions have failed miserably in the entertainment business time and time again.

It is exactly the same with technology. Artificial Intelligence (AI) has probably fared worse than any other technology when it comes to predictions of when it will be available as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks that “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts that “it’s many decades away for full AI.”

Researchers are, however, starting to make considerable advances in soft AI, although outside of fewer than 30 corporations there is very little tangible evidence that this soft AI or Deep Learning is currently being used productively in the workplace.

Some of the companies currently selling and/or using soft AI or Deep Learning to enhance their services include IBM (Watson), Google (Search and DeepMind), Microsoft (Azure and Cortana), Baidu (Search, led by Andrew Ng), Palantir Technologies, perhaps Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon (for predictive analytics and other services), the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia, Mobileye, and to some extent the AI-light powered collaborative robots from Rethink Robotics.

There are numerous examples of other companies developing AI and Deep Learning products, but there are fewer than a hundred early-adopter companies worldwide. Essentially, soft AI and Deep Learning solutions such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies and many more are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.

Machine Learning

On the other hand, Machine Learning (ML), a subfield of AI which some call light AI, is starting to make inroads into organizations worldwide. There are even claims that “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”

According to Intel, though, “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015).” It is highly probable that Google, Facebook, Salesforce, Microsoft and Amazon alone accounted for a large percentage of that 10 percent.

ML technologies are already in commercial use. Location-awareness systems such as Apple’s iBeacon software connect information from a user’s Apple profile to in-store systems and advertising boards, allowing a ‘personalized’ shopping experience and tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind’s Machine Learning have both shown how their systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption and empower humans with new analytical capabilities.

The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently ML works well for pattern and speech recognition and for predictive analytics. It is therefore very useful in search, data analytics and statistics – when there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically by memory), like recognizing spoken words or faces in images.

Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain – which is one goal of AI.

Crossing the chasm – not without lots of data

If driverless vehicles can move around with fewer and fewer problems, it is not because AI has finally arrived or because we have machines capable of human intelligence. It is because we have machines that are very useful at dealing with big data and able to make decisions under uncertainty about the perception and interpretation of their environment – and even there we are not quite done. Today we have systems targeted at narrow tasks and domains, not the promised ‘general purpose’ AI, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.

Essentially there’s nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.

Nevertheless, companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and that he is “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”

The next wave – understanding information

Organizations that have lots of data know that information is always limited, incomplete and possibly noisy. ML algorithms are capable of searching the data and building a knowledge base to provide useful information – for example, ML algorithms can separate spam emails from genuine emails. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
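As a minimal sketch of that point (my own illustration, not taken from any of the studies discussed here), the Python example below trains a tiny Naive Bayes spam filter on a handful of made-up emails. The classifier only ever sees the bag-of-words representation produced by CountVectorizer, which is why the choice of representation matters so much; scikit-learn is assumed to be installed.

```python
# A minimal, illustrative spam filter (hypothetical data).
# The point: the classifier only sees the representation we choose (word counts),
# so the quality of that representation largely determines performance.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",       # spam
    "Limited offer: cheap loans, act today",  # spam
    "Meeting moved to 3pm, see agenda",       # genuine
    "Here are the quarterly sales figures",   # genuine
]
labels = ["spam", "spam", "ham", "ham"]

# Representation (bag of words) plus learning algorithm in one pipeline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer, click today",
                     "agenda for the sales meeting"]))
```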

Machine Learning algorithms often work on the principle most widely known as Occam’s razor. This principle states that among competing hypotheses that explain the known observations equally well, one should choose the ‘simplest’ one. In my opinion, this is why we should use machines only to augment human labor and not to replace it.
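To make the principle concrete, here is a small sketch of my own (synthetic data, scikit-learn and numpy assumed): a simple and a needlessly complex model both fit the training data well, but cross-validation favours the simpler one, which is Occam’s razor applied to model selection.

```python
# Occam's razor in model selection: compare a simple and a complex polynomial fit
# on the same noisy (synthetic) data and let cross-validation pick between them.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = 2.0 * X.ravel() + 0.1 * rng.normal(size=40)   # the underlying truth is linear

for degree in (1, 12):                            # simple vs. needlessly complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"degree {degree:2d}: mean cross-validated R^2 = {score:.3f}")
```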

Machine Learning and Big Data will greatly complement human ingenuity – a human-machine combination of statistical analysis, critical thinking, inference, persuasion and quantitative reasoning all wrapped up in one.

“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)

The key questions businesses and policy makers need to be concerned with as we enter the new era of Machine Learning and Big Data are:

1) Who owns the data?

2) How is it used?

3) How is it processed and stored?

Update 16th August 2016

There is a very insightful Quora answer by François Chollet, a deep learning researcher at Google, where he confirms what I have been saying above:

“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”


Photo credit: this was a screen grab of a conference presentation. I no longer remember the presenter or conference, but if I find it I will update the credit.


When machines replace jobs, the net result is normally more new jobs

Two of the current leading researchers in labor economics studying the impact of machines and automation on jobs have released a new National Bureau of Economic Research (NBER) working paper, The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.

The authors, Daron Acemoglu and Pascual Restrepo, are far from a robot-supporting equivalent of Statler and Waldorf, the Muppets who heckle from the balcony – unless you count as heckling their point that so many have overstated the argument that robots will take all the jobs, without factual support:

Similar claims have been made, but have not always come true, about previous waves of new technologies… Contrary to the increasingly widespread concerns, our model raises the possibility that rapid automation need not signal the demise of labor, but might simply be a prelude to a phase of new technologies favoring labor.

In The Race Between Machine and Man, the researchers set out to build a conceptual framework showing which tasks previously performed by labor are automated, while at the same time more ‘complex versions of existing tasks’ and new jobs or positions in which labor has a comparative advantage are created.

The authors make several key observations showing that as ‘low-skilled workers’ are automated out of jobs, the creation of new complex tasks increases wages, employment and the overall share of labor. As jobs are eroded, new jobs or positions are created which require higher skills, at least in the short term:

“Automation always reduces the share of labor in national income and employment, and may even reduce wages. Conversely, the creation of new complex tasks always increases wages, employment and the share of labor.”

They show, through their analysis, that for each decade since 1980, employment growth has been faster in occupations with greater skill requirements:

During the last 30 years, new tasks and new job titles account for a large fraction of U.S. employment growth.

In 2000, about 70% of the workers employed as computer software developers (an occupation employing one million people in the US at the time) held new job titles. Similarly, in 1990 a radiology technician and in 1980 a management analyst were new job titles.

Looking at the potential mismatch between new technologies and the skills needed, the authors crucially show that these new, highly skilled jobs account for a significant share of total employment growth over the period measured, as shown in Figure 1:

From 1980 to 2007, total employment in the U.S. grew by 17.5%. About half (8.84%) of this growth is explained by the additional employment growth in occupations with new job titles.

Figure 1

Unfortunately, we have known for some time that labor markets are “Pareto efficient”; that is, no one can be made better off without making someone else worse off. Thus Acemoglu and Restrepo point to research showing that when wages are high for low-skill workers, this encourages automation. This automation then leads to promotion, or to new jobs and higher wages, for those with ‘high skills.’

Because new tasks are more complex, the creation may favor high-skill workers. The natural assumption that high-skill workers have a comparative advantage in new complex tasks receives support from the data.

The data shows that those classified as high skilled tend to have more years of schooling.

For instance, the left panel of Figure 7 shows that in each decade since 1980, occupations with more new job titles had higher skill requirements in terms of the average years of schooling among employees at the start of each decade (relative to the rest of the economy).

Figure 7

However, it is not all bad news for low-skilled workers: the right panel of the same figure also shows a pattern of “mean reversion” whereby average years of schooling in these occupations declines in each subsequent decade, most likely reflecting the fact that new job titles become more open to lower-skilled workers over time.

Our estimates indicate that, although occupations with more new job titles tend to hire more skilled workers initially, this pattern slowly reverts over time. Figure 7 shows that, at the time of their introduction, occupations with 10 percentage points more new job titles hire workers with 0.35 more years of schooling. But our estimates in Column 6 of Table B2 show that this initial difference in the skill requirements of workers slowly vanishes over time. 30 years after their introduction, occupations with 10 percentage points more new job titles hire workers with 0.0411 fewer years of education than the workers hired initially.

Essentially low-skill workers gain relative to capital in the medium run from the creation of new tasks.

Overall, the study shows what many have said before: there is a skills gap when new technologies are introduced, and those with the wherewithal to invest in learning new skills, whether through extra education, on-the-job training or self-learning, are the ones who will be in high demand as new technologies are implemented.


New research: ‘fears of technological change destroying jobs may be overstated’


Frank Levy, an economist and professor at MIT and Harvard who works on technology’s impact on jobs and living standards, has written to assess the sensationalized fears stirred up by the overhyped study by Frey and Osborne. Levy indicates:

  • The General Proposition – Computers will be subsuming an increasing share of current occupations – is unassailable.
  • The Paper (Frey and Osborne study) is a set of guesses with lots of padding to increase the appearance of “scientific precision.”
  • The authors’ understanding of computer technology appears to be average for economists (= poor for computer scientists). By my personal guess, they are overestimating what current technology can do.

Researchers at the OECD analyzed the Frey and Osborne study and conducted their own research on tasks and jobs and concluded that: “automation was unlikely to destroy large numbers of jobs.”

I have also been quite critical of the Frey and Osborne study, based on my understanding of technological advances, which they claim to be far further along than they actually are:

We argue that it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition.

With the exception of three bottlenecks, namely:

“Perception and manipulation.”

“Creative intelligence.”

“Social intelligence.”

Frey and Osborne divided the tasks involved in jobs along two dimensions: cognitive vs. manual and non-routine vs. routine. They then identified three aspects (bottlenecks) of a job that make it less likely a computer would be able to replicate its tasks: first, “perception and manipulation” in unpredictable tasks such as handling emergencies, performing medical treatment, and the like; second, “creative intelligence”, such as cooking, drawing, or any other task involving creative values relying on novel combinations of inspiration; and third, “social intelligence”, or the real-time recognition of human emotion.

Race with the machines

Now a new research paper, released in July 2016 by researchers at the Centre for European Economic Research, indicates that technology has in fact had the opposite impact and is a net creator of jobs, not a destroyer (at least in 27 European countries – and I suspect the same is true for other regions).

The paper, Racing With or Against the Machine? Evidence from Europe by authors Terry Gregory, Anna Salomons, and Ulrich Zierahn (Gregory and Zierahn were also two of the OECD paper authors) looked at the impact of routine replacing technology on jobs and concluded:

Overall, we find that the net effect of routine-replacing technological change (RRTC ) on labor demand has been positive. In particular, our baseline estimates indicate that RRTC has increased labor demand by up to 11.6 million jobs across Europe – a non-negligible effect when compared to a total employment growth of 23 million jobs across these countries over the period considered. Importantly, this does not result from the absence of significant replacement of labor by capital. To the contrary, by performing a decomposition rooted in our theoretical model, we show that RRTC has in fact decreased labor demand by 9.6 million jobs as capital replaces labor in production. However, this has been overcompensated by product demand and spillover effects which have together increased labor demand by some 21 million jobs. As such, fears of technological change destroying jobs may be overstated: at least for European countries over the period considered, we can conclude that labor has been racing with rather than against the machine in spite of these substitution effects.

My research on companies using robots has also categorically shown, through factual evidence, that those companies have created significantly more jobs than have been lost due to technological change. Similarly, a detailed analysis prepared by Fraunhofer for the European Commission Directorate-General for Communications Networks, Content & Technology on the impact of robotic systems on employment in the EU found that:

European manufacturing companies do not generally substitute human workforce capital by capital investments in robot technology. On the contrary, it seems that the robots’ positive effects on productivity and total sales are a leverage to stimulate employment growth.

So if robots are not job killers what is the real problem?

We need to fill the skills gap

I have argued before that we have a skills problem. Jobs all over the world are not being filled because of a lack of skilled personnel to fill them.

New and emerging technologies both excite and worry. Robotics and Artificial Intelligence (AI) are certainly a minefield of both exuberance and fear.

By definition, there is a knowledge and skills gap during the emerging stages of any new technology, and Robotics and AI are no exception: researchers and engineers are still learning about these technologies and their applications. But, in the meantime, hope, fear and hype naturally and irresistibly fill this vacuum of information.

Depending on whom you ask, robots and AI are predicted to help solve the world’s problems; or, by building this devil, these technologies may scorch the earth and fulfill a prophecy of Armageddon.

On the other side, especially with respect to AI, what it will most likely do – if and only if it is adopted by major corporations and governments – is foster technological and institutional betterment at a frenetic pace: improved health care, progress on climate problems, help for those with sight problems, and much-needed aid distributed more equitably.

We need education and training fitted to a different labor market, with more focus on creativity, flexibility and social skills. We need more moonshots from governments and industry, as so well described by Mariana Mazzucato in her book The Entrepreneurial State: Debunking Public vs. Private Sector Myths.

Machines are there to augment human intelligence and ingenuity and to improve our environment and workplace. We need to stop fearing the machines and learn how to better integrate them into our processes, dispel the fears and improve productivity. We are not going to stop technological progress; if we embrace it, we are better prepared to gain from it.

White House Chairman of the Council of Economic Advisers – Why We Need More Artificial Intelligence

Society is caught between blind faith in technology and resistance to progress, between technological possibilities and fears about their negative impact.

Increasingly, Artificial Intelligence, the latest buzzword for everything software-related, is stirring up many of these fears.

In an interesting paper: Is This Time Different? The Opportunities and Challenges of Artificial Intelligence, Jason Furman, Chairman of President Obama’s Council of Economic Advisers sets out his belief that we need more artificial intelligence but must find a way to prevent the inequality it will inevitably cause. Despite the labor market challenges we may need to navigate, Furman’s bigger worry is that we will not invest enough in AI.

He is more pragmatic than many economists and researchers who have written ‘popular’ books on the subject but calls for more innovation if we are truly to reap the benefits AI and Robotics will bring:

We have had substantial innovation in robotics, AI, and other areas in the last decade. But we will need a much faster pace of innovation in these areas to really move the dial on productivity growth going forward. I do not share Robert Gordon’s (2016) confidently pessimistic predictions or Erik Brynjolfsson and Andrew McAfee’s (2014) confidently optimistic ones because past productivity growth has been so difficult to predict.

Technology, in other words, is not destiny but it has a price

My worry is not that this time could be different when it comes to AI, but that this time could be the same as what we have experienced over the past several decades. The traditional argument that we do not need to worry about the robots taking our jobs still leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages.

Replacing the Current Safety Net with a Universal Basic Income Could Be Counterproductive

Furman says that AI does not create a call for a Universal Basic Income and that the claims for implementing UBI and cancelling other social welfare programs have been greatly overstated:

AI does not call for a completely new paradigm for economic policy—for example, as advocated by proponents of replacing the existing social safety net with a universal basic income (UBI) —but instead reinforces many of the steps we should already be taking to make sure that growth is shared more broadly.

Replacing part or all of that system with a universal cash grant, which would go to all citizens regardless of income, would mean that relatively less of the system was targeted towards those at the bottom—increasing, not decreasing, income inequality.

Instead our goal should be first and foremost to foster the skills, training, job search assistance, and other labor market institutions to make sure people can get into jobs, which would much more directly address the employment issues raised by AI than would UBI.

Past Innovations Have Sometimes Increased Inequality—and the Indications Suggest AI Could Be More of the Same

Relying on the questionable study by Frey and Osborne, Furman says that work by the Council of Economic Advisers ranked occupations by wages and found that, according to the Frey and Osborne analysis, 83 percent of jobs paying less than $20 per hour would come under pressure from automation, compared with 31 percent of jobs paying between $20 and $40 per hour and 4 percent of jobs paying above $40 per hour (see Figure 1 below).

Figure 1: Jobs under pressure from automation, by hourly wage

AI has not had a large impact on employment, at least not yet

Furman says the issue is not that automation will render the vast majority of the population unemployable. Instead, it is that workers will either lack the skills or the ability to successfully match with the good, high paying jobs created by automation.

The concern is not that robots will take human jobs and render humans unemployable. The traditional economic arguments against that are borne out by centuries of experience. Instead, the concern is that the process of turnover, in which workers displaced by technology find new jobs as technology gives rise to new consumer demands and thus new jobs, could lead to sustained periods of time with a large fraction of people not working.

AI has the potential—just like other innovations we have seen in past decades—to contribute to further erosion in both the labor force participation rate and the employment rate. This does not mean that we will necessarily see a dramatically large share of jobs replaced by robots, but even continuing on the past trend of a nearly 0.2-percentage-point annual decline in the labor force participation rate for prime-age men would pose substantial problems for millions of people and for the economy as a whole.

Investment in AI

Noting that AI has not yet had a significant macroeconomic impact, Furman indicates that the private sector will be the main engine of progress on AI. He cites figures showing that in 2015 the private sector invested US$2.4 billion in AI, compared with the approximately US$200 million invested by the National Science Foundation (NSF).[1]

He says the government’s role should include policies that support research, foster the AI workforce, promote competition, safeguard consumer privacy, and enhance cybersecurity.

AI does not call for a completely new paradigm for economic policy

AI is one of many areas of innovation in the U.S. economy right now. At least to date, AI has not had a large impact on the aggregate performance of the macroeconomy or the labor market. But it will likely become more important in the years to come, bringing substantial opportunities— and our first impulse should be to embrace it fully.

He indicates that his biggest worry about AI is that we may not get all the breakthroughs we think we can, and that we need to do more to make sure we can continue to make groundbreaking discoveries that will raise productivity growth, improving the lives of people throughout the world.

However, it is also undeniable that like technological innovations in the past, AI will bring challenges in areas like inequality and employment. As I have tried to make clear throughout my remarks, I do not believe that exogenous technological developments solely determine the future of growth, inequality, or employment. Public policy—including public policies to help workers displaced by technology find new and better jobs and a safety net that is responsive to need and ensures opportunity —has a role to play in ensuring that we are able to fully reap the benefits of AI while also minimizing its potentially disruptive effects on the economy and society. And in the process, such policies could also contribute to increased productivity growth—including advances in AI itself.

What are those policies? Furman indicates we need to develop more “human learning and skills,” increase investment in research and development (including government investment), “expand and simplify the Research and Experimentation tax credit,” “increase the number of visas—which is currently capped by legislation—to allow more high-skilled workers to come into the country,” “consolidate existing funding initiatives,” help retrain workers in skills for which employers are looking, and pursue more focused initiatives such as the “DARPA Cyber Grand Challenge.”

The bottom line is that AI, managed well and with innovative government support, could offer significant benefits to humanity; but those benefits, including earning capacity, can only be achieved if governments and corporations help people up-skill.

[1] For private funding see https://www.cbinsights.com/blog/artificial-intelligence-funding-trends/#funding. For public funding see http://www.nsf.gov/about/budget/fy2017/pdf/18_fy2017.pdf. According to the NSF, in 2015 there was $194.58 million in funding for the NSF Directorate for Computer and Information Science and Engineering’s Division of Information and Intelligent Systems (IIS), much of which is invested in research on AI. These figures do not include investment by other agencies, including the Department of Defense.


Robots taking jobs — Maybe we’ve been thinking about this wrong

Around 1900, most inventions concerned physical reality: cars, airplanes, zeppelins, electric lights, vacuum cleaners, air conditioners, bras, zippers. In 2005, most inventions concern virtual entertainment — We have already shifted from a reality economy to a virtual economy, from physics to psychology. ~ Geoffrey Miller


Many commentators and researchers have indicated a supposedly imminent end to work, or at least the infamous claim that ‘47% of jobs will be displaced’ within 20 years or so due to the inexorable advance of machines. This is at best a distraction and at worst grossly exaggerated and overhyped, as one of the authors of that infamous paper has noted.

However, if we extend the timeframe and consider the question:

How much could the world of work plausibly change by the end of the 21st century?

Or:

Eighty-four years from now, will humans work to earn a living or will machines do all the labor?

Then I believe we have framed a different vision of the future, one where it may be more plausible to consider that humans will work 15 hours per week (if at all), as predicted by Keynes in 1930.[1]


[1] John Maynard Keynes, Economic Possibilities for our Grandchildren (1930) – “everybody will need to do some work if (s)he is to be contented – three-hour shifts or a fifteen-hour week may put off the problem for a great while. “


Robots and job fears: Destruction of large numbers of jobs unlikely, says new OECD Study

There is so much doom and gloom associated with robots and jobs that it is time to add some common sense to the misunderstandings created by so-called experts’ opinions. Thankfully, authors from the OECD may have added some clarity to the debate, finding that, on average across 21 OECD countries, ‘9% of jobs rather than 47%, as proposed by Frey and Osborne, face a high automatibility.’

Capitalism, the term for our global ‘free’ markets, is a uniquely future-oriented economic system in which people invest, make innovations, apply for patents, and in other ways bet on the future. Behind all of this we find the hallmark of humanity, which is our creative intelligence.

It is intelligence that drives these investments and innovations, and intelligence that forges within many of us an intense curiosity of what the future may hold.

It is also intelligence that forges in others an anxiety over what the future holds. For many the future is no longer a promise but a threat!

Pessimism is the easy way out.

This curiosity and anxiety have stirred the same debates in society for generations. On one side there is intense optimism for a future where machines can take over many of the dirty, dangerous, dull and repetitive jobs, opening up new and more ‘interesting and rewarding’ jobs for those who may be displaced.

And on the other side are those who are concerned that this time really is different: the machines we are building now, or which we will soon be capable of building, will be so advanced that there really will be no ‘new types’ of jobs for humans – and so, they claim, the majority of jobs for humans will be eliminated.

To those pessimists I often quote Lord Thomas Babington Macaulay, who in 1830 wrote[1] about the prophets of gloom:

On what principle is it, that when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?

In his 1995 book The End of Work, Jeremy Rifkin stated that ‘intelligent machines’ were being ‘hurried into’ work environments, thus ending work for people.[2]

Now, for the first time, human labor is being systematically eliminated from the production process… A new generation of sophisticated information and communication technologies is being hurried into a wide variety of work situations. Intelligent machines are replacing human beings in countless tasks, forcing millions of blue and white-collar workers into unemployment lines, or worse still, breadlines.

It is 21 years since Rifkin made that claim, yet somehow human ingenuity marches on and continues to create more jobs and new industries. Sometimes new technologies eliminate jobs overall, but they also create demand for new capabilities and new jobs.

Looking with both eyes open

Despite the vast improvements we have made as a society, I wonder why it is that we look with one eye open, seeing only the negative aspects of technological change, instead of opening both eyes and seeing the benefits too. Often, studies by ‘research scientists’ that receive significant media attention misrepresent the potential benefits and impacts of technology and create fears, sometimes presented as if they were a fait accompli, even if this is not the intention of the study authors.

A new study by Melanie Arntz, Terry Gregory and Ulrich Zierahn[3] for the OECD argues that studies on robots or computerization eradicating jobs, such as that by Frey and Osborne, lead to a severe overestimation of job automatibility, as occupations labelled as high-risk occupations often still contain a substantial share of tasks that are hard to automate.

9% of jobs could be automatable

The OECD authors provide far more realistic assessments than Frey and Osborne:

In contrast to other studies, we take into account the heterogeneity of workers’ tasks within occupations. Overall, we find that, on average across the 21 OECD countries, 9 % of jobs are automatable. The threat from technological advances thus seems much less pronounced.

Arntz et al. argue that the estimated share of “jobs at risk” must not be equated with actual or expected employment losses from technological advances, for three reasons.

  1. The utilisation of new technologies is a slow process, due to economic, legal and societal hurdles, so that technological substitution often does not take place as expected.
  2. Even if new technologies are introduced, workers can adjust to changing technological endowments by switching tasks, thus preventing technological unemployment.
  3. Technological change also generates additional jobs through demand for new technologies and through higher competitiveness.

Effectively, the authors take into account that it is not whole occupations but specific jobs that are exposed to automatibility, depending on the tasks performed in those particular jobs.

They also demonstrate the need to view technological change as substituting for or complementing certain tasks rather than whole occupations, which, as I have mentioned before on this blog, is a major flaw in the Frey and Osborne study.

The OECD study authors state:

We find that in the US only 9% of jobs rather than 47%, as proposed by Frey and Osborne face a high automatibility.

We further find heterogeneities across OECD countries: while the share of automatable jobs is 6 % in Korea, the corresponding share is 12 % in Austria. The differences across countries may reflect general differences in workplace organisation, differences in previous investments into automation technologies as well as differences in the education of workers across countries.

Table 1 Automatibility by OECD Countries



The main conclusion from the paper

Automation and digitalisation are unlikely to destroy large numbers of jobs. However, low qualified workers are likely to bear the brunt of the adjustment costs as the automatibility of their jobs is higher compared to highly qualified workers. Therefore, the likely challenge for the future lies in coping with rising inequality and ensuring sufficient (re-)training especially for low qualified workers.

Too many so-called research experts have created far too much fear and skewed public perception, which in turn can lead to bad policy recommendations. We need to be thoughtful in our vision, analytical in our implementation, and realistic in our expectations of technologies’ capabilities.

Herbert Spencer’s words in “From Freedom to Bondage” are as relevant today as when he wrote them in 1891:

The more things improve the louder become the exclamations about their badness.[4]


[1] Thomas Babington Macaulay, review of Southey’s Colloquies on Society, Edinburgh Review, 1830.

[2] Jeremy Rifkin, The End of Work (1995), Chapter 1.

[3] Arntz, M., T. Gregory and U. Zierahn (2016), “The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis”, OECD Social, Employment and Migration Working Papers, No. 189, OECD Publishing, Paris.

DOI: http://dx.doi.org/10.1787/5jlz9h56dvq7-en

[4] Herbert Spencer, The Man Versus the State, With Six Essays on Government, Society, and Freedom (Indianapolis: Liberty Press, 1891), p. 487.