AI not yet but Machine Learning and Big Data are rapidly evolving

Solve problems

In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman opens with the sentence: “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard: “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course, history is often a good predictor of what might work in the future and when, but according to Goldman, predictions have failed miserably time and time again in the entertainment business.

It is exactly the same with technology, and Artificial Intelligence (AI) has probably fared worse than any other technology when it comes to predictions of when it will arrive as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks: “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts: “it’s many decades away for full AI.”

Researchers are, however, starting to make considerable advances in soft AI. Yet, with the exception of fewer than 30 corporations, there is very little tangible evidence that this soft AI, or Deep Learning, is currently being used productively in the workplace.

Some of the companies currently selling and/or using soft AI or Deep Learning to enhance their services include: IBM (Watson), Google (Search and DeepMind), Microsoft (Azure and Cortana), Baidu (Search, led by Andrew Ng), Palantir Technologies, perhaps Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon (for predictive analytics and other services), the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia, Mobileye and, to some extent, the AI-light powered collaborative robots from Rethink Robotics.

There are numerous examples of other companies developing AI and Deep Learning products, but fewer than a hundred early-adopter companies worldwide. Essentially, soft AI and Deep Learning solutions, such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies and many more, are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.

Machine Learning

On the other hand, Machine Learning (ML), a subfield of AI that some call light AI, is starting to make inroads into organizations worldwide. There are even claims that: “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”

Yet, according to Intel: “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015).” It is highly probable that Google, Facebook, Salesforce, Microsoft and Amazon alone would have taken up a large percentage of that 10 per cent.

Consider ML technologies such as location-awareness systems like Apple’s iBeacon software, which connects information from a user’s Apple profile to in-store systems and advertising boards, allowing for a ‘personalized’ shopping experience and the tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind’s Machine Learning have both shown how their systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption and empower humans with new analytical capabilities.

The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently ML is beneficial for pattern and speech recognition and predictive analytics. It is therefore very beneficial in search, data analytics and statistics – when there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically by memory) like recognizing spoken words or faces in images.
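To make the idea of ‘learning from experience’ concrete, here is a minimal sketch – not from the article, with invented data and parameters – of a perceptron, one of the oldest Machine Learning algorithms, inferring a simple rule purely from labeled examples:

```python
# Minimal perceptron: learns a linearly separable pattern from examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # 0 when correct; ±1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND: the machine infers the rule from four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Nothing here resembles a ‘thinking machine’ – the algorithm simply nudges its weights toward examples it gets wrong until the pattern holds.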

Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain – which is one goal of AI.

Crossing the chasm – not without lots of data

If driverless vehicles can move around with decreasing problems, it is not because AI has finally arrived. It is not that we have machines capable of human intelligence; it is that we have machines that are very useful for dealing with big data and able to make decisions under uncertainty about the perception and interpretation of their environment – but we are not quite there yet! Today we have systems targeted at narrow tasks and domains, not the promised ‘general purpose’ AI, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.

Essentially there’s nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.

Nevertheless, companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and he is: “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”

The next wave – understanding information

Organizations that have lots of data know that information is always limited, incomplete and possibly noisy. ML algorithms are capable of searching the data and building a knowledge base to provide useful information – for example, ML algorithms can separate spam emails from genuine emails. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
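As a toy illustration of the spam example – with invented messages, not any real filter – a naive Bayes classifier learns word frequencies from a handful of labeled emails and uses them to score new ones. Note that the ‘representation’ here is simply a bag of words, which is exactly why representation matters: the model only sees what the representation keeps.

```python
from collections import Counter
import math

# Toy training data (invented for illustration).
spam = ["win cash now", "free prize claim now", "cheap meds free"]
ham = ["meeting agenda attached", "lunch tomorrow?", "project status update"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.lower().split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.lower().split())

def classify(msg):
    s = log_likelihood(msg, spam_counts, sum(spam_counts.values()))
    h = log_likelihood(msg, ham_counts, sum(ham_counts.values()))
    return "spam" if s > h else "ham"

print(classify("claim your free cash prize"))       # → spam
print(classify("agenda for the project meeting"))   # → ham
```

Real spam filters train on millions of messages and far richer features, but the principle – learn statistics from labeled data, then score new inputs – is the same.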

Machine Learning algorithms often work on the principle most widely known as Occam’s razor: among competing hypotheses that explain known observations equally well, one should choose the “simplest” one. In my opinion this is why we should use machines only to augment human labor and not to replace it.
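A minimal sketch of Occam’s razor as model selection (invented data and an arbitrary penalty weight): a lookup table “explains” the training data perfectly, but once complexity is penalized, the two-parameter line wins – the simpler hypothesis is preferred.

```python
# Occam's razor as model selection: prefer the simpler of two hypotheses
# that explain the training data comparably well. Toy data: y ≈ 2x.
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]

# Hypothesis A: a 2-parameter line, fit by least squares.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
intercept = my - slope * mx

# Hypothesis B: memorize every training point (one parameter per example).
table = dict(train)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

line = lambda x: slope * x + intercept
lookup = lambda x: table.get(x, my)  # falls back to the mean on unseen x

# Both fit the training data well, but B uses twice as many parameters.
params = {"line": 2, "lookup": len(table)}
penalized = {name: mse(m, train) + 0.1 * params[name]
             for name, m in [("line", line), ("lookup", lookup)]}
print(min(penalized, key=penalized.get))  # → line
```

In practice this penalty shows up as regularization terms or criteria such as AIC/BIC, but the intuition is the same: extra complexity must pay for itself.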

Machine Learning and Big Data will greatly complement human ingenuity – a human-machine combination of statistical analysis, critical thinking, inference, persuasion and quantitative reasoning all wrapped up in one.

“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)

The key questions businesses and policy makers need to be concerned with as we enter the new era of Machine Learning and Big Data:

1) Who owns the data?

2) How is it used?

3) How is it processed and stored?

Update 16th August 2016

There is a very insightful Quora answer by François Chollet, a deep learning researcher at Google, in which he confirms what I have been saying above:

“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”

Photo credit: this was a screen grab of a conference presentation. I no longer remember the presenter or the conference, but if I find it I will update the credit!

When machines replace jobs, the net result is normally more new jobs

Two of the current leading researchers in labor economics studying the impact of machines and automation on jobs have released a new National Bureau of Economic Research (NBER) working paper, The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.

The authors, Daron Acemoglu and Pascual Restrepo, are far from the robot-supporting equivalent of Statler and Waldorf, the Muppets who heckle from the balcony – unless you count their heckling of the many who have overstated the argument that robots will take all the jobs, without factual support:

Similar claims have been made, but have not always come true, about previous waves of new technologies… Contrary to the increasingly widespread concerns, our model raises the possibility that rapid automation need not signal the demise of labor, but might simply be a prelude to a phase of new technologies favoring labor.

In The Race Between Machine and Man, the researchers set out to build a conceptual framework showing how tasks previously performed by labor are automated, while at the same time more ‘complex versions of existing tasks’ and new jobs or positions, in which labor has a comparative advantage, are created.

The authors make several key observations: as ‘low skilled workers’ are automated out of jobs, the creation of new complex tasks increases wages, employment and the overall share of labor. As old jobs are eroded, new jobs or positions are created which, in the short term, require higher skills:

“Automation always reduces the share of labor in national income and employment, and may even reduce wages. Conversely, the creation of new complex tasks always increases wages, employment and the share of labor.”

They show, through their analysis, that for each decade since 1980, employment growth has been faster in occupations with greater skill requirements:

During the last 30 years, new tasks and new job titles account for a large fraction of U.S. employment growth.

In 2000, about 70% of the workers employed as computer software developers (an occupation employing one million people in the US at the time) held new job titles. Similarly, in 1990 a radiology technician and in 1980 a management analyst were new job titles.

Looking at the potential mismatch between new technologies and the skills needed, the authors crucially show that these new highly skilled jobs account for a significant share of total employment growth over the period measured, as shown in Figure 1:

From 1980 to 2007, total employment in the U.S. grew by 17.5%. About half (8.84%) of this growth is explained by the additional employment growth in occupations with new job titles.

Figure 1
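A quick check of the quoted figures – using only the two numbers in the excerpt above – confirms the authors’ “about half” claim:

```python
# Numbers quoted from the NBER working paper.
total_growth = 17.5    # % growth in U.S. employment, 1980–2007
new_title_part = 8.84  # percentage points attributed to occupations with new job titles

share = new_title_part / total_growth
print(f"{share:.1%}")  # → 50.5%
```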

We have known for some time that labor markets are “Pareto efficient”; that is, no one can be made better off without making someone worse off. Thus Acemoglu and Restrepo point to research showing that when wages are high for low-skill workers, this encourages automation. Automation then leads to promotion or new jobs and higher wages for those with ‘high skills.’

Because new tasks are more complex, the creation may favor high-skill workers. The natural assumption that high-skill workers have a comparative advantage in new complex tasks receives support from the data.

The data shows that those classified as high skilled tend to have more years of schooling.

For instance, the left panel of Figure 7 shows that in each decade since 1980, occupations with more new job titles had higher skill requirements in terms of the average years of schooling among employees at the start of each decade (relative to the rest of the economy).

Figure 7

However, it is not all bad news for low-skilled workers: the right panel of the same figure also shows a pattern of “mean reversion” whereby average years of schooling in these occupations decline in each subsequent decade, most likely reflecting the fact that new job titles became more open to lower-skilled workers over time.

Our estimates indicate that, although occupations with more new job titles tend to hire more skilled workers initially, this pattern slowly reverts over time. Figure 7 shows that, at the time of their introduction, occupations with 10 percentage points more new job titles hire workers with 0.35 more years of schooling. But our estimates in Column 6 of Table B2 show that this initial difference in the skill requirements of workers slowly vanishes over time. 30 years after their introduction, occupations with 10 percentage points more new job titles hire workers with 0.0411 fewer years of education than the workers hired initially.

Essentially, low-skill workers gain relative to capital in the medium run from the creation of new tasks.

Overall, the study shows what many have said before: there is a skills gap when new technologies are introduced, and those with the wherewithal to invest in learning new skills – whether through extra education, on-the-job training or self-learning – are the ones who will be in high demand as new technologies are implemented.