Category Archives: Artificial Intelligence

Our Response to the UK Government request for written evidence on A.I.

This is an abridged version of the final response we submitted to the UK Government request for evidence on Artificial Intelligence. (The numbering is based on the questions we decided we could answer best).

 

1. a) What is the current state of artificial intelligence? There are currently no ‘true’ Artificial Intelligence (A.I.) systems. There are ad hoc ‘learning’ systems, which we will call narrow A.I. systems.

Defining A.I. The literature abounds with definitions of A.I. and human intelligence, although very little consensus has been reached to date. Our comprehensive survey of A.I. practitioners worldwide, Research Survey: Defining (machine) Intelligence (Lewis & Monett, 2017), which has collected over 400 responses, has identified considerable interest in a well-defined definition and goal of A.I. We hope that the results of our survey help to overcome a fundamental flaw: “That artificial intelligence lacks a stable, consensus definition or instantiation complicates efforts to develop an appropriate policy infrastructure” (Calo, 2017).

The goal of A.I., closely linked to its definition and highlighted in our survey, should make clear the ‘why’ of Artificial Intelligence; however, very few research papers articulate a robust goal with society-in-the-loop. We agree with Hutter (2005): “The goal of A.I. systems should be to be useful to humans.” Or as Norbert Wiener wrote in 1960, “We had better be quite sure that the purpose put into the machine is the purpose which we really desire” (Wiener, 1960).

Whilst there are breakthroughs in narrow A.I. systems that can ‘simulate’ and surpass certain ‘individual’ aspects of human intelligence (for example, specific elements of pattern recognition, faster search, calculation, data analysis, and other cognitive attributes), A.I. development is currently some way off from achieving the goal of fully replicating human intelligence. However, the narrow A.I. methods, which are more specifically fields of A.I. research, are making considerable progress as stand-alone techniques, namely Machine Learning (ML) and classes of ML algorithms such as Deep Learning (DL), Reinforcement Learning (RL), and Deep Reinforcement Learning (DRL).

Researchers acknowledge that the methodology applied in narrow A.I. systems can be unstable (Mnih et al., 2015). Nevertheless, these A.I. sub-domains are already starting to have considerable economic and social effect, as we outline below, and this impact will accelerate in the near future. Briefly:

  • Machine Learning: Whereas the vast majority of computer programs are hand-coded by humans, Machine Learning algorithms are capable of ‘self-learning’: they improve performance on a specific task against key performance metrics and enhance their output through experience.
  • Deep Learning: The key aspect of deep learning is that its features are not designed by human engineers. Instead, “they are learned from data using a general-purpose learning procedure” (LeCun, Bengio & Hinton, 2015). Deep Learning is defined by the same authors as “computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer.”
  • Reinforcement Learning: A class of algorithms in which an agent learns, by trial and error, how to act. The algorithms are reward- and goal-oriented: “Reinforcement learning is learning what to do – how to map situations to actions – so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them” (Sutton & Barto, 2012). See also below for Deep Reinforcement Learning.

Machine Learning: The most prevalent of these narrow A.I. sub-domains, in an operational context, is Machine Learning. ML algorithms can be supervised, unsupervised, or semi-supervised. The majority of current ML implementations use supervised learning. In supervised learning, the idea is that we (humans) teach the computer how to do something; in unsupervised learning the machine learns by itself (Samuel, 1959).
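To make the distinction concrete, the following is a minimal sketch in Python using the open-source scikit-learn library; the toy data, feature meanings, and labels are invented purely for illustration.

```python
# Supervised learning: humans supply labelled examples and the algorithm learns the mapping.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Invented toy data: [hours_of_exercise_per_week, daily_calorie_intake]
X = [[0, 3200], [1, 2900], [5, 2200], [7, 2000], [2, 3100], [6, 1900]]
y = ["at risk", "at risk", "healthy", "healthy", "at risk", "healthy"]  # labels provided by humans

model = DecisionTreeClassifier().fit(X, y)   # the 'teaching' phase
print(model.predict([[4, 2300]]))            # the learned rule is applied to a new case

# Unsupervised learning: no labels are given; the algorithm finds structure by itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)
```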

ML systems are being used to help make decisions, both large and small, in almost all aspects of our lives: simple tasks such as dispensing money from ATMs, recommending books or films, filtering email spam, and purchasing travel arrangements and insurance policies; more consequential matters such as credit scoring in loan approval decisions; and even life-altering decisions such as health diagnoses and court sentencing guidelines after a criminal conviction.

Systems utilizing ML information processing techniques are used for profiling individuals by law enforcement agencies, in military drones, and in other semi-autonomous surveillance applications. They capture information on our daily activities through our smartphones, from exercise and GPS data that tracks our location in real time, to emails, social media interests, and telephone calls. They are increasingly used in our cars and our homes. They are used to manage nuclear reactors, to manage demand across electricity grids, to improve energy efficiency, and generally to boost productivity in the business environment.

Deep Learning: Deep Learning is emerging as a primary machine learning approach for important, challenging problems such as image classification and speech recognition. Deep Learning methods have dramatically improved machine capabilities in speech recognition and have approached human-level performance on some object recognition benchmarks (He et al., 2016) and object detection tasks (Ba, Mnih, & Kavukcuoglu, 2015). These capabilities are also very useful for self-driving cars and in many other domains where big data is available, such as drug discovery and genomics (Nguyen et al., 2016).
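As a rough illustration of how accessible such pretrained deep networks have become, the sketch below classifies a single image with a residual network of the kind described by He et al. (2016). It assumes TensorFlow/Keras is installed and that a local file named example.jpg exists; both are assumptions made only for this example.

```python
# Sketch: image classification with a pretrained deep residual network (ResNet50).
# Assumes TensorFlow/Keras is installed and that 'example.jpg' exists locally.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")   # downloads weights pretrained on the ImageNet benchmark

img = image.load_img("example.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(x)
print(decode_predictions(predictions, top=3)[0])   # top-3 (class id, label, probability) tuples
```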

Advances in Deep Learning will have broad implications for consumer and business products that can be significantly augmented by speech recognition. “Deep learning is becoming a mainstream technology for speech recognition at industrial scale” (Deng et al., 2013). This is particularly prevalent in telemarketing, tech support desks (Vinyals & Le, 2015), and mobile personal assistants such as Apple’s Siri, Microsoft’s Cortana, Google Now, and Amazon’s Echo. Deep Learning is also being used to train chatbots that negotiate with other chatbots or with people (Lewis et al., 2017).

Reinforcement Learning: Reinforcement Learning has gradually become one of the most active research areas in Machine Learning, Artificial Intelligence, and neural network research (Sutton & Barto, 2012). An RL agent interacts with its environment and, upon observing the consequences of its actions, can learn to alter its own behaviour in response to rewards received (Arulkumaran et al., 2017).
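The reward-driven loop described above can be illustrated in a few lines of Python. The sketch below is a minimal tabular Q-learning agent in an invented five-state corridor; the environment, the reward of +1 at the goal, and the parameter values are illustrative assumptions rather than part of any cited work.

```python
# Minimal tabular Q-learning sketch: an agent in a five-state corridor learns,
# purely from a reward signal, to walk towards the goal state on the right.
import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: 0 = step left, 1 = step right
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Invented environment: reward of +1 only when the goal state is reached."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(200):                   # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection; ties are broken randomly
        greedy = max(ACTIONS, key=lambda a: Q[state][a])
        explore = random.random() < epsilon or Q[state][0] == Q[state][1]
        action = random.choice(ACTIONS) if explore else greedy
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate towards reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print([round(max(q), 2) for q in Q])   # values rise towards the goal; the terminal state stays 0
```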

Within health, these learning methods are being used for classifying gene-expression patterns from leukaemia patients into subtypes by clinical outcome (Ghahramani, 2015). Such models have also contributed to massive savings at multiple Google data centers, helping to produce a 40% reduction in the energy used for cooling and a 15% reduction in overall energy overhead (Evans & Gao, 2016). Other typical examples include detecting pedestrians in images taken from an autonomous vehicle. As Shalev-Shwartz, Shammah, and Shashua (2016) show, RL is proving especially effective in the development of self-driving cars, which requires many capabilities such as sensing, vision, mapping, and knowledge of driving policies and regulations.

In robotics, RL is making progress in other, seemingly simple tasks such as screwing a cap onto a bottle (Levine et al., 2016) or opening a door (Chebotar et al., 2017).

A well-known successful example of RL comes from the Google-owned company DeepMind, whose AlphaGo defeated the human world champion in the game of Go. AlphaGo combined neural networks trained using supervised and reinforcement learning with a traditional heuristic search algorithm (Silver et al., 2016).

Deep Reinforcement Learning: One of the driving forces behind Deep Reinforcement Learning is the vision of creating systems that are capable of learning how to adapt in the real world. Further, researchers consider that “DRL will be an important component in constructing general AI systems” (Arulkumaran et al., 2017), as was demonstrated by a single DRL architecture operating “in a range of different environments with only very minimal prior knowledge” (Mnih et al., 2015).

To date, DRL has been most prevalent in games (Mnih et al., 2013); however, recent developments have shown DRL algorithms learning by far “the most complex behaviors yet learned” by a machine (Christiano et al., 2017).

  1. b) What factors have contributed to this? Historically, developments in A.I. were driven by government investment in research and development within academia and other research institutes. Whilst governments around the world still make large investments in A.I. research, recent major advances have largely been driven by significant investments from leading technology companies, building on techniques previously developed through government and other institutional funding.

Furthermore, computing power has increased dramatically. Meanwhile, the growth of the Internet and social media in the last 10 years has provided opportunities to collect, store, and share large amounts of data. Many leading technology companies are amassing huge amounts of ‘Big Data,’ supported in part by cloud computing resources. These companies have invested heavily in A.I. technologies and further seek to develop A.I. techniques to ensure a competitive advantage.

Another major factor is open access to scientific inventions and research in general – sites such as arXiv provide immediate online publication of research papers, conference proceedings, etc. Additionally, open source frameworks and libraries for the development of ML algorithms have put opportunities for development into the hands of millions, who can also profit from the advantages of cloud computing and parallel processing on GPUs. Examples include TensorFlow, Theano, CNTK, MXNet, and Keras. They implement model architectures and algorithms, especially for deep learning, that can be run by calling library functions, without the need to implement them from scratch or run them locally.
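As a rough illustration of what ‘calling library functions’ means in practice, the sketch below defines, trains, and evaluates a small neural network in a handful of Keras calls. It assumes TensorFlow is installed, and the random data merely stands in for a real dataset.

```python
# Sketch: a small neural network built and trained through library calls alone,
# with no hand-written backpropagation. Assumes TensorFlow/Keras is installed.
import numpy as np
from tensorflow import keras

X = np.random.rand(256, 8)                       # 256 synthetic samples with 8 features each
y = (X.sum(axis=1) > 4.0).astype("float32")      # invented binary target for illustration

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {accuracy:.2f}")
```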

c) How is it likely to develop over the next 5, 10 and 20 years? There are several recent surveys of expert opinion on when A.I. will be available and on its impact on the workplace. Many uncertainties exist concerning future developments of machine intelligence; one should therefore not consider the ‘expert view’ to be predictive of likely ten- and twenty-year scenarios.

d) What factors, technical or societal, will accelerate or hinder this development? There are some obvious factors, such as a slow-down in investment, which would affect research, development, and education, creating another ‘A.I. winter’ and a skills gap. Other factors, such as global instability and government policy, may also hinder the development of A.I.

Although the particular narrow A.I. models we outlined above already demonstrate aspects of intelligent abilities in narrow and limited domains, at this point they do not represent a unified model of intelligence and there is much work to be done before true A.I. is ‘amongst us.’

Further, there are still many technical factors that make narrow A.I. unstable. Additionally, there are technological challenges to overcome, such as the curse of dimensionality: Richard Bellman (1957) asserted that high dimensionality of data is a fundamental hurdle in many science and engineering applications, and coined the phenomenon the ‘curse of dimensionality.’ Recent developments in representation learning and DRL have made some progress in addressing it (Bengio, Courville, & Vincent, 2013; Kulkarni et al., 2016). There are also many safety challenges to overcome, such as security and data privacy (see, for example, DeepMind, 2017), and other technological problems still requiring breakthroughs.
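A quick back-of-the-envelope calculation shows why high dimensionality is such a hurdle: covering an input space at even a coarse resolution requires exponentially many cells (and, roughly, exponentially many data points) as the number of dimensions grows. The bin count below is an arbitrary choice made only for illustration.

```python
# Curse of dimensionality, illustrated: the number of cells needed to cover a space
# at a fixed resolution grows exponentially with the number of input dimensions.
BINS_PER_DIMENSION = 10   # arbitrary resolution chosen for illustration

for dimensions in (1, 2, 3, 10, 100):
    cells = BINS_PER_DIMENSION ** dimensions
    print(f"{dimensions:>3} dimensions -> {cells:.2e} cells to cover the space")
```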

Other advances will accelerate A.I., such as Facebook’s CommAI (Baroni et al., 2017) and their roadmap towards machine intelligence (Mikolov, Joulin, & Baroni, 2015), together with closer cooperation between neuroscience and A.I. research (Hassabis et al., 2017). We also believe the following papers will contribute to the acceleration of narrow A.I. solutions for mainstream uses beyond games and social media analytics: Kalchbrenner, Danihelka, & Graves (2015); Lake et al. (2016); Mnih et al. (2015).

2. We recommend the committee consider the findings in the paper by leading A.I. researchers Ethan Fast and Eric Horvitz, Long-Term Trends in the Public Perception of Artificial Intelligence (Fast & Horvitz, 2017).

3. It is our belief that the goal of A.I. must be to support humanity. At the present time it is difficult to predict the short-term extent to which A.I. will affect social and economic institutions, but in the long term it could have major negative consequences, whose social and economic effects could be severe for millions of people. In this case, according to a report to the President of the United States (Furman et al., 2016), “Aggressive policy action will be needed to help (those) who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.”

Other commentators, such as Andrew Haldane (2015), Chief Economist at the Bank of England, believe it is clear that the introduction of A.I. machines and more advanced robotics could bring technological, and thus social and economic, change far larger than at any time in human history, with mass unemployment on an unprecedented scale.

Conversely, machines have been substituting for human labor for centuries; yet, historically, technological change has been associated with productivity growth, with expanding rather than contracting total employment, and with rising earnings. Research shows that factories that implemented industrial robots also added over 1.25 million new jobs from 2009 to 2015 (Lewis, 2015).

The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the social and economic effects of A.I. We have created an agenda with key research goals to ensure that the development and the outcomes of A.I. and Artificial General Intelligence (AGI) are aligned with the social and economic advancement of all humanity, and to identify how best to close social and economic gaps through beneficial A.I. and AGI development.

4. Overall we believe that, whilst some large corporations and their shareholders will benefit from the gains of A.I., the potential for artificial intelligence to enhance people’s quality of life in areas including education, transportation, and healthcare is vast. Accordingly, we are willing to offer our expertise to the committee so that government, policy makers, and researchers can collaborate to develop and champion a methodology “for wealth creation in which everyone should be entitled to a portion of the world’s A.I. produced treasures” (Stone et al., 2016).

5. Our research shows that theories of intelligence and the goal of A.I. have been the source of much confusion both within the field and among the general public. To help rectify this we are conducting a research survey: Defining (machine) Intelligence (Lewis & Monett, 2017).

The research survey on definitions of machine and human intelligence is still accepting responses and has an ongoing invitation procedure. We have been pleasantly surprised by the volume of responses and by the high quality of the comments, opinions, and recommendations concerning the definitions of machine and human intelligence that experts around the world have shared. As of September 6, 2017 we have collected more than 400 responses.

A.I. has a perception problem in the mainstream media, even though many researchers indicate that supporting humanity must be the goal of A.I. Clarifying the known definitions of intelligence and the research goals of machine intelligence should help us and other A.I. practitioners spread a stronger, more coherent message to the mainstream media, policymakers, and the general public, and help dispel myths about A.I.

6. We recommend the committee consider the findings projected through to 2030 in the report, The One Hundred Year Study on Artificial Intelligence (Stone et al., 2016), especially the sections on transportation, healthcare, education, low-resource communities, and public safety and security.

8. Human intellect is the source of many of its own problems. Errors in thinking and biases, which have grown powerful over time, are also showing up in the intelligent machines we program and may become even more prevalent in machines programmed with Artificial Intelligence.

Machines can no more do ethics than they can have psychological breakdowns. They can help to change circumstances, but they cannot reflect on their value or morality. It is the human element and bias that must be considered above all else.

9. For an ‘unbiased’ view see the paper by Adrian Weller (2017), which provides “a brief survey, suggesting challenges and related concerns” and states: “We highlight and review settings where transparency may cause harm, discussing connections across privacy, multi-agent game theory, economics, fairness and trust.”

The role of the Government

  1. Key questions and actions which governments and policy makers should be addressing are:
  • How do we mitigate the uncertainty and likelihood of massive unemployment?
  • What impact have A.I. systems and robots had in industrial factories? Have companies that employed robots, increased or decreased human employment?
  • What new skills have been required as robots enter the workplace?
  • Which new laws, or modifications to existing laws, will need to be implemented to mitigate the risks of, and enable the monitoring of, A.I. and A.G.I.?
  • Monitor and provide reporting on emerging technology policy, with a focus on artificial intelligence and automation.
  • Provide research input into FLI’s Asilomar long-term issues (Asilomar AI Principles, 2017) with particular focus on: “23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

From: AGISI.org

Dr. Colin W. P. Lewis, A.I. Research Scientist

Prof. Dr. Dagmar Monett, A.I. Research Scientist (AGISI & Berlin School of Economics and Law)

References

Arulkumaran, K. et al. (2017). A Brief Survey of Deep Reinforcement Learning. CoRR, abs/1708.05866, https://arxiv.org/abs/1708.05866.

Asilomar AI Principles (2017). Future of Life Institute, https://futureoflife.org/ai-principle.

Ba, J. L., Mnih, V., and Kavukcuoglu, K. (2015). Multiple Object Recognition with Visual Attention. CoRR, abs/1412.7755, https://arxiv.org/abs/1412.7755.

Baroni, M. et al. (2017). CommAI: Evaluating the first steps towards a useful general AI. CoRR, abs/1701.08954, https://arxiv.org/abs/1701.08954.

Bellman, R. (1957). Dynamic Programming. Princeton, NJ: Princeton Univ. Press.

Bengio, Y., Courville, A., and Vincent, P. (2013). Representation Learning: A Review and New Perspectives. IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.

Calo, R. (2017). Artificial Intelligence Policy: A Roadmap, https://ssrn.com/abstract=301535.

Chebotar, Y. et al. (2017). Path integral guided policy search. CoRR, abs/1610.00529, https://arxiv.org/abs/1610.00529.

Christiano, P. F. et al. (2017). Deep Reinforcement Learning from Human Preferences. CoRR, abs/1706.03741, https://arxiv.org/abs/1706.03741.

DeepMind (July 2017). What we’ve learned so far, https://deepmind.com/applied/deepmind-health/transparency-independent-reviewers/what-weve-learned-so-far/.

Deng, L. et al. (2013). Recent advances in deep learning for speech research at Microsoft. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pp. 8604–8608, IEEE.

Evans, R. and Gao, J. (2016). DeepMind AI Reduces Google Data Centre Cooling Bill by 40%. DeepMind, https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40.

Fast, E. and Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI-17, San Francisco, CA, USA, February 4-9, 2017. AAAI Press, pp. 963–969.

Furman, J. et al. (2016). Artificial Intelligence, Automation, and the Economy. Executive Office of the President, Washington, D.C. 20502, https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.

Ghahramani, Z. (May 2015). Probabilistic machine learning and artificial intelligence. Nature, 521:452–459. DOI: 10.1038/nature14541.

Haldane, A. (2015). Labour’s Share – speech given at the Trades Union Congress, London. Bank of England, http://www.bankofengland.co.uk/publications/Pages/speeches/2015/864.aspx.

Hassabis, D. et al. (July 2017). Neuroscience-Inspired Artificial Intelligence. Neuron, 95(2):245–258.

He, K. et al. (2016). Deep Residual Learning for Image Recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016. Las Vegas, NV, USA, pp. 770–778, IEEE.

Hutter, M. (2005). Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Berlin: Springer.

Kalchbrenner, N., Danihelka, I., and Graves, A. (2015). Grid Long Short-Term Memory. CoRR, abs/1507.01526, https://arxiv.org/pdf/1507.01526.pdf.

Kulkarni, T. D. et al. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. CoRR, abs/1604.06057, https://arxiv.org/abs/1604.06057.

Lake, B. M. et al. (2016). Building Machines That Learn and Think Like People. Behav Brain Sci., 4:1–101.

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep Learning. Nature, 521:436–444.

Levine, S. et al. (January 2016). End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373.

Lewis, C. W. P. (2015) Study – Robots are not taking jobs. Robotenomics, https://robotenomics.com/2015/09/16/study-robots-are-not-taking-jobs.

Lewis, C. W. P. and Monett, D. (2017). Research Survey: Defining (machine) Intelligence. Ongoing survey, https://goo.gl/hMjaE1.

Lewis, M. et al. (2017). Deal or No Deal? End-to-End Learning for Negotiation Dialogues. CoRR, abs/1706.05125, https://arxiv.org/abs/1706.05125.

Mikolov, T., Joulin, A., and Baroni, M. (2015). A Roadmap towards Machine Intelligence. CoRR, abs/1511.08130, https://arxiv.org/abs/1511.08130.

Mnih, V. et al. (2013). Playing Atari with Deep Reinforcement Learning. CoRR, abs/1312.5602, https://arxiv.org/abs/1312.5602.

Mnih, V. et al. (2015). Human-level control through deep reinforcement learning. Nature, 518:529–533.

Nguyen, D.-T. et al. (2016). Pharos: Collating protein information to shed light on the druggable genome. Nucleic Acids Research, 45(D1):D995–D1002.

Samuel, A. L. (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3):535–554.

Shalev-Shwartz, S., Shammah, S., and Shashua, A. (2016). Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving. CoRR, abs/1610.03295, https://arxiv.org/abs/1610.03295.

Silver, D. et al. (January 2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.

Stone, P. et al. (September 2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, http://ai100.stanford.edu/2016-report.

Sutton, R. S. and Barto, A. G. (2012). Reinforcement Learning: An Introduction. Second edition. Cambridge, MA: The MIT Press.

Vinyals, O. and Le, Q. V. (2015). A Neural Conversational Model. CoRR, abs/1506.05869, https://arxiv.org/abs/1506.05869.

Weller, A. (2017). Challenges for Transparency. CoRR, abs/1708.01870, https://arxiv.org/abs/1708.01870.

Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, 131(3410):1355–1358.

Artificial Intelligence and National Security

Rapid developments in Artificial Intelligence (AI), especially the sub-domains of Reinforcement Learning and Machine Learning, are high on the agendas of government policy makers in many countries. Last year the US Government* issued comprehensive reports on AI and its possible benefits and impact on society; likewise, the European Union and other agencies are also actively reviewing policies on AI, robotics, and associated technology. As recently as one week ago, the UK government initiated a new request for comments to its AI subcommittee – What are the implications of Artificial Intelligence?

On the back of this high level of interest from governments and policy makers around the world, a new study, Artificial Intelligence and National Security, by researchers at the Harvard Kennedy School on behalf of the U.S. Intelligence Advanced Research Projects Activity (IARPA), recommends three goals for developing future policy on AI and national security:

  • Preserving U.S. technological leadership,
  • Supporting peaceful and commercial use, and
  • Mitigating catastrophic risk.

The authors say their policy goals are informed by lessons learned from nuclear, aerospace, cyber, and biotech, and that advances in AI will affect national security by driving change in three areas: military superiority, information superiority, and economic superiority.

Setting out their position, the authors make the case that existing AI developments “have significant potential for national security”:

Existing machine learning technology could enable high degrees of automation in labor-intensive activities such as satellite imagery analysis and cyber defense.

They further emphasize that AI has the potential to be as transformative as other major technologies, stating that future progress in AI has the potential to be a transformative national security technology, on a par with nuclear weapons, aircraft, computers, and biotech.

The changes they see in military superiority, information superiority, and economic superiority are outlined below:

For military superiority, they write that progress in AI will both enable new capabilities and make existing capabilities affordable to a broader range of actors.

For example, commercially available, AI-enabled technology (such as long-range drone package delivery) may give weak states and non-state actors access to a type of long-range precision strike capability.

In the cyber domain, activities that currently require lots of high-skill labor, such as Advanced Persistent Threat operations, may in the future be largely automated and easily available on the black market.

For information superiority, they say AI will dramatically enhance capabilities for the collection and analysis of data, and also the creation of data.

In intelligence operations, this will mean that there are more sources than ever from which to discern the truth. However, it will also be much easier to lie persuasively.

AI-enhanced forgery of audio and video media is rapidly improving in quality and decreasing in cost. In the future, AI-generated forgeries will challenge the basis of trust across many institutions.

For economic superiority, they find that advances in AI could result in a new industrial revolution.

Former U.S. Treasury Secretary Larry Summers has predicted that advances in AI and related technologies will lead to a dramatic decline in demand for labor such that the United States “may have a third of men between the ages of 25 and 54 not working by the end of this half century.”

Like the first industrial revolution, this will reshape the relationship between capital and labor in economies around the world. Growing levels of labor automation might lead developed countries to experience a scenario similar to the “resource curse.”

Also like the first industrial revolution, population size will become less important for national power. Small countries that develop a significant edge in AI technology will punch far above their weight.

Given the significant impacts they see from AI, they say that government must formalize goals for technology safety and provide adequate resources; that government should both support and restrain commercial activity in AI; and that governments should provide more investment in, and oversight of, long-term-focused strategic analyses of AI technology and its implications.

Noting that we are at an inflection point in Artificial Intelligence and autonomy, the researchers outline multiple areas in which they believe AI-driven technologies will disrupt military capabilities – capabilities which, they say, will have far-reaching consequences in warfare.

Policy makers around the world would do well to consider carefully the scenarios outlined in the study to ensure that AI technologies are adequately governed to provide assurances to citizens and ultimately to ensure that AI technologies benefit humanity.

 

*US Government and Agencies recent papers

June 2016—Defense Science Board: “Summer Study on Autonomy”

July 2016—Department of Defense Office of Net Assessment: “Summer Study: (Artificial) Intelligence: What questions should DoD be asking”

October 2016—National Science and Technology Council: “The National Artificial Intelligence Research and Development Strategic Plan”

October 2016—National Science and Technology Council: “Preparing for the Future of Artificial Intelligence”

December 2016—Executive Office of the President: “Artificial Intelligence, Automation, and the Economy”

January 2017—JASON: “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD”

Creating Shareholder Value with AI?

In a wonderfully titled report, Creating Shareholder Value with AI? Not so Elementary, My Dear Watson, the equity research company Jefferies, LLC, takes a hard look at IBM’s bet on cognitive computing, or Artificial Intelligence (AI). The 53-page report is well worth reading to understand why the research analysts consider that IBM, despite significant investment in its cognitive computing platform Watson, is losing the opportunity in AI, and hence why they expect IBM stock to underperform.

On a positive note for AI researchers, they do acknowledge that there is serious business and economic interest in AI, citing Andrew Ng’s Stanford talk on AI as the new electricity:

AI is the New Electricity….Our checks confirm that a wide range of organizations are exploring incorporating AI in their business, mostly using Machine and Deep Learning for speech and image recognition applications.

And that IBM has an advantage in terms of technology:

IBM’s Watson platform remains one of the most complete cognitive platforms available in the marketplace today.

But IBM falls flat due to hefty service charges and the inability to attract AI talent:

The hefty services component of many AI deployments will be a hindrance to adoption. We also believe IBM appears outgunned in the war for AI talent and will likely see increasing competition.

I’m never a fan of market-share forecasts – forecasts in robotics have shown how wide of the mark the industrial robotics landscape is from where it was predicted to be – nevertheless the Jefferies numbers are worth looking at, even if much of AI will be in-house in organisations such as Google, Facebook, Amazon, etc. Jefferies seem to think the value of the market, shown in the chart below, is underestimated: “we think these forecasts are unlikely to fully capture the value created by internal use of AI applications such as machine learning. For example, Facebook and Amazon are aggressively using machine learning to improve their offerings, make operations more efficient, and create new embedded services.”

Jefferies research exhibit 8

The analysts do note that the singularity is not near and provide an interesting chart depicting the areas where they see growth. Interestingly, they see a large percentage of growth in algorithmic trading strategies, equivalent to 17% of the market, yet strangely indicate that health care spend will be slightly less, and driverless AI even less, despite this being where much of AI is heading today.

Many AI Apps Will Take Time to Emerge; The Singularity Is Not Near: “While we are big believers in the long term potential of AI and see rapid adoption of machine learning in the near term, our checks convince us that many AI methods and applications will take time to be adopted.”

Jefferies exhibit 9

The analysts emphasise how IBM is losing the talent war and also has less access to the rich data of Google, Apple, Facebook and Amazon. Talent will be a major game changer in AI.

The report also does a good job of showing the current flow of investment by major corporations, in terms of acquisitions, and also investment into AI start-ups. Overall the analysis, except the forecasts, gives a fair overview of the AI market, but omits the major sums flowing into academic research and the costs of employing and training AI researchers, which are likely already in the early billions. I do however agree that IBM’s Watson risks not capturing the market share its technology richly deserves – maybe IBM will end up capitalising on its patents as it so often has.

Take a look at the report and judge for yourself (PDF).

Self-Healing Graphene Holds Promise for Artificial Skin in Future Robots

With the first ever documented observation of the self-healing phenomena of graphene, researchers hint at future applications for its use in artificial skin.

Graphene, which is, in simple terms, a one-atom-thick sheet of pure carbon and currently the world’s strongest material, is one million times thinner than paper; so thin that it is actually considered two-dimensional. Notwithstanding its hefty price, graphene has quickly become among the most promising nanomaterials due to its unique properties and versatile prospective applications.

The paper published in Open Physics refers to an extraordinary yet previously undocumented self-healing property of graphene, which could lead to the development of flexible sensors that mimic the self-healing properties of human skin.

The largest organ in the human body, skin has long been known for its fascinating self-healing ability – but until now, emulating this mechanism has proved too much of a challenge, as manmade materials lack this aptitude. Due to constant stretching, bending, and incidental scratches, artificial skin used in robots is extremely susceptible to ruptures and fissures. The study offers a novel solution in which a sub-nano sensor uses graphene to sense a crack as soon as it starts to nucleate and, surprisingly, even after the crack has spread a certain distance. According to the authors, this technology could quickly become viable for use in the next generation of electronics.

According to Dr. Swati Ghosh Acharyya, one of the researchers:

We wanted to observe the self-healing behavior of both pristine and defected single layer graphene and its application in sub-nano sensors for crack spotting by using molecular dynamic simulation. We were able to document the self-healing of cracks in graphene without the presence of any external stimulus and at room temperature.

The results revealed that self-healing occurred by spontaneous recombination of the dangling bonds whenever the crack opening was within the limit of the critical crack opening displacement.

The researchers subjected single-layer graphene containing various defects, such as pre-existing holes and differently oriented pre-existing cracks, to uniaxial tensile loading until fracture. Interestingly enough, once the load was relaxed, the graphene started to heal, and the self-healing continued irrespective of the nature of the pre-existing defects in the graphene sheet. No matter the length of the crack, the authors say, all of them healed, provided the critical crack opening distance lay within 0.3–0.5 nm, for both the pristine sheet and the sheet with pre-existing defects.

Simulating self-healing in artificial skin will open the way to a variety of daily-life applications, ranging from sensors through to mobile devices and ultracapacitors. In the case of the latter, graphene-based devices would benefit from graphene’s large surface area, increasing electrical power by storing electrons on the graphene sheets. Apparently such supercapacitors would have as much electrical storage capacity as lithium-ion batteries but could be recharged in minutes instead of hours.

The original article is fully open access and available on De Gruyter Online.

 

 

AI not yet, but Machine Learning and Big Data are rapidly evolving


In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman’s opening sentence is – “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard, “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course history is often a good predictor of what might work in the future and when, but according to Goldman time and time again predictions have failed miserably in the entertainment business.

It is exactly the same with technology, and Artificial Intelligence (AI) has probably fared worse than any other technology when it comes to predictions of when it will be available as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks: “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts: “it’s many decades away for full AI.”

Researchers are, however, starting to make considerable advances in soft AI, although, with the exception of fewer than 30 corporations, there is very little tangible evidence that this soft AI or Deep Learning is currently being used productively in the workplace.

Some of the companies currently selling and/or using soft AI or Deep Learning to enhance their services include: IBM’s Watson, Google Search and Google DeepMind, Microsoft Azure (and Cortana), Baidu Search led by Andrew Ng, Palantir Technology, perhaps Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon for predictive analytics and other services, the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia, and Mobileye, and to some extent the AI-light-powered collaborative robots from Rethink Robotics.

There are numerous examples of other companies developing AI and Deep Learning products, but fewer than a hundred early-adopter companies worldwide. Essentially, soft AI and Deep Learning solutions, such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies, and many more, are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.

Machine Learning

On the other hand, Machine Learning (ML), a subfield of AI which some call light AI, is starting to make inroads into organizations worldwide. There are even claims that: “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”

Yet according to Intel, “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015),” and it is highly probable that Google, Facebook, Salesforce, Microsoft, and Amazon alone account for a large percentage of that 10 per cent.

ML technologies are already in everyday use – for example, location-awareness systems such as Apple’s iBeacon software, which connects information from a user’s Apple profile to in-store systems and advertising boards, allowing for a ‘personalized’ shopping experience and the tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind’s Machine Learning have both shown how their systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption, and empower humans with new analytical capabilities.

The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently ML is beneficial for pattern and speech recognition and predictive analytics. It is therefore very beneficial in search, data analytics and statistics – when there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically by memory) like recognizing spoken words or faces in images.

Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain – which is one goal of AI.

Crossing the chasm – not without lots of data

If driverless vehicles can move around with decreasing problems, it is not because AI has finally arrived, nor because we have machines capable of human intelligence; it is because we have machines that are very useful for dealing with big data and are able to make decisions under uncertainty in the perception and interpretation of their environment – and even then we are not quite there yet. Today we have systems targeted at narrow tasks and domains, not the promised ‘general purpose’ AI, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.

Essentially there’s nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.

Nevertheless, companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and that he is “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”

The next wave – understanding information

Organizations that have lots of data know that information is always limited, incomplete, and possibly noisy. ML algorithms are capable of searching the data and building a knowledge base to provide useful information – for example, ML algorithms can separate spam emails from genuine emails. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
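To make the spam example concrete, the following is a minimal sketch using the open-source scikit-learn library; the four-email corpus and its labels are invented purely for illustration, and a real filter would of course be trained on far more data.

```python
# Minimal spam-filter sketch: a bag-of-words representation feeding a naive Bayes classifier.
# Assumes scikit-learn is installed; the tiny corpus below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",
    "cheap loans click here",
    "meeting agenda for monday",
    "please review the attached report",
]
labels = ["spam", "spam", "genuine", "genuine"]

# The representation (here, simple word counts) matters as much as the algorithm itself.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["claim your free prize"]))   # expected to lean towards 'spam'
```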

Machine Learning algorithms often work on the principle most widely known as Occam’s razor. This principle states that, among competing hypotheses that explain the known observations equally well, one should choose the ‘simplest’ one. In my opinion this is also why we should use machines only to augment human labor and not to replace it.
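One practical echo of Occam’s razor in machine learning is model selection through cross-validation or regularization: when a simple model and a very flexible one fit the training data comparably, the simpler one usually generalizes better. The sketch below illustrates the idea with invented noisy data and assumes numpy and scikit-learn are installed; the polynomial degrees compared are arbitrary choices.

```python
# Occam's razor in practice: compare a simple model with a needlessly flexible one
# using cross-validation. Assumes numpy and scikit-learn; the noisy data is invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = 2.0 * X.ravel() + rng.normal(scale=0.1, size=30)   # the underlying truth is a simple line

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, y, cv=5).mean()   # held-out R^2, higher is better
    print(f"polynomial degree {degree:>2}: mean cross-validated R^2 = {score:.3f}")
```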

Machine Learning and Big Data will greatly complement human ingenuity – a human-machine combination of statistical analysis, critical thinking, inference, persuasion, and quantitative reasoning all wrapped up in one.

“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)

The key questions businesses and policy makers need to be concerned with as we enter the new era of Machine Learning and Big Data:

1) who owns the data?

2) how is it used?

3) how is it processed and stored?

Update 16th August 2016

There is a very insightful Quora answer by François Chollet, deep learning researcher at Google, where he confirms what I have been saying above:

“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”

 

When machines replace jobs, the net result is normally more new jobs

Two of the current leading researchers in labor economics studying the impact of machines and automation on jobs have released a new National Bureau of Economic Research (NBER) working paper, The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.

The authors, Daron Acemoglu and Pascual Restrepo, are far from being a robot-heckling equivalent of Statler and Waldorf, the Muppets who heckle from the balcony – unless you count their heckling of those who have overstated, without factual support, the argument that robots are taking all the jobs:

Similar claims have been made, but have not always come true, about previous waves of new technologies… Contrary to the increasingly widespread concerns, our model raises the possibility that rapid automation need not signal the demise of labor, but might simply be a prelude to a phase of new technologies favoring labor.

In The Race Between Machine and Man, the researchers set out to build a conceptual framework which shows which tasks previously performed by labor are automated, while at the same time more ‘complex versions of existing tasks’ and new jobs or positions in which labor has a comparative advantage are created.

The authors make several key observations showing that, as ‘low-skilled workers’ are automated out of jobs, the creation of new complex tasks increases wages, employment, and the overall share of labor. As jobs are eroded, new jobs or positions are created which require higher skills, at least in the short term:

“Automation always reduces the share of labor in national income and employment, and may even reduce wages. Conversely, the creation of new complex tasks always increases wages, employment and the share of labor.”

They show, through their analysis, that for each decade since 1980, employment growth has been faster in occupations with greater skill requirements:

During the last 30 years, new tasks and new job titles account for a large fraction of U.S. employment growth.

In 2000, about 70% of the workers employed as computer software developers (an occupation employing one million people in the US at the time) held new job titles. Similarly, in 1990 a radiology technician and in 1980 a management analyst were new job titles.

Looking at the potential mismatch between new technologies and the skills needed, the authors crucially show that these new highly skilled jobs account for a significant share of the total employment growth over the period measured, as shown in Figure 1:

From 1980 to 2007, total employment in the U.S. grew by 17.5%. About half (8.84%) of this growth is explained by the additional employment growth in occupations with new job titles.

Figure 1

Unfortunately, we have known for some time that labor markets are “Pareto efficient”; that is, no one could be made better off without making anyone worse off. Thus Acemoglu and Restrepo point to research showing that when wages are high for low-skill workers, this encourages automation. This automation then leads to promotion, or to new jobs and higher wages, for those with ‘high skills.’

Because new tasks are more complex, the creation may favor high-skill workers. The natural assumption that high-skill workers have a comparative advantage in new complex tasks receives support from the data.

The data shows that those classified as high skilled tend to have more years of schooling.

For instance, the left panel of Figure 7 shows that in each decade since 1980, occupations with more new job titles had higher skill requirements in terms of the average years of schooling among employees at the start of each decade (relative to the rest of the economy).

Figure 7

However, it is not all bad news for low-skilled workers: the right panel of the same figure also shows a pattern of “mean reversion,” whereby average years of schooling in these occupations decline in each subsequent decade, most likely reflecting the fact that new job titles become more open to lower-skilled workers over time.

Our estimates indicate that, although occupations with more new job titles tend to hire more skilled workers initially, this pattern slowly reverts over time. Figure 7 shows that, at the time of their introduction, occupations with 10 percentage points more new job titles hire workers with 0.35 more years of schooling. But our estimates in Column 6 of Table B2 show that this initial difference in the skill requirements of workers slowly vanishes over time. 30 years after their introduction, occupations with 10 percentage points more new job titles hire workers with 0.0411 fewer years of education than the workers hired initially.

Essentially low-skill workers gain relative to capital in the medium run from the creation of new tasks.

Overall, the study shows what many have said before: there is a skills gap when new technologies are introduced, and those with the wherewithal to invest in learning new skills – whether through extra education, on-the-job training, or self-learning – are the ones who will be in high demand as new technologies are implemented.

 

 

New research: ‘fears of technological change destroying jobs may be overstated’


Frank Levy, an economist and professor at MIT and Harvard who works on technology’s impact on jobs and living standards, has written to allay the sensationalized fears raised by the overhyped study by Frey and Osborne. Levy indicates:

  • The General Proposition – Computers will be subsuming an increasing share of current occupations – is unassailable.
  • The Paper (Frey and Osborne study) is a set of guesses with lots of padding to increase the appearance of “scientific precision.”
  • The authors’ understanding of computer technology appears to be average for economists (= poor for computer scientists). By my personal guess, they are overestimating what current technology can do.

Researchers at the OECD analyzed the Frey and Osborne study and conducted their own research on tasks and jobs and concluded that: “automation was unlikely to destroy large numbers of jobs.”

I have also been quite critical of the Frey and Osborne study, based on my understanding of technological advances, which they claim are far further along than they actually are:

We argue that it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition.

With the exception of three bottlenecks, namely:

“Perception and manipulation.”

“Creative intelligence.”

“Social intelligence.”

Frey and Osborne divided the tasks involved in jobs along two dimensions: cognitive vs. manual and non-routine vs. routine. They then identified three aspects (bottlenecks) of a job that make it less likely that a computer would be able to replicate its tasks: first, “perception and manipulation” in unpredictable tasks such as handling emergencies, performing medical treatment, and the like; second, “creative intelligence,” as in cooking, drawing, or any other task involving creative value and relying on novel combinations of inspiration; and third, “social intelligence,” or the real-time recognition of human emotion.

Race with the machines

Now a new research paper, released in July 2016 by researchers at the Centre for European Economic Research, has indicated that technology has in fact had the opposite impact and is a net creator, not destroyer, of jobs (at least in 27 European countries – and I suspect the same is true for other regions).

The paper, Racing With or Against the Machine? Evidence from Europe by authors Terry Gregory, Anna Salomons, and Ulrich Zierahn (Gregory and Zierahn were also two of the OECD paper authors) looked at the impact of routine replacing technology on jobs and concluded:

Overall, we find that the net effect of routine-replacing technological change (RRTC) on labor demand has been positive. In particular, our baseline estimates indicate that RRTC has increased labor demand by up to 11.6 million jobs across Europe – a non-negligible effect when compared to a total employment growth of 23 million jobs across these countries over the period considered. Importantly, this does not result from the absence of significant replacement of labor by capital. To the contrary, by performing a decomposition rooted in our theoretical model, we show that RRTC has in fact decreased labor demand by 9.6 million jobs as capital replaces labor in production. However, this has been overcompensated by product demand and spillover effects which have together increased labor demand by some 21 million jobs. As such, fears of technological change destroying jobs may be overstated: at least for European countries over the period considered, we can conclude that labor has been racing with rather than against the machine in spite of these substitution effects.

My research into companies using robots has also categorically shown, through factual evidence, that those companies have created significantly more jobs than have been lost due to technological change. Similarly, a detailed analysis prepared by Fraunhofer for the European Commission Directorate-General for Communications Networks, Content & Technology about the impact of robotic systems on employment in the EU found that:

European manufacturing companies do not generally substitute human workforce capital by capital investments in robot technology. On the contrary, it seems that the robots’ positive effects on productivity and total sales are a leverage to stimulate employment growth.

So if robots are not job killers what is the real problem?

We need to fill the skills gap

I have argued before that we have a skills problem. Jobs all over the world are not being filled because of a lack of skilled personnel to fill them.

New and emerging technologies both excite and worry. Robotics and Artificial Intelligence (AI) are certainly a minefield for both exuberance and fear.

By definition, there is a knowledge and skills gap during the emerging stages of any new technology, and Robotics and AI are no exception: researchers and engineers are still learning about these technologies and their applications. But, in the meantime, hopes, fears, and hype naturally and irresistibly fill this vacuum of information.

Depending on whom you ask, Robots and AI are predicted either to help solve the world’s problems or, by building this devil, to scorch the earth and fulfill a prophecy of Armageddon.

On the other side, especially with respect to AI, what it will most likely do – if and only if adopted by major corporations and governments – is foster technological and institutional betterment at a frenetic pace: improving health care, helping to solve climate problems, assisting those with sight problems, and helping to get much-needed aid distributed more equitably.

We need education and training fitted to a different labour market, with more focus on creativity, flexibility, and social skills. We need more moonshots from governments and industry, as so well described by Mariana Mazzucato in her book The Entrepreneurial State: Debunking Public vs. Private Sector Myths.

Machines are there to augment human intelligence and ingenuity and to improve our environment and workplaces. We need to stop fearing the machines and learn how to better integrate them into our processes, dispelling the fears and improving productivity. We are not going to stop technological progress; if we embrace it, we are better prepared to gain from it.