A new report released by the White House indicates that accelerating Artificial Intelligence (AI) capabilities will enable automation of some tasks that have long required human labor. The report’s authors indicate that these transformations will open up new opportunities for individuals, the economy, and society, but that they will also disrupt the current livelihoods of millions of Americans. At a minimum, some occupations, such as drivers and cashiers, are likely to face displacement from, or a restructuring of, their current jobs.
The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the economic effects of AI.
Although it is difficult to predict these economic effects precisely, the report suggests that policymakers should prepare for five primary economic effects:
- Positive contributions to aggregate productivity growth;
- Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;
- Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;
- Churning of the job market as some jobs disappear while others are created; and
- The loss of jobs for some workers in the short-run, and possibly longer depending on policy responses.
More generally, the report suggests three broad strategies for addressing the impacts of AI-driven automation across the whole U.S. economy:
- Invest in and develop AI for its many benefits;
- Educate and train Americans for jobs of the future; and
- Aid workers in the transition and empower workers to ensure broadly shared growth.
Key points from the report
The authors state that although it is unlikely machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, machines can be expected to continue to reach and exceed human performance on more and more tasks.
AI should be welcomed for its potential economic benefits. However, there will be changes in the skills that workers need to succeed in the economy, as well as structural changes in the economy itself. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.
Today, it may be challenging to predict exactly which jobs will be most immediately affected by AI-driven automation. Because AI is not a single technology, but rather a collection of technologies that are applied to specific tasks, the effects of AI will be felt unevenly through the economy. Some tasks will be more easily automated than others, and some jobs will be affected more than others—both negatively and positively. Some jobs may be automated away, while for others, AI-driven automation will make many workers more productive and increase demand for certain skills. Finally, new jobs are likely to be directly created in areas such as the development and supervision of AI as well as indirectly created in a range of areas throughout the economy as higher incomes lead to expanded demand.
Strategy #1: Invest in and develop AI for its many benefits. If care is taken to responsibly maximize its development, AI will make important, positive contributions to aggregate productivity growth, and advances in AI technology hold incredible potential to help the United States stay on the cutting edge of innovation. Government has an important role to play in advancing the AI field by investing in research and development. Among the areas for advancement in AI are cyberdefense and the detection of fraudulent transactions and messages. In addition, the rapid growth of AI has also dramatically increased the need for people with relevant skills from all backgrounds to support and advance the field. Prioritizing diversity and inclusion in STEM fields and in the AI community specifically, in addition to other possible policy responses, is a key part in addressing potential barriers stemming from algorithmic bias. Competition from new and existing firms, and the development of sound pro-competition policies, will increasingly play an important role in the creation and adoption of new technologies and innovations related to AI.
Strategy #2: Educate and train Americans for jobs of the future. As AI changes the nature of work and the skills demanded by the labor market, American workers will need to be prepared with the education and training that can help them continue to succeed. Delivering this education and training will require significant investments. This starts with providing all children with access to high-quality early education so that all families can prepare their students for continued education, as well as investing in graduating all students from high school college- and career- ready, and ensuring that all Americans have access to affordable post-secondary education. Assisting U.S. workers in successfully navigating job transitions will also become increasingly important; this includes expanding the availability of job-driven training and opportunities for lifelong learning, as well as providing workers with improved guidance to navigate job transitions.
Strategy #3: Aid workers in the transition and empower workers to ensure broadly shared growth. Policymakers should ensure that workers and job seekers are both able to pursue the job opportunities for which they are best qualified and best positioned to ensure they receive an appropriate return for their work in the form of rising wages. This includes steps to modernize the social safety net, including exploring strengthening critical supports such as unemployment insurance, Medicaid, Supplemental Nutrition Assistance Program (SNAP), and Temporary Assistance for Needy Families (TANF), and putting in place new programs such as wage insurance and emergency aid for families in crisis. Worker empowerment also includes bolstering critical safeguards for workers and families in need, building a 21st century retirement system, and expanding healthcare access. Increasing wages, competition, and worker bargaining power, as well as modernizing tax policy and pursuing strategies to address differential geographic impact, will be important aspects of supporting workers and addressing concerns related to displacement amid shifts in the labor market.
Finally, if a significant proportion of Americans are affected in the short- and medium-term by AI-driven job displacements, US policymakers will need to consider more robust interventions, such as further strengthening the unemployment insurance system and countervailing job creation strategies, to smooth the transition.
I will add detailed comments and my thoughts as I digest the full report in the coming days.
In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman opens with the sentence: “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard: “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course history is often a good predictor of what might work in the future and when, but, according to Goldman, predictions in the entertainment business have failed miserably time and time again.
It is exactly the same with technology, and Artificial Intelligence (AI) has probably fared worse than any other technology when it comes to predictions of when it will arrive as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks: “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts: “it’s many decades away for full AI.”
Researchers are, however, starting to make considerable advances in soft AI, although, with the exception of fewer than 30 corporations, there is very little tangible evidence that this soft AI, or Deep Learning, is currently being used productively in the workplace.
Some of the companies currently selling and/or using soft AI or Deep Learning to enhance their services include: IBM’s Watson, Google Search and Google DeepMind, Microsoft Azure (and Cortana), Baidu Search led by Andrew Ng, Palantir Technologies, possibly Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon for predictive analytics and other services, the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia, Mobileye, and to some extent the light-AI-powered collaborative robots from Rethink Robotics.
There are numerous examples of other companies developing AI and Deep Learning products but less than a hundred early-adopter companies worldwide. Essentially soft AI and Deep Learning solutions, such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies, and many more are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.
On the other hand, Machine Learning (ML), a subfield of AI that some call light AI, is starting to make inroads into organizations worldwide. There are even claims that: “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”
Yet according to Intel, “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015),” and it is highly probable that Google, Facebook, Salesforce, Microsoft and Amazon alone account for a large percentage of that 10 percent.
ML technologies are already in everyday use: location-awareness systems such as Apple’s iBeacon software connect information from a user’s Apple profile to in-store systems and advertising boards, allowing for a ‘personalized’ shopping experience and the tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind’s Machine Learning have both shown how their systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption and empower humans with new analytical capabilities.
The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently ML is beneficial for pattern and speech recognition and predictive analytics. It is therefore very beneficial in search, data analytics and statistics – when there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically by memory) like recognizing spoken words or faces in images.
Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain – which is one goal of AI.
Crossing the chasm – not without lots of data
If driverless vehicles can move around with ever fewer problems, it is not because AI has finally arrived, nor because we have machines capable of human intelligence; it is because we have machines that are very useful for dealing with big data and can make decisions under uncertainty about the perception and interpretation of their environment – but we are not quite there yet! Today we have systems targeted at narrow tasks and domains, not the ‘general-purpose’ AI that was promised, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.
Essentially there’s nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.
Nevertheless, companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and that he is: “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”
The next wave – understanding information
Organizations that have lots of data know that information is always limited, incomplete and possibly noisy. ML algorithms are capable of searching the data and building a knowledge base to provide useful information – for example, ML algorithms can separate spam emails from genuine emails. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
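As a concrete (and deliberately tiny) illustration of an algorithm learning from data, here is a minimal naive Bayes spam filter sketch in Python; the training messages, labels and word counts are invented for illustration, not taken from any real system:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each class by log-prior plus per-word log-likelihood
    (with add-one smoothing) and return the higher-scoring label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented training data: the "representation" here is just a bag of words.
training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday with the team", "ham"),
]
counts, totals = train(training)
print(classify("claim your free money", counts, totals))  # → spam
```

The point of the sketch is the last line of the article’s paragraph: the same algorithm performs well or badly depending entirely on how the data is represented (here, raw word counts).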
Machine Learning algorithms often work on the principle most widely known as Occam’s razor. This principle states that among competing hypotheses that explain known observations equally well, one should choose the “simplest” one. In my opinion this is why we should use machines only to augment human labor and not to replace it.
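The razor can be made concrete as a model-selection rule: among candidate rules that fit the same observations equally well, prefer the one with the fewest conditions. A minimal Python sketch (the hypotheses and data below are invented purely for illustration):

```python
# Invented observations: (email features, is_spam) pairs.
observations = [
    ({"free": True, "links": 9}, True),
    ({"free": True, "links": 0}, True),
    ({"free": False, "links": 1}, False),
]

# Candidate hypotheses: (name, predicate, complexity),
# where complexity counts the number of conditions in the rule.
hypotheses = [
    ("mentions 'free'", lambda m: m["free"], 1),
    ("mentions 'free' and has links", lambda m: m["free"] and m["links"] > 0, 2),
    ("mentions 'free' or has >8 links", lambda m: m["free"] or m["links"] > 8, 2),
]

# Keep only hypotheses that explain every observation...
consistent = [(name, fn, c) for name, fn, c in hypotheses
              if all(fn(m) == label for m, label in observations)]

# ...and, following Occam's razor, prefer the simplest consistent one.
best = min(consistent, key=lambda h: h[2])
print(best[0])  # → mentions 'free'
```

Two of the three rules explain the data equally well; the razor breaks the tie in favor of the one-condition rule.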
Machine Learning and Big Data will greatly complement human ingenuity – a human-machine combination of statistical analysis, critical thinking, inference, persuasion and quantitative reasoning all wrapped up in one.
“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)
The key questions businesses and policymakers need to be concerned with as we enter the new era of Machine Learning and Big Data are:
1) Who owns the data?
2) How is it used?
3) How is it processed and stored?
Update 16th August 2016
There is a very insightful Quora answer by François Chollet:
“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”
Photo credit: this was a screen grab of a conference presentation; I no longer remember the presenter or conference, but if I find it I will update the credit!
Two of the current leading researchers in labor economics studying the impact of machines and automation on jobs have released a new National Bureau of Economic Research (NBER) working paper, The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.
The authors, Daron Acemoglu and Pascual Restrepo, are far from the robot-supporting equivalent of Statler and Waldorf, the Muppets who heckle from the balcony – unless you count their heckling of the many who have overstated, without factual support, the argument that robots will take all the jobs:
Similar claims have been made, but have not always come true, about previous waves of new technologies… Contrary to the increasingly widespread concerns, our model raises the possibility that rapid automation need not signal the demise of labor, but might simply be a prelude to a phase of new technologies favoring labor.
In The Race Between Machine and Man, the researchers set out to build a conceptual framework that shows which tasks previously performed by labor are automated, while at the same time more ‘complex versions of existing tasks’ and new jobs or positions in which labor has a comparative advantage are created.
The authors make several key observations: as ‘low-skilled workers’ are automated out of jobs, the creation of new complex tasks increases wages, employment and the overall share of labor; and as old jobs are eroded, new jobs or positions are created which require higher skills in the short term:
“Automation always reduces the share of labor in national income and employment, and may even reduce wages. Conversely, the creation of new complex tasks always increases wages, employment and the share of labor.”
They show, through their analysis, that for each decade since 1980, employment growth has been faster in occupations with greater skill requirements:
During the last 30 years, new tasks and new job titles account for a large fraction of U.S. employment growth.
In 2000, about 70% of the workers employed as computer software developers (an occupation employing one million people in the US at the time) held new job titles. Similarly, in 1990 a radiology technician and in 1980 a management analyst were new job titles.
Looking at the potential mismatch between new technologies and the skills needed, the authors crucially show that these new highly skilled jobs account for a significant share of total employment growth over the period measured, as shown in Figure 1:
From 1980 to 2007, total employment in the U.S. grew by 17.5%. About half (8.84%) of this growth is explained by the additional employment growth in occupations with new job titles.
Unfortunately, we have known for some time that labor markets are “Pareto efficient”; that is, no one can be made better off without making someone else worse off. Thus Acemoglu and Restrepo point to research showing that when wages are high for low-skill workers, this encourages automation. That automation then leads to promotion or new jobs, and higher wages, for those with ‘high skills.’
Because new tasks are more complex, the creation may favor high-skill workers. The natural assumption that high-skill workers have a comparative advantage in new complex tasks receives support from the data.
The data shows that those classified as high skilled tend to have more years of schooling.
For instance, the left panel of Figure 7 shows that in each decade since 1980, occupations with more new job titles had higher skill requirements in terms of the average years of schooling among employees at the start of each decade (relative to the rest of the economy).
However, it is not all bad news for low-skilled workers: the right panel of the same figure also shows a pattern of “mean reversion,” whereby average years of schooling in these occupations decline in each subsequent decade, most likely reflecting the fact that new job titles become more open to lower-skilled workers over time.
Our estimates indicate that, although occupations with more new job titles tend to hire more skilled workers initially, this pattern slowly reverts over time. Figure 7 shows that, at the time of their introduction, occupations with 10 percentage points more new job titles hire workers with 0.35 more years of schooling. But our estimates in Column 6 of Table B2 show that this initial difference in the skill requirements of workers slowly vanishes over time. 30 years after their introduction, occupations with 10 percentage points more new job titles hire workers with 0.0411 fewer years of education than the workers hired initially.
Essentially low-skill workers gain relative to capital in the medium run from the creation of new tasks.
Overall, the study shows what many have said before: there is a skills gap when new technologies are introduced, and those with the wherewithal to invest in learning new skills – whether through extra education, on-the-job training, or self-learning – are the ones who will be in high demand as new technologies are implemented.
Society is caught between blind faith in technology and resistance to progress, between technological possibilities and fears of technology’s negative impact.
Increasingly, Artificial Intelligence – the latest buzzword for everything software-related – is stirring up many of these fears.
In an interesting paper, Is This Time Different? The Opportunities and Challenges of Artificial Intelligence, Jason Furman, Chairman of President Obama’s Council of Economic Advisers, sets out his belief that we need more artificial intelligence but must find a way to prevent the inequality it will inevitably cause. Despite the labor-market challenges we may need to navigate, Furman’s bigger worry is that we will not invest enough in AI.
He is more pragmatic than many economists and researchers who have written ‘popular’ books on the subject, but calls for more innovation if we are truly to reap the benefits AI and Robotics will bring:
We have had substantial innovation in robotics, AI, and other areas in the last decade. But we will need a much faster pace of innovation in these areas to really move the dial on productivity growth going forward. I do not share Robert Gordon’s (2016) confidently pessimistic predictions or Erik Brynjolfsson and Andrew Mcafee’s (2014) confidently optimistic ones because past productivity growth has been so difficult to predict.
Technology, in other words, is not destiny but it has a price
My worry is not that this time could be different when it comes to AI, but that this time could be the same as what we have experienced over the past several decades. The traditional argument that we do not need to worry about the robots taking our jobs still leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages.
Replacing the Current Safety Net with a Universal Basic Income Could Be Counterproductive
Furman says that AI does not create a case for a Universal Basic Income, and that claims for implementing UBI while cancelling other social welfare programs have been greatly overstated:
AI does not call for a completely new paradigm for economic policy—for example, as advocated by proponents of replacing the existing social safety net with a universal basic income (UBI) —but instead reinforces many of the steps we should already be taking to make sure that growth is shared more broadly.
Replacing part or all of that system with a universal cash grant, which would go to all citizens regardless of income, would mean that relatively less of the system was targeted towards those at the bottom—increasing, not decreasing, income inequality.
Instead our goal should be first and foremost to foster the skills, training, job search assistance, and other labor market institutions to make sure people can get into jobs, which would much more directly address the employment issues raised by AI than would UBI.
Past Innovations Have Sometimes Increased Inequality—and the Indications Suggest AI Could Be More of the Same
Relying on the questionable study by Frey and Osborne, Furman says that the Council of Economic Advisers ranked occupations by wages and found that, according to the Frey and Osborne analysis, 83 percent of jobs paying less than $20 per hour would come under pressure from automation, compared with 31 percent of jobs paying between $20 and $40 per hour and 4 percent of jobs paying above $40 per hour (see Figure 1 below).
AI has not had a large impact on employment, at least not yet
Furman says the issue is not that automation will render the vast majority of the population unemployable; instead, it is that workers will either lack the skills or the ability to successfully match with the good, high-paying jobs created by automation.
The concern is not that robots will take human jobs and render humans unemployable. The traditional economic arguments against that are borne out by centuries of experience. Instead, the concern is that the process of turnover, in which workers displaced by technology find new jobs as technology gives rise to new consumer demands and thus new jobs, could lead to sustained periods of time with a large fraction of people not working.
AI has the potential—just like other innovations we have seen in past decades—to contribute to further erosion in both the labor force participation rate and the employment rate. This does not mean that we will necessarily see a dramatically large share of jobs replaced by robots, but even continuing on the past trend of a nearly 0.2-percentage-point annual decline in the labor force participation rate for prime-age men would pose substantial problems for millions of people and for the economy as a whole.
Investment in AI
Noting that AI has not yet had a significant macroeconomic impact, Furman indicates that the private sector will be the main engine of progress on AI, citing references showing that in 2015 the private sector invested US$2.4 billion in AI, compared with the approximately US$200 million invested by the National Science Foundation (NSF).
He says the government’s role should include policies that support research, foster the AI workforce, promote competition, safeguard consumer privacy, and enhance cybersecurity.
AI does not call for a completely new paradigm for economic policy
AI is one of many areas of innovation in the U.S. economy right now. At least to date, AI has not had a large impact on the aggregate performance of the macroeconomy or the labor market. But it will likely become more important in the years to come, bringing substantial opportunities— and our first impulse should be to embrace it fully.
He indicates that his biggest worry about AI is that we may not get all the breakthroughs we think we can, and that we need to do more to make sure we can continue to make groundbreaking discoveries that will raise productivity growth, improving the lives of people throughout the world.
However, it is also undeniable that like technological innovations in the past, AI will bring challenges in areas like inequality and employment. As I have tried to make clear throughout my remarks, I do not believe that exogenous technological developments solely determine the future of growth, inequality, or employment. Public policy—including public policies to help workers displaced by technology find new and better jobs and a safety net that is responsive to need and ensures opportunity —has a role to play in ensuring that we are able to fully reap the benefits of AI while also minimizing its potentially disruptive effects on the economy and society. And in the process, such policies could also contribute to increased productivity growth—including advances in AI itself.
What are those policies? Furman indicates we need to develop more “human learning and skills”; increase investments in research and development, including government investment, and “expand and simplify the Research and Experimentation tax credit”; “increase the number of visas—which is currently capped by legislation—to allow more high-skilled workers to come into the country”; “consolidate existing funding initiatives”; help retrain workers in the skills employers are looking for; and pursue more focused initiatives such as the “DARPA Cyber Grand Challenge.”
The bottom line is that AI, managed well and with innovative government support, could offer significant benefits to humanity – but those benefits, including earning capacity, can only be achieved if governments and corporations help people up-skill.
For private funding see https://www.cbinsights.com/blog/artificial-intelligence-funding-trends/#funding. For public funding see http://www.nsf.gov/about/budget/fy2017/pdf/18_fy2017.pdf. According to the NSF, in 2015 there was $194.58 million in funding for the NSF Directorate for Computer and Information Science and Engineering’s Division of Information and Intelligent Systems (IIS), much of which is invested in research on AI. These figures do not include investment by other agencies, including the Department of Defense.
The AI chatbot that learned via the Unabomber Manifesto!
Charles and his team actually created a chatbot called cobot. It was really simple, and it was really dumb. But the users wanted it to be smart, they wanted to talk to it. So Charles and his team had to come up with a quick and easy way to make cobot appear smarter than it actually was. So they showed the robot a bunch of texts (they started, weirdly, with the Unabomber manifesto) and trained it to simply pick a few words that you said to it, search for those words in the things it had read, and spit those sentences back at you.
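The retrieval trick described above – pick a few words from what the user said, search the texts the bot has read for those words, and echo back a matching sentence – can be sketched in a few lines of Python. Only the idea comes from the article; the corpus, inputs and function names below are invented for illustration:

```python
import re

# A tiny stand-in for the texts the bot "read" (the real bot started,
# weirdly, with the Unabomber manifesto).
corpus = [
    "Technology has changed the way people live and work.",
    "The weather today is bright and clear.",
    "Robots and automation are reshaping entire industries.",
]

def tokenize(text):
    """Lowercase the text and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(user_input):
    """Return the corpus sentence sharing the most words with the input."""
    words = tokenize(user_input)
    return max(corpus, key=lambda s: len(words & tokenize(s)))

print(reply("What do you think about robots and automation?"))
# → Robots and automation are reshaping entire industries.
```

The bot has no understanding at all; keyword overlap alone is enough to make the replies look on-topic, which is exactly why users found it smarter than it was.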
A fallback plan for when 95% of human labor isn’t valued or needed due to automation
Basic Income is not necessarily my ideal scenario, but Andrew gives a terrific overview of the pros and cons in this excellent article. The 95% figure comes from Y Combinator Manager, Matt Krisiloff!
Silicon Valley techies hope a guaranteed income would cushion the blow as automation replaces human jobs. Those with a more utopian bent, such as the organizers of the Swiss referendum, want to open up more options, to let people create art and free the world of what Daniel Straub calls “bullshit jobs.” (Andrew Flowers at FiveThirtyEight)
Stanford’s robotic diver recovers treasures from King Louis XIV’s wrecked ship
OceanOne looks something like a robo-mermaid. Roughly five feet long from end to end, its torso features a head with stereoscopic vision that shows the pilot exactly what the robot sees, and two fully articulated arms… Every aspect of the robot’s design is meant to allow it to take on tasks that are either dangerous – deep-water mining, oil-rig maintenance or underwater disaster situations like the Fukushima Daiichi power plant – or simply beyond the physical limits of human divers. (Stanford News)
And just think: Frey and Osborne said there is only an 18% probability that commercial divers will lose their jobs to robots!
Progress in A.I. will affect society profoundly
The first wave of AI is already beginning to pervade our lives inconspicuously, from speech recognition and search engines to image classification. Self-driving cars and applications in health care are within sight, and subsequent waves could transform vast sectors of the economy, science and society. These could offer substantial benefits — but to whom? (Nature – Editorial)
In your old age what happens if your carer just happens to be a robot?
“There’s a pressing requirement for robots in the social care of the elderly, partly because we have fewer people of working age,” says Tony Belpaeme, a professor in intelligent and autonomous control systems at Plymouth University. Traditionally among the poorest paid of the workforce, carers are an ever more scarce resource.
Policy makers have begun to cast their eyes towards robots as a possible source of compliant and cheaper help. (Geoff Watts in The Atlantic)
What are you reading?
Drone Traffic Management
This is actually quite a big deal – could new jobs be created in Drone Traffic Control?
NASA recently successfully demonstrated rural operations of its unmanned aircraft systems (UAS) traffic management (UTM) concept, integrating operator platforms, vehicle performance and ground infrastructure.
With continued development, the Technical Capability Level One system would enable UAS operators to file flight plans reserving airspace for their operations and provide situational awareness about other operations planned in the area. (NASA Ames Research Center)
Bookshelf: Here Come the Robots
Just when I’ve been thinking about creating a robot book for children, along come three!
Heavy construction machinery — bulldozers, diggers, tractors and the like — seem to have cornered the market when it comes to mechanical objects that can be made into emotionally responsive, strikingly human characters in children’s books. But what about the robots? Here in the 21st century, when our vacuums are de facto robots and our cars may well soon be too, when certain parents are as likely to dream of their child learning to code as they are to dream of their child learning Mandarin, shouldn’t robots be getting more picture-book love? (New York Times)
Opening Pandora’s AI Box in Oxford
About three months ago, Dr Simon Stringer, a leading scientist in the field of artificial intelligence at the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, fell down some stairs and broke his leg.
The convalescence period proved unexpectedly fruitful.
Freed from the daily rigmarole of academic life, you see, Dr Stringer’s mind was able to wander. And so it was, when he least expected it, that the solution to one of the biggest challenges in artificial intelligence — the so-called binding problem — struck him out of the blue. (Izabella Kaminska at FT Alphaville)
Will artificial intelligence bring us utopia or destruction?
An interesting (long read) discussion featuring Nick Bostrom’s work on AI and Superintelligence.
Can a digital god really be contained?
He (Bostrom) imagines machines so intelligent that merely by inspecting their own code they can extrapolate the nature of the universe and of human society, and in this way outsmart any effort to contain them. “Is it possible to build machines that are not like agents—goal-pursuing, autonomous, artificial intelligences?” he asked me. “Maybe you can design something more like an oracle that can only answer yes or no. Would that be safer? It is not so clear. There might be agent-like processes within it.” Asking a simple question—“Is it possible to convert a DeLorean into a time machine and travel to 1955?”—might trigger a cascade of action as the device tests hypotheses. What if, working through a police computer, it impounds a DeLorean that happens to be convenient to a clock tower? “In fairy tales, you have genies who grant wishes,” Bostrom said. “Almost universally, the moral of those is that if you are not extremely careful what you wish for, then what seems like it should be a great blessing turns out to be a curse.” (New Yorker)
Most people, most of the time, make decisions with little awareness of what they are doing: driving on autopilot, brushing our teeth, and so on. We are often not ‘mindful’ in such circumstances, yet most of our judgments and actions are appropriate most of the time. But not always!
While we meander along on autopilot, researchers in Artificial Intelligence seek to create human-level intelligence in their machines. Some even speak of human-level consciousness as the goal for A.I., while others consider machines still as mindless as toothpicks. Professor Stuart Russell argues that his own motivation for the study of A.I., and that of researchers in the field, should be:
“To create and understand intelligence as a general property of systems, rather than as a specific attribute of humans. I believe this to be an appropriate goal for the field as a whole.”
Professor Russell, co-author with Peter Norvig of the seminal textbook in Artificial Intelligence, has released a new paper, Rationality and Intelligence: A Brief Update, which describes his ‘informal conception of intelligence and reduces the gap between theory and practice,’ as well as describing ‘promising recent developments.’
Setting the A.I. scene
In his paper Russell gives a clear statement of the goal of early A.I. researchers:
“The standard (early) conception of an AI system was as a sort of consultant: something that could be fed information and could then answer questions. The output of answers was not thought of as an action about which the AI system had a choice, any more than a calculator has a choice about what numbers to display on its screen given the sequence of keys pressed.”
To some extent a recent paper by Facebook Artificial Intelligence Researchers Jason Weston, Sumit Chopra and Antoine Bordes entitled “Memory Networks” demonstrates the concept:
Memory Networks use a kind of associative memory to store and retrieve internal representations of observations. An interesting aspect of Memory Networks is that they can learn simple forms of “common sense” by “observing” descriptions of events in a simulated world. The system is trained to answer questions about the state of the world after having been told a sequence of events happening in that world. It automatically learns simple regularities in the world, so that given “Antoine picks up the bottle and walks into the kitchen with it” and the question “where does he take the bottle?”, the answer could be “the bottle is going to be/will be in the kitchen.”
Here is an example of what the system can do. After having been trained, it was fed the following short story containing key events in JRR Tolkien’s Lord of the Rings:
Bilbo travelled to the cave.
Gollum dropped the ring there.
Bilbo took the ring.
Bilbo went back to the Shire.
Bilbo left the ring there.
Frodo got the ring.
Frodo journeyed to Mount-Doom.
Frodo dropped the ring there.
Frodo went back to the Shire.
Bilbo travelled to the Grey-havens.
After seeing this text, the system was asked a few questions, to which it provided the following answers:
Q: Where is the ring? A: Mount-Doom
Q: Where is Bilbo now? A: Grey-havens
Q: Where is Frodo now? A: Shire
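The bookkeeping such a system must learn can be hand-coded for this toy story. The sketch below is illustrative only — the rules and event encoding are my own, whereas the actual Memory Networks model learns these regularities from data rather than following explicit rules. It tracks each entity's last known location and lets carried objects move with their carrier:

```python
def answer(events, question_entity):
    """Return the last known location of an entity after a sequence of events."""
    location = {}  # entity -> last known place
    holder = {}    # object -> entity currently carrying it, or None
    for actor, verb, arg in events:
        if verb in ("travelled", "went", "journeyed"):
            location[actor] = arg
            # objects the actor is carrying move with them
            for obj, h in holder.items():
                if h == actor:
                    location[obj] = arg
        elif verb == "took":
            holder[arg] = actor
            location[arg] = location.get(actor)
        elif verb in ("dropped", "left"):
            holder[arg] = None
            location[arg] = location.get(actor)
    return location.get(question_entity)

story = [
    ("Bilbo", "travelled", "cave"),
    ("Gollum", "dropped", "ring"),
    ("Bilbo", "took", "ring"),
    ("Bilbo", "went", "Shire"),
    ("Bilbo", "left", "ring"),
    ("Frodo", "took", "ring"),
    ("Frodo", "journeyed", "Mount-Doom"),
    ("Frodo", "dropped", "ring"),
    ("Frodo", "went", "Shire"),
    ("Bilbo", "travelled", "Grey-havens"),
]

# answer(story, "ring")  -> "Mount-Doom"
# answer(story, "Bilbo") -> "Grey-havens"
# answer(story, "Frodo") -> "Shire"
```

The point of the learned model is precisely that none of this tracking logic has to be written by hand; it emerges from training on event sequences and question/answer pairs.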
Another example of a neural net with memory is the recent Google/DeepMind paper, “Neural Turing Machines.” It is quite a bit more complicated than Memory Networks, and has not been demonstrated (at least not in public) to work on tasks such as question answering, but it is fair to assume this is one of Google’s goals given their desire to create the Star Trek computer.
Beyond the Turing Test
Setting out his informal conception of intelligence and a definition of artificial intelligence, Russell explains that:
“A definition of intelligence needs to be formal—a property of the system’s input, structure, and output—so that it can support analysis and synthesis. The Turing test does not meet this requirement.”
He further lays out the steps the A.I. research community has taken towards defining what machine artificial intelligence is (and by default is not).
Russell then sets out an update to the four areas he has previously outlined as being the core areas of rationality to discuss in order to create artificial intelligence (Russell 1997).
Although he previously gave credit to Bounded Rationality, Russell has now omitted it in favor of what he calls metalevel rationality. He previously summarized Herb Simon’s work on Bounded Rationality as follows:
Bounded rationality. “Herbert Simon rejected the notion of perfect (or even approximately perfect) rationality and replaced it with bounded rationality, a descriptive theory of decision making by real agents.” Simon wrote:
The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world, or even for a reasonable approximation to such objective rationality.
Simon suggested that bounded rationality works primarily by satisficing — that is, deliberating only long enough to come up with an answer that is “good enough.”
Herb Simon won the Nobel Prize in economics for this work, and bounded rationality appears to be a useful model of human behavior in many cases. But Russell says it is not a formal specification for intelligent agents, because the theory does not define what counts as “good enough.” Furthermore, satisficing seems to be just one of a large range of methods for coping with bounded resources.
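Satisficing is easy to sketch in code. The example below is illustrative only — the options, scoring function, and aspiration level are all invented for the sketch: deliberation stops at the first option that meets the aspiration level, even if better options remain unexamined.

```python
def satisfice(options, score, aspiration):
    """Return the first option whose score meets the aspiration level."""
    for opt in options:              # deliberate over one option at a time
        if score(opt) >= aspiration:
            return opt               # "good enough": stop deliberating early
    return max(options, key=score)   # nothing sufficed: fall back to the best seen

options = [3, 7, 12, 25, 9]
choice = satisfice(options, score=lambda x: x, aspiration=10)
# choice is 12: the first "good enough" option, even though 25 scores higher
```

This also makes Russell's objection concrete: the procedure is well-defined only once someone supplies the aspiration level, and the theory itself does not say where that threshold comes from.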
The four areas Russell outlines in his new paper are:
- Perfect rationality. A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given the information it has acquired from the environment. Russell says that the calculations necessary to achieve perfect rationality in most environments are too time-consuming, so perfect rationality is not a realistic goal.
- Calculative rationality. Russell writes that a “calculatively rational agent eventually returns what would have been the rational choice… at the beginning of its deliberation.” This is an interesting property for a system to exhibit, but in most environments the right answer at the wrong time is of no value. He explains that in practice, “A.I. system designers are forced to compromise on decision quality to obtain reasonable overall performance; unfortunately, the theoretical basis of calculative rationality does not provide a well-founded way to make such compromises.”
- Metalevel rationality (also called Type II rationality by I. J. Good, Alan Turing’s long-term collaborator): the capacity to select the optimal combination of computation-sequence-plus-action, under the constraint that the action must be selected by the computation.
- Bounded optimality. Russell writes that: “A bounded optimal agent behaves as well as possible, given its computational resources. That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected utility of any other agent program running on the same machine.”
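Metalevel rationality can be illustrated with a simple anytime loop. The sketch below is a toy of my own construction, not Russell's formalism: the agent keeps refining its decision only while the estimated value of one more computation step exceeds the cost of the time that step consumes.

```python
def metalevel_decide(refine, estimate_gain, time_cost, initial):
    """Refine a decision while further computation is estimated to pay for itself."""
    decision = initial
    while estimate_gain(decision) > time_cost:  # is more thinking worth the time?
        decision = refine(decision)             # if so, compute one more step
    return decision                             # otherwise act now

# Toy use: each refinement halves the remaining error of a numeric estimate
# whose true value is 100 (all numbers here are invented for the example).
decision = metalevel_decide(
    refine=lambda d: (d + 100.0) / 2,            # move halfway toward 100
    estimate_gain=lambda d: abs(100.0 - d) / 2,  # improvement one step would buy
    time_cost=1.0,
    initial=0.0,
)
# the loop stops once another step would improve the estimate by less than the time cost
```

The constraint in Russell's definition — that the action must be selected by the computation — shows up here as the fact that the returned decision is whatever the last refinement produced, not some separately computed optimum.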
Of these four possibilities, Russell says “bounded optimality seems to offer the best hope for a strong theoretical foundation for A.I.” It has the advantage of being achievable: there is always at least one best program, something that perfect rationality lacks. Bounded optimal agents are actually useful in the real world, whereas calculatively rational agents usually are not, and satisficing agents may or may not be, depending on how ambitious they are. Russell writes that if a true science of intelligent agent design is to emerge, it will have to operate in the framework of bounded optimality:
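The definition of a bounded optimal program can be stated compactly (the notation below is my paraphrase of the Russell and Subramanian setting, not a verbatim quotation): the bounded optimal program is the one whose agent achieves the highest expected utility among all programs the machine can run,

```latex
l_{\mathrm{opt}} = \operatorname*{argmax}_{l \,\in\, \mathcal{L}_M} V\bigl(\mathrm{Agent}(l, M),\, E,\, U\bigr)
```

where \(\mathcal{L}_M\) is the set of programs runnable on machine \(M\), \(\mathrm{Agent}(l, M)\) is the agent produced by running program \(l\) on \(M\), and \(V\) is its expected utility over the environment class \(E\) under utility function \(U\). Because \(\mathcal{L}_M\) is non-empty, the argmax always exists — which is exactly the "at least one best program" advantage noted above.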
“My work with Devika Subramanian placed the general idea of bounded optimality in a formal setting and derived the first rigorous results on bounded optimal programs (Russell and Subramanian, 1995). This required setting up completely specified relationships among agents, programs, machines, environments, and time. We found this to be a very valuable exercise in itself. For example, the informal notions of “real-time environments” and “deadlines” ended up with definitions rather different than those we had initially imagined. From this foundation, a very simple machine architecture was investigated in which the program consists of a collection of decision procedures with fixed execution time and decision quality.”
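The “very simple machine architecture” mentioned in the quote can be sketched as follows. This is an illustrative reconstruction with an invented procedure table, not code from the paper: each decision procedure has a fixed execution time and a fixed decision quality, and under a deadline the best choice is the highest-quality procedure that still finishes in time.

```python
def best_procedure(procedures, deadline):
    """Pick the highest-quality decision procedure that meets the deadline."""
    feasible = [p for p in procedures if p["time"] <= deadline]
    if not feasible:
        return None  # no procedure can decide in time
    return max(feasible, key=lambda p: p["quality"])

# Hypothetical procedure table: execution times (seconds) and decision qualities.
procedures = [
    {"name": "reflex",      "time": 0.01, "quality": 0.3},
    {"name": "greedy",      "time": 0.1,  "quality": 0.6},
    {"name": "full_search", "time": 5.0,  "quality": 0.99},
]

choice = best_procedure(procedures, deadline=0.5)
# picks "greedy": "full_search" scores higher but would miss the deadline
```

Note how the optimization is over the table of programs, not over individual actions — the same shift from optimal actions to optimal programs that the closing paragraph emphasizes.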
Professor Russell’s paper offers a very detailed analysis of A.I. work to date and the options for the near future. In a reminder to the A.I. community about the controls we will need to maintain over machines, Russell proposes bounded optimality as a formal task for artificial intelligence research that is both well-defined and feasible. Bounded optimality specifies optimal programs rather than optimal actions. Actions are generated by programs, and it is over programs that designers have control – for now!