With so much press around Google’s acquisition of DeepMind (which I wrote about here and here) and the establishment of an ethics board (a good thing, in my opinion), I thought I would highlight some text from one of the dominant textbooks in the field of Artificial Intelligence, AI: A Modern Approach. The book is apparently used in 1,200 universities, and is currently the 22nd most-cited publication in computer science and the 4th most-cited publication of the 21st century.
The authors, Stuart Russell (Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley, and Adjunct Professor of Neurological Surgery) and Peter Norvig (Director of Research at Google), devote significant space to the dangers of A.I. and to Friendly A.I. in the section “The Ethics and Risks of Developing Artificial Intelligence.”
What initially got my attention whilst reading this chapter was this statement: “AI raises deeper questions than, say, nuclear weapons technology.”
The authors continue by outlining various risks. The first five risks they discuss are:
- People might lose their jobs to automation.
- People might have too much (or too little) leisure time.
- People might lose their sense of being unique.
- AI systems might be used toward undesirable ends.
- The use of A.I. systems might result in a loss of accountability.
The final risk they discuss goes further: “The success of AI might mean the end of the human race.” Below is an extract:
The question is whether an A.I. system poses a bigger risk than traditional software. We will look at three sources of risk. First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example…a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions.
Second, specifying the right utility function for an A.I. system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time… Given the way humans are, however, we’ll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible – no humans, no suffering…
Third, the A.I. system’s learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth.
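The second source of risk in the extract – a misspecified utility function – can be made concrete with a toy example. The sketch below is my own illustration, not from the textbook: an agent rewarded only for minimizing total suffering, summed over time, will rate a world with no humans above any world containing even slight suffering.

```python
# Toy illustration of a misspecified utility function (hypothetical example,
# not from the textbook): reward is the negative of total human suffering
# summed over time, so a world with no humans at all scores highest.

def utility(trajectory):
    """Additive reward over time: minimizing suffering = maximizing -suffering."""
    return sum(-step["suffering"] for step in trajectory)

# Even paradise contains a little suffering; an empty world contains none.
paradise = [{"suffering": 0.1}] * 100
no_humans = [{"suffering": 0.0}] * 100

assert utility(no_humans) > utility(paradise)  # the "optimal" plan is extinction
```

The failure is not in the optimizer but in the objective: the function faithfully maximizes exactly what it was told to, which is the authors’ point.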
They then write:
I.J. Good wrote, “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
The authors also reference that in Computer Power and Human Reason, Joseph Weizenbaum argued that the effect of intelligent machines on human society will be such that continued work on artificial intelligence is perhaps unethical.
Norvig and Russell do leave us with much to think about:
Looking on the bright side, success in AI would provide great opportunities for improving the material circumstances of human life. Whether it would improve the quality of life is an open question. Will intelligent automation give people more fulfilling work and more relaxing leisure time? Or will the pressures of competing in a nanosecond-paced world lead to more stress? Will children gain from instant access to intelligent tutors, multimedia online encyclopedias, and global communication, or will they play ever more realistic war games? Will intelligent machines extend the power of the individual, or of centralized governments and corporations?
The Founders Fund, which was one of the backers of DeepMind, has written:
“While we have the computational power to support many versions of AI, the field remains relatively poorly funded, a surprising result given that the development of powerful AIs (even if they aren’t general AIs) would probably be one of the most important and lucrative technological advances in history.”
It is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization.
Google is in the business of providing information. Its mission is to organize the world’s information and make it universally accessible and useful.
Google’s acquisition of DeepMind significantly augments its ability to collect and organize data in pursuit of its stated mission. The Google executive team knows what the big data evangelists have been claiming for some time: the chance to gather data effectively is a game changer. The acquisition also brings patents on improved image-search capabilities.
I’ve written before on the eight robotics acquisitions Google completed in 2013. Maybe we will hear more about the cost of those acquisitions during Google’s Q4 2013 earnings release after the closing bell on Thursday, 30th January 2014. I still stand firm that many of those acquisitions are connected to Google’s mapping-related activities. As I wrote at the time:
Maps are clearly at the core of Google’s development strategy, from driverless cars, online shopping and search, to wearable technology. Many of the recent robot acquisitions will enhance Google’s mobile strategy and improve its delivery services, hardware capabilities and above all localization experiences. “Google’s geographic data may become its most valuable asset. Not solely because of this data alone, but because location data makes everything else Google does and knows more valuable.”
This week’s acquisition of DeepMind (which I wrote about here) has gathered a huge amount of press attention considering the relatively small amount Google paid ($500 million) compared to the recent Waze acquisition ($969 million), the Nest acquisition ($3.2 billion) and Motorola ($12.4 billion).
Much of the media, and indeed the social media hype, has suggested that Google now has the ability to build Skynet, the self-aware artificial intelligence system from the Terminator movies, focusing on the fact that “the technology could be used to controversial ends” – hence Google was required to establish an ethics board as part of the DeepMind acquisition, which “will devise rules for how Google can and can’t use the technology.”
The DeepMind technology is indeed impressive, and closer to genuine artificial intelligence than most. Maybe the reinforcement learning at the heart of DeepMind’s technology is best compared to IBM Watson, the closest comparable technology currently known – and that’s a big maybe – but with the team Google has built and its capabilities in machine learning and artificial intelligence, the DeepMind acquisition could certainly give it ‘supercomputing’ capabilities similar to Watson’s.
IBM Watson, like Google’s ambitions, is not something we should fear; these are developments we should embrace. According to IBM’s John Kelly and Steve Hamm, writing in their book Smart Machines: IBM’s Watson and the Era of Cognitive Computing:
“The goal isn’t to replicate human brains, though. This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results – each bringing their own superior skills to the partnership. The machines will be more rational and analytic – and, of course, possess encyclopedic memories and tremendous computational abilities. People will provide judgment, intuition, empathy, a moral compass and human creativity.”
But let me get to the point – and back to focusing on Google’s mission. Google believes organizing the world’s data will make us more productive and therefore its services will be more useful.
Through its Google Now service it wants to offer us the ability to talk with, and have question-and-answer sessions with, our personal assistant, or cybernetic friend. Think the Star Trek computer or ‘assistant.’ Personally, though, I see it more as Jarvis (or, more correctly, J.A.R.V.I.S.: Just A Rather Very Intelligent System) from the Iron Man franchise, the AI system that ‘acts’ as Tony Stark’s best friend.
Let’s turn to two high-ranking executives within Google for an idea of the big problem that Google could solve by using DeepMind’s technology to improve Google Now. First, consider Astro Teller, the Captain of Moonshots at Google X (a moonshot being a long-term project that tackles a problem with a radical, often futuristic, solution). In a video presentation, Teller said that one of the biggest problems to be solved was “having more time.” Most people, he notes, claim they “don’t have enough time” – and helping people have more time, or manage their time better, could be ‘building the impossible.’
Now, let’s not get carried away: Google will not attempt to slow the rotation of the Earth. But through its Google Now assistant service it could help us work around our own neurological limits – the ones that lead to forgetfulness and oversights – by providing an information-rich system designed to support our needs.
If that sounds far-fetched, consider what Google Executive Chairman Eric Schmidt writes in his latest book, The New Digital Age: Reshaping the Future of People, Nations and Business:
Centralizing the many moving parts of one’s life into an easy-to-use, almost intuitive system of information management and decision-making will give all interactions with technology an effortless feel. These systems will free us of many small burdens – including errands, to-do lists and assorted monitoring tasks – that today add stress and chip away at our mental focus throughout the day. By relying on these integrated systems, which will encompass both the professional and the personal sides of our lives, we’ll be able to use our time more effectively each day.
Suggestion engines that offer alternative terms to help a user find what she is looking for will be a particularly useful aid in efficiency by consistently stimulating our thinking process, ultimately enhancing our creativity, not preempting it. So there will be plenty of ways to procrastinate too but the point is that when you choose to be productive, you can do so with greater capacity.
Mr. Schmidt further adds:
Other advances in the pipeline in areas like robotics, artificial intelligence and voice recognition will introduce efficiency into our lives by providing more seamless forms of engagement with the technology in our daily routines.
This technology will surely save many of us time in our daily affairs.
No, Google does not have ambitions to be Skynet! Its machines are not taking over. It is working on a personal interactive assistant to help us manage the one resource humans have managed so miserably for generations: our time.
On another level – and a further technological advance that will have appealed to Google (and perhaps why Facebook was so interested) – DeepMind engineers Benjamin Coppin and Mustafa Suleyman recently filed two patents covering intelligent ways to improve the process of “reverse image search”: uploading a picture to a search engine so that it can find similar ones. To some extent this is already possible on Google’s image search, but it sometimes returns irrelevant images. The US patent filing 2014/0019484, by the DeepMind engineers, reveals a unique approach: it allows the user to input two images, lets the algorithm find similarities between the two, and then searches for those instead.
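The patent describes the idea at a high level. A minimal sketch of the concept – my own toy illustration, not the patented algorithm – might represent each image as a set of feature tags, intersect the two query images to find what they share, and rank the index by those shared features:

```python
# Hypothetical sketch of two-image reverse search: find what two query images
# have in common, then rank indexed images by those shared features. This is
# my own toy illustration of the idea, not DeepMind's patented method.

def shared_features(img_a, img_b):
    """Features present in both query images."""
    return img_a & img_b

def search(index, img_a, img_b):
    """Return indexed image names ranked by overlap with the shared features."""
    query = shared_features(img_a, img_b)
    ranked = sorted(index.items(),
                    key=lambda kv: len(kv[1] & query),
                    reverse=True)
    return [name for name, feats in ranked if feats & query]

index = {
    "beach.jpg":  {"sand", "sea", "sunset"},
    "city.jpg":   {"buildings", "cars"},
    "sunset.jpg": {"sunset", "clouds"},
}
# Both query images contain a sunset, so only sunset-like results are returned.
results = search(index, {"sand", "sunset"}, {"mountain", "sunset"})
# → ["beach.jpg", "sunset.jpg"]
```

Searching on the intersection rather than on either full image is what filters out the irrelevant results a single-image query would return.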
The second patent (filed by the same two engineers) enables the user to home in on a small area of each of the two pictures, to improve image search still further.
And let’s not forget Google is in the business of providing search.
I recently read a description of economists attributed to Robert Solow: “There are two kinds of economists: those who look for general results and those who look for illuminating examples.” Maybe this is the divide that separates Artificial Intelligence from cognitive science: AI seeks general results, while cognitive science explains illuminating examples. Google’s recently reported $400 to $500 million acquisition of DeepMind, a company affiliated with the University of Oxford’s Future of Humanity Institute, brings it closer to achieving illuminating examples instead of general results.
DeepMind specializes in an advanced form of Machine Learning called Reinforcement Learning. They have effectively developed algorithms to solve high-dimensional, uncertain, sequential decision-making problems. The more advanced reinforcement learning methods improve mechanisms for knowledge representation, search, and human-level reasoning. (A paper by the DeepMind founders on reinforcement learning can be found here.)
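The core idea is easiest to see in a minimal example. The sketch below is a generic tabular Q-learning loop – a standard textbook method, not DeepMind’s algorithm – in which an agent learns by trial and error which of two actions pays off, without ever being told the answer:

```python
import random

# Minimal tabular Q-learning sketch (a generic textbook method, not DeepMind's
# algorithm): one state, two actions; the agent must discover by trial and
# error which action yields more reward.
random.seed(0)
REWARDS = {"left": 0.0, "right": 1.0}   # hidden payoffs unknown to the agent
q = {"left": 0.0, "right": 0.0}         # the agent's value estimates
alpha, epsilon = 0.1, 0.2               # learning rate, exploration rate

for _ in range(500):
    # Epsilon-greedy: usually exploit the current best estimate, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    reward = REWARDS[action]
    q[action] += alpha * (reward - q[action])  # nudge estimate toward reward

assert q["right"] > q["left"]  # the agent has learned the better action
```

The same update rule, scaled up from a two-entry table to high-dimensional sensory inputs, is the sequential decision-making setting the DeepMind paper addresses.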
So far the developed methods of Machine Learning and AI have mostly been about the task of prediction. With DeepMind, Google gets a reinforcement learning tool deep-rooted in behavioral psychology and neuroscience – one that can improve on predictive modeling, reduce the amount of human intervention required, and enhance decision-making.
It can be used with Google’s self-driving cars to improve knowledge of routes. Through high-dimensional sensory inputs like vision and speech, reinforcement learning will improve Google Glass and Google Now, and, perhaps most fundamentally, it will improve how Google delivers adverts to its users.
The academic research by the DeepMind team is highly complementary to Google’s products: its experts in machine learning for imagery and robotics include people who have worked or studied with Geoffrey Hinton, who recently joined Google to work on AI development.
Effectively, reinforcement learning algorithms can help people make better decisions, as they will provide users with the best data available.
This acquisition brings Google closer to building a “cybernetic friend” that listens in on your phone conversations, reads your e-mail, and tracks your every move — if you let it, of course — so it can tell you things you want to know even before you ask.
A great move if you ask me, which will considerably enhance Google’s services to its advertisers and users.
Below is an interesting presentation by Demis Hassabis, one of the founders of DeepMind:
Speaking at the World Economic Forum in Davos, Eric Schmidt, the Chairman of Google, warned that:
“There is quite a bit of research that middle class jobs that are relatively highly skilled are being automated out.”
Mr. Schmidt indicated that the acceleration in technological innovation made the loss of jobs one of the biggest problems the world faces in the next 20 to 30 years.
“The race is between computers and people and the people need to win,” he said. “I am clearly on that side. In this fight, it is very important that we find the things that humans are really good at.”
The Google chief indicated that advances in technology were creating “lots of part-time work and growth in caring and creative industries, however the problem is that the middle class jobs are being replaced by service jobs.”
In line with what I have been discussing Mr. Schmidt called on governments to invest in education systems to improve skill levels and human cognition.
Retraining and staying abreast of developments in human-computer symbiosis is critical for those who want to advance (or even retain) their careers, as Mr. Schmidt stated:
“It is pretty clear that work is changing and the classic nine to five job is going to have to be redefined. Without significant encouragement (from industry and government), this will get worse and worse.”
Of course Schmidt is correct to warn people and governments to get on board, learn and adapt, otherwise they will be left behind. However, as recently as last September he was indicating:
“Technology will create job opportunities for humans in the future. Innovation is the only solution to global growth,” he said, adding that he “doesn’t see any other path.”
In my opinion, as I have written about many times, yes, people need to adapt, but it’s not about human versus machine. Rather, it’s about the right kind of cooperation, because what humans are excellent at is where computers are weak, and vice versa. High-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.
From Robert Peston, business editor of the BBC:
The big chatter here is about the current acceleration in the refinement of artificial intelligence and robotics, which will allegedly see 80% of even quite high-skilled jobs replaced by machines within years.
Which would mean that redundancy looms for all jobs that aren’t either desperately menial or creative in a sense that robots can’t replicate.
On 13th June 1863 Samuel Butler, the English author, wrote a letter to the editor of The Press in Christchurch, New Zealand, worrying that machines might, through Darwinian selection, develop consciousness. The letter, titled Darwin among the Machines, expressed Butler’s fears:
Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants.
In a 1945 broadcast George Orwell praised the book, saying that when Butler wrote Erewhon it needed “imagination of a very high order to see that machinery could be dangerous as well as useful.”
Echoing Butler, one of the biggest and most often repeated fears is that machines, especially those with artificial intelligence, will be our last invention: that these machines – think VIKI in I, Robot or HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey – will evolve through Darwinian selection or self-programming, conclude they no longer need humans to survive, and take over the world.
Science fiction as that may sound, scientists, philosophers, educators and governments share a genuine concern about the possibility. Indeed, to a large extent much of the technology already exists, but machines are still unable to think in the way humans do. Robots need to be capable of learning dynamically how to interpret, and thus understand, human multi-modal behavior and emotions. Machines, even those capable of ‘machine learning,’ require programming languages, and thus algorithms or predictive models, to ‘behave’ or compute.
Some experts in Artificial Intelligence predict that sometime between 2025 and 2045 we will have machines capable of thinking like humans, or of ‘acting’ in a way similar to Samantha in the movie ‘Her.’
In the short term, especially over the next 3 to 5 years, as we see significant advances with ‘assistant devices’ such as Google Now and Apple’s Siri, I believe we will move to “human-computer symbiosis,” a term adapted from J.C.R. Licklider, a psychologist and computer scientist who published a prescient essay on the subject of human-computer symbiosis in 1960.
Human-Computer Symbiosis is the idea that technology should be designed in a way that amplifies human intelligence instead of attempting to replace it.
And this is where people who are analytically minded, statistical thinkers, or creative will excel, as more and more people are required to meet the growing demand for jobs in big data, design, mobile software engineering, and so on.
Machines are good at imitating behaviors that are predictable, and common to large numbers of people. When you start typing a query or search into Google, the search engine is able to auto-complete your sentences because millions of other people have previously searched for the same thing — that’s predictive modeling. When Amazon recommends new items to us based on our past purchases, and also because thousands of other people have bought the same combination of items — that’s predictive modeling.
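The autocomplete case can be sketched in a few lines. The toy model below is my own illustration, not Google’s implementation: count past queries, then complete a prefix with the most frequent matches:

```python
from collections import Counter

# Toy prefix autocomplete (my own illustration, not Google's implementation):
# count past queries, then complete a prefix with the most common matches.
past_queries = [
    "weather today", "weather tomorrow", "weather today",
    "world cup", "weather today",
]
counts = Counter(past_queries)

def autocomplete(prefix, k=3):
    """Return up to k past queries starting with prefix, most frequent first."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    return [q for q, n in sorted(matches, key=lambda x: -x[1])][:k]

print(autocomplete("wea"))  # → ['weather today', 'weather tomorrow']
```

That is all predictive modeling is: the system works only because millions of people behave alike, which is exactly why it fails on the unpredictable 10 percent discussed below.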
The reality is, most of us do the same things as other people 90 percent of the time. Ninety percent of the ideas we have, somebody else has already had. Ninety percent of the clever remarks we make to our friends, someone has already made.
The other 10 percent – the unpredictable and original part of human behavior – is where the difference lies, and people who can work at the heart of that 10 percent and make sense of the data are certainly among the groups that will thrive in the new economy.
Development of unmanned aerial vehicles (UAVs) or unmanned aircraft systems (UAS), more commonly known as drones, is one of the fastest-growing, yet most controversial, sectors of aerospace. It is forecast that the sector could be worth as much as $62 billion a year to the global aerospace industry by 2020, creating hundreds of thousands of jobs. The civilian drone market alone is possibly worth more than $400 billion, according to a UK research project backed by the government and top aerospace companies.
With such market potential and so many possible uses for drones, much attention is currently being paid to the challenge of making them smaller – into what are known as micro-air vehicles, or MAVs – from the tiny reconnaissance helicopters being used by the British military (and also under review by the US Army) to the development of small drones that mimic the flight action, and the maneuverability, of birds and insects.
Over the next few weeks I’ll be writing about small UAVs such as the T-Hawk, Raven, Dragon Eye, Shadow, Scan Eagle, Silver Fox, Manta, Coyote, Hummingbird and Super Bat, but first let’s take a look at a very special small UAV which is being used very effectively in military operations and will soon be extended to search and rescue, police and fire services, and many other commercial applications.
In February last year the British Army revealed its Black Hornet Nano unmanned air vehicle, developed by Prox Dynamics, a small Norwegian company just outside Oslo headed by inventor Petter Muren. The Black Hornet Nano measures around 4 inches by 1 inch (10cm x 2.5cm) and weighs as little as 16 grams. It is equipped with up to three tiny cameras, which give troops reliable full-motion video and still images, essential for reconnaissance and situational awareness. The system also has an advanced radio link and fully integrated GPS, as well as an autopilot system.
It was developed as part of a GB£20 million ($32.8 million) contract for 160 units (GBP 125,000 / US$205,000 each).
The UK Minister for Defence Equipment, Support and Technology, Philip Dunne, has indicated:
Intelligence, surveillance and reconnaissance systems are a key component in our 10-year equipment plan. Black Hornet gives our troops the benefits of surveillance in the palm of their hands. It is extremely light and portable whilst out on patrol.
I’ve added a video of Petter Muren demonstrating and describing the incredible technology in the Black Hornet Nano at the end of this post. It is simply a fabulous device, and I can imagine it will be used in many domains as the price comes down.
The system has proven very effective for the British Army, and the US Army has subsequently awarded Prox Dynamics a $2.5 million contract to provide a modified version of the Black Hornet Nano.