On 13 June 1863, Samuel Butler, the English author, wrote a letter to the editor of The Press in Christchurch, New Zealand, worried that machines might, through Darwinian selection, develop consciousness. The letter, titled "Darwin among the Machines," expressed Butler's fears:
Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants.
In a 1945 broadcast, George Orwell praised the book, saying that when Butler wrote Erewhon it needed "imagination of a very high order to see that machinery could be dangerous as well as useful."
Echoing Butler, one of the biggest and most often repeated fears is that machines, especially those with artificial intelligence, will be our last invention: that these machines – think VIKI in I, Robot or HAL 9000 in Arthur C. Clarke's 2001: A Space Odyssey – will evolve through Darwinian selection or self-programming, conclude they no longer need humans to survive, and take over the world.
As much like science fiction as that may sound, there is a general concern among scientists, philosophers, educators and governments about its grave possibility. Indeed, to a large extent much of the technology already exists, but machines are still unable to think the way humans do. Robots would need to be capable of learning dynamically how to interpret, and thus understand, multi-modal human behavior and emotions. Machines, even those capable of 'machine learning,' still require programming languages, and thus algorithms or predictive models, to 'behave' or compute.
Experts in artificial intelligence estimate that sometime between 2025 and 2045 we will have machines capable of thinking like humans, or of 'acting' in a way similar to Samantha in the movie 'Her.'
In the short term, especially over the next 3 to 5 years, as we see significant advances in 'assistant devices' such as Google Now and Apple's Siri, I believe we will move toward "human-computer symbiosis," a term adapted from J.C.R. Licklider, a psychologist and computer scientist who published a prescient essay on the subject in 1960.
Human-Computer Symbiosis is the idea that technology should be designed in a way that amplifies human intelligence instead of attempting to replace it.
And this is where analytically minded people, statistical thinkers and creatives will excel, as more and more of them are needed to meet the growing demand for jobs in big data, design, mobile software engineering, and the like.
Machines are good at imitating behaviors that are predictable and common to large numbers of people. When you start typing a query into Google, the search engine can auto-complete your sentence because millions of other people have previously searched for the same thing — that's predictive modeling. When Amazon recommends new items to us based on our past purchases, and on the fact that thousands of other people have bought the same combinations of items — that's predictive modeling too.
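The intuition behind such auto-completion can be sketched in a few lines of Python. This is a hypothetical toy illustration, not how Google actually works: it simply ranks completions by how often other "users" in an assumed query log have typed queries beginning with the same prefix.

```python
from collections import Counter

# Hypothetical log of queries previously typed by other users.
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "web design", "weather radar", "web development",
]

def autocomplete(prefix, log, k=3):
    """Return the k most frequent logged queries starting with prefix."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(autocomplete("weather", query_log))
# "weather today" ranks first because it was searched most often.
```

The same frequency-counting idea, applied to combinations of items bought together rather than query prefixes, is the seed of the co-purchase recommendations the paragraph above describes.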
The reality is that most of us do the same things as other people 90 percent of the time. Ninety percent of the ideas we have, somebody else has already had; 90 percent of the clever remarks we make to our friends, someone has already made.
The other 10 percent, the unpredictable and original part of human behavior, is where the difference lies, and people who can work at the heart of that 10 percent and make sense of the data are certainly among the groups that will thrive in the new economy.