U.S. Department of Defense Report – Preparing for War in the Robotic Age

Preparing for War in a Robotic Age

A recently released U.S. Department of Defense report, DTP 106: Policy Challenges of Accelerating Technological Change, sets out the potential benefits and concerns of robotics, artificial intelligence, and associated technologies (as well as advances in information and communications technologies (ICT), cognitive science, big data, cloud computing, energy, and nanotechnologies). Calling for policy choices to be made sooner rather than later, the authors, James Kadtke and Linton Wells II, write:

This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the timeframe between now and 2030 but emphasizes policy and related choices that need to be made in the next few years.

Acknowledging the pace of advances in robotics and AI, the authors voice their concern that maintaining the U.S. Department of Defense's present technological preeminence will be a difficult challenge. They believe that 'many dedicated people are addressing the technology issues,' but policy actions are also crucial to adapt to (and shape) the technology component of the international security environment. With respect to robotics, they outline the areas where they see advances and where policy changes are needed:

Progress in robotics, artificial intelligence, and human augmentation is enabling advanced unmanned and autonomous vehicles for battlefield and hazardous operations, low-cost autonomous manufacturing, and automated systems for health and logistics.

Referencing a January 2014 report from the Center for a New American Security, Preparing for War in the Robotic Age, the new DoD report outlines the advantages of these technologies and the concerns should they fall into the hands of adversaries:

Many of these areas, and especially their convergence, will result in disruptive new capabilities for DoD which can improve warfighter performance, reduce health-care and readiness costs, increase efficiency, enhance decision making, reduce human risk… However, U.S. planning must expect that many of these also will be available to adversaries who may use them under very different ethical and legal constraints than we would.

To set the tone for the next 16 years and to illustrate the rapid pace of technological change, they point out that 16 years ago Facebook and Twitter did not exist and Google was just getting started. They remind us of where the world was in robotics 16 years ago and where it is now:

In robotics, few unmanned vehicles were fielded by the U.S. military; today, thousands of unmanned aerial vehicles are routinely employed on complex public and private missions, and unmanned ground and sea vehicles are becoming common.

The amount of change we can expect by 2030 is likely to be much greater than we have experienced since 1998, and it will be qualitatively different as technology areas become more highly integrated and interactive.

U.S. DoD runs the risk of falling behind

They emphasize the need to mitigate the risks of this rapid development, and to exploit it effectively through carefully deliberated policies 'to navigate a complex and uncertain future,' despite the fact that 'America's share of global research is steadily declining.'

Noting that other countries and the private sector are taking the lead in robotics, A.I., and human augmentation such as exoskeletons, they say that the 'United States must begin to prepare for warfare in the robotic age.'

Robotics, Artificial Intelligence, and Human Augmentation: After decades of research and development, a wide range of technologies is now being commercialized that can augment or replace human physical and intellectual capabilities. Advances in sensors, materials, electronics, human interfaces, control algorithms, and power sources are making robots commercially viable — from personal devices to industrial-scale facilities. Several countries, including the United States, now have large-scale national initiatives aimed at capturing this burgeoning economic sector. Artificial intelligence has also made major advances in recent years, and although still limited to “weak” artificial intelligence, or AI, general-purpose artificial intelligence may be available within a decade.

They say that most of these technologies are, by themselves, merely tools, but these tools are turned into capabilities when adopted and used by people, organizations, societies, and governments.

Policy, legal, ethical and organizational issues

The report outlines 12 sections ‘offering cross-cutting recommendations that address broader policy, legal, ethical, and organizational issues… where there will be opportunities for shaping actions and capacity building within the next 2–3 years.’

One of those sections concerns the decline of U.S. manufacturing; the report's authors worry that U.S. manufacturers may not be able to produce DoD equipment and that the technical know-how will end up in the hands of foreign governments:

The loss of domestic manufacturing capability for cutting-edge technologies means the United States may increasingly need to rely on foreign sources for advanced weapons systems and other critical components, potentially creating serious dependencies. Global supply chain vulnerabilities are already a significant concern, for example, from potential embedded “kill switches,” and these are likely to worsen.

The loss of advanced manufacturing also enhances tech transfer to foreign nations and helps build their Science Technology & Engineering base, which accelerates the loss of U.S. talent and capital. This loss of technological preeminence by the United States would result in a fundamental diminishing of national power.

Another of the 12 recommendations concerns so-called 'Kill Bots':

Perhaps the most serious issue is the possibility of robotic systems that can autonomously decide when to take human life. The specter of Kill Bots waging war without human guidance or intervention has already sparked significant political backlash, including a potential United Nations moratorium on autonomous weapons systems. This issue is particularly serious when one considers that in the future, many countries may have the ability to manufacture, relatively cheaply, whole armies of Kill Bots that could autonomously wage war. This is a realistic possibility because today a great deal of cutting-edge research on robotics and autonomous systems is done outside the United States, and much of it is occurring in the private sector, including DIY robotics communities. The prospect of swarming autonomous systems represents a challenge for nearly all current weapon systems.

They recommend that the DoD seek to stay ahead of the curve by developing concepts for new roles and missions, and operational doctrine, for forces made up significantly or even entirely of unmanned or autonomous elements. The government 'should also be highly proactive in taking steps to ensure that it is not perceived as creating weapons systems without a "human in the loop."'

In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the unforeseen vulnerabilities that may arise from the large-scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small-scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.

Emphasizing that these technologies enable not only profoundly positive advances for mankind but also new modes of war-fighting and tools for malicious behavior, the authors warn that 'the DoD cannot afford to be unprepared for its consequences.'

The report provides research data on various aspects of robotics, including economics, showing that governments and corporations worldwide are investing heavily in these systems. It acknowledges that considerable technical and social hurdles remain, principally concerns about the safety of human-robot interaction. The authors believe their recommendations, together with investments from the NSF, DARPA, the private sector, and other governments, may be a key driver in developing the technical, legal, and sociological tools to make robots commonplace in human society.

Robotics is just one of a number of new technologies the report outlines; nevertheless, policy makers worldwide would do well to heed its advice and consider the policy changes that will be needed to address these new systems.

Hat tip to Javier Lopez for a link to the paper.

Photo from Center for a New American Security – Preparing for War in a Robotic Age

A.I. and Bounded Optimality – a driving force for technological development

Most people, most of the time, make decisions with little awareness of what they are doing: driving on autopilot, brushing our teeth, and so on. Often we are not 'mindful' in such circumstances, yet most of our judgments and actions are appropriate, most of the time. But not always!

While we meander along on autopilot, researchers in Artificial Intelligence seek to create human-level intelligence in their machines; some even speak of human-level consciousness as the goal, while others consider machines still as mindless as toothpicks. Professor Stuart Russell argues that his own motivation for the study of A.I., and that of researchers in the field, should be:

“To create and understand intelligence as a general property of systems, rather than as a specific attribute of humans. I believe this to be an appropriate goal for the field as a whole.”

Professor Russell, co-author with Peter Norvig of the seminal textbook on Artificial Intelligence, has released a new paper, Rationality and Intelligence: A Brief Update, which describes his 'informal conception of intelligence and reduces the gap between theory and practice,' as well as describing 'promising recent developments.'

Setting the A.I. scene

In his paper, Russell characterizes the goal of early A.I. researchers:

“The standard (early) conception of an AI system was as a sort of consultant: something that could be fed information and could then answer questions. The output of answers was not thought of as an action about which the AI system had a choice, any more than a calculator has a choice about what numbers to display on its screen given the sequence of keys pressed.”

To some extent, a recent paper by Facebook Artificial Intelligence researchers Jason Weston, Sumit Chopra, and Antoine Bordes, entitled “Memory Networks,” demonstrates the concept:

Memory Networks use a kind of associative memory to store and retrieve internal representations of observations. An interesting aspect of Memory Networks is that they can learn simple forms of “common sense” by “observing” descriptions of events in a simulated world. The system is trained to answer questions about the state of the world after having been told a sequence of events happening in that world. It automatically learns simple regularities, such as: when “Antoine picks up the bottle and walks into the kitchen with it, where does he take the bottle?” The answer could be “the bottle will be in the kitchen.”

Here is an example of what the system can do. After having been trained, it was fed the following short story containing key events in JRR Tolkien’s Lord of the Rings:

Bilbo travelled to the cave.

Gollum dropped the ring there.

Bilbo took the ring.

Bilbo went back to the Shire.

Bilbo left the ring there.

Frodo got the ring.

Frodo journeyed to Mount-Doom.

Frodo dropped the ring there.

Sauron died.

Frodo went back to the Shire.

Bilbo travelled to the Grey-havens.

The End.

After seeing this text, the system was asked a few questions, to which it provided the following answers:

Q: Where is the ring?

A: Mount-Doom

Q: Where is Bilbo now?

A: Grey-havens

Q: Where is Frodo now?

A: Shire
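
To make that kind of “common sense” concrete, here is a minimal hand-coded sketch in Python. To be clear, a real Memory Network learns this behaviour end to end from (story, question, answer) examples rather than from rules; the event tuples, function names, and update rules below are illustrative assumptions that merely reproduce the toy behaviour above.

```python
# A hand-coded sketch of the state tracking a Memory Network must *learn*
# from (story, question, answer) examples. Nothing below comes from the
# paper's code; the event tuples and rules are illustrative assumptions.

def run_story(events):
    """Replay (actor, action, argument) events and track where everything is."""
    location = {}   # entity -> last known place
    carrying = {}   # object -> actor currently holding it
    here = None     # most recently mentioned place (the story's "there")

    for actor, action, arg in events:
        if action in ("travelled", "went", "journeyed"):
            here = arg
            location[actor] = here
            for obj, holder in carrying.items():   # carried objects move too
                if holder == actor:
                    location[obj] = here
        elif action in ("took", "got"):            # picking an object up
            carrying[arg] = actor
        elif action in ("dropped", "left"):        # putting an object down
            carrying.pop(arg, None)
            location[arg] = here
        # other events ("Sauron died") carry no location information

    return location

events = [
    ("Bilbo", "travelled", "cave"),
    ("Gollum", "dropped", "ring"),
    ("Bilbo", "took", "ring"),
    ("Bilbo", "went", "Shire"),
    ("Bilbo", "left", "ring"),
    ("Frodo", "got", "ring"),
    ("Frodo", "journeyed", "Mount-Doom"),
    ("Frodo", "dropped", "ring"),
    ("Sauron", "died", None),
    ("Frodo", "went", "Shire"),
    ("Bilbo", "travelled", "Grey-havens"),
]

state = run_story(events)
for entity in ("ring", "Bilbo", "Frodo"):
    print(f"Where is {entity}? -> {state[entity]}")
# Where is ring? -> Mount-Doom
# Where is Bilbo? -> Grey-havens
# Where is Frodo? -> Shire
```

The point of the learned version is that nobody writes the “dropped means the object stays behind” rule by hand; the network has to discover such regularities from the stories themselves.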

Another example of a neural net with memory is the recent Google/DeepMind paper, “Neural Turing Machines.” It is quite a bit more complicated than Memory Networks and has not been demonstrated (at least not in public) to work on tasks such as question answering, but it is fair to assume this is one of Google's goals, given their desire to create the Star Trek computer.

Beyond the Turing Test

Setting out his informal conception of intelligence and a definition of artificial intelligence, Russell explains that:

“A definition of intelligence needs to be formal—a property of the system’s input, structure, and output—so that it can support analysis and synthesis. The Turing test does not meet this requirement.”

He then lays out the steps the A.I. research community has taken towards defining what machine intelligence is (and, by implication, what it is not).

Russell then updates the four conceptions of rationality he previously outlined as candidates for a formal foundation of artificial intelligence (Russell 1997).

Despite previously giving credit to bounded rationality, Russell has omitted it here in favor of what he calls metalevel rationality. He previously described Herb Simon's work on bounded rationality as follows:

Bounded rationality. “Herbert Simon rejected the notion of perfect (or even approximately perfect) rationality and replaced it with bounded rationality, a descriptive theory of decision making by real agents.” Simon wrote:

The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world, or even for a reasonable approximation to such objective rationality.

Simon suggested that bounded rationality works primarily by satisficing — that is, deliberating only long enough to come up with an answer that is “good enough.”

Herb Simon won the Nobel Prize in economics for this work, and it appears to be a useful model of human behavior in many cases. But Russell says it is not a formal specification for intelligent agents, because the definition of 'good enough' is not given by the theory. Furthermore, satisficing seems to be just one of a large range of methods for coping with bounded resources.
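
To see the gap Russell points to, here is a minimal sketch contrasting satisficing with exhaustive optimization over a bag of randomly scored options. The thresholds and option counts are arbitrary assumptions of the toy; the theory itself offers no principled way to choose the aspiration level, which is exactly the objection.

```python
import random

# Toy contrast between satisficing and exhaustive optimization. The
# aspiration thresholds below are arbitrary: bounded rationality itself
# does not say what "good enough" means.

random.seed(0)
options = [random.random() for _ in range(10_000)]  # utility of each candidate

def satisfice(options, good_enough):
    """Stop deliberating at the first option that clears the aspiration level."""
    for evaluations, utility in enumerate(options, start=1):
        if utility >= good_enough:
            return utility, evaluations
    return max(options), len(options)   # nothing cleared the bar: full search

def optimize(options):
    """Perfect-rationality baseline: evaluate every option."""
    return max(options), len(options)

for threshold in (0.9, 0.99, 0.999):
    utility, n = satisfice(options, threshold)
    print(f"satisfice @ {threshold}: utility {utility:.4f} after {n} evaluations")

utility, n = optimize(options)
print(f"optimize: utility {utility:.4f} after {n} evaluations")
```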

The four areas Russell outlines in his new paper are:

  1. Perfect rationality. A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given the information it has acquired from the environment. Russell notes that the calculations necessary to achieve perfect rationality in most environments are too time-consuming, so perfect rationality is not a realistic goal.
  2. Calculative rationality. Russell writes that a “calculatively rational agent eventually returns what would have been the rational choice… at the beginning of its deliberation.” This is an interesting property for a system to exhibit, but in most environments the right answer at the wrong time is of no value. He explains that in practice, “A.I. system designers are forced to compromise on decision quality to obtain reasonable overall performance; unfortunately, the theoretical basis of calculative rationality does not provide a well-founded way to make such compromises.”
  3. Metalevel rationality (also called Type II rationality by I. J. Good, Alan Turing’s long-term collaborator): the capacity to select the optimal combination of computation-sequence-plus-action, under the constraint that the action must be selected by the computation.
  4. Bounded optimality. Russell writes: “A bounded optimal agent behaves as well as possible, given its computational resources. That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected utility of any other agent program running on the same machine.” (A toy numerical illustration of the trade-off behind these definitions appears after this list.)
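
The promised illustration: the sketch below is my own toy construction, not Russell and Subramanian’s formal model. Each candidate “agent program” searches a fixed number of steps before acting, and the environment charges for deliberation time, so unbounded search (the perfect-rationality limit) actually performs worse than a modest amount.

```python
import random

# Toy rendering of bounded optimality (an illustrative construction, not
# Russell and Subramanian's formal model). Each "agent program" searches a
# fixed number of steps before acting; the environment discounts utility by
# the time spent deliberating. The bounded optimal program is whichever
# program has the highest expected utility on this machine/environment pair.

random.seed(1)

def expected_utility(search_steps, trials=2_000, time_penalty=0.02):
    total = 0.0
    for _ in range(trials):
        candidates = [random.random() for _ in range(search_steps)]
        decision_quality = max(candidates)   # more search finds better actions...
        total += decision_quality - time_penalty * search_steps   # ...at a cost
    return total / trials

programs = {steps: expected_utility(steps) for steps in (1, 2, 5, 10, 20, 40)}
for steps, eu in programs.items():
    print(f"program searching {steps:>2} steps: expected utility {eu:+.3f}")

best = max(programs, key=programs.get)
print(f"bounded optimal program on this machine searches {best} steps")
```

In this toy, a middling search depth wins: past a point, extra deliberation improves the chosen action by less than the time it costs, and the bounded optimal program is simply the best performer among the programs the machine can actually run.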

Of these four possibilities, Russell says “bounded optimality seems to offer the best hope for a strong theoretical foundation for A.I.” It has the advantage of being achievable: there is always at least one best program, something that perfect rationality lacks. Bounded optimal agents are actually useful in the real world, whereas calculatively rational agents usually are not, and satisficing agents might or might not be, depending on how ambitious they are. Russell writes that if a true science of intelligent agent design is to emerge, it will have to operate in the framework of bounded optimality:

“My work with Devika Subramanian placed the general idea of bounded optimality in a formal setting and derived the first rigorous results on bounded optimal programs (Russell and Subramanian, 1995). This required setting up completely specified relationships among agents, programs, machines, environments, and time. We found this to be a very valuable exercise in itself. For example, the informal notions of “real-time environments” and “deadlines” ended up with definitions rather different than those we had initially imagined. From this foundation, a very simple machine architecture was investigated in which the program consists of a collection of decision procedures with fixed execution time and decision quality.”

Professor Russell’s paper offers a very detailed analysis of A.I. work to date and the options in the near future. In a reminder to the A.I. community about the control we will need to maintain over machines, Russell proposes bounded optimality as a formal task for artificial intelligence research that is both well-defined and feasible. Bounded optimality specifies optimal programs rather than optimal actions. Actions are generated by programs, and it is over programs that designers have control – for now!

Nick Bostrom’s Superintelligence and the Metaphorical A.I. Time Bomb

Frank Knight was an idiosyncratic economist who formalized a distinction between risk and uncertainty in his 1921 book, Risk, Uncertainty, and Profit. As Knight saw it, an ever-changing world brings new opportunities but also means we have imperfect knowledge of future events. According to Knight, risk applies to situations where we do not know the outcome of a given situation but can accurately measure the odds. Uncertainty, on the other hand, applies to situations where we cannot know all the information we need in order to set accurate odds in the first place.

“There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight wrote. A known risk is “easily converted into an effective certainty,” while “true uncertainty,” as Knight called it, is “not susceptible to measurement.”

Sometimes, due to uncertainty, we react too little or too late, but sometimes we overreact. This was perhaps the case with the Millennium Bug (the 'millennium time bomb') or the 2009 swine flu, a pandemic that never was. Are we so afraid of epidemics, a legacy of a not-so-distant past, that we sometimes overreact? Metaphorical 'time bombs' don't explode; the argument goes that they are all based on false ceteris paribus assumptions.

Artificial Intelligence may be one of the areas where we overreact. A new book by Oxford Martin's Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, on artificial intelligence as an existential risk, has been in the headlines since Elon Musk, the high-profile CEO of electric car maker Tesla Motors and CEO and co-founder of SpaceX, said in an interview at an MIT symposium that AI is nothing short of a threat to humanity: “With artificial intelligence, we are summoning the demon.” This was on top of an earlier tweet in which Musk said he had been reading Superintelligence and that A.I. is “possibly a bigger threat than nukes.” Note: Elon Musk is one of the people Nick Bostrom thanks in the introduction to his book as a 'contributor through discussion.'

Perhaps Elon was thinking of Blake’s The Book of Urizen when he described A.I. as ‘summoning the demon’:

Lo, a shadow of horror is risen / In Eternity! Unknown, unprolific! / Self-clos’d, all-repelling: what Demon / Hath form’d this abominable void, / This soul-shudd’ring vacuum? — Some said / “It is Artificial Intelligence (Urizen),” / But unknown, abstracted, / Brooding secret, the dark power hid.

Professor Stephen Hawking and Stuart Russell (co-author, with Peter Norvig, of the seminal book on A.I.) have also expressed their reservations about the risks of A.I., indicating its invention “might” be our last “unless we learn how to avoid the risks.”

Hawking and his co-authors were also keen to point out the “incalculable benefits” of A.I.:

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that A.I. may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

In 1951, Alan Turing spoke of machines outstripping humans intellectually:

“Once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

Leading A.I. researcher Yann LeCun, commenting on Elon Musk’s recent claim that “AI could be our biggest existential threat,” wrote:

Regarding Elon’s comment: AI is a potentially powerful technology. Every powerful technology can be used for good things (like curing disease, improving road safety, discovering new drugs and treatments, connecting people….) and for bad things (like killing people or spying on them). Like any powerful technology, it must be handled with care. There should be laws and treaties to prevent its misuse. But the dangers of AI robots taking over the world and killing us all is both extremely unlikely and very far in the future.

So what is SuperIntelligence?

Stuart Russell and Peter Norvig, in their much-cited book Artificial Intelligence: A Modern Approach, consider A.I. to address thought processes and reasoning as well as behavior, and subdivide their definition of A.I. into four categories: ‘thinking humanly,’ ‘acting humanly,’ ‘thinking rationally’ and ‘acting rationally.’

In Superintelligence, Nick Bostrom says it is:

“Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Bostrom has taken this further and has previously defined superintelligence as follows:

“By a ‘superintelligence’ we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

He also indicates that a “human-level artificial intelligence would probably have learning, uncertainty, and concept formation as central features.”

What will this Superintelligence do according to Bostrom?

For a good review of Superintelligence, see Ethical Guidelines for A Superintelligence by Ernest Davis (PDF link above), who writes of Bostrom’s thesis:

“The AI will attain a level of intelligence immensely greater than human. There is then a serious danger that the AI will achieve total dominance of earthly society, and bring about nightmarish, apocalyptic changes in human life. Bostrom describes various horrible scenarios and the paths that would lead to them in grisly detail. He expects that the AI might well then turn to large scale interstellar travel and colonize the galaxy and beyond. He argues, therefore, that ensuring that this does not happen must be a top priority for mankind.”

The Bill Joy Effect

Bill Joy wrote a widely quoted article in Wired magazine in April 2000 with the fear-filled title Why the future doesn’t need us, in which he warned:

“If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.”

Eminent researchers John Seely Brown and Paul Duguid offered a strong counterargument to Joy’s pessimistic piece in their paper, A Response to Bill Joy and the Doom-and-Gloom Technofuturists, where they compared the concerns over A.I. and other technologies to the nuclear weapons crisis and the strong societal controls that were put in place to ‘control’ the risks of nuclear weapons. One of their arguments was that society at large has such a significant vested interest in existential risks that it works to mitigate them.

Seely Brown and Duguid observed that too often people have “technological tunnel vision” and “have trouble bringing other forces into view.” This may be a case in point with Bostrom’s Superintelligence: people who have worked closely with him have indicated that there are ‘probably’ only five “computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure A.I. remains friendly.” In his Authors@Google book presentation, Bostrom claimed that only half a dozen scientists worldwide are working full time on the control problem (last 6 minutes). That sounds like “technological tunnel vision,” and someone who has “trouble bringing other forces into view.”

Tunnel vision and A.I. bias

Nassim Nicholas Taleb warns us to beware of confirmation bias. We focus on the seen and the easy to imagine, and use them to confirm our theories while ignoring the unseen. If we had a big blacked-out bowl with 999 red balls and one black one, for example, our knowledge about the presence of red balls grows each time we take out a red ball. But our knowledge of the absence of black balls grows more slowly.

This is Taleb’s key insight in his book Fooled by Randomness, and it has profound implications. A theory which states that all balls are red will likely be ‘corroborated’ with each observation, and our confidence that all balls are red will increase. Yet the probability that the next ball will be black rises all the time. If something hasn’t happened before, or hasn’t happened for some time, we assume that it can’t happen (hence the ‘this time it’s different’ syndrome). But we know that it can happen. Worse, we know that eventually it will.
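
A quick simulation of the bowl makes the asymmetry visible. The numbers match the example above; the reporting schedule is, of course, just for illustration.

```python
import random

# The blacked-out bowl, simulated: 999 red balls and one black, drawn without
# replacement. Every red draw "corroborates" the all-red theory, yet the
# probability that the *next* ball is black keeps rising until the theory
# fails with certainty.

random.seed(42)
bowl = ["red"] * 999 + ["black"]
random.shuffle(bowl)

for draw, ball in enumerate(bowl, start=1):
    if ball == "black":
        print(f"draw {draw}: black! the all-red theory dies after "
              f"{draw - 1} corroborating observations")
        break
    remaining = 1000 - draw            # balls still in the bowl
    p_next_black = 1 / remaining       # the lone black ball is among them
    if draw % 200 == 0:
        print(f"after {draw} red draws, P(next is black) = {p_next_black:.4f}")
```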

In every tool we create, an idea is embedded that goes beyond the function of the thing itself. Just like the human brain, every technology has an inherent bias. It has within its physical form a predisposition toward being used in certain ways and not others.

It may be this bias that caused Professor Sendhil Mullainathan, whilst commenting on the Myth of A.I., to say he is:

“More afraid of machine stupidity than of machine intelligence.”

Bostrom is highly familiar with human bias, having written Anthropic Bias, a book that since its first publication in 2002 has achieved the status of a classic.

A.I. Black Swan

In 2002, Nick Bostrom wrote of A.I., superintelligence, and existential risks:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question. [“Existential Risks”, 2002]

With my behavioral economics hat on, I know the evidence that we can’t forecast is overwhelming; however, we must still plan for and mitigate risks, or ‘black swan’ events, as best we can… and it appears that the artificial intelligence community is doing a pretty good job of that.

MIT has an entire division, the Engineering Systems Division, that brings together researchers from engineering, the hard sciences, and the social sciences to identify and solve problems of complex engineered systems. One promising technique for engineering complex systems is known as axiomatic design, an approach conceived by Nam Suh, the former head of MIT’s Department of Mechanical Engineering. The idea of axiomatic design is to minimize the information content of the engineered system while maintaining its ability to carry out its functional requirements. Properly applied, axiomatic design results in airplanes, software, and toasters all just complex enough, and no more, to attain their design goals. Axiomatic design minimizes the effective complexity of the engineered system while maintaining the system’s effectiveness.

Professor Joanna Bryson has a trove of good information and research papers showing some of the efforts researchers are taking when it comes to mitigating A.I. risks.

The UK Government’s Chief Science Officer is addressing A.I. risk and what will be needed to govern such risk. The Association for the Advancement of Artificial Intelligence (AAAI) has a panel of leading A.I. researchers addressing the impact and influence of A.I. on society. There are many others.

unPredictable

Of course I am aware that one counterintuitive result of a computer’s or A.I.’s fundamentally logical operation is that its future behavior is intrinsically unpredictable. However, I have a hard time believing an A.I. will want to destroy humanity, and as much as I take the long-term risk of A.I. seriously, I doubt superintelligent A.I. will happen in 5 or 10 years. We’re still not a paperless society. I can’t see a programmer, or mad scientist for that matter, capable of inventing a superintelligent A.I. and programming it with: “Your mission, should you choose to accept it, is to eliminate all humans, wherever they may rear their head.”

I have gleaned many good insights from reading Superintelligence and recommend Bostrom’s book. I do not think human ingenuity will allow us to become merely lumbering robots, survival machines entirely controlled by these super-machines. There is still something about being wiped out by a superintelligent A.I. that is like squaring the circle: it doesn’t quite add up.

5 reads in robotics for elder care, artificial intelligence research and new jobs

How Cost Effective Is a Robotic Solution for Elder Care

Robots serving various tasks and purposes in the medical/health and social care sectors, beyond the traditional scope of surgical and rehabilitation robots, are poised to become one of the most important technological innovations of the 21st century. Nevertheless, unresolved issues for these platforms are patient safety (the robots are necessarily quite powerful and rigid) and the cost-effectiveness of these solutions. (PDF)

Be more afraid of machine stupidity than of machine intelligence

“I would make a distinction between machine intelligence and machine decision-making.

We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.

Machine stupidity creates a tail risk. Machines can make many, many good decisions and then one day fail spectacularly on a tail event that did not appear in their training data. This is the difference between specific and general intelligence.” (Sendhil Mullainathan)

New research may lead to technology that helps the blind and robots navigate natural environments

Two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding. (NY Times)

Artificial Intelligence Can’t Replace Hard-Earned Knowledge – Yet

So until the androids take over, smart software and big data are merely very useful tools to help us work. Machines replace many kinds of repetitive work, from flying airplanes to sorting through medical symptoms. And to the extent that deeply smart humans can program potential problems into the software — even relatively rare ones — the system can react faster than a human. Some day robots may have deep smarts. For the present, we would settle for preserving the human variety and continuing to forge ever more productive partnerships with our silicon cousins. (Harvard Business Review)

Looking for a job in A.I.? A sneak peek at what it’s like working inside an A.I. lab

It’s a compelling time to be working in A.I. to impact a huge number of lives. Baidu Research – Have an Inside Look into Baidu’s Silicon Valley A.I. Lab with learning lunches. (Baidu A.I. Lab Video)

Five weekend reads in Robotics, AI and economics

Most of the time, most of us have absolutely no idea what robots are thinking

In an experiment, MIT researchers used their AR system to place obstacles — like human pedestrians — in the path of robots, which had to navigate through a virtual city. The robots had to detect the obstacles and then compute the optimal route to avoid running into them. As the robots did that, a projection system displayed their “thoughts” on the ground, so researchers could visualize them in real time.

Automation Is Taking Over, and That’s Bad News for the World’s Poor

While we have always heard of a future in which robots would be handling most of the labor, it’s hard to think that most people pictured it in the way that things seem to be heading. Sure, automated work forces will be handling many of the world’s tasks in a relatively short amount of time, ushering in a new era of prosperity and leisure for the masses. The problem is that that prosperity hasn’t been shared, and many of the world’s poor and middle classes will end up scrambling to make ends meet as a result.

RoboLaw: Why and how to regulate robotics

Even a robot that can perform complex tasks without human supervision and take decisions towards that end may still not be deemed an agent in a philosophical sense, let alone a legal one. The robot is still an object, a product, a device, not bearing rights but meant to be used. What would justify a shift on a purely ontological basis (thus forcing us to consider the robot as a being provided with rights and duties) is what Gutman, Rathgeber and Syed call ‘strong autonomy’ – namely the ability to decide for one’s self and set one’s own goals. However, at present this belongs to the realm of science fiction, and it can be argued that this is not the direction we desire to take with robots in any case.

Elon Musk wades in — again: Talking at MIT’s Aeronautics and Astronautics Department’s Centennial Symposium last week, Musk said, “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon—it doesn’t work out.” Mike Loukides counters that:

Artificial intelligence: summoning the demon

David Ferrucci and the other IBMers who built Watson understand that Watson’s potential in medical diagnosis isn’t to have the last word, or to replace a human doctor. It’s to be part of the conversation, offering diagnostic possibilities that the doctor hasn’t considered, and the reasons one might accept (or reject) those diagnoses. That’s a healthy and potentially important step forward in medical treatment, but do the doctors using an automated service to help make diagnoses understand that? Does our profit-crazed health system understand that? When will your health insurance policy say “you can only consult a doctor after the AI has failed”? Or “Doctors are a thing of the past, and if the AI is wrong 10% of the time, that’s acceptable; after all, your doctor wasn’t right all the time, anyway”? The problem isn’t the tool; it’s the application of the tool.

As fears of robots eliminating jobs persist the call for a Basic Income Guarantee grows

The prospect of a jobless economy certainly seems daunting. But if we can successfully manage it and put our machines to work, we could enter into an unprecedented era of material abundance while dramatically extending our leisure time. Rather than be tied to menial and demeaning work, we’d be free to engage in activities that truly interest us.

Five weekend reads in robotics, AI, driverless cars and the economy

  1. The Phenomenology of Self-Driving Cars — why I imagine driverless cars are going to hit a much bigger obstacle than most. (Next New Deal – The Roosevelt Institute, H/T @RobertWent)
  2. Robots that understand — DeepMind, the UK artificial intelligence group purchased by Google earlier this year, has revealed plans to create a broad alliance with the University of Oxford after acquiring two companies spun out of computer science projects at the elite academic institution. According to the Financial Times, one of those companies “is developing systems capable of the visual recognition of objects in the real world. This means, for example, giving robots three-dimensional awareness that can allow them to understand how a cup sits on a table.”
  3. CyPhy Works’ New Drone Fits in Your Pocket, Flies for Two Hours. Anybody who’s ever flown a rotary-wing drone will look at the stats of CyPhy Works’ new Pocket Flyer drone and be amazed. It fits in your pocket and weighs a mere 80 grams. It’ll fly continuously for two hours or more, sending back high-quality HD video the entire time. What’s the catch? There isn’t one, except for the clever thing that grants all of CyPhy’s UAVs their special powers: a microfilament tether that unspools from the drone and keeps it constantly connected to communications and power. (I’m a huge admirer of CyPhy Works)
  4. The first example of a robot automating surgical tasks involving soft tissue. “There are no bad robots, there are just bad surgeons.” New Research Center Aims to Develop Second Generation of Surgical Robots.
  5. Robot project envisions factories where more people want to work. Rather than taking jobs, robots will one day soon join people on the factory floor, as co-workers and collaborators. That’s the vision of a EUR 6.5 million project led by Stockholm’s KTH Royal Institute of Technology. (PHYS.org)

Japan’s government holds first “robotics revolution council” meeting

The Japanese government has held the first meeting of a new panel focused on its goal of a “robotics revolution,” a key item in the government’s economic growth strategy adopted in June.

The robot revolution panel is tasked with promoting measures to increase the use of robots and related technologies in various fields, extending beyond the manufacturing sector and into hotel, distribution, medical, and elderly nursing-care services. The appropriate use of robots will be a key to solving the country’s labor challenges, according to Prime Minister Shinzo Abe, who instigated the panel.

Despite Japan being a leader in the field of industrial robots, companies still rely heavily on human labor, making it difficult to secure enough workers and blocking efforts to improve productivity. Prime Minister Shinzo Abe instructed the robot revolution council:

“To work out a strategy for using robots as the key means to solve labor shortages amid the declining birthrate and aging population, low productivity of the services sector and other challenges plaguing Japan, and for developing the robot industry into a growth sector to explore global markets.”

He added his hope that the government will seek to make Japan a showcase for robots in service in various areas, ahead of other countries, by 2020.

The government said Japan will double its robot-related market in the manufacturing sector to ¥1.2 trillion (US$11.3 billion) by 2020, and achieve a 20-fold jump in the non-manufacturing sector, also to ¥1.2 trillion (US$11.3 billion).
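
Taking those targets at face value, a back-of-the-envelope calculation (my arithmetic, derived only from the multipliers quoted above, not figures from the government paper) shows what they imply about today’s market sizes:

```python
# Back-of-the-envelope check of the 2020 targets quoted above. The baselines
# are implied by the stated multipliers, not quoted from the government.

target_yen = 1.2e12                     # ¥1.2 trillion goal for each sector
yen_per_usd = target_yen / 11.3e9       # exchange rate implied by the article

implied_manufacturing_today = target_yen / 2    # "double" => about ¥600bn now
implied_services_today = target_yen / 20        # "20-fold" => about ¥60bn now

print(f"implied current manufacturing market: ¥{implied_manufacturing_today/1e9:.0f}bn")
print(f"implied current non-manufacturing market: ¥{implied_services_today/1e9:.0f}bn")
print(f"implied exchange rate: about ¥{yen_per_usd:.0f} per US$")
```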

A government paper lays out the factors behind the robot revolution with respect to manufacturing, stating:

The Government will seek to improve (factory) productivity through the utilization of robot technology, thereby improving the profitability of companies and helping to raise wages.

The panel, chaired by Mitsubishi Electric Corp. consultant Tamotsu Nomakuchi, will work out a five-year plan to be presented by the end of 2014, with details on how they will achieve the numerical targets.

The robot council will also discuss the legal regulations needed to promote the use of robots and related technologies.