Robotics – here’s to the crazy ones

In the early 1970s the UK Government commissioned a special report, Artificial Intelligence: A General Survey, authored by James Lighthill on behalf of the Science & Engineering Research Council. The infamous Lighthill Report damned A.I. and was “highly critical” of basic research in foundational areas such as robotics, and its recommendations led to the withdrawal of research funding from all but three UK universities. The same kind of official doubts that the Lighthill Report made explicit in the UK lay behind a less extreme slowdown in research funding in the US. This period is sometimes referred to generically as the “first A.I. winter.”

This changed some ten years later, under the then Prime Minister Margaret Thatcher, after the publication of The Alvey Report. Britain’s strategic computing initiative recommended putting substantial money into A.I. research, rebranded as Knowledge Based Systems. Robotics, closely associated with A.I., was also a recipient of the new flow of government support, which was meant to help improve Britain’s lagging fortunes against the growing success of the Japanese economy. Folklore tells us that despite her agreement to proceed with the funding, Mrs. Thatcher still considered the scientists and engineers, or the “Artificial intelligentsia,” to be seriously deranged.

However, the investment flows were not significant on a global scale. As a result, in the second decade of the 21st century, despite the promises of a robot revolution, there are still fewer than 1.3 million industrial robots in active service worldwide, and we are seeing progress mainly in ‘soft’ A.I., most notably products such as Google Now and Siri along with IBM’s Watson. Peter Thiel, known for investing in several A.I. companies such as UK-based DeepMind (sold to Google for circa US$500 million) and Vicarious, wrote in his Founders Fund manifesto:

While we have the computational power to support many versions of A.I., the field remains relatively poorly funded, a surprising result given that the development of powerful A.I.s (even if they aren’t general A.I.s) would probably be one of the most important and lucrative technological advances in history.

Things do, however, appear to have changed. Investments in robotics and A.I. seem to be surging once again: the US National Science Foundation has invested at least US$89 million in robotics labs in the last few years, and earlier this year the European Commission formally announced the world’s largest investment in robotics. Other countries and businesses are also investing heavily in the sector.

On the 9th of December I attended the robotics Brokerage Day held by euRobotics in Brussels, Belgium. The Brokerage Day was essentially an education and networking event aimed at helping robotics research labs and industry partner up to apply for grants under one of the EU’s grant calls (ICT 2015), which has a total grant budget of 561 million euros. Around 300 participants from 30 countries attended the event, with over 50 teaser presentations given.

Health care

The area of health care robotics, including robots to help the elderly and disabled, was particularly prominent.

Research Professor Dr. Sven Behnke of the University of Bonn discussed his lab’s work on ‘cognitive robots,’ which he believes represent the “next step in the fusion of machines, computing, sensing, and software to create intelligent systems capable of interacting with the complexities of the real world.” This includes smart mobile manipulation for everyday care duties, such as cleaning the floor or handling a bottle.

Several research labs and companies, represented by speakers such as Antonio Frisoli of Wearable Robotics in Italy and Volkan Patoglu of Sabanci University, İstanbul, Turkey, discussed work on exoskeletons that restore mobility to people who have lost a limb or live with other disabilities.

Enrico Castelli of the Children’s Hospital in Rome presented their pioneering work on exoskeletons for children with neurological disorders.

Elvan Gunduz spoke of SciRobots’ approach to building care robots that help people with dementia live a ‘good life.’

Hazardous environments

Other researchers outlined their work on robots for hazardous environments: think firefighting, underground mining or nuclear disasters.

Cloud robotics

It was very clear that labs and industry share a common conviction: “No robot is an island.” Both believe that advances in artificial intelligence and robot software can be greatly enhanced when researchers and robots can access a shared network, with applications ranging from self-driving cars to logistics and factory planning.

Childlike curiosity, not deranged

Mrs. Thatcher may have been impressed with the advances on display, although the childlike curiosity of so many adult robot enthusiasts, me included, may not have changed her mind about how crazy one needs to be to believe you can change the world with robotics. She may not have been familiar with Steve Jobs’ toast to the crazy ones, who see things differently and make a difference in the world through their visions and creations.

What was clear amid this childlike derangement is that roboticists genuinely believe they are building some of the most important tools of the 21st century – I agree.

Photo credit: Dr. Sven Behnke of the University of Bonn 

Why Your Employees Should Be Playing With Lego Robots

Two years ago, Swedish communications technology giant Ericsson found itself looking for a way to explain the value it saw in the Internet of Things. Rather than publish another whitepaper on the topic, the company struck on a different communication tool: Legos. More specifically, Lego robots.

Ericsson used Lego Mindstorms robots in a demonstration at the 2012 Mobile World Congress to bring to life its vision of how connected machines might change the way we live. A laundry-robot sorted socks by color and placed them in different baskets while it chatted with the washing machine. A gardening-robot watered the plants when the plants said they were thirsty. A cleaner-robot collapsed and trashed empty cardboard coffee cups that it collected from the table, and a dog-like robot fetched the newspaper when the alarm clock rang.

Social Networking for LEGO Mindstorms Robots

Rather than merely talking or writing about its vision, Ericsson saw robots as a perfect medium for explaining its ideas. This is more than just a smart marketing campaign. As a variety of researchers have argued, it may offer a way to better equip workers with the skills they need to succeed in the 21st century. Training programs that encourage the use of robots to achieve goals – not just by playing with them, but by building them — encourage participants to use their creativity and natural curiosity to overcome problems through hands-on experiences.

Lego’s Mindstorms robots (or education and innovation kits, as they are sometimes known) were developed in collaboration with the MIT Media Lab as a solution for education and training in the mid-to-late 1990s. The work was an outcome of research by Professor Seymour Papert, who co-founded the MIT Artificial Intelligence Lab with Marvin Minsky and later co-founded the Epistemology and Learning Group within the MIT Media Lab. Papert’s work has had a major impact on our understanding of how people develop knowledge, and is especially relevant for building twenty-first-century skills.

Papert and his collaborators’ research indicates that training programs using robotics influence participants’ ability to learn numerous essential skills, especially creativity, critical thinking, and learning to learn, or “metacognition”. They also emphasize important approaches to modern work, like collaboration and communication.

This form of learning is called constructionism, and it is premised on the idea that people learn by actively constructing new knowledge, not by having information “poured” into their heads. Moreover, constructionism asserts that people learn with particular effectiveness when they are engaged in “constructing” personally meaningful artifacts. People don’t get ideas; they make them.

Papert’s influential book Mindstorms: Children, Computers and Powerful Ideas as well as extensive scientific research into fields such as cognition, psychology, evolutionary psychology, and epistemology illustrate how this pedagogy can be combined with robotics to yield a powerful, hands-on method of training.

In training courses that use robotics, the program leader sets problems to be solved. Teams are presented with a box of pieces and simple programs that can run on iPads, iPhones, or Android tablets and phones. They are given basic training in the simple programming skills required and then set free to solve the problem presented.

Problems can be as ‘simple’ as building a robot to pass through a maze in a certain time frame, which requires trial and error and lots of critical thinking: what size wheels to use for speed and maneuverability, what the drain on battery power will be, which sensors to use for guidance around walls. One team may decide to build a small drone to view and map out the terrain of the maze; this would require theorizing about the weight of the robotic drone and relaying the filmed data to a mapping system that the on-ground robot could use to negotiate the maze.

It is an entirely goal-driven process.
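To give a feel for what teams actually write, here is a minimal sketch of the kind of wall-following loop a maze robot might run. The helpers read_front_distance, read_side_distance and set_motor_speeds are hypothetical stand-ins for whatever sensor and motor API a given kit exposes, and the constants are the sort of values a team would tune by trial and error.

```python
# Hypothetical wall-following loop for a maze robot.
# read_front_distance(), read_side_distance() and set_motor_speeds()
# stand in for the real sensor/motor API of the kit being used.

TARGET_GAP_CM = 10.0   # desired distance from the right-hand wall
KP = 2.0               # proportional steering gain, tuned by trial and error
BASE_SPEED = 30        # cruising motor speed (arbitrary units)

def follow_right_wall():
    while True:
        if read_front_distance() < 15.0:
            # Wall ahead: pivot left on the spot until the path clears.
            set_motor_speeds(left=-BASE_SPEED, right=BASE_SPEED)
            continue
        # Hold a constant gap to the right-hand wall: if the gap grows,
        # error is positive and the robot steers back toward the wall.
        error = read_side_distance() - TARGET_GAP_CM
        set_motor_speeds(left=BASE_SPEED + KP * error,
                         right=BASE_SPEED - KP * error)
```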

Participants get to design, program, and fully control functional robotic models. They use software to plan, test, and modify sequences of instructions for a variety of robotic behaviors. And they learn to collect and analyze data from sensors, using data logging functionalities embedded in the software. They gain the confidence to author algorithms, which taps critical thinking skills, and to creatively configure the robot to pursue goals.

Participants from all backgrounds gain key team building skills through collaborating closely at every stage of ideation, innovation, deployment, evaluation and scaling. At the end of the training teams are required to present their ideas and results, building effective communication skills.

It is quite astonishing to see how teams have developed robots to achieve tasks such as solving Rubik’s cubes in seconds, playing Sudoku, drawing portraits, creating braille printers, and taking part in soccer and basketball games. These robots have even been used for improving ATM security.

Using robots in training programs to overcome challenges pushes participants out of their comfort zone. It deepens their awareness of complexity and builds ownership and responsibility.

The array of skills and work techniques that this kind of training offers is more in need today than ever, as technology is rapidly changing the skills demanded in the workplace.

Instead of programming people to act like robots, why not teach them to become programmers, creative thinkers, architects, and engineers? For companies seeking to develop these skills in their employees, hands-on goal-focused training using robots can help.

This post initially appeared on Harvard Business Review

Is package delivery using drones feasible?

Amazon Prime Air Drone

A co-founder of robotics company Kiva Systems indicates drone delivery could cost as little as 20 cents per package.*

*Updated Friday 5th December after email correspondence with Professor D’Andrea

In the early summer I wrote about the economics of Amazon’s drones; the post highlighted the cost of logistics to Amazon and offered some back-of-the-envelope calculations about the likely costs of drone delivery. In the article I indicated that Amazon would require pilots for its drones, especially in the early years of operation: it seems far-fetched that we will have fully autonomous delivery drones in our cities without some sort of human oversight, at least in the next ten years or so. Backing up my calculations, a recent job advert indicates that Amazon is looking for drone pilots, and similar jobs attract annual salaries of approximately $100,000.

Earlier this year Helen Greiner, CEO of CyPhy Works, outlined her vision of delivery drones within 5 years. Meanwhile DHL has started testing delivery drones to transport medicine to the small North Sea island of Juist.

So is it economically feasible to deliver packages by drones?

Professor of robotics and autonomous vehicles Raffaello D’Andrea of ETH Zurich, who is responsible for the Flying Machine Arena (“a space where flying robots live and learn”) and co-founded Kiva Systems (the robotics company acquired by Amazon, Inc. for US$775 million in cash), thinks it is economically feasible to deliver small packages by drone.

In a guest editorial for IEEE Transactions on Automation Science and Engineering (PDF), Professor D’Andrea detailed calculations he had previously used to assess the costs of drone delivery for Matternet, whose ‘vision was to create a transportation network based on flying machines, and to initially address niche markets such as medicine delivery in underdeveloped and hard to reach areas,’ and compared them to the figures used in the Kiva Systems “business plan for the total cost of delivery.”

To assess the costs, D’Andrea starts from two assumptions:

  • Payload of up to 2 kg.
  • Range of 10 km with headwinds of up to 30 km/h.

In his calculations of the likely cost of drone delivery, D’Andrea analyzes the power consumption in kW; the payload mass of 2 kg; a vehicle mass of 4 kg (of which 2 kg is battery); the lift-to-drag ratio; the power transfer efficiency of motor and propeller; the power consumption of the electronics; electricity costs; and the cruising velocity in km/h, taking air speed and headwinds into account.
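As a rough illustration of how those quantities combine, here is a first-order energy model in Python. The relation power = m·v / (370·r·η) is the standard approximation for a rotorcraft in forward flight (mass in kg, airspeed in km/h, power in kW); all the specific parameter values below are my assumptions for illustration, not D’Andrea’s published figures.

```python
# Rough, first-order energy model for a delivery multicopter.
# All parameter values are illustrative assumptions, not the
# figures from D'Andrea's editorial.

MASS_KG = 6.0          # 2 kg payload + 4 kg vehicle (incl. 2 kg battery)
LIFT_TO_DRAG = 3.0     # assumed effective lift-to-drag ratio
EFFICIENCY = 0.5       # assumed motor + propeller power transfer efficiency
AVIONICS_KW = 0.1      # assumed power draw of the electronics
AIRSPEED_KMH = 60.0    # assumed cruise airspeed
HEADWIND_KMH = 30.0    # worst-case headwind from the stated assumptions
RANGE_KM = 10.0        # one-way range from the stated assumptions
PRICE_PER_KWH = 0.20   # assumed electricity price in USD

# power (kW) = m * v_air / (370 * lift-to-drag * efficiency) + avionics
power_kw = MASS_KG * AIRSPEED_KMH / (370 * LIFT_TO_DRAG * EFFICIENCY) + AVIONICS_KW

ground_speed_kmh = AIRSPEED_KMH - HEADWIND_KMH
trip_hours = RANGE_KM / ground_speed_kmh
energy_kwh = power_kw * trip_hours

print(f"power draw : {power_kw:.2f} kW")
print(f"trip energy: {energy_kwh:.2f} kWh")
print(f"electricity: {100 * energy_kwh * PRICE_PER_KWH:.1f} cents/delivery")
# -> a few cents of electricity per trip; battery replacement (which
#    D'Andrea includes) brings his figure to roughly 10 cents.
```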

After analyzing the weight of the drone, the payload (parcel), drag, headwinds, etc., Professor D’Andrea states:

“So, is package delivery using flying machines feasible? From a cost perspective, the numbers do not look unreasonable: the operating costs directly associated with the vehicle are on the order of 10 cents for a 2 kg payload and a 10 km range. I compare this to the 60 cents per item that we used over a decade ago in our Kiva business plan for the total cost of delivery, and it does not seem outlandish.”

Updated — Via email correspondence, D’Andrea points out that the ten cent cost described in the IEEE guest editorial was for energy (including battery replacement), and not for the amortized cost of the vehicles and vehicle maintenance. Assuming a vehicle cost of $1000 per unit (this is reasonable if Amazon is buying in the thousands), adding 20% per year for maintenance, and amortizing this over 5 years, this would amount to an additional $400/per year, or roughly $1/day. If each vehicle ran 10 missions per day, that’s an additional 10 cents per package on top of the 10 cents in energy costs calculated previously, for a total cost of about 20 cents per package.
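The updated figure is easy to verify. A minimal sketch of the amortization arithmetic, using exactly the assumptions stated above:

```python
# Amortized vehicle cost per package, using D'Andrea's stated assumptions.
VEHICLE_COST_USD = 1000.0    # purchase price per drone
MAINTENANCE_RATE = 0.20      # 20% of vehicle cost per year
AMORTIZATION_YEARS = 5
MISSIONS_PER_DAY = 10

annual_cost = (VEHICLE_COST_USD / AMORTIZATION_YEARS
               + MAINTENANCE_RATE * VEHICLE_COST_USD)   # $400/year
per_package = annual_cost / 365 / MISSIONS_PER_DAY      # ~$0.11

print(f"annual vehicle cost: ${annual_cost:.0f}")
print(f"vehicle cost per package: {100 * per_package:.0f} cents")
# Adding the ~10 cents of energy and battery cost gives roughly
# 20 cents per package, the figure quoted above.
```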

20 cents per delivery is far less than what Amazon is currently paying. According to shipping-industry analysts, Amazon typically pays between about $2 and $8 to ship each package, so drone delivery at the cost Professor D’Andrea lays out would go a long way toward reducing Amazon’s shipping-related losses: $3.538 billion in 2013, and US$8.829 billion cumulatively between the financial years ending 2011 and 2013.

Professor D’Andrea also outlines the obstacles to drone delivery, in addition to regulatory hurdles, privacy concerns, etc., indicating that additional automation research is needed to address three main challenges:

  • vehicle design,
  • localization and navigation,
  • vehicle coordination.

Answering these points he provides a very good overview of the obstacles:

Vehicle design encompasses creating machines that are efficient, (most probably) can hover, can operate in a wide range of conditions, and whose reliability rivals that of commercial airliners; this is a significant undertaking that will require many iterations, and the ingenuity and contributions from folks in diverse areas.

Localization and navigation may seem like solved problems because of the many GPS-enabled platforms that already exist, but delivering packages reliably, in different operating conditions, in unstructured and changing environments, will require the integration of low-cost sensors and positioning systems that either do not yet exist, or are still in development.

Finally, thousands of autonomous agents in the air, sharing resources such as charging stations, will require robust co-ordination which can be studied in simulation.

He expects delivery by drones to become a reality, and concludes his opinion piece by writing that, for better or for worse, ‘goods being delivered by flying machines will result in packages flying above our heads in the not so distant future.’

U.S. Department of Defense Report – Preparing for War in the Robotic Age

Preparing for War in a Robotic Age

A recently released U.S. Department of Defense report, DTP 106: Policy Challenges of Accelerating Technological Change, sets out the potential benefits and concerns of robotics, artificial intelligence and associated technologies, as well as advances in information and communications technologies (ICT), cognitive science, big data, cloud computing, energy and nanotechnologies. Calling for policy choices to be made sooner rather than later, the authors, James Kadtke and Linton Wells II, indicate:

This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the timeframe between now and 2030 but emphasizes policy and related choices that need to be made in the next few years

Recognizing advances in robotics and A.I., the authors state their concern that maintaining the US Department of Defense’s present technological preeminence will be a difficult challenge. They believe that ‘many dedicated people are addressing the technology issues,’ but policy actions are also crucial to adapt to — and shape — the technology component of the international security environment. With respect to robotics, they outline the areas where they see advances and where policy changes are needed:

Progress in robotics, artificial intelligence, and human augmentation is enabling advanced unmanned and autonomous vehicles for battlefield and hazardous operations, low-cost autonomous manufacturing, and automated systems for health and logistics.

Referencing a January 2014 report, Preparing for War in the Robotic Age, by the Center for a New American Security, the new DoD report outlines the advantages, and the concerns should these technologies fall into the hands of adversaries:

Many of these areas, and especially their convergence, will result in disruptive new capabilities for D.o.D. which can improve warfighter performance, reduce health-care and readiness costs, increase efficiency, enhance decision making, reduce human risk… However, U.S. planning must expect that many of these also will be available to adversaries who may use them under very different ethical and legal constraints than we would.

To set the tone for the next 16 years and illustrate the rapid changes in technology they point to the fact that 16 years ago Facebook and Twitter did not exist and Google was just getting started. They remind us of where the world was in robotics 16 years ago and where it is now:

In robotics, few unmanned vehicles were fielded by the U.S. military; today, thousands of unmanned aerial vehicles are routinely employed on complex public and private missions, and unmanned ground and sea vehicles are becoming common.

The amount of change we can expect by 2030 is likely to be much greater than we have experienced since 1998, and it will be qualitatively different as technology areas become more highly integrated and interactive.

U.S. DoD runs the risk of falling behind

They emphasize the need to mitigate the risks of this rapid development, and effectively exploit its development through carefully deliberated policies ‘to navigate a complex and uncertain future,’ despite the fact that ‘America’s share of global research is steadily declining.’

Focusing on the fact that other countries and the private sector are taking the lead in robotics, A.I. and human augmentation such as exoskeletons, they say that the ‘United States must begin to prepare for warfare in the robotic age.’

Robotics, Artificial Intelligence, and Human Augmentation: After decades of research and development, a wide range of technologies is now being commercialized that can augment or replace human physical and intellectual capabilities. Advances in sensors, materials, electronics, human interfaces, control algorithms, and power sources are making robots commercially viable — from personal devices to industrial-scale facilities. Several countries, including the United States, now have large-scale national initiatives aimed at capturing this burgeoning economic sector. Artificial intelligence has also made major advances in recent years, and although still limited to “weak” artificial intelligence, or AI, general-purpose artificial intelligence may be available within a decade.

They say that most of these technologies are, by themselves, merely tools, but these tools are turned into capabilities when adopted and used by people, organizations, societies, and governments.

Policy, legal, ethical and organizational issues

The report outlines 12 sections ‘offering cross-cutting recommendations that address broader policy, legal, ethical, and organizational issues… where there will be opportunities for shaping actions and capacity building within the next 2–3 years.’

One of those sections is concerned with the decline of US manufacturing — the report’s authors outline their concerns that U.S. manufacturers may not be able to produce U.S. DoD equipment and that the technical know-how will end up in the hands of foreign governments:

The loss of domestic manufacturing capability for cutting-edge technologies means the United States may increasingly need to rely on foreign sources for advanced weapons systems and other critical components, potentially creating serious dependencies. Global supply chain vulnerabilities are already a significant concern, for example, from potential embedded “kill switches,” and these are likely to worsen.

The loss of advanced manufacturing also enhances tech transfer to foreign nations and helps build their Science Technology & Engineering base, which accelerates the loss of U.S. talent and capital. This loss of technological preeminence by the United States would result in a fundamental diminishing of national power.

Another of the 12 recommendations concerns so-called Kill Bots:

Perhaps the most serious issue is the possibility of robotic systems that can autonomously decide when to take human life. The specter of Kill Bots waging war without human guidance or intervention has already sparked significant political backlash, including a potential United Nations moratorium on autonomous weapons systems. This issue is particularly serious when one considers that in the future, many countries may have the ability to manufacture, relatively cheaply, whole armies of Kill Bots that could autonomously wage war. This is a realistic possibility because today a great deal of cutting-edge research on robotics and autonomous systems is done outside the United States, and much of it is occurring in the private sector, including DIY robotics communities. The prospect of swarming autonomous systems represents a challenge for nearly all current weapon systems.

They recommend that the DoD should seek to remain ahead of the curve by developing concepts for new roles and missions, and operational doctrine for forces made up significantly, or even entirely, of unmanned or autonomous elements, and that government ‘should also be highly proactive in taking steps to ensure that it is not perceived as creating weapons systems without a “human in the loop.”’

In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.

Emphasizing that these technologies enable not only profoundly positive advancements for mankind but also new modes of war-fighting and tools for malicious behavior, the authors warn that “the DoD cannot afford to be unprepared for its consequences.”

The report provides research data on various aspects of robotics, including economics, showing that large amounts of research dollars are being invested in these systems globally by governments and corporations, whilst acknowledging that there are still considerable technical and social hurdles to overcome, principally because of concerns about the safety of human-to-robot interactions. However, they believe that their recommendations, together with investments from the NSF, DARPA, the private sector and other governments, may be a key driver for developing the technical, legal, and sociological tools to make robots commonplace in human society.

Robotics is just one of a number of new technologies that the report outlines; nevertheless, policy makers worldwide would do well to heed the advice and look at the policy changes which will be needed to address these new systems.

Hat tip to Javier Lopez for a link to the paper.

Photo from Center for a New American Security – Preparing for War in a Robotic Age

A.I. and Bounded Optimality – a driving force for technological development

Most people, most of the time, make decisions with little awareness of what they are doing. These decisions include driving on auto-pilot, brushing our teeth, etc. Often we are not ‘mindful’ in such circumstances. Yet most of our judgments and actions are appropriate, most of the time. But not always!

Whilst we meander along on autopilot, researchers in artificial intelligence seek to create human-level intelligence for their machines, some even speaking of human-level consciousness as the goal for A.I., while others consider machines still as mindless as toothpicks. Professor Stuart Russell considers that his own motivation for the study of A.I., and that of researchers in the field, should be:

“To create and understand intelligence as a general property of systems, rather than as a specific attribute of humans. I believe this to be an appropriate goal for the field as a whole.”

Professor Russell, co-author with Peter Norvig of the seminal book on artificial intelligence, has released a new paper, Rationality and Intelligence: A Brief Update, which describes his ‘informal conception of intelligence and reduces the gap between theory and practice,’ as well as describing ‘promising recent developments.’

Setting the A.I. scene

In his paper Russell describes the goal of early A.I. researchers:

“The standard (early) conception of an AI system was as a sort of consultant: something that could be fed information and could then answer questions. The output of answers was not thought of as an action about which the AI system had a choice, any more than a calculator has a choice about what numbers to display on its screen given the sequence of keys pressed.”

To some extent a recent paper by Facebook Artificial Intelligence Researchers Jason Weston, Sumit Chopra and Antoine Bordes entitled “Memory Networks” demonstrates the concept:

Memory Networks use a kind of associative memory to store and retrieve internal representations of observations. An interesting aspect of Memory Networks is that they can learn simple forms of “common sense” by “observing” descriptions of events in a simulated world. The system is trained to answer questions about the state of the world after having been told a sequence of events happening in this world. It automatically learns simple regularities of that world, such as answering “Antoine picks up the bottle and walks into the kitchen with it; where does he take the bottle?” with “the bottle is going to be/will be in the kitchen.”

Here is an example of what the system can do. After having been trained, it was fed the following short story containing key events in JRR Tolkien’s Lord of the Rings:

Bilbo travelled to the cave.

Gollum dropped the ring there.

Bilbo took the ring.

Bilbo went back to the Shire.

Bilbo left the ring there.

Frodo got the ring.

Frodo journeyed to Mount-Doom.

Frodo dropped the ring there.

Sauron died.

Frodo went back to the Shire.

Bilbo travelled to the Grey-havens.

The End.

After seeing this text, the system was asked a few questions, to which it provided the following answers:

Q: Where is the ring?

A: Mount-Doom

Q: Where is Bilbo now?

A: Grey-havens

Q: Where is Frodo now?

A: Shire
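The behaviour being tested here is easy to mimic with a few hand-written rules, which is a useful way to see exactly what the trained network has to discover on its own from raw text. The toy, rule-based stand-in below is my illustration, not Facebook’s code; a real Memory Network learns these regularities from examples rather than having them coded in.

```python
# Toy, hand-coded location tracker that mimics the behaviour a Memory
# Network learns from examples (an illustration, not the actual model).

story = """Bilbo travelled to the cave. Gollum dropped the ring there.
Bilbo took the ring. Bilbo went back to the Shire. Bilbo left the ring there.
Frodo got the ring. Frodo journeyed to Mount-Doom. Frodo dropped the ring there.
Sauron died. Frodo went back to the Shire. Bilbo travelled to the Grey-havens."""

loc = {}            # entity -> last known place
ring_place = None   # where the ring was last put down
holder = None       # who is carrying the ring, if anyone
last_place = None   # most recently mentioned place, for resolving "there"

for sentence in story.replace("\n", " ").split("."):
    words = sentence.split()
    if len(words) < 2:
        continue
    actor, verb = words[0], words[1]
    if verb in ("travelled", "went", "journeyed"):
        last_place = words[-1]          # "... to the cave" -> "cave"
        loc[actor] = last_place
        if holder == actor:             # carried objects move with the carrier
            ring_place = last_place
    elif verb in ("took", "got"):
        holder = actor
    elif verb in ("dropped", "left"):
        ring_place = loc.get(actor, last_place)   # "there" = actor's place
        holder = None

print("Where is the ring?  ->", loc[holder] if holder else ring_place)  # Mount-Doom
print("Where is Bilbo now? ->", loc["Bilbo"])                           # Grey-havens
print("Where is Frodo now? ->", loc["Frodo"])                           # Shire
```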

Another example of neural net plus memory is the recent Google/DeepMind paper, “Neural Turing Machines.” It is quite a bit more complicated than Memory Networks and has not been demonstrated (at least not in public) on tasks such as question answering, but it is fair to assume this is one of Google’s goals, given their desire to create the Star Trek computer.

Beyond the Turing Test

Setting out his informal conception of intelligence and a definition of artificial intelligence, Russell explains that:

“A definition of intelligence needs to be formal—a property of the system’s input, structure, and output—so that it can support analysis and synthesis. The Turing test does not meet this requirement.”

He further lays out the steps the A.I. research community has taken towards defining what machine artificial intelligence is (and by default is not).

Russell then updates the four approaches to formal rationality he previously outlined as candidate foundations for artificial intelligence (Russell 1997).

Despite previously giving credit to bounded rationality, Russell now omits it in favor of what he calls metalevel rationality. He previously described Herbert Simon’s work on bounded rationality as follows:

Bounded rationality. “Herbert Simon rejected the notion of perfect (or even approximately perfect) rationality and replaced it with bounded rationality, a descriptive theory of decision making by real agents.” Simon wrote:

The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world, or even for a reasonable approximation to such objective rationality.

Simon suggested that bounded rationality works primarily by satisficing — that is, deliberating only long enough to come up with an answer that is “good enough.”

Herb Simon won the Nobel Prize in economics for this work, and it appears to be a useful model of human behavior in many cases. But Russell says it is not a formal specification for intelligent agents, because the definition of ‘good enough’ is not given by the theory. Furthermore, satisficing seems to be just one of a large range of methods used to cope with bounded resources.

The four areas Russell outlines in his new paper are:

  1. Perfect rationality. A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given the information it has acquired from the environment. He says that the calculations necessary to achieve perfect rationality in most environments are too time-consuming, so perfect rationality is not a realistic goal.
  2. Calculative rationality. Russell writes that a “calculatively rational agent eventually returns what would have been the rational choice… at the beginning of its deliberation.” This is an interesting property for a system to exhibit, but in most environments the right answer at the wrong time is of no value. He explains that in practice, “A.I. system designers are forced to compromise on decision quality to obtain reasonable overall performance; unfortunately, the theoretical basis of calculative rationality does not provide a well-founded way to make such compromises.”
  3. Metalevel rationality (also called Type II rationality by I. J. Good, Alan Turing’s long-term collaborator), or the capacity to select the optimal combination of computation-sequence-plus-action, under the constraint that the action must be selected by the computation.
  4. Bounded optimality. Russell writes that: “A bounded optimal agent behaves as well as possible, given its computational resources. That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected utility of any other agent program running on the same machine.”

Of these four possibilities, Russell says, “bounded optimality seems to offer the best hope for a strong theoretical foundation for A.I.” It has the advantage of being possible to achieve: there is always at least one best program — something that perfect rationality lacks. Bounded optimal agents are actually useful in the real world, whereas calculatively rational agents usually are not, and satisficing agents might or might not be, depending on how ambitious they are. Russell writes that if a true science of intelligent agent design is to emerge, it will have to operate in the framework of bounded optimality:

“My work with Devika Subramanian placed the general idea of bounded optimality in a formal setting and derived the first rigorous results on bounded optimal programs (Russell and Subramanian, 1995). This required setting up completely specified relationships among agents, programs, machines, environments, and time. We found this to be a very valuable exercise in itself. For example, the informal notions of “real-time environments” and “deadlines” ended up with definitions rather different than those we had initially imagined. From this foundation, a very simple machine architecture was investigated in which the program consists of a collection of decision procedures with fixed execution time and decision quality.”

Professor Russell’s paper offers a very detailed analysis of A.I. work to date and the options in the near future. In a reminder to the A.I. community about the controls we will need to maintain over machines Russell is indicating that the concept of bounded optimality is proposed as a formal task for artificial intelligence research that is both well-defined and feasible. Bounded optimality specifies optimal programs rather than optimal actions. Actions are generated by programs and it is over programs that designers have control – for now!
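To make the metalevel idea concrete, here is a toy sketch of an agent that treats “keep computing” as an action in its own right, deliberating only while the estimated gain from another step of computation exceeds the cost of the time that step consumes. This is my illustration of the general idea, not Russell’s formalism.

```python
# Toy anytime agent: it keeps refining its answer only while another
# step of computation is expected to pay for the time it costs.
# An illustration of the general idea of metalevel rationality.

def anytime_decide(improve, answer, est_gain, time_cost):
    """improve(answer) -> (better_answer, actual_gain).

    est_gain  : current estimate of the utility gained by one more step
    time_cost : utility lost per step spent deliberating
    """
    while est_gain > time_cost:
        answer, gain = improve(answer)
        # Re-estimate the value of computation from observed progress.
        est_gain = 0.5 * est_gain + 0.5 * gain
    return answer

# Example: polishing an estimate of sqrt(2) with Newton steps, where each
# step's "gain" is the size of the improvement it actually produced.
def newton_step(x):
    new_x = 0.5 * (x + 2.0 / x)
    return new_x, abs(new_x - x)

print(anytime_decide(newton_step, 1.0, est_gain=1.0, time_cost=1e-6))
# -> 1.4142135623730951, using only as many steps as the time cost justifies
```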

Nick Bostrom’s Superintelligence and the Metaphorical A.I. Time Bomb

Frank Knight was an idiosyncratic economist who formalized a distinction between risk and uncertainty in his 1921 book, Risk, Uncertainty, and Profit. As Knight saw it, an ever-changing world brings new opportunities, but also means we have imperfect knowledge of future events. According to Knight, risk applies to situations where we do not know the outcome of a given situation, but can accurately measure the odds. Uncertainty, on the other hand, applies to situations where we cannot know all the information we need in order to set accurate odds in the first place.

“There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight wrote. A known risk is “easily converted into an effective certainty,” while “true uncertainty,” as Knight called it, is “not susceptible to measurement.”

Sometimes, due to uncertainty, we react too little or too late, but sometimes we overreact. This was perhaps the case with the Millennium Bug (the “millennium time bomb”) or the 2009 swine flu, a pandemic that never was. Are we perhaps so afraid of epidemics, a legacy from a not so distant past, that we sometimes overreact? Metaphorical ‘time bombs’ don’t explode; this follows from the opinion that time bombs are all based on false ceteris paribus assumptions.

Artificial intelligence may be one of the areas where we overreact. A new book by Oxford Martin’s Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, which treats artificial intelligence as an existential risk, has been in the headlines since Elon Musk, the high-profile CEO of electric car maker Tesla Motors and CEO and co-founder of SpaceX, said in an interview at an MIT symposium that A.I. is nothing short of a threat to humanity: “With artificial intelligence, we are summoning the demon.” This was on top of an earlier tweet in which Musk said he had been reading Superintelligence and that A.I. is “possibly a bigger threat than nukes.” Note: Elon Musk is one of the people Nick Bostrom thanks in the introduction to his book as a ‘contributor through discussion.’

Perhaps Elon was thinking of Blake’s The Book of Urizen when he described A.I. as ‘summoning the demon’:

Lo, a shadow of horror is risen, In Eternity! Unknown, unprolific! Self-clos’d, all-repelling: what Demon Hath form’d this abominable void, This soul-shudd’ring vacuum? — Some said: “It is Artificial Intelligence (Urizen),” But unknown, abstracted: Brooding secret, the dark power hid.

Professor Stephen Hawking and Stuart Russell (Russell is the co-author, along with Peter Norvig, of the seminal book on A.I.) have also expressed their reservations about the risks of A.I., indicating its invention “might” be our last “unless we learn how to avoid the risks.”

Hawking and his co-authors were also keen to point out the “incalculable benefits” of A.I.:

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that A.I. may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

In 1951, Alan Turing spoke of machines outstripping humans intellectually:

“Once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

Leading A.I. researcher Yann LeCun, commenting on Elon Musk’s recent claim that “AI could be our biggest existential threat,” wrote:

Regarding Elon’s comment: AI is a potentially powerful technology. Every powerful technology can be used for good things (like curing disease, improving road safety, discovering new drugs and treatments, connecting people….) and for bad things (like killing people or spying on them). Like any powerful technology, it must be handled with care. There should be laws and treaties to prevent its misuse. But the dangers of AI robots taking over the world and killing us all is both extremely unlikely and very far in the future.

So what is superintelligence?

Stuart Russell and Peter Norvig, in their much-cited book Artificial Intelligence: A Modern Approach, consider A.I. to address thought processes and reasoning as well as behavior. They subdivide their definitions of A.I. into four categories: ‘thinking humanly,’ ‘acting humanly,’ ‘thinking rationally’ and ‘acting rationally.’

In Superintelligence Nick Bostrom says it is:

“Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Bostrom has taken this further and has previously defined superintelligence as follows:

“By a ‘superintelligence’ we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

He also indicates that a “human-level artificial intelligence would probably have learning, uncertainty, and concept formation as central features.”

What will this Superintelligence do according to Bostrom?

For a good review of Superintelligence see Ethical Guidelines for A Superintelligence by Ernest Davis (Pdf link above) who writes of Bostrom’s thesis:

“The AI will attain a level of intelligence immensely greater than human. There is then a serious danger that the AI will achieve total dominance of earthly society, and bring about nightmarish, apocalyptic changes in human life. Bostrom describes various horrible scenarios and the paths that would lead to them in grisly detail. He expects that the AI might well then turn to large scale interstellar travel and colonize the galaxy and beyond. He argues, therefore, that ensuring that this does not happen must be a top priority for mankind.”

The Bill Joy Effect

Bill Joy wrote a widely quoted article in Wired magazine in April 2000, with the fear-filled title Why the future doesn’t need us, where he warned:

“If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.”

Eminent researchers John Seely Brown and Paul Duguid offered a strong rebuttal to Joy’s pessimistic piece in their paper, A Response to Bill Joy and the Doom-and-Gloom Technofuturists, where they compared the concerns over A.I. and other technologies to the nuclear weapons crisis and the strong societal controls that were put in place to ‘control’ the risks of nuclear weapons. One of their arguments was that society at large has such a significant vested interest in existential risks that it works to mitigate them.

Seely Brown and Duguid observed that too often people have “technological tunnel vision; they have trouble bringing other forces into view.” This may be a case in point with Bostrom’s Superintelligence: people who have worked closely with him have indicated that there are ‘probably’ only 5 “computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure A.I. remains friendly.” In his book presentation at Authors@Google, Bostrom claimed that only half a dozen scientists are working full time on the control problem worldwide (last 6 minutes). That sounds like “technological tunnel vision,” and like someone who has “trouble bringing other forces into view.”

Tunnel vision A.I. Bias

Nassim Nicholas Taleb warns us to beware of confirmation bias. We focus on the seen and the easy to imagine and use them to confirm our theories while ignoring the unseen. If we had a big blacked-out bowl with 999 red balls and 1 black one, for example, our knowledge about the presence of red balls would grow each time we took out a red ball. But our knowledge of the absence of black balls grows more slowly.

This is Taleb’s key insight in his book Fooled by Randomness, and it has profound implications. A theory which states that all balls are red will likely be ‘corroborated’ with each observation. Our confidence that all balls are red will increase. Yet the probability that the next ball will be black will be rising all the time. If something hasn’t happened before, or hasn’t happened for some time, we assume that it can’t happen (hence the ‘this time it’s different’ syndrome). But we know that it can happen. Worse, we know that eventually it will.
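The arithmetic behind the bowl example is worth making explicit. Each red draw strengthens the all-red theory, yet because draws remove red balls, the chance that the very next ball is black keeps climbing. A few lines of Python, using the numbers from the example above:

```python
# Taleb's bowl: 999 red balls and 1 black, drawn without replacement.
# Every red draw 'confirms' the all-red theory, yet the probability
# that the *next* ball is black rises with each red ball removed.
for reds_drawn in (0, 500, 900, 990, 999):
    remaining = 1000 - reds_drawn
    print(f"after {reds_drawn:3d} red draws: "
          f"P(next ball is black) = 1/{remaining} = {1 / remaining:.4f}")
```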

In every tool we create, an idea is embedded that goes beyond the function of the thing itself. Just like the human brain, every technology has an inherent bias. It has within its physical form a predisposition toward being used in certain ways and not others.

It may be this bias that caused Professor Sendhil Mullainathan, whilst commenting on the Myth of A.I., to say he is:

“More afraid of machine stupidity than of machine intelligence.”

Bostrom is highly familiar with human bias having written Anthropic Bias, a book that since its first publication in 2002 has achieved the status of a classic.

A.I. Black Swan

In 2002, Nick Bostrom wrote of A.I. and SuperIntelligence Existential Risks:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question. [“Existential Risks”, 2002]

With my behavioral economics hat on, I know that the evidence that we can’t forecast is overwhelming. However, we must always plan for, and do our best to mitigate, risks or ‘black swan’ events… and it appears that the artificial intelligence community is doing a pretty good job of that.

MIT has an entire division, the Engineering Systems Division, that brings together researchers from engineering, the hard sciences, and the social sciences to identify and solve problems of complex engineered systems. One promising technique for engineering complex systems is known as axiomatic design, an approach conceived by Nam Suh, the former head of MIT’s Department of Mechanical Engineering. The idea of axiomatic design is to minimize the information content of the engineered system while maintaining its ability to carry out its functional requirements. Properly applied, axiomatic design results in airplanes, software, and toasters all just complex enough, and no more, to attain their design goals. Axiomatic design minimizes the effective complexity of the engineered system while maintaining the system’s effectiveness.

Professor Joanna Bryson has a trove of good information and research papers showing some of the efforts researchers are making to mitigate A.I. risks.

The UK Government’s Chief Science Officer is addressing A.I. risk and what will be needed to govern such risk. The Association for the Advancement of Artificial Intelligence (AAAI) has a panel of leading A.I. researchers addressing the impact and influence of A.I. on society. There are many others.

unPredictable

Of course I am aware that one counterintuitive result of a computer’s or A.I.’s fundamentally logical operation is that its future behavior is intrinsically unpredictable. However, I have a hard time believing an A.I. will want to destroy humanity, and as much as I take the long-term risk of A.I. seriously, I doubt superintelligent A.I. will arrive in 5 or 10 years. We’re still not even a paperless society. I can’t see a programmer, or mad scientist for that matter, inventing a superintelligent A.I. and programming it with: “Your mission, should you choose to accept it, is to eliminate all humans, wherever they may rear their head.”

I have gleaned many good insights from reading Superintelligence and recommend Bostrom’s book. I do not think human ingenuity will merely allow us to become lumbering robots, survival machines entirely controlled by these super-machines. There is still something about being wiped out by a superintelligent A.I. that’s like squaring the circle. It doesn’t quite add up.

5 reads in robotics for elder care, artificial intelligence research and new jobs

How Cost Effective Is a Robotic Solution for Elder Care

Robots serving various tasks and purposes in the medical/health and social care sectors, beyond the traditional scope of surgical and rehabilitation robots, are poised to become one of the most important technological innovations of the 21st century. Nevertheless, unresolved issues for these platforms remain: patient safety, as the robots are necessarily quite powerful and rigid, and the cost-effectiveness of these solutions. (PDF)

Be more afraid of machine stupidity than of machine intelligence

“I would make a distinction between machine intelligence and machine decision-making.

We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.

Machine stupidity creates a tail risk. Machines can make many, many good decisions and then one day fail spectacularly on a tail event that did not appear in their training data. This is the difference between specific and general intelligence.” (Sendhil Mullainathan)

New research may lead to technology that helps the blind and robots navigate natural environments

Two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding. (NY Times)

Artificial Intelligence Can’t Replace Hard-Earned Knowledge – Yet

So until the androids take over, smart software and big data are merely very useful tools to help us work. Machines replace many kinds of repetitive work, from flying airplanes to sorting through medical symptoms. And to the extent that deeply smart humans can program potential problems into the software — even relatively rare ones — the system can react faster than a human. Some day robots may have deep smarts. For the present, we would settle for preserving the human variety and continuing to forge ever more productive partnerships with our silicon cousins. (Harvard Business Review)

Looking for a job in A.I.? A sneak peek at what it’s like working inside an A.I. Lab

It’s a compelling time to be working in A.I. to impact a huge number of lives. Baidu Research – Have an Inside Look into Baidu’s Silicon Valley A.I. Lab with learning lunches. (Baidu A.I. Lab Video)

Five weekend reads in Robotics, AI and economics

Most of the time, most of us have absolutely no idea what robots are thinking

In an experiment, MIT researchers used their AR system to place obstacles — like human pedestrians — in the path of robots, which had to navigate through a virtual city. The robots had to detect the obstacles and then compute the optimal route to avoid running into them. As the robots did that, a projection system displayed their “thoughts” on the ground, so researchers could visualize them in real time.

Automation Is Taking Over, and That’s Bad News for the World’s Poor

While we have always heard of a future in which robots would be handling most of the labor, it’s hard to think that most people pictured it in the way that things seem to be heading. Sure, automated work forces will be handling many of the world’s tasks in a relatively short amount of time, ushering in a new era of prosperity and leisure for the masses. The problem is that that prosperity hasn’t been shared, and many of the world’s poor and middle classes will end up scrambling to make ends meet as a result.

RoboLaw: Why and how to regulate robotics

Even a robot that can perform complex tasks without human supervision and take decisions towards that end may still not be deemed an agent in a philosophical sense, let alone a legal one. The robot is still an object, a product, a device, not bearing rights but meant to be used. What would justify a shift on a purely ontological basis (thus forcing us to consider the robot as a being provided with rights and duties) is what Gutman, Rathgeber and Syed call ‘strong autonomy’ – namely the ability to decide for one’s self and set one’s own goals. However, at present this belongs to the realm of science fiction, and it can be argued that this is not the direction we desire to take with robots in any case.

Elon Musk wades in — again: Talking at MIT’s Aeronautics and Astronautics Department’s Centennial Symposium last week, Musk said, “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon—it doesn’t work out.” Mike Loukides counters that:

Artificial intelligence: summoning the demon

David Ferrucci and the other IBMers who built Watson understand that Watson’s potential in medical diagnosis isn’t to have the last word, or to replace a human doctor. It’s to be part of the conversation, offering diagnostic possibilities that the doctor hasn’t considered, and the reasons one might accept (or reject) those diagnoses. That’s a healthy and potentially important step forward in medical treatment, but do the doctors using an automated service to help make diagnoses understand that? Does our profit-crazed health system understand that? When will your health insurance policy say “you can only consult a doctor after the AI has failed”? Or “Doctors are a thing of the past, and if the AI is wrong 10% of the time, that’s acceptable; after all, your doctor wasn’t right all the time, anyway”? The problem isn’t the tool; it’s the application of the tool.

As fears of robots eliminating jobs persist the call for a Basic Income Guarantee grows

The prospect of a jobless economy certainly seems daunting. But if we can successfully manage it and put our machines to work, we could enter into an unprecedented era of material abundance while dramatically extending our leisure time. Rather than be tied to menial and demeaning work, we’d be free to engage in activities that truly interest us.

Five weekend reads in robotics, AI, driverless cars and the economy

  1. The Phenomenology of Self-Driving Cars — why I imagine driverless cars are going to hit a much bigger obstacle than most. (Next New Deal – The Roosevelt Institute, H/T @RobertWent)
  2. Robots that understand — DeepMind, the UK artificial intelligence group purchased by Google earlier this year, has revealed plans to create a broad alliance with the University of Oxford after acquiring two companies spun out of computer science projects at the elite academic institution. According to the Financial Times one of those companies: “is developing systems capable of the visual recognition of objects in the real word. This means, for example, giving robots three-dimensional awareness that can allow them to understand how a cup sits on a table.”
  3. CyPhy Works’ New Drone Fits in Your Pocket, Flies for Two Hours. Anybody who’s ever flown a rotary-wing drone will look at the stats of CyPhy Works’ new Pocket Flyer drone and be amazed. It fits in your pocket and weighs a mere 80 grams. It’ll fly continuously for two hours or more, sending back high-quality HD video the entire time. What’s the catch? There isn’t one, except for the clever thing that grants all of CyPhy’s UAVs their special powers: a microfilament tether that unspools from the drone and keeps it constantly connected to communications and power. (I’m a huge admirer of CyPhy Works)
  4. The first example of a robot automating surgical tasks involving soft tissue. “There are no bad robots, there are just bad surgeons.” New Research Center Aims to Develop Second Generation of Surgical Robots.
  5. Robot project envisions factories where more people want to work. Rather than taking jobs, robots will one day soon join people on the factory floor, as co-workers and collaborators. That’s the vision of a EUR 6.5 million project led by Stockholm’s KTH Royal Institute of Technology. (PHYS.org)

Japan’s government holds first “robotics revolution council” meeting

The Japanese government has held the first meeting of a new panel focused on its goal of a “robotics revolution,” a key item in the government’s economic growth strategy adopted in June.

The robot revolution panel is tasked with promoting measures to increase the use of robots and related technologies in various fields, extending out of the manufacturing sector and into hotel, distribution, medical and elderly nursing-care services. According to Prime Minister Shinzo Abe, who instigated the panel, the appropriate use of robots will be key to solving Japan’s labor shortage and productivity challenges.

Despite Japan being a leader in the field of industrial robots, companies still rely heavily on human labor, making it difficult to secure enough workers and blocking efforts to improve productivity. Prime Minister Shinzo Abe instructed ‘the robot revolution council:’

“To work out a strategy for using robots as the key means to solve labor shortages amid the declining birthrate and aging population, low productivity of the services sector and other challenges plaguing Japan, and for developing the robot industry into a growth sector to explore global markets.”

He added his hope that the government will seek to make Japan a showcase for robots in service across various areas, ahead of other countries, by 2020.

The government said Japan will double its robot-related market in the manufacturing sector to ¥1.2 trillion (US$11.3 billion) by 2020 and achieve a 20-fold jump in the non-manufacturing sector, also to ¥1.2 trillion (US$11.3 billion).

A government paper lays out the factors behind the robot revolution with respect to manufacturing, stating:

The Government will seek to improve (factory) productivity through the utilization of robot technology, thereby improving the profitability of companies and helping to raise wages.

The panel, chaired by Mitsubishi Electric Corp. consultant Tamotsu Nomakuchi, will work out a five-year plan to be presented by the end of 2014, with details on how they will achieve the numerical targets.

The robot council will also discuss the legal regulations needed to promote the use of robots and related technologies.
