In an experiment, MIT researchers used their AR system to place virtual obstacles, such as human pedestrians, in the path of robots navigating a virtual city. The robots had to detect the obstacles and compute the optimal route around them. As they did, a projection system displayed their “thoughts” on the ground, letting researchers visualize them in real time.
While we have long heard of a future in which robots would handle most of the labor, it’s hard to believe that most people pictured it the way things seem to be heading. Sure, automated workforces will take on many of the world’s tasks in a relatively short amount of time, ushering in a new era of prosperity and leisure for the masses. The problem is that, so far, that prosperity hasn’t been shared, and many of the world’s poor and middle classes will end up scrambling to make ends meet as a result.
RoboLaw: Why and how to regulate robotics
Even a robot that can perform complex tasks without human supervision and take decisions toward that end may still not be deemed an agent in a philosophical sense, let alone a legal one. The robot is still an object, a product, a device, not bearing rights but meant to be used. What would justify a shift on a purely ontological basis (thus forcing us to consider the robot as a being provided with rights and duties) is what Gutman, Rathgeber and Syed call ‘strong autonomy’ – namely, the ability to decide for oneself and set one’s own goals. At present, however, this belongs to the realm of science fiction, and it can be argued that this is not the direction we want to take with robots in any case.
Elon Musk wades in — again: Talking at MIT’s Aeronautics and Astronautics Department’s Centennial Symposium last week, Musk said, “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon—it doesn’t work out.” Mike Loukides counters that:
David Ferrucci and the other IBMers who built Watson understand that Watson’s potential in medical diagnosis isn’t to have the last word, or to replace a human doctor. It’s to be part of the conversation, offering diagnostic possibilities that the doctor hasn’t considered, and the reasons one might accept (or reject) those diagnoses. That’s a healthy and potentially important step forward in medical treatment, but do the doctors using an automated service to help make diagnoses understand that? Does our profit-crazed health system understand that? When will your health insurance policy say “you can only consult a doctor after the AI has failed”? Or “Doctors are a thing of the past, and if the AI is wrong 10% of the time, that’s acceptable; after all, your doctor wasn’t right all the time, anyway”? The problem isn’t the tool; it’s the application of the tool.
The prospect of a jobless economy certainly seems daunting. But if we can successfully manage it and put our machines to work, we could enter into an unprecedented era of material abundance while dramatically extending our leisure time. Rather than be tied to menial and demeaning work, we’d be free to engage in activities that truly interest us.