
Tufts Roboticist Calls for Machine Morals: Robots Must "Disobey in Order to Obey"

Don't expect a Scheutzian robot to help you download new music illegally.


Robots are becoming more central to our everyday lives, and soon they will have to start confronting some of the tough moral choices we make on a regular basis. Matthias Scheutz, a computer science professor who directs the Human-Robot Interaction Laboratory at Tufts University, is particularly concerned that we have not equipped even our most advanced prototype robots with the capability to handle these situations. Scheutz believes that robots will have to learn to disobey humans if we ever want them to truly serve our needs.

In a commentary piece on Today Online, provocatively titled “Robots Must Be Able to Disobey In Order to Obey,” Scheutz asks the reader to decide what a robot should do when confronted with situations that could prove confusing even to a human. Some of them:

An elder-care robot tasked by a forgetful owner to wash the ‘dirty clothes’, even though the clothes have just come out of the washer.
A student commanding her robot tutor to do all the homework, instead of doing it herself.
A household robot instructed by its busy and distracted owner to run the garbage disposal, even though spoons and knives are stuck in it.

Scheutz’s point in the article is simple: we are going to need robots that know how to say “no” to humans. Many of his examples turn on the reliability of the person directing the robot, but it is harder than it sounds to program a robot that can judge whether the person giving orders is too young, too confused, or obviously trying to cause trouble.

Some of these issues are already being hotly debated at the headquarters of the world’s most powerful tech companies as they look for new ways that technology can take over parts of our lives. Scheutz raises the example of an autonomous car that is directed to back up but notices a dog lying in its path. Most of us agree that even a human driving the car would want it to stop, but what if the animal were a squirrel or a raccoon? What if the car were traveling 45 mph down a country road, and suddenly braking for a dog risked a collision with another vehicle behind it?

“In either case, it is essential for both autonomous machines to detect the potential harm their actions could cause and to react to it by either attempting to avoid it, or if harm cannot be avoided, by refusing to carry out the human instruction,” Scheutz writes regarding a similar dilemma.

Scheutz’s team has been working on these issues with its prototypes and has found that the hardest part is differentiating the slight degrees of harm that might result when a robot follows a given order. So the team gave its robot a very simple but powerful moral rule, one that would put even the great philosophers to shame.

“If you are instructed to perform an action and it is possible that performing the action could cause harm, then you are allowed to not perform it.”
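
To see what that rule looks like as behavior, here is a minimal sketch in Python of a harm-checking guard of the kind Scheutz describes. The function names and the lookup of “known risks” are purely illustrative assumptions, not Scheutz’s actual implementation.

```python
def potential_harms(action, context):
    """Hypothetical harm check: look up known risks for this action in the given context."""
    return context.get("known_risks", {}).get(action, [])

def handle_instruction(action, context):
    """Carry out an instruction only if no potential harm is detected; otherwise refuse and explain."""
    harms = potential_harms(action, context)
    if harms:
        return "Refusing to {}: could cause {}.".format(action, ", ".join(harms))
    return "Performing {}.".format(action)

# Example: the garbage-disposal scenario from the list above.
context = {"known_risks": {"run the garbage disposal": ["damage from cutlery stuck in the unit"]}}
print(handle_instruction("run the garbage disposal", context))  # robot refuses
print(handle_instruction("wash the clothes", context))          # robot complies
```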

In a subtle way, that rule is a revolutionary prescription from Scheutz. Sure, we all try to avoid harm, but isn’t it possible that sometimes we think it’s okay to take risks? Jaywalking, eating a second helping of dessert, or even copying a friend’s notes from class could all violate Scheutz’s axiom.

The notion of a robot refusing orders also suggests Scheutz wants his robots to follow the law. Don’t expect your Scheutzian robot assistant to help you download new music illegally. That would cause harm to the stockholders of EMI!

It’s a complicated area of moral, legal, and technical thought, and Scheutz doesn’t shy away from attacking it head-on.
