"Robotic Nudging" Might Make Us Happier and Less Embarrassed

IEEE's Ron Arkin explains how robots are going to make us nicer.

Robots are going to start pushing you to be nicer to people through a technique known as “robotic nudging.” That’s according to Ron Arkin, who co-chairs the Affective Computing Committee, with Joanna Bryson, of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, created by the IEEE Standards Association.

The IEEE Global Initiative is working on the second draft of Ethically Aligned Design, its ethics guidelines for artificial intelligence and autonomous systems, due later this year. That work sometimes involves grappling with technologies that barely exist yet; robotic nudging is one such area.

Arkin has been a roboticist for the past 30 years, and he spoke to Inverse about this little-known area that’s on the rise.

What is robotic nudging? I understand that it’s a technology that doesn’t exist just yet.

You’re right, although the opportunity for nudging does exist. The question is, could we use robotic systems to accomplish more social justice within society? A nudge in itself is an attempt to mold or guide behavior without relying on legal or regulatory mechanisms. In robotic nudging, we’re trying to use robotic technology to guide behavior in that same way.

I worked with Sony for 10 years, starting about 15 years ago, on their Aibo robot dog and Qrio humanoid robot. The goals of those projects were to create companions for humans and to fulfill a need for companionship, but also to ensure these were affective systems, displaying emotion and listening to emotion from a human being, so a strong bond could develop.

There are certain techniques that robots can use, by virtue of being embodied, that a cellphone or computer screen cannot. They can physically interact with humans, and they can reside in the locations where humans are and take note of affect. People behave differently in those circumstances.

One could argue, if you’ve seen the old movies where there’s an angel and a devil on each shoulder, it’s the same kind of thing, except we’re trying to omit the devil and have the robot strive to move the human toward a more charitable or socially just position.

There are ethical questions about the underlying morality that these systems would implement. Imagine the fundamental difference between a democratic society such as ours and one working for ISIS, where the robot encourages you: “you really wanna behead that guy.” The notion of underlying morality is an issue, as is whether we should be doing this at all.

The Sony Aibo ERS 210.

Brett Jordan/Flickr

How is this different from when someone uses a wearable to nudge them to work out more?

The difference is that with a wearable, you start to become the cyborg. Technology is merged onto your frame, and you serve as the vehicle for it.

The robot, by contrast, has the ability to enact physical interactions with the human. For example, if someone’s unhappy, the robot will stand a little further away. These are very subtle non-verbal cues.
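
To make that cue concrete, here is a minimal sketch, in Python, of how such a proxemic adjustment might be programmed. The distances, the unhappiness score, and the function name are illustrative assumptions, not details of Arkin’s systems.

```python
# Illustrative sketch: the robot widens its standoff distance as its
# estimate of the person's unhappiness rises. All values are assumptions.

BASE_DISTANCE_M = 1.2  # assumed comfortable conversational distance
MAX_BACKOFF_M = 0.6    # assumed extra space granted at peak unhappiness

def standoff_distance(unhappiness: float) -> float:
    """Map an unhappiness score in [0, 1] to a standing distance in meters."""
    unhappiness = min(max(unhappiness, 0.0), 1.0)  # clamp out-of-range input
    return BASE_DISTANCE_M + MAX_BACKOFF_M * unhappiness

print(standoff_distance(0.0))  # 1.2 m when the person seems content
print(standoff_distance(0.9))  # about 1.74 m when the person seems unhappy
```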

A lot of the work we did with Sony fit that bill. We’re doing other work now with the National Science Foundation, in dealing with patient-caregiver relationships using a small humanoid in early stage Parkinson’s disease management.

If we can use the robot to subtly nudge the caregiver to show more empathy, or the patient to lessen their shame or embarrassment, those are the sort of things we would like to do. Not just to assist in human-robot interactions, but also human-human interaction. Currently, it’s patient-caregiver, but it could be generalizable to parent-child, or even husband-wife in marriage therapy. We don’t have that yet, but we’re working on the first stage and we’re halfway through that project.

For non-verbal cues to work, doesn’t the human have to empathize with the robot?

In the Parkinson’s case, we’re looking at computational models of embarrassment and shame. The robot has to be able to extract those states through observation, physiological signals, or parsing of dialogue. Then the robot has to decide how to respond.
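
As a rough illustration of the kind of pipeline Arkin describes, here is a toy Python sketch that fuses three hypothetical signal channels into an embarrassment estimate and a yes/no nudging decision. The signal names, weights, and threshold are all assumptions made for this example, not details of the NSF project.

```python
# Toy sketch: fuse observed signals into an embarrassment estimate,
# then decide whether the robot should intervene. All names, weights,
# and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class AffectSignals:
    gaze_aversion: float     # from visual observation, scaled 0..1
    skin_conductance: float  # physiological signal, normalized 0..1
    hedging_language: float  # from dialogue parsing, scaled 0..1

def embarrassment_estimate(s: AffectSignals) -> float:
    """Weighted fusion of the three channels (weights are assumptions)."""
    return 0.4 * s.gaze_aversion + 0.3 * s.skin_conductance + 0.3 * s.hedging_language

def should_nudge(s: AffectSignals, threshold: float = 0.6) -> bool:
    """Nudge only when the estimate clears the (assumed) threshold."""
    return embarrassment_estimate(s) > threshold

patient = AffectSignals(gaze_aversion=0.8, skin_conductance=0.7, hedging_language=0.5)
print(should_nudge(patient))  # True: the estimate, about 0.68, exceeds 0.6
```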

It’s far easier for the human to infer an empathic response from a robot artifact than it is for the robot to figure out what the human is doing. It’s been well known since Clifford Nass’s work at Stanford many years ago, The Media Equation, that people form an affinity with all kinds of artifacts, and it doesn’t matter if we know that they are computational. Just doing appropriate gestures at appropriate times will lead people to believe the robot is expressing empathy.

Pepper, pictured above, is a humanoid robot used in SoftBank stores in Japan to provide customer assistance.

Getty Images / Charles Pertwee

How close are we to robots understanding what a human is feeling?

There are ways, in highly constrained circumstances, that you can parse out language. Natural language understanders can pull out curse words, for example. People, when they’re happy, usually don’t start cursing at each other!
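
A minimal sketch of that kind of highly constrained parsing, assuming a toy lexicon: flag curse words as a crude negative-affect signal. The word list and scoring are illustrative, not drawn from any system Arkin mentions.

```python
# Illustrative sketch: score an utterance by the fraction of its tokens
# that appear in a (tiny, assumed) curse-word lexicon.

import re

CURSE_WORDS = {"damn", "hell", "crap"}  # toy lexicon for illustration

def negative_affect_score(utterance: str) -> float:
    """Fraction of tokens that are curse words: a crude unhappiness cue."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    if not tokens:
        return 0.0
    return sum(token in CURSE_WORDS for token in tokens) / len(tokens)

print(negative_affect_score("That was wonderful, thank you!"))   # 0.0
print(negative_affect_score("Damn it, this crap never works!"))  # about 0.33
```

A real system would layer prosody, context, and a far larger lexicon on top of a signal this crude.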

So you don’t believe forming attachment to robots is unethical?

No, but there’s a philosopher, Rob Sparrow, a member of our committee, who has written eloquently arguing that these kinds of companion robots are completely unethical because they create an illusion of life, and that’s a bad thing. But there are others, including myself, who believe that in appropriate circumstances, such as in healthcare, they may have an appropriate use.

This Q&A has been edited for clarity and brevity.