Researchers Find That a Flawless Robot Creeps People Out

It turns out people would think more highly of robots if robots stopped being so damn perfect all the time.

According to research published in Frontiers in Robotics and AI in May, study participants rated robots that occasionally made mistakes as more likable than ones that performed flawlessly, like the exemplary machines portrayed in movies.

“Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly,” the researchers found.

And even when the little humanoid robots messed up, people didn’t see them as any less intelligent or anthropomorphic.

“I do not like it less because of the mistakes,” responded one person in a post-study survey. “It would be scary if all went smooth because that would be too humanlike.” Another participant said they liked that the robot didn’t make it seem as if the participant were the one making mistakes.

The “Pratfall Effect”

The authors of the paper explain their findings with what’s called the Pratfall Effect, a psychological phenomenon first described in the 1960s in which people who make mistakes are seen as more attractive, in part because the blunder disarms observers and makes the person seem more relatable. According to that earlier research, people who appear flawless are perceived as distant ideals rather than real people.

Well, the roboticists from Austria’s University of Salzburg who conducted this research argue that the Pratfall Effect actually applies to any social agent, human or mechanical.

Participants didn't know the robot was programmed to mess up. When it failed to grab the sheet, most people tried again and again. (Image: University of Salzburg)

To test their ideas, the scientists had participants answer questions posed by the robot and then, in a separate session, build objects out of Lego bricks following the robot’s instructions. For some people, the robot performed perfectly. For others, it made a couple of mistakes. Some were designed to come across as technical failures, like getting stuck in a loop and repeating the same word over and over. Others were programmed to look like the robot was violating social norms, such as deliberately cutting off whatever the participant was saying.

Even when the faulty instructions seemed questionable, like when the robot violated a social norm by telling the participant to throw the Lego pieces on the ground, people were willing to play along.

What's not to love?

As robots are poised to become ever more integrated into our daily lives, with many companies working on in-home robots to assist us, a crucial step is making their presence feel welcome. As the saying goes, to err is human. Rather than building a perfect machine that never makes errors, adding a small, innocuous glitch here and there could go a long way toward smoothing human-robot relations.