
Reddit Data Led an A.I. Named Norman to Obsess Over Murder, MIT Finds

Reddit did a number on this A.I. 


The advancement of artificial intelligence worries many people, from those who fear the growing data collection of A.I. assistants to Elon Musk, who warns that “super-intelligence” could bring humanity’s end. How can scientists prevent an A.I. from destroying human civilization? To answer this question, scientists might need to study an A.I. gone bad.

Such was the impetus behind Norman, billed as the “world’s first psychopathic A.I.” A team of scientists from Scalable Cooperation at the MIT Media Lab, led by Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan, fed the A.I. biased data to see how it would influence its behavior. Beginning in April, the team exposed Norman, an image-captioning algorithm, to damaging biases and bad data in order to see how they shaped the way it “sees” pictures. To ensure Norman received the most corrupting input possible, they turned to the darkest corners of Reddit.

“Our goal in this project is to highlight the role of data in algorithmic bias in a way that anyone can understand,” the team told Inverse. “The same method can see very different things in an image, even sick things, if trained on the wrong data set. We hope to stimulate public awareness and discussion of these issues.”

After being exposed to the infamously violent subreddit r/watchpeopledie, Norman was subjected to a Rorschach test, captioning the images it “perceived” in a series of inkblots. Those responses were then compared to the output of a standard image-captioning neural network trained on the MSCOCO dataset. The differences between the two are striking.
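To make the setup concrete, the comparison boils down to running the same ambiguous image through two captioning models and reading the captions side by side. The sketch below illustrates that idea; it is not MIT’s code. The Hugging Face model named here is a real public captioner trained on conventional data, while the “Norman” checkpoint and the inkblot file path are stand-ins that do not exist publicly.

```python
# A minimal sketch of the comparison setup (an illustration, not MIT's code):
# caption one ambiguous image with two models and compare the outputs.
from transformers import pipeline

# A standard captioner trained on conventional image-caption data.
baseline = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# A hypothetical second model fine-tuned on darker caption data, standing
# in for Norman; no such public checkpoint exists.
# norman = pipeline("image-to-text", model="path/to/norman-checkpoint")

inkblot = "inkblot_01.png"  # a randomly generated Rorschach-style image (assumed file)
print("baseline:", baseline(inkblot)[0]["generated_text"])
# print("norman:  ", norman(inkblot)[0]["generated_text"])
```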


“The first rule of this subreddit is that there must be a video of a person actually dying in the shared post,” the team explained. “Due to ethical and technical concerns and the graphic content of the videos, we only utilized captions of the images (which are matched with randomly generated 9K inkblots similar to Rorschach images), rather than using the actual images that contain the death of real people.”
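In other words, the training pairs were built from text alone: each scraped caption was matched to a synthetic, symmetric inkblot rather than to any real footage. A rough sketch of how such caption-inkblot pairs could be generated, offered purely as an assumption about the approach, looks like this:

```python
# Sketch of the data-construction idea described above (an assumption,
# not MIT's actual pipeline): pair each scraped caption with a randomly
# generated, mirror-symmetric inkblot so no graphic imagery is used.
import numpy as np
from PIL import Image

def random_inkblot(size=256, seed=0):
    """Make a Rorschach-style blot: random noise, crudely smoothed,
    thresholded, then mirrored left-to-right for symmetry."""
    rng = np.random.default_rng(seed)
    noise = rng.random((size, size // 2))
    for _ in range(8):  # crude blur via repeated neighbor averaging
        noise = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)) / 3
    half = (noise > noise.mean()).astype(np.uint8) * 255
    blot = np.concatenate([half, half[:, ::-1]], axis=1)
    return Image.fromarray(255 - blot)  # dark blot on a white background

captions = ["caption scraped from the subreddit", "..."]  # placeholder text
dataset = [(random_inkblot(seed=i), c) for i, c in enumerate(captions)]
```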


This experiment points to the ways in which an A.I. can “become bad” based on the data it’s given. “There is a central idea in machine learning: the data you use to teach a machine learning algorithm can significantly influence its behavior,” the team stated. “So when we talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
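The point is easy to demonstrate at toy scale: train the identical algorithm on two differently labeled versions of the same text, and it gives different answers to the same question. The example below uses entirely invented data and has nothing to do with the Norman project itself.

```python
# Toy illustration of the idea above: the same algorithm, trained on two
# differently biased label sets, predicts different labels for one input.
# All data here is invented for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["a figure standing by a window", "a dark shape on the floor",
         "two people at a table", "a shadow across the wall"]

neutral_labels = ["person", "object", "people",   "pattern"]
grim_labels    = ["victim", "body",   "mourners", "bloodstain"]

# Identical model architecture, different training labels.
neutral_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, neutral_labels)
grim_model    = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, grim_labels)

query = ["a shape by the window"]
print(neutral_model.predict(query))  # ['person']
print(grim_model.predict(query))     # ['victim']
```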


Norman’s psychopathic perception raises questions about how artificial intelligence makes decisions. Given how many tasks are now automated with the help of machine-learning algorithms, it’s urgent that we recognize how the data used to train those algorithms shapes their behavior.
