
Facebook Hate Speech: Users Roast Facebook for One Strange New Feature

Right when you thought Facebook couldn't be roasted harder.

Facebook’s effort to detect and curb hate speech had a false start this week. Some users saw the accidental release of a new policing tool that asks whether posts contain hate speech. The prompt appeared under pretty much every post on affected users’ News Feeds, regardless of content: pictures of dogs, non-offensive memes, and even text posts asking what the hell was going on with Facebook.

Was it a mistake? The feature was visible to some users for only 20 minutes, according to Facebook’s vice president of product management, Guy Rosen, who called it “a test and a bug.” However momentary the mix-up, the internet did what it does best when it catches someone slipping.

Twitter soon became inundated with screenshots of ridiculous Facebook posts with the “Does this post contain hate speech?” prompt.

During his testimony in Washington in April, CEO Mark Zuckerberg said that developing a system to automate hate speech detection is “very linguistically nuanced,” and that the company has been relying on users and employees flagging offensive comments. However, he said, Facebook was developing artificial intelligence tools to proactively identify hate speech — tools that could be ready in five to 10 years.

In order to pull something like this off, Facebook needs to teach an A.I. system what people consider hate speech — with all the context and nuance that comes with it. Words can have very different connotations depending on who’s saying them: your best friend or some alt-right goon with a Pepe avatar.

Accurate machine learning also demands lots and lots of labeled data. A prompt like “Does this contain hate speech?” may seem silly when it’s under something benign, but it could be a way for the company to collect the human judgments an algorithm needs in order to learn to think more like a person.
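Facebook hasn’t described its pipeline, but the broad recipe for this kind of supervised learning is well established: gather human yes/no labels, then fit a text classifier to them. Here’s a minimal sketch in Python using scikit-learn; the example posts, labels, and model choice are purely illustrative, not anything Facebook has confirmed.

```python
# Illustrative sketch: turning crowd-sourced "hate speech? yes/no" labels
# into a trained text classifier. Data and model are made up for this
# example; Facebook has not disclosed its actual features or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = flagged as hate speech by users, 0 = benign.
posts = [
    "I love this picture of your dog",
    "what is going on with facebook today",
    "<slur-laden attack on a protected group>",
    "congrats on the new job!",
    "<threatening post targeting an ethnic group>",
]
labels = [0, 0, 1, 0, 1]

# Bag-of-words features plus a linear classifier: a deliberately simple
# stand-in for the "linguistically nuanced" models Zuckerberg described.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# predict_proba yields a confidence score rather than a hard yes/no,
# so borderline posts can be routed to human reviewers instead of
# being removed automatically.
print(model.predict_proba(["another benign dog photo caption"])[0][1])
```

The probability output matters: a system like this doesn’t have to make a hard call on every post, and low-confidence cases can fall back to the human flagging Zuckerberg said the company still relies on.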

Facebook’s sister platform, Instagram, already uses an A.I. system called DeepText to flag spammy and harassing comments for removal. However, Instagram CEO Kevin Systrom told Wired in June 2017 that the system has run into trouble with misclassifying comments. Removing comments or posts that don’t actually break the rules could spark a backlash from users, which may explain why Facebook has been so cautious about releasing something like this.
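The Wired piece doesn’t detail how DeepText decides, but the tension Systrom described is a standard one in moderation systems: where you set the confidence threshold determines whether you wrongly remove benign comments or let borderline abuse through. A toy illustration, with entirely made-up scores:

```python
# Illustrative only: raising a classifier's confidence threshold reduces
# false positives (benign comments removed) at the cost of false
# negatives (abuse missed). Texts and scores are hypothetical.
scored_comments = [
    ("nice photo!", 0.05),
    ("you people don't belong here", 0.62),
    ("<explicit slur>", 0.97),
    ("that joke was savage lol", 0.55),  # ambiguous slang, easy to misread
]

for threshold in (0.5, 0.9):
    removed = [text for text, score in scored_comments if score >= threshold]
    print(f"threshold {threshold}: removed {removed}")

# At 0.5 the ambiguous joke gets swept up; at 0.9 only the clearest
# case is removed, but the borderline attack slips through.
```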

Facebook has not said when, or even whether, this feature will be officially released. But it does look like a sign of progress: maybe Facebook is finally taking moderation as seriously as it should.
