
Facebook Will Use Artificial Intelligence to Prevent Suicides

The company wants to build a safer community.

by Gabe Bergado
[Photo: A woman sitting in the dark with the light of her laptop illuminating her face. Getty Images / Chris Jackson]

Facebook will use artificial intelligence to power new suicide prevention tools, part of its effort to build a safer community. There may be no perfect way to keep these tragedies from happening, but the company is taking a step toward helping people at risk.

A team of Facebook employees describes the effort as “streamlined reporting for suicide, assisted by artificial intelligence.” The initiative was announced in a Facebook blog post alongside two other resources: “Integrated suicide prevention tools to help people in real time on Facebook Live” and “live chat support from crisis support organizations through Messenger.”

These improvements come at a time when people have questioned what Facebook’s responsibility should be in these situations. Numerous users have broadcast their own suicides on Facebook Live, including a 14-year-old in Florida and a 33-year-old actor in Los Angeles.

The artificial intelligence will recognize when someone’s post resembles posts that were reported for suicide and self-harm in the past. By learning what this sort of content looks like, Facebook hopes the pattern recognition will become more refined over time. When the A.I. flags such content, it will also make the option to report the post more prominent.
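To make that idea concrete, here is a minimal sketch of how this kind of pattern recognition could work, assuming a simple text classifier built in Python with scikit-learn. Everything here (the toy training posts, the model choice, and the thresholds) is an illustrative assumption; Facebook has not published the details of its system.

```python
# A minimal sketch (not Facebook's actual system): train a text classifier on
# posts previously reported for suicide or self-harm, then use its score to
# decide how prominently to surface the "report post" option on new posts.
# The library (scikit-learn), toy data, and thresholds are all assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for past posts: label 1 = reported for suicide/self-harm, 0 = not.
past_posts = [
    "I can't take this anymore, nothing matters",
    "thinking about ending it all tonight",
    "great hike this weekend, photos soon",
    "anyone want to grab dinner on friday?",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a deliberately simple stand-in for
# whatever pattern-recognition model Facebook actually trained.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_posts, labels)

def triage(post, flag_at=0.5, escalate_at=0.9):
    """Illustrative product response; both thresholds are made-up numbers."""
    risk = model.predict_proba([post])[0][1]  # probability of the "reported" class
    if risk >= escalate_at:
        return "alert Facebook's community team"
    if risk >= flag_at:
        return "make the 'report post' option more prominent"
    return "no special treatment"

print(triage("everything feels hopeless and i want it to end"))
```

The appeal of this kind of design is that the system learns from posts users have already reported, so each new report can refine the patterns it looks for.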


Facebook Product Manager Vanessa Callison-Burch, who helped author the blog announcement, told BuzzFeed News that the A.I. is more accurate than reports from humans at flagging posts that contain this sort of content. Users who post about suicide or self-harm will be shown prompts with prevention resources. In more dire situations, the A.I. will alert Facebook’s community team.

Furthermore, Facebook is rolling out updates that will let those in distress directly message someone on the pages of organizations including the Crisis Text Line, the National Eating Disorder Association, and the National Suicide Prevention Lifeline. Prevention tools that already existed for Facebook posts will now also be available during Facebook Live streams.

However, these tools aren’t always easy to find. Reporting a post as possibly violent or suicidal is something of a journey: users must first click the down arrow in the right corner of a Facebook post, select “Report post,” click “I think it shouldn’t be on Facebook,” and only then does the option to flag it as suicidal appear.

Other social networks aren’t great at this either. Tumblr and Twitter both have pages dedicated to dealing with the topic, but their actual products and user interfaces don’t lend themselves well to reporting (on Twitter, the option “It’s abusive or harmful” does come up right after “Report Tweet” is clicked).

Facebook does appear to be working to improve its actual product with artificial intelligence, a tactic other organizations are considering as well. It might not be enough yet, but it’s a start.
