
Facebook Has to Make a Choice on How to Stop Fake News

Getty Images / David Ramos

Facebook has a fake news problem. The full extent of the issue depends on who you ask, and apparently on when you ask them: CEO Mark Zuckerberg has gone from saying that the company needs to do a better job of filtering fake news to saying that it’s not that big a deal. Still, most can agree that it should be addressed.

The next question is what Facebook can do to resolve the problem.

There are a few possibilities. The easiest would be to bring back the human editors who were replaced by algorithms earlier this year; the automated system almost immediately began pushing fake news into the “trending” section that every Facebook user sees when visiting the site. Reversing the decision to nix the human curators would at least mitigate Facebook’s role in directly helping misinformation spread.

Facebook could also follow Daniel Sieradski’s lead by making something like “B.S. Detector,” a Chrome extension that warns people when they’re about to click a link to a news source known to spread falsehoods. The extension is said to rely on just a few lines of JavaScript; Facebook’s many talented engineers could probably build a similar tool in a matter of minutes.
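The extension’s actual source isn’t reproduced here, but the core idea is simple enough to sketch. The TypeScript below is a rough illustration only; the domain list and the warning markup are hypothetical stand-ins, not B.S. Detector’s real data or code.

```typescript
// Minimal sketch of a link-flagging content script, in the spirit of
// B.S. Detector. FLAGGED_DOMAINS is a hypothetical stand-in list.
const FLAGGED_DOMAINS = new Set([
  "example-fake-news.com", // placeholder entries for illustration
  "totally-real-stories.net",
]);

function hostnameOf(url: string): string | null {
  try {
    // Strip a leading "www." so subdomain variants still match the list.
    return new URL(url).hostname.replace(/^www\./, "");
  } catch {
    return null; // relative or malformed hrefs are ignored
  }
}

function flagSuspectLinks(root: ParentNode): void {
  for (const anchor of root.querySelectorAll<HTMLAnchorElement>("a[href]")) {
    const host = hostnameOf(anchor.href);
    if (host && FLAGGED_DOMAINS.has(host) && !anchor.dataset.bsFlagged) {
      anchor.dataset.bsFlagged = "true"; // avoid double-labeling a link
      const warning = document.createElement("span");
      warning.textContent = " [known source of fake news]";
      anchor.after(warning);
    }
  }
}

// Run once on load; a real extension would also re-scan as new links
// appear, e.g. with a MutationObserver, since the News Feed fills in
// as the user scrolls.
flagSuspectLinks(document);
```

Keeping the check on the user’s side of the browser is what makes an extension a workable stopgap: the warning shows up whether or not Facebook itself decides to act.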


Or the company could lean in to the algorithms. Right now, Facebook’s News Feed is set to show people links shared by their friends, or links Facebook thinks they’ll be interested in. What if those algorithms were instead trained to spot fake news and push it so far down the feed that very few people ever see it? This wouldn’t require Facebook to ban fake news from its site, and users could still spread misinformation via direct messages, but at least they would have to seek out fake news instead of merely stumbling upon it.
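What that downranking could look like is easiest to show in code. This is a purely illustrative sketch: the Story shape, the fakeNewsScore classifier output, and the threshold are all assumptions, since Facebook’s actual ranking system isn’t public.

```typescript
// Illustrative downranking sketch; not Facebook's actual ranking code.
// Assumes a hypothetical classifier has already scored each story's
// likelihood of being fake news on a 0-to-1 scale.
interface Story {
  id: string;
  relevance: number;     // how interesting the feed thinks the story is
  fakeNewsScore: number; // hypothetical classifier output, 0 to 1
}

// Stories the classifier is confident about get their relevance slashed
// rather than removed, so they stay reachable but are rarely stumbled upon.
function rankFeed(stories: Story[], threshold = 0.9, penalty = 0.01): Story[] {
  return [...stories]
    .map((story) => ({
      story,
      score:
        story.fakeNewsScore >= threshold
          ? story.relevance * penalty
          : story.relevance,
    }))
    .sort((a, b) => b.score - a.score)
    .map((entry) => entry.story);
}
```

The design choice mirrors the article’s point: nothing is banned, the suspect story simply sinks toward the bottom of the feed.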

Another possibility is simply banning links to certain websites. Both Google and Facebook have now changed their advertising rules to prevent fake news sites from making money off their services, but the social network has not instituted a service-wide ban on any website. That remains an option, and it would prevent people from working around human editors or gaming an algorithm, but it would also amount to direct censorship of certain sites.
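Mechanically, a ban like that is the bluntest option of all: reject the link at posting time rather than warn about it. A minimal sketch, again with a hypothetical blocklist rather than any real Facebook policy:

```typescript
// Sketch of a post-time domain ban; BANNED_DOMAINS is a hypothetical
// blocklist, not a real one.
const BANNED_DOMAINS = new Set(["example-fake-news.com"]);

// Returns false if the URL points at a banned domain or any of its
// subdomains, so the ban can't be dodged with trivial hostname tricks.
function canShareLink(url: string): boolean {
  let host: string;
  try {
    host = new URL(url).hostname.replace(/^www\./, "");
  } catch {
    return true; // not a parseable URL, so there's nothing to ban
  }
  for (const banned of BANNED_DOMAINS) {
    if (host === banned || host.endsWith("." + banned)) return false;
  }
  return true;
}
```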

But that probably won’t happen. Gizmodo reported on Monday that Facebook has been reluctant to crack down on fake news because any effort would mostly affect right-leaning sites; the company doesn’t want the alt-right to claim that it’s pushing a liberal agenda by suppressing the “truth” exposed by conservative sites. That the articles are inaccurate doesn’t matter; it’s all about perception.

This is perhaps the most vexing problem Facebook faces as it tries to tackle fake news. The service has more than a billion users; when was the last time a group of people even a fraction of that size totally agreed on anything? If the company hires back its human editors, users will complain that those people are biased; if it expands its algorithms, they’ll say the A.I. is biased; if it bans some sites, it will be accused of censorship; and if it does nothing, it’ll be blamed for that, too.

There are many potential solutions to this problem, but there isn’t a silver bullet Facebook can fire into just one horrible beast. While many can agree that Facebook has to stop the spread of misinformation, it’s unlikely there will be a consensus on exactly what the company’s response should be.
