So Facebook’s recent announcement that it would empower users to flag hoax news seemed like a step in the right direction.
In a blog post, Facebook said the new feature would help it police the proliferation of fake news on the platform.
But like most things on Facebook, how you mark a post as fake is complicated:
- First you go to the top-right corner of a post and select "I don’t like this post."
- That brings up a window titled "Help Us Understand What's Happening."
- Then you select "I think it shouldn't be on Facebook."
- That brings up several choices, including "It's a false news story."
So this would all seem to be a good thing, right? As the Washington Post wondered: Did Facebook just kill the Web’s burgeoning fake-news industry?
But what about abuse of this system? You need only look back to November to see how it could be abused.
That month, The New York Times reported that Florida State University football players seemed to receive preferential treatment when ticketed by local police. According to a report in USA Today, when the Times' official Twitter account tweeted a link to the story, the tweet was quickly marked as spam.
The report says many FSU fans are believed to have tagged the tweet as spam so that it would disappear from Twitter.
After Times staffers contacted Twitter, the tweet was restored, but not before hundreds, if not thousands, of users had seen it marked as spam.
So, could this happen on Facebook? Could hundreds or thousands of advocates for some cause mark a legitimate post as a hoax and have it flagged as such? Of course.
How can Facebook, Twitter, and other social networks win this game of virtual whack-a-mole with hoaxes? They have to keep humans at the controls. No algorithm on its own can solve this problem.
So, what do you think? Will Facebook’s new ability to flag news items as fake be abused? Will it create other problems?