Facebook on Thursday said it had begun fact-checking photos and videos to curb the spread of fake news on the social media network, Reuters reported.
The development comes as Facebook faces mounting criticism over the spread of fake news on the platform, the use of the network to manipulate elections, and, most recently, the furore over Cambridge Analytica – the British firm accused of using the data of 50 million Facebook users to aid the campaign of US President Donald Trump.
Facebook began the fact-checking exercise on Wednesday in France with the help of news agency AFP. Product Manager Tessa Lyons said the company would soon expand the project to more countries and partners. The exercise is part of “efforts to fight false news around elections”, she said.
Lyons did not specify the criteria Facebook or AFP would use to check the veracity of the photos and videos they review. She also did not say how much a photo could be edited before it would be deemed fake.
In March 2017, the company launched a function to mark fake news as “disputed” on the website. The tool was meant to help identify websites and sources likely to put out misinformation, and relied on fact-checkers such as PolitiFact and Snopes to flag false news.
There has been widespread speculation that voters in the US were swayed by fake news on social media to vote for Trump in the 2016 presidential election. Facebook and Google were accused of helping mislead voters by allowing misinformation to spread on their platforms.