Are you baffled by your posts vanishing at random from Facebook recently? Or by entire accounts getting deleted or suspended?
Then you are not alone. The social media platform has suspended over 500 million accounts over the past few months and removed nearly two million propaganda posts.
That’s because Facebook is in hot water. It faces allegations of selective censorship, even as governments and the United Nations accuse the platform of letting itself be used as a medium by violent hatemongers. In April, the social media platform was forced to shut down services in Sri Lanka for allegedly aiding mob violence against the country’s Muslim minority.
With over 290 million users, India, which goes to the polls in just a few months, is home to one of Facebook's biggest user bases.
Facebook now wants to explain why certain accounts and posts are deleted, sometimes seemingly at random. In May, the company published a 27-page document laying out how it determines whether a post should be taken down.
Sheen Handoo, public policy manager at Facebook, spoke to Quartz about the complexities of determining misinformation, and why fake news continues to be a hard mountain to climb. Edited excerpts:
Which important areas do Facebook's community standards cover? How do you determine whether a post should be taken down?
We don’t allow any form of violence on our platform. Among other things, we remove references to dangerous organisations such as terrorist groups. Any reference to or representation of suicide and self-injury is also not allowed. Similarly, we will also remove any objectionable content such as hate speech, nudity, pornography, or graphic violence. Our policies are consistent across regions because we don’t want to muzzle speech from one region and allow it in another.
Do you depend on technology to identify the nature of content online?
We use a combination of technology and an internal review team to identify violating content on the platform. We have teams in New Delhi, Singapore, Dublin, Washington, and San Francisco, and we are trying to ramp up and build teams in other parts of the world as well. At the end of 2017, we had 10,000 people working on these teams, and we want to have at least 20,000 by the end of 2018. We use artificial intelligence and machine learning tools to enhance human performance.
Your community policies are standard across all the regions Facebook operates in. Does that leave room for subjective factors such as differences in language or cultural references?
Our core job is really to make sure our policies are in line with evolving social and linguistic norms. The way communities use our platform can change, sometimes on a daily basis. So we want to stay on top of it and make tweaks periodically. Based on feedback from the community, as well as from our internal teams, we review our policies and refine them. Every two weeks we convene a content standards forum to review and pass these new policy recommendations.
Facebook has been under immense pressure to tackle fake news. Are you effectively monitoring misinformation on your platform?
We have learned a lesson from our experience in Sri Lanka and Myanmar, where we saw how fake news can really lead to physical harm. So we have tied up with third-party fact-checking agencies in different countries because Facebook doesn't want to be the arbiter of truth. In India, we partnered with Boom Live just before the Karnataka elections, and we are trying to scale that up nationally.
There are challenges. For instance, how do we analyse a post that is in Hindi but written in the Roman script? We depend heavily on human reviewers for this and are still grappling with the issue.
So far, how have these efforts helped Facebook tackle misinformation?
We recently published a detailed enforcement report. Globally, between October 2017 and March 2018, we disabled 583 million fake accounts within minutes of registration, and 99% of them were flagged by our internal tools. We identified 837 million pieces of spam, nearly 100% of which was flagged before anyone reported it. We also removed 1.9 million pieces of terrorist propaganda, about 99.5% of which was flagged by our AI and machine learning tools.
We also removed 2.5 million pieces of hate speech between January and March, but our technology tools proactively flagged only about 38% of it. That's because hate speech tends to be very context-heavy. We need to train our internal tools to catch those nuances better. It is a work in progress.
How do you tackle attacks on public figures on social media?
When it comes to public figures, we differentiate between types of threats: criticism, hate speech, and direct threats. In the case of a direct threat, there are details we need to consider to determine whether the threat is credible before taking it down.
We also have a vulnerable people category, which includes people like heads of state. For this category, we don't look for additional details; we take down a post or profile immediately if it is a direct attack.
What would you do in the reverse scenario, where a public figure is found violating your content policy?
If a public figure posts such a threat on their profile, we will take it down. But mostly these statements appear as part of news reports on what they say, which is fine because they are being used in context for reportage. We have taken down the profiles of a few politically prominent people in the past, such as Myanmar military accounts.
This article first appeared on Quartz.