Facebook’s algorithm led a dummy user in India to misinformation, hate speech and violent content within just three weeks of the account’s creation, according to internal documents accessed by Bloomberg and other American media organisations.

The social media company had created the test account in February 2019 to ascertain the impact of the algorithms that determine the content users see on the platform in India, Bloomberg reported.

The account followed Facebook’s algorithm-based recommendations to browse pages, watch videos and join groups for 21 days. The dummy user’s feed was found to be flooded with doctored images, fake news and violent scenes, Bloomberg reported.


“Following this test user’s News Feed, I have seen more images of dead people in the past three weeks than I have seen in my entire life total,” the Facebook researcher who created the account wrote in an internal report that was published in February 2019, according to The New York Times.

Days after Facebook created the test account, the Pulwama terror attack took place. The attack, orchestrated by the Pakistan-based terror group Jaish-e-Mohammad, killed 40 Central Reserve Police Force personnel on February 14, 2019.

After that, posts with anti-Pakistan content started appearing in the groups that the test user was part of, according to The New York Times.


The groups featured photos of beheadings and posts claiming that 300 terrorists had died in an explosion in Pakistan, according to Bloomberg.

Facebook acknowledged that the test user’s account was “filled with polarising and graphic content, hate speech and misinformation” because of its own recommendations.

Facebook said that the findings from the test led to a deeper analysis of its systems.

“Our work on curbing hate speech continues and we have further strengthened our hate classifiers to include four Indian languages,” a spokesperson for the company said, according to Bloomberg.


Facebook’s internal documents, collected by former employee Frances Haugen, have shown that the company did not devote sufficient resources to India, The New York Times reported. The company allocates 87% of its budget for classifying misinformation to the United States, while the remaining 13% is set aside for the rest of the world.

But Facebook spokesperson Andy Stone said the figures did not include its “third-party fact-checking partners”, a majority of whom are located outside the US, according to The New York Times.