For India’s ruling Bharatiya Janata Party, WhatsApp has always been more than just a messaging application – it is one of its primary propaganda tools.

A recent investigation by the news website The Wire revealed that BJP operatives have been using a computer program called Tek Fog to hijack inactive WhatsApp accounts and disseminate messages. In the past, the party's volunteers have been accused of using the platform to spread fake news, foment communalism and sow hatred in society.

Kiran Garimella has observed this world from up close. While he was a postdoctoral researcher at the Massachusetts Institute of Technology, he and three others conducted the largest analysis to date of content in public WhatsApp groups made up of BJP supporters.

In the run-up to the 2019 general elections, the four researchers gathered 2 million WhatsApp posts from 5,000 political groups and winnowed them down to 27,000 useful, non-spam posts. Their finding: instead of explicitly spouting hatred against minorities, BJP supporters prefer to disseminate fear of them through what the researchers called “fear speech”.

The researchers defined fear speech as “an expression aimed at instilling (existential) fear of a target (ethnic or religious) group”. The posts fell into broad categories: past events, population narratives, cultural references such as Quran verses, and speculation about dominance. Specific instances of fear speech included the ideas that Muslims incite disharmony, are responsible for communal violence, exploit Dalits, and marry to undermine Hinduism.

The researchers found that, compared with other posts, fear speech posts were shared by more people and to more groups. But that was not all. Fear speech posts spread at a faster pace, lasted longer, and were more often posted by users who occupied central positions in their networks. On top of that, these posts escaped algorithmic detection more often.
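
How far and how fast a post travels can be read off a message log once forwarded copies of the same content are linked together. As a rough illustration – not the researchers' actual pipeline, and with file and column names assumed for the sketch – the comparison might be computed like this:

```python
# Sketch: comparing how fear-speech posts and other posts travel across groups.
# Assumes a message log with illustrative columns: text_hash (same hash = same
# forwarded content), user_id, group_id, timestamp, is_fear_speech.
import pandas as pd

msgs = pd.read_csv("whatsapp_messages.csv", parse_dates=["timestamp"])

spread = msgs.groupby("text_hash").agg(
    n_users=("user_id", "nunique"),      # how many distinct people shared it
    n_groups=("group_id", "nunique"),    # how many groups it reached
    first_seen=("timestamp", "min"),
    last_seen=("timestamp", "max"),
    is_fear_speech=("is_fear_speech", "max"),
)
spread["lifetime_days"] = (spread["last_seen"] - spread["first_seen"]).dt.days

# Average reach and lifetime of fear-speech posts versus everything else.
print(spread.groupby("is_fear_speech")[["n_users", "n_groups", "lifetime_days"]].mean())
```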

“Fear speech messages clearly fit into a set of topics relating to aggression, crime, and violence showcasing Muslims as criminals and using dehumanizing representations of them,” said the research paper published in February 2021.

Now an Assistant Professor at Rutgers University, Garimella spoke in an interview about the policy implications of his findings, his other research into private WhatsApp groups, and the problem of WhatsApp funding this kind of research. Edited excerpts from the interview:

Photo: Kiran Garimella.

How did you start out in this field?
I did my bachelor's and master's at the International Institute of Information Technology in Hyderabad. After my PhD in computer science, I was a postdoc at MIT, where we started looking at WhatsApp. This was in 2018, when a lot of lynchings were happening [in India]. Misinformation [was] having real-life impact and people were getting killed.

Not a lot of research focuses on platforms like WhatsApp. So, we thought, ‘Why not do something here?’ We invested a lot of effort into building infrastructure: we bought two dozen phones and SIM cards and made WhatsApp accounts. Then we joined about 10,000 publicly available groups that discuss politics. We just went in big. We wanted to do a large-scale study of the information ecosystem of WhatsApp.

Can you summarise your findings?
The project looked at the prevalence of hate speech in WhatsApp groups. This was during the 2019 elections. There was a buzz that it was a WhatsApp election.

We parsed the internet for whatever we could find. A lot of the WhatsApp groups [we discovered] were BJP-related because the BJP was ahead [of others] in creating such infrastructure. But there were hundreds of groups belonging to the Congress, Samajwadi Party and other parties as well. The groups were hierarchical: you had booth-level groups, constituency-level groups, and so on.

We built AI and machine-learning models to see the prevalence of hate speech in these groups. But not a lot of explicit hate speech turned up, which was surprising. The popular perception was that these WhatsApp groups are full of hateful discussions. When we dug further, we found… a concerted effort to create a narrative of fear – that’s what we call fear speech. Instead of saying “Muslims are traitors”, you say “Muslims will become the dominant group in 2050”.
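
As a toy illustration of the general approach – the study's actual models were more sophisticated, and the tiny "training set" below is made up – a basic supervised classifier for estimating prevalence might look like this:

```python
# A minimal text classifier of the kind used to estimate how common hateful or
# fear-based messages are. Illustrative sketch only, not the study's model.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "they will outnumber us by 2050",           # fear-narrative style (toy)
    "our community is under threat from them",  # fear-narrative style (toy)
    "join the rally at the park tomorrow",      # ordinary group chatter (toy)
    "good morning, have a blessed day",         # ordinary group chatter (toy)
]
labels = [1, 1, 0, 0]  # 1 = fear speech, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Prevalence estimate = share of unseen messages the classifier flags.
new_messages = ["pictures from today's rally", "by 2050 they will dominate us"]
print(clf.predict(new_messages).mean())
```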

There are so many of these types of messages. They are hard to detect from a moderation point of view. Forget WhatsApp – there is no moderation there. Take Facebook. Facebook has hate speech classifiers, but none works in these cases because nothing explicitly hateful is being said. What is being said is fearful and can potentially lead to hateful consequences – as we saw in the Delhi riots.

The other [finding of our research] is the prevalence. If you look at messages that mention Muslims, one-fifth have a fear-based narrative.

What did you find unique about fear speech?
We found that fear speech is significantly more popular. It spreads faster and farther, like rumours and fake news. The other finding was that emojis were widely used in expressing hate and fear: emojis of demons depicting Muslims, and the orange flag as a Hindu power symbol.

What subjects were discussed as fear speech?
Love jihad, Islamisation of Bengal and Hyderabad, atrocities against Hindus in Kerala and Sri Lanka. These are some of the most popular categories.

What is the history of the phrase ‘fear speech’, and what are the policy consequences?
The term ‘fear speech’ is not new. It is from intergroup conflict research, from people who did research in Rwanda during the genocide.

One of the main consequences is in terms of moderation. If you don’t say anything explicitly hateful, Facebook doesn’t have a rule disallowing a fear narrative. This might be OK because you don’t want Facebook to overreach. [But] in a lot of cases, Facebook initiates action after things have happened – as in Myanmar, for instance. Fear speech is now banned in Myanmar. You cannot post anything that creates fear about a specific community. But that rule only exists for Myanmar.

We are saying that this might already be the case in India. We might not have had large-scale offline consequences, but maybe there could be [in the future].

What research do you have coming up?
We went to party offices in Uttar Pradesh and asked the social media people there to add us to WhatsApp groups, which some of them did. This allowed us to collect hundreds of groups that are internal BJP groups, not publicly accessible. We looked into these groups for misinformation, hate and partisanship.

One of our key findings is that the overall prevalence of misinformation or hate speech might not be that high. When people think of WhatsApp, they think of a cesspool of hate and misinformation. We don’t find that. It’s mostly uninteresting content: rally pictures, good morning messages, pictures of gods.

The main catch is, if you look at messages about Muslims, then there is a lot of misinformation and hate. Roughly 1% to 2% [of messages in general] are misinformation, but if you condition on a message being about Muslims, then the percentage jumps about 10 times, to roughly 20%.
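
The conditioning Garimella describes is straightforward to compute once messages have been annotated; a minimal sketch, with file and column names assumed for illustration:

```python
# Overall misinformation rate versus the rate among messages about Muslims.
# Assumes an annotated table with columns: text, mentions_muslims, is_misinfo.
import pandas as pd

msgs = pd.read_csv("annotated_messages.csv")

overall_rate = msgs["is_misinfo"].mean()
conditional_rate = msgs.loc[msgs["mentions_muslims"] == 1, "is_misinfo"].mean()

print(f"Overall misinformation rate:     {overall_rate:.1%}")      # ~1-2% per the interview
print(f"Rate among Muslim-related posts: {conditional_rate:.1%}")  # ~10x higher, ~20%
```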

Additionally, we are working on an analysis of election tipline data. How effective are tiplines for fact-checking misinformation? Do they surface misinformation and how fast?

In the private WhatsApp groups, was the content flowing from the top officials to bottom volunteers or vice versa?
We have been thinking about this question. How much of it is top-down and how much is bottom-up? It’s not as top-down as one would think. One way to say something has percolated top-down is if it was posted on the official channels on the NaMo app and then posted in the groups. [But only] 20% of the content comes from what is posted on the NaMo app.
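
One rough way to operationalise that top-down measure, assuming message dumps from both the official channel and the groups are available, is to fingerprint messages and count overlaps; the file names and normalisation step below are illustrative, not the study's method:

```python
# Rough estimate of how much group content is "top-down": the share of group
# messages that also appeared on the official channel.
import hashlib

def fingerprint(text: str) -> str:
    """Lower-case, collapse whitespace and hash, so identical forwards match."""
    return hashlib.sha1(" ".join(text.lower().split()).encode("utf-8")).hexdigest()

with open("namo_app_posts.txt", encoding="utf-8") as f:
    official = {fingerprint(line) for line in f}

with open("group_messages.txt", encoding="utf-8") as f:
    group_msgs = list(f)

matches = sum(fingerprint(m) in official for m in group_msgs)
print(f"Share of group messages traceable to official posts: {matches / len(group_msgs):.0%}")
```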

Can you summarise the findings of your other papers about social media?
As we analysed WhatsApp data, we found a lot of messages containing links to Google Docs, saying, “We should trend this specific hashtag and here are some tweets.” There were a few hundred examples of a hashtag that the high command or the IT cell wanted to trend on a specific day at a specific time. This was sent to thousands of WhatsApp groups.

We analysed 75 such campaigns, and 62 of them – more than 80% – were trending on the day they were supposed to. It’s a very simple attack. It is just people copy-pasting the same tweet over and over again. Twitter doesn’t care. Even though the number of unique tweets is low, Twitter labels the hashtag as trending. Because of that, it [the message] is amplified further.
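
The tally above, and the copy-paste signature such campaigns leave behind, are both easy to compute; a small sketch with made-up tweets:

```python
# Tally from the interview plus a toy illustration of the copy-paste pattern:
# a coordinated hashtag shows very few unique tweet texts.
from collections import Counter

campaigns_analysed, campaigns_trended = 75, 62
print(f"{campaigns_trended / campaigns_analysed:.0%} of campaigns trended")  # ~83%

# Made-up tweets for one hypothetical campaign: mostly identical copy-pastes.
tweets = ["Vote for change #ExampleTag"] * 40 + ["My own take #ExampleTag"] * 3
unique_share = len(Counter(tweets)) / len(tweets)
print(f"Unique tweet texts: {unique_share:.0%}")  # a tiny share suggests coordination
```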

In [another] paper, we annotated a few thousand images for misinformation and looked at the prevalence of misinformation in public WhatsApp groups. [We found that] one in eight of those images was misinformation – that’s a lot of misinformation. There are specific categories of image misinformation. One category is images reshared out of context, such as an old image of Nehru attending a funeral that is posted saying, “Nehru is praying like Muslims” or “He is a secret Muslim”.

Another [category] is fake quotes or fake stats, such as a BBC survey finding that the BJP is going to win in Karnataka or that Bill Gates said an amazing thing about Modi. The third category, which is small, around 10% of the images, is photoshopped images. One of the popular ones was screenshots of TV news programmes that were edited.
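
For the out-of-context category, simple tooling can flag candidates: a perceptual hash matches a forwarded image to archival copies even after recompression. A minimal sketch, assuming the Pillow and ImageHash libraries and illustrative file paths:

```python
# Sketch: perceptual hashing to spot old photos recirculating out of context.
from PIL import Image
import imagehash

archive_hash = imagehash.phash(Image.open("archive/old_photo.jpg"))
forward_hash = imagehash.phash(Image.open("forwards/viral_image.jpg"))

# A small Hamming distance means it is almost certainly the same underlying
# photo, even after recompression, resizing or minor cropping.
if archive_hash - forward_hash <= 8:
    print("Likely a reshared archival image - check the claimed context.")
```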

So much funding in this space comes from Facebook and WhatsApp. What do you think about that?
Yes, that’s true. They have been funding a lot of this research, which is not great. In general, a lot of people who do this research are based somewhere else, not in India – for instance, I am at Rutgers, in New Jersey.

A lot of the time there is no funding organisation in India that would be interested in funding something like this at the scale we would like. The National Science Foundation and local organisations would probably want us to do research about, yes, WhatsApp, but also about diaspora communities in the US. That’s a problem.

Karishma Mehrotra is an independent journalist. She is a Kalpalata Fellow for Technology Writings for 2021.