In the wake of the assault on the US Capitol on January 6, Twitter permanently suspended Donald Trump’s personal account, and Google, Apple and Amazon cut ties with Parler, at least temporarily shutting down the social media platform favoured by the far right.

Dubbed “deplatforming,” these actions restrict the ability of individuals and communities to communicate with each other and the public. Deplatforming raises ethical and legal questions, but foremost is the question of whether it is an effective strategy to reduce hate speech and calls for violence on social media.

The Conversation US asked three experts in online communications whether deplatforming works and what happens when technology companies attempt it.

‘Sort of, but it is not a long-term solution’

The question of how effective deplatforming is can be looked at from two different angles: does it work from a technical standpoint, and does it have an effect on worrisome communities themselves?

Does deplatforming work from a technical perspective?

Gab was the first “major” platform to be deplatformed, initially through removal from app stores and, after the Tree of Life synagogue shooting, through the withdrawal of cloud infrastructure providers, domain name providers and other Web-related services.

Before the shooting, my colleagues and I showed in a study that Gab was an alt-right echo chamber with worrisome trends of hateful content. Although Gab was deplatformed, it managed to survive by shifting to decentralised technologies and has shown a degree of innovation – for example, developing the moderation-circumventing Dissenter browser.

From a technical perspective, deplatforming just makes things a bit harder. Amazon’s cloud services make it easy to manage computing infrastructure but are ultimately built on open source technologies available to anyone.

A deplatformed company or people sympathetic to it could build their own hosting infrastructure. The research community has also built censorship-resistant tools that, if all else fails, harmful online communities can use to persist.

Does deplatforming have an effect on worrisome communities themselves?

Whether or not deplatforming has a social effect is a nuanced question just now beginning to be addressed by the research community. There is evidence that a platform banning communities and content – for example, QAnon or certain politicians – can have a positive effect.

Platform bans can slow the growth of new users over time and reduce the amount of content produced overall. On the other hand, migrations do happen, often in response to real-world events – for example, a deplatformed personality who moves to a new platform can trigger an influx of new users.

Another consequence of deplatforming is that users in the community that migrates can show signs of becoming more radicalised over time. So while Reddit or Twitter might improve with the loss of problematic users, deplatforming can have unintended consequences that accelerate the very behaviour that led to deplatforming in the first place.

Ultimately, while deplatforming is certainly easy to implement and effective to some extent, it is unlikely to be a long-term solution in and of itself. Moving forward, effective approaches will need to take into account the complicated technological and social consequences of addressing the root problem of extremist and violent Web communities.

Jeremy Blackburn, Assistant Professor of Computer Science, Binghamton University.

‘Yes, but driving people into the shadows can be risky’

Does the deplatforming of prominent figures and movement leaders who command large followings online work? That depends on the criteria for the success of the policy intervention. If it means punishing the target of the deplatforming so they pay some price, then without a doubt it works. For example, right-wing provocateur Milo Yiannopoulos was banned from Twitter in 2016 and Facebook in 2019 and subsequently complained about financial hardship.

If it means dampening the odds of undesirable social outcomes and unrest, then yes in the short term – but the long-term effect is far less certain. In the short term, deplatforming serves as a shock or disorienting perturbation to the network of people who are being influenced by the target of the deplatforming. This disorientation can weaken the movement, at least initially.

However, there is a risk that deplatforming can delegitimise authoritative sources of information in the eyes of a movement’s followers, and remaining adherents can become even more ardent. Movement leaders can reframe deplatforming as censorship and further proof of a mainstream bias.

There is reason to be concerned about the possibility that driving people who engage in harmful online behaviour into the shadows further entrenches them in online environments that affirm their biases. Far-right groups and personalities have established a considerable presence on privacy-focused online platforms, including the messaging platform Telegram. This migration is concerning because researchers have known for some time that complete online anonymity is associated with increased harmful behaviour online.

In deplatforming policymaking, among other considerations, there should be an emphasis on justice, harm reduction and rehabilitation. Policy objectives should be defined transparently and with reasonable expectations in order to avoid some of these negative unintended consequences.

Ugochukwu Etudo, Assistant Professor of Operations and Information Management, University of Connecticut.

‘Yes, but the process needs to be transparent’

Deplatforming not only works, I believe it needs to be built into the system. Social media should have mechanisms by which racist, fascist, misogynist or transphobic speakers are removed, misinformation is taken down, and there is no way to pay to have your messages amplified. And decisions to deplatform someone should be made as democratically as possible, rather than in a closed boardroom or by an opaque content moderation committee like Facebook’s “Supreme Court.”

In other words, the answer is alternative social media like Mastodon. As a federated system, Mastodon is specifically designed to give users and administrators the ability to mute, block or even remove not just misbehaving users but entire parts of the network.
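To make this concrete, here is a minimal sketch of what that node-level blocking (“defederation”) can look like in practice. It assumes a recent Mastodon server that exposes the admin API’s domain-blocks endpoint and an access token with the corresponding admin scope; the instance URL, token and blocked domain below are placeholders, and exact endpoints and fields may vary across Mastodon versions.

```python
# Sketch: how a Mastodon administrator might block an entire remote
# server via the admin REST API, rather than banning users one by one.
# Assumes Mastodon's /api/v1/admin/domain_blocks endpoint (recent
# versions) and a token with the admin:write:domain_blocks scope.
import requests

INSTANCE = "https://scholar.social"    # your own node (example from the text)
TOKEN = "YOUR_ADMIN_ACCESS_TOKEN"      # hypothetical placeholder

def block_domain(domain: str, severity: str = "suspend") -> dict:
    """Silence or suspend every account on `domain` for this node.

    severity="silence" hides the domain's posts from public timelines;
    severity="suspend" removes its content and halts federation entirely.
    """
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "domain": domain,
            "severity": severity,
            # Public rationale, visible to the node's members.
            "public_comment": "Hate speech; decision announced to members",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A hypothetical misbehaving server: any administrator can cut it
    # off for their own node, with no central authority required.
    print(block_domain("bad-node.example", severity="suspend"))
```

The design point is that this decision is local and visible: it applies only to the administrator’s own node, and the public comment gives members a record of why it was made.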

For example, despite fears that the alt-right network Gab would somehow take over the Mastodon federation, Mastodon administrators quickly marginalised Gab. The same thing is happening as I write, with new racist and misogynistic networks forming to fill the potential void left by Parler. And Mastodon nodes have also prevented spam and advertising from spreading across the network.

Moreover, decisions to block parts of the network aren’t made in secret. They are made by local administrators, who announce their decisions publicly and are answerable to the members of their node in the network. I am on scholar.social, an academic-oriented Mastodon node, and if I do not like a decision the local administrator makes, I can contact the administrator directly and discuss it. There are other distributed social media systems as well, including Diaspora and Twister.

The danger of mainstream, corporate social media is that it was built to do exactly the opposite of what alternatives like Mastodon do: grow at all costs, including the cost of harming democratic deliberation. It is not just cute cats that draw attention but conspiracy theories, misinformation and the stoking of bigotry. Corporate social media tolerates these things as long as they are profitable – and, it turns out, that tolerance has lasted far too long.

Robert Gehl, Associate Professor of Communication and Media Studies, Louisiana Tech University.

This article first appeared on The Conversation.