Around the world, governments are increasingly taking steps to regulate speech online – often amid pronounced protests from human rights and civil liberties organisations, which have identified significant flaws in these measures that could end up doing more harm than good.

But with increasing public awareness of the harms posed by extreme speech on the internet – including terrorist content, hate speech, and other speech meant to reinforce differences and hatred – what is the right way for governments to respond?

Laws addressing different forms of speech have been used in some countries for everything from silencing legitimate expression to persecuting journalists, lawyers, human rights defenders, and religious and ethnic minorities. To avoid such harmful applications of the law – and to ensure a robust public square that empowers the natural checks and balances of democratic institutions – the caution with which democracies have approached the regulation of speech up until the digital age should also be exercised in the digital world.

Many approaches to online content regulation give cause for concern, but one stands out: “intermediary liability” – imposing responsibility on companies for content generated by their users.

This approach was central to Germany’s landmark 2017 Network Enforcement Act, or NetzDG, which requires companies to remove terrorist content and hate speech quickly after it is reported, and it is being replicated in different forms by Singapore, Russia, the Philippines, Kenya, Venezuela, the United Kingdom, France and many others.

Collateral damage?

Unfortunately, the consequences of this approach can run counter to the very social aims lawmakers cite. Even where laws are clearly defined and rule-of-law principles are observed, the threat of criminal charges or fines can push companies to monitor and restrict content more aggressively than they otherwise would, with serious unintended consequences.

Of particular concern, content meant to counter the impact of violent extremist speech, such as satirical or humorous critiques of terrorist messaging, often mimics “extremist” speech and is therefore likely to be mistakenly identified as illegal and taken down.

Objective journalistic reporting that corrects disinformation – content that contributes to debate and discussion, violates neither laws nor platform guidelines, and should remain on such platforms – is similarly at risk.

Additionally, certain vulnerable communities that share a common religion or language with extremists could be disproportionately affected by overly aggressive approaches to content moderation, further marginalising them.

Aggressive forms of intermediary liability can also hinder law enforcement efforts. Censoring public forums on which extreme speech occurs can drive these conversations to more secure, peer-to-peer mediums, significantly limiting law enforcement and intelligence agency visibility.

The over-removal of controversial content and discussions further complicates such observation and understanding. In addition, those peddling extreme messages regularly frame their rhetoric in narratives of oppression and victimhood. By removing extreme content that is controversial or distasteful but does not directly incite violence or hatred, governments may unintentionally play into those narratives.

The way ahead

So what kind of legislative proposals should we desire? Policymaking processes should emphasise transparency and consultation, and the resulting policies should be underpinned by flexibility and collaboration.

Flexible approaches to policy consciously consider the dynamic nature of the internet and the needs of innovators and entrepreneurs. Rather than requiring a specific, rigid method of content enforcement or prescribing unrealistic outcomes, flexible policymaking acknowledges that perfect enforcement is a mirage and recognises that there is still much we do not understand about how to measure, let alone combat, harmful speech online.

The Manila Principles on Intermediary Liability outline positive policy approaches, and some recent proposals, including Argentina’s draft intermediary liability law, exemplify many positive qualities.

Content-focused policies should emphasise the importance of transparency, due process, and remedy in company processes. They should be flexible enough to reflect the diversity of online platforms and the unique nuances that set them apart from each other, while acknowledging that technologies underpinning both the problems and possible solutions are dynamic and evolving.

Finally, governments must acknowledge their own role in the process by providing clear legal definitions, being transparent about government requests for content restriction, fostering objective research and analysis, and committing to periodic re-evaluation of laws to ensure they are functioning as intended.

The variables driving extreme speech online are so multitudinous and complex that sorting them out requires a collaborative approach from government, companies, civil society, and academics. Policymaking should thus seek to empower collaboration among different stakeholders, and in particular should strive to include civil society in policy deliberations.

Civil society offers unique and independent perspectives, often stemming from close relationships with those most affected by extreme speech, that can highlight flaws, fill gaps, and ensure the robustness of policy proposals.

In the absence of genuine civil society participation in the policymaking process, governments risk creating policies that are less credible, less sustainable, and ultimately less effective. Indeed, global leaders should stress the collaborative nature of the process as much as the final product.

Extreme speech online is an important problem that is difficult for companies, governments, and civil society alike to address. Rather than using their substantial power to introduce more variables into the equation, governments should seek to identify how they can support the ecosystem already actively pursuing answers to these questions, and should strive to avoid inadvertently adding new challenges to an already complicated problem.

Nikki Bourassa is a program and policy officer at the Global Network Initiative. She contributes this piece in her individual capacity and does not speak on behalf of her organisation.

This is the twelfth part of a series on tackling online extreme speech. Read the complete series here.