Following the attacks in Christchurch, New Zealand, in March, social media companies have once again come under growing pressure to “do something” about the proliferation of hateful and violent content on their platforms. We are told that media is in crisis and that something urgently needs to be done about it.

In one of the strictest proposals so far, the Australian government has debated legal and punitive measures under which “social media executives could be jailed and their companies fined billions of dollars if they fail to remove terrorist material like the Christchurch massacre live stream”.

The challenge posed by the toxification of online conversations is by no means limited to the Western world. India is also struggling to find new ways to limit the negative fallout of hateful content, disinformation and “fake news” as it faces a general election in April-May.

Proposed measures

One of the measures proposed has been a code of ethics whereby social media platforms such as Facebook, WhatsApp and YouTube work with politicians to eliminate “objectionable content” from social media accounts.

Other measures debated include new legal ways to bring responsibility and accountability to provocative and inflammatory messages “propagated by mischief mongers”. Critics have, in turn, argued that such measures remain too vague and toothless to deal with the real scope of the problem.

While these ongoing debates about regulating extreme speech are timely and extremely important, some of the kneejerk reactions adopted in response to this “media in crisis” narrative also risk obfuscating what is, in reality, a complex set of theoretical, legal, technological and socio-political debates underpinning extreme speech and its regulation globally.

Whatever the outcome of these debates, the jumble sale of solutions proposed in response to this “media in crisis” narrative will have far-reaching consequences for how we communicate in the future, both in liberal democracies and in countries where there are fewer safeguards against the misuse of social media for political purposes.

Two problems

What lessons, then, can comparative research on international regulatory efforts to address extreme speech hold for India and international policy more broadly?

From a critical international research perspective, there are two problems underpinning the current debates on extreme speech regulation that need to be addressed.

The first problem has to do with the terminology used. When approached comparatively, what counts as extreme speech begins to look quite different as the vantage point shifts from Timbuktu to Tokyo, from Helsinki to New Delhi.

What people consider hate speech in Finland, for instance, a country with high levels of democratic and press freedoms, differs significantly from what it means in a country like Ethiopia, which has a long history of restricting press freedoms and opposition voices.

Similarly, the legal definitions of hate speech also differ significantly across national contexts and usually target only a narrow spectrum of extreme content involving incitement to violence or targeted verbal abuse.

As one response to this kaleidoscopic mishmash of terminologies, our concept of extreme speech has tried to move the analytical focus away from simplistic legal-normative definitions of hateful, violent or misleading content and towards the sheer diversity of media-related practices behind what people do and say online globally.

The second problem has to do with the role of social media platforms in the regulation of extreme speech. When we look critically at the ongoing debates, these often seem to muddle three regulatory distinctions that are crucial for understanding the role – and responsibility – of social media companies in contemporary political debates.

  • The first is to see social media companies as hosting companies. This means they primarily provide a platform for third-party content that they then make available to the public. This also means that social media companies, in principle, should not be made liable for content as they do not publish it, nor do they have editorial control over the content that passes through them.
  • The second is to see social media companies as publishers. This involves the same regulatory approaches used for legacy media such as print, radio or television. In principle, this means that the social media companies should be, at least partially, responsible for the content that is shared on their platforms as they have some amount of editorial control over the content that is published.
  • The third is to see social media companies as distributors. This involves approaching them as active participants who hold enormous power to make certain types of content visible (or invisible) over others through the deliberate design choices they make in their platforms. This distribution activity often relies on automated or algorithmic decision-making processes that drive the selection of media content we see on our social media feeds daily (a minimal sketch of such a ranking mechanism follows after this list). As a consequence, regulatory measures should focus not only on the content shared on social media platforms but also on the specific mechanisms through which certain types of content are amplified over others, and on the lack of public transparency or accountability in how this happens across different global contexts.
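
To make the distributor role concrete, here is a minimal, hypothetical sketch of an engagement-driven feed-ranking function. It is not any platform’s actual algorithm: the Post fields, the rank_feed function and the engagement_weight parameter are all invented for illustration, simply to show how a single design choice can amplify high-engagement content over everything else.

```python
# Hypothetical sketch of engagement-driven feed ranking.
# NOT any platform's actual algorithm; it only illustrates how a design
# choice (weighting predicted engagement) determines which content is
# amplified and which stays effectively invisible.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # assumed model output, e.g. expected shares
    recency_hours: float         # hours since posting


def rank_feed(posts: list[Post], engagement_weight: float = 0.8) -> list[Post]:
    """Order posts by a weighted score of predicted engagement and recency.

    The engagement_weight parameter is the "design choice": raising it
    pushes provocative, high-engagement content to the top of the feed.
    """
    def score(p: Post) -> float:
        recency_score = 1.0 / (1.0 + p.recency_hours)
        return (engagement_weight * p.predicted_engagement
                + (1 - engagement_weight) * recency_score)

    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("calm-report", predicted_engagement=0.2, recency_hours=1),
        Post("outrage-bait", predicted_engagement=0.9, recency_hours=12),
    ]
    for post in rank_feed(feed):
        print(post.post_id)
    # With a high engagement weight, the older but more provocative post
    # is ranked first -- the amplification effect discussed above.
```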

Perhaps, in addition to asking what types of expression and speech should be permitted into the sphere of legitimate political debate, and confronting the freedom-of-expression problems this raises, we should also ask how the technological solutions and business logics used by companies such as Facebook factor into the creation of the problem of extreme speech globally. And what can be done about this?

As the global outcry concerning social media companies’ potential culpability in extreme speech grows louder, so will the demands to find solutions for this problem. In the case of social media companies, these solutions will be unavoidably technological.

Indeed, if social media companies are compelled to moderate content as publishers do, the only way they can do this effectively is to develop more powerful algorithmic systems to sift through the enormous volume of social media conversations produced on any given day.

Social media companies have already been experimenting with artificial intelligence (AI) to filter and remove “bad content” before it becomes public. As these algorithms become more sophisticated with breakneck developments in machine learning and AI, and as countries push through legislation to make automatic filtering of content a legal requirement, the significance of such algorithmic systems will only grow.
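
To illustrate the general technique, and the kind of opaque design choice it involves, here is a minimal, illustrative sketch of automated pre-publication filtering built on a supervised text classifier. It is not any company’s actual system: the training examples, the should_block function and the removal threshold below are all invented for illustration.

```python
# Illustrative sketch of automated pre-publication content filtering:
# a supervised text classifier plus a removal threshold. The data and
# threshold are invented; real systems are far larger and proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = "bad content", 0 = acceptable.
texts = [
    "we must attack them all",         # 1
    "lovely weather at the rally",     # 0
    "burn their houses down",          # 1
    "great turnout for the election",  # 0
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)


def should_block(post: str, threshold: float = 0.7) -> bool:
    """Block a post before it becomes public if the model's estimated
    probability of "bad content" exceeds the threshold.

    The threshold itself is a consequential, largely invisible design
    choice: lower it and legitimate speech gets removed; raise it and
    harmful content slips through -- with little public accountability.
    """
    prob_bad = classifier.predict_proba([post])[0][1]
    return prob_bad >= threshold


if __name__ == "__main__":
    for post in ["attack them all now", "see you at the rally"]:
        print(post, "->", "blocked" if should_block(post) else "published")
```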

Is Artificial Intelligence the solution?

From an international perspective, leaving decisions about what types of content should be allowed in public and political discussion to the proprietary AI algorithms of technology companies is something I do not feel comfortable with, in India or elsewhere.

This raises a number of questions for an international research agenda around extreme speech regulation that also need to be highlighted around pivotal events such as the Indian elections.

  • What kinds of new algorithmic or AI-enabled “innovations” have been proposed by social media companies to moderate “bad content” in response to these criticisms? What are their technical specifics and particularities? What are some of the potential cultural, political and social biases involved in their deployment?
  • How are these algorithmic systems used to promote and/or suppress different types of content across different online platforms and across different national contexts and legal frameworks? Is there accountability or transparency in how these systems are used? Who monitors or has oversight over these systems?
  • How are similar technological systems also used by governments and other non-state actors in an effort to influence political debate and discussion? What are the cultural, ethical, social and political questions that need to be raised about their use and misuse?

As the digital clamour around extreme speech and its regulation becomes an increasingly defining feature of global communication in the 21st century, these algorithmic mediations of the “media in crisis” need to be made the focus of critical research.

Matti Pohjonen is a lecturer in Global Digital Media at the School of Oriental and African Studies, University of London, UK.

This is the eleventh part of a series on tackling extreme speech online. Read the complete series here.