Twitter’s new system for reporting harassment and threats to law enforcement comes after the platform has received serious criticism for its poor handling of harassment.
The company’s chief executive, Dick Costolo, acknowledged the company’s failings in a leaked memo:
We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day.
We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.
Complex problems in search of solutions
It’s encouraging to see Twitter’s executive recognising that it has a problem. It’s even more encouraging to see tangible efforts made to fix this problem. A host of changes have been made recently, including:
* amending the network’s rules to explicitly ban revenge porn,
* a system that requires users who regularly create new accounts to supply and verify their mobile phone number,
* a new opt-in filter that prevents tweets that contain “threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts” from appearing in a user’s notifications.
None of these are perfect solutions. How will platforms adjudicate consent for revenge porn? Will attempts to verify user identity put anonymous users at risk?
The writer Jonathan Rauch once wrote that “the vocabulary of hate is potentially as rich as your dictionary”. With this in mind, Twitter’s proposal to filter offensive language seems a little Sisyphean: a laboured task that never ends.
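Rauch’s point is easy to demonstrate. The sketch below is a purely hypothetical keyword filter (the blocklist and examples are invented, not Twitter’s actual system): it catches exact matches, but trivial misspellings and abuse that uses no banned word sail straight through, which is why filtering by vocabulary alone never ends.

```python
# Hypothetical, illustrative blocklist filter -- not Twitter's implementation.
BLOCKLIST = {"idiot", "moron"}  # invented example terms

def is_flagged(tweet: str) -> bool:
    """Flag a tweet if any blocklisted term appears as a whole word."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return not BLOCKLIST.isdisjoint(words)

print(is_flagged("You are an idiot"))       # True: exact match is caught
print(is_flagged("You are an id1ot"))       # False: a one-character change evades it
print(is_flagged("You people disgust me"))  # False: abusive, but uses no listed word
```

Every evasion prompts a longer blocklist, which prompts new evasions: the dictionary, as Rauch suggests, is always bigger than the filter.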
Twitter’s new reporting system offers to email users a formal copy of the tweet they reported, so that they can pass it on as evidence to law enforcement agencies.
It took less than a day for the technology website Gizmodo to dismiss the new reporting tool as a “useless punt”, noting that Twitter emails information that could have been captured with a screenshot, and leaves the onus on the victim to report to local law enforcement agencies.
The weakest link
Conventional law enforcement agencies have a mediocre track record of tackling online abuse and harassment.
United States Congresswoman Katherine Clark recently called on the Department of Justice to specifically focus on online harassment, after a “disappointing” meeting with the Federal Bureau of Investigation.
Technology journalist Claire Porter recently spoke of her frustrations with Australian police, who were seemingly unclear about their jurisdiction.
American journalist Amanda Hess described a similarly frustrating experience in explaining her harassment to an officer:
The cop anchored his hands on his belt, looked me in the eye, and said: ‘What is Twitter?’
If officers are confused about dealing with online harassment, then their ability to help the victims of threats and abuse is severely hindered. So what are the legal frameworks for dealing with this kind of abusive behaviour? Are they being adequately used?
The ‘lawless internet’ is a myth
The kind of abuse and harassment that people face on the internet is illegal under a variety of laws, with varying penalties. Under Title 18 of the United States Code:
* § 875 outlaws any interstate or foreign communications that threaten injury.
* § 2261A outlaws any interstate or foreign electronic communications with the intent to kill, injure, harass or intimidate another person – especially conduct that creates reasonable fear of death or serious bodily injury, or that attempts to cause substantial emotional distress to another person.
Under the United Kingdom’s Communications Act 2003:
* § 127 makes it an offence to send an electronic message that is grossly offensive, indecent, obscene or of menacing character.
Similarly, under Australian state and commonwealth law:
* §§ 474.15, 474.16 and 474.17 of the Criminal Code Act 1995 (Commonwealth) make it an offence to use electronic communications to threaten to kill or harm, to send hoaxes about explosives or dangerous substances, or to menace, harass and cause offence.
* The Crimes Act 1900 (NSW) § 31, Criminal Code 1899 (Qld) § 308, Criminal Code Act 1924 (Tas) § 162, Crimes Act 1900 (ACT) § 30-31, Criminal Code 1983 (NT) § 166, Criminal Law Consolidation Act (SA) § 19(1, 3), Crimes Act 1958 (Vic) § 20, and Criminal Code 1913 (WA) § 338 (A, B) each make it an offence to make threats that cause a person to fear death or violence.
In addition to this, the Australian Government has recently announced the new Children’s e-Safety Commissioner, along with new legislation that will allow the commissioner to impose fines on social networking platforms.
The commissioner’s office is being established as part of a A$10 million policy initiative from the Department of Communications to enhance online safety for children.
This is a well-intentioned initiative with a laudable goal, but by narrowly focusing on the harassment of children and ignoring the wealth of existing laws, it might just miss the forest for the trees when it comes to addressing online harassment.
Given the extensive number of laws that could already be used to address online harassment, we must ask where the weaknesses fall in enforcing these laws.
If police officers are not yet adequately trained to investigate crimes committed online by local, interstate or international aggressors, or are unfamiliar with the procedures for requesting data from social networks, then legislators must provide these agencies with the training and resources required to meet their responsibilities in online spaces.
This article was originally published on The Conversation.