Permanent and anonymous: Tackling online hate crime


Equally Ours’ Tanishtha Bhatia Sen Gupta on our recent seminar: Online hate crime and hate speech

In our recent seminar on online hate crime and hate speech, we worked with leading academics, policymakers and practitioners to explore the nature of online hate, and the policy responses needed to tackle it. The seminar, organised together with Oxford Brookes University, identified an urgent need for specific regulation to prevent and combat online hate.

The recent Sussex Hate Crime Project (pdf) has shown that online hate is a rapidly growing problem. Unfortunately, current legislation falls short of addressing its specific nuances.

There are legal provisions that apply to online hate – the Protection from Harassment Act 1997, the Public Order Act 1986, the Malicious Communications Act 1988 and the Communications Act 2003 – but they are outdated, and unfit to address the magnitude and anonymous nature of online hate.

Online hate speech is permanent and accessible

Speaking at the seminar, Chara Bakalis of Oxford Brookes took us through some of the key differences between online and offline hate. Hateful comments and threats posted online are easily searchable using the victims’ name, making them accessible and permanently visible to everyone. While many online platforms are attempting to respond to this with better safety and privacy settings, these only scratch the surface of the wider consequences of online hate.

Online hate also has a pernicious effect on democracy itself. With research showing an abusive or problematic tweet is sent to a female politician every thirty seconds, many now worry that women will be deterred from entering politics due to their disproportionate experience of online hate.

Speakers from different organisations told us more about how online hate hurts their communities. Rosanna Rafel-Rix from the Community Security Trust talked about online hate against the Jewish community, ranging from Holocaust denial memes to Nazi-sympathising comments. CST’s research (pdf) found 323 incidents of antisemitic online hate between January and June 2019, with each incident consisting of hundreds of comments and posts.

Patricia Stapleton from The Traveller Movement spoke about how hate speech used by politicians about the Gypsy, Roma and Traveller communities contributes hugely to the hate they experience. This speech is normalised, often quoted in the media, and due to parliamentary privilege is often difficult to tackle, making it a major barrier to prevention of online hate.

From Stay Safe East, Ruth Bashall discussed how online hate affects Deaf and disabled people. She pointed out that current platforms for reporting online hate speech are often themselves inaccessible, making it hard for people with different accessibility needs to report or get redress for online hate.

The cumulative effect of online hate

A key theme throughout the day was how online hate multiplies and builds on itself. The nature of online platforms means people can share hateful content far and wide in minutes, magnifying the effect on the victim’s mental health and physical safety.

Chris Witt and Eduard Mead from the Home Office spoke about the links between online hate and offline violence. They highlighted that unregulated online hate speech has been linked to terrorist acts (such as the shootings at two mosques in Christchurch) and even genocide, such as that against the Rohingya people in Myanmar.

Addressing online hate

One major barrier to addressing online hate was raised by Paul Giannasi from the National Police Chiefs’ Council: the fact that many of the perpetrators are anonymous. It will be vital to identify the best ways to tackle this in developing any new regulation to respond to online hate. Even though technology exists to track down comments made by anonymous accounts, the sheer magnitude of comments often makes it impossible to prosecute perpetrators.

Participants repeatedly noted that experiencing online hate is different from experiencing hate in person, with distinct emotional and even physical impacts. All agreed that there is a need for specific regulation to address online hate, but many raised questions about how to implement this in practice.

One recommendation was an independent regulatory body to take charge of this. However, the international nature of social media means this raises further questions around jurisdiction and enforcement. Participants also discussed how free speech comes into this, as it’s an argument commonly made against regulating online platforms.

To find out more about efforts across the sector to tackle online hate, check out these campaigns from some of the day’s speakers: the #Cutitout campaign by Réné Cassin and the Traveller Movement, and the Stop Funding Hate initiative.
