By Robert H. Porter

A German comedian recently made international headlines by stencilling and spray-painting 30 offensive tweets outside Twitter’s Hamburg headquarters. This was part of Shahak Shapira’s #HEYTWITTER campaign (NB: contains graphic language), launched in response to Twitter’s failure to delete any of the 300 tweets Shapira had reported for their racist, Islamophobic, anti-Semitic, and homophobic content. According to Shapira, he received only nine responses to his 300 reports, each simply stating that the tweets did not break any of Twitter’s rules, despite the fact that they very clearly did. This raises the question: who is responsible for limiting harassment online?

At a time when social media giants such as Facebook, Twitter, and YouTube are revising their rules on harassment and hate speech – and insisting that they really do care about protecting their users and are truly invested in making progress – the situation has become bad enough that governments are getting involved.

Germany’s recent decision to pass legislation forcing social media platforms to remove racist or slanderous material within a twenty-four-hour window has come under criticism. However, when followers of white nationalist groups have increased by 600% since 2012, and social media platforms only deal with harassment once an incident goes viral, it is little wonder that legislative responses are emerging.

Many critics of anti-harassment measures decry the loss of free speech and the evils of censorship – a position that conflates free speech with harassment. Ironically, these same free speech advocates often say nothing when harassment silences other users, making it difficult for them to share their own opinions.

Anti-harassment activists argue that all they want is for social media platforms to actually enforce the policies they have already written and currently have in place – and that it is not censorship for a platform to apply the user-content rules that it wrote, and that its users agreed to, against harassment and hate speech.

The alternative argument against anti-harassment policies – once the free speech argument loses steam – is “if you can’t handle it, then stay off the Internet,” the second cousin of “just don’t read the comments,” both of which are directly descended from “this is why we can’t have nice things.”

The fact is that social media companies have a responsibility to protect their users from harassment simply by enforcing their own rules – ideally not solely through algorithms, which have a spotty record of effectiveness at the best of times. Internet users should be pressuring social media companies to hold the creators of this content to account.

But is it also the responsibility of society at large to confront the toxic origins of online harassment? Of course. We should support courses and programs that combat online harassment in schools and communities, as well as projects like HeartMob and PREVNet.

In the meantime, maybe we should all grab some paint.