A Twitter Pickle
Following her success with the Banknotes campaign, feminist campaigner Caroline Criado-Perez has made headlines again, this time for receiving swathes of abusive and threatening tweets. Her experience has dominated Twitter and spurred a massive response from across the board. Except for Twitter themselves, who may now be in a bit of a pickle.
Much of the discussion has surrounded the demand for Twitter to roll out an instant ‘report abuse’ function to enable users to quickly highlight abusive tweets. It’s a controversial proposal, not least because many fear that it will itself be misused to silence people who are doing nothing but offering minority opinions, and end up backfiring and stifling legitimate debate. It doesn’t take a great leap of imagination to see flash-point topics such as abortion, anarchism, sex work, or something Beliebers dislike, being targeted unreasonably as ‘abusive’. If the effect of an abuse button is to take out unpopular and minority voices as well as abusers, it risks shutting down people who are very commonly the target of precisely the kind of abuse the function seeks to prevent.
Massive logistical issues aside, another problem hinges on the subjective nature of what people understand as abuse. Even among the well-intentioned, the range of tweets I have seen in the last few days has suggested that public understanding of what constitutes illegal abuse versus trolling, the role of the police and the service provider, and everything that runs between is patchy at best.
In raising this, a major issue has been brought to the fore: what is, or should be, the responsibility of the service provider? For Twitter, expected to rake in over £1bn in advertising revenue in 2014, this is an important question. If they have the option to offer an instant abuse reporting facility (and they do; their UK boss has said it is being piloted), it raises the question: what are the implications of NOT rolling out the facility?
Specifically, given they have multiple examples of illegal abuse posing a continued threat to users (many of whom have not received the platform this case has, despite #shoutingback regularly), do they now have an obligation to act? If abuse is a clear and real risk to users, and the current laborious system has failed to protect a user from that abuse, can Twitter legitimately back out of developing a more responsive instant system? Would it be deemed reasonable for the company to disregard potential harm to individual users by failing to significantly upgrade the facility? It opens an almighty can of worms.
When addressing reported tweets, the harm/threat involved is subjective. Judging which levels of unpleasantness are legal or illegal is a matter for the courts, not a company, to decide. Distinguishing between unpleasant and vulgar, and unpleasant and illegal, isn’t as straightforward as campaigners might wish it were; otherwise Twitter could simply add a ‘report to police’ option, head to the pub and be done with it. If they do moderate more heavily, who should decide this and on what grounds? Is a tweet calling someone, for example, a disablist term, automatically offensive? Is it automatically hate speech? Can they ban one user if a recipient feels abused, but not then address another who says the same thing but is not reported? What is the appeals process, and who is accountable to whom? What about the linguistic differences from one legislative territory to the next? The list goes on.
Equally, if potentially criminal tweets are to be addressed, what of civil matters? A tweet which is defamatory, whether about a business or an individual, could cause psychological and reputational harm and destroy livelihoods. Should Twitter act as the High Court and address complaints reported through an abuse button? If someone’s livelihood were at risk from (unlawful) libel, should they not have redress to a system which would stop it spreading in the first place?
It crashes straight into that fundamental contradiction within all un-moderated web platforms based on freedom of speech; namely, that we do not, in the UK, have that pure right. Freedom of speech is necessarily curtailed through the courts in a range of civil and criminal legislation, from defamation, to harassment and assault, through to human rights. Does a social network have a duty to respect that? To control it? To police it? To nanny us? An appropriate function which effectively protects individuals from becoming victims of crime is clearly needed, and recent events have provided a dramatic and stark example of this. However, the biggest concerns now are about what is going on behind the scenes. And Twitter is currently quiet on the matter.
So I do understand why some people don’t support the campaign, and it was with some ambivalence that I signed the petition to add an abuse button.
I didn’t sign because I think that it is the only or best option; I didn’t even sign because I felt Caroline’s case, heinous as it is, is more or less worthy than others. (Even though I think she’s awesome). I also didn’t sign thinking this is the first and last big discussion there will be on the subject. There is no doubt much more to come.
I signed because at this point in time, it is the most prominent push to force the hand of a company who have reaped the benefit of being hands-off while others suffer the consequences. I signed because I am absolutely certain that they are fully aware of all of these issues and then some. I signed because the majority of abuse comes from the same type of people, targeting the same type of people, and having seen it time and again in the ‘real world’, that isn’t acceptable to me. I signed because silencing people through violent abuse is already illegal, and that must be addressed. I signed because abuse is predictable, trackable, and should be manageable for a company of such enormous resources. I signed because I think they ought to be obliged to take reasonable steps to look after the very communities they make such a huge profit from.
But most simply, I signed because I don’t agree with Twitter’s policy of making it harder to report abuse than it is to tweet it. We are equal at 140 characters in all other respects. For a website whose popularity and reputation are built on speed of communication, they should offer support in the same way. However much of a pickle it puts them in while they figure it out.
*watching the debate with interest*