Hateful content: Is the media biased?
The attack on Muslims at Finsbury Park mosque has prompted a debate about whether the media has inherent biases, and caused a major kerfuffle within Britain's newspapers.
To take just a single example, one person on Twitter said of The Times's front page: "He's white, so let's highlight the fact that he's jobless, a lone wolf and suffered mental health issues."
The Times certainly wasn't alone in receiving such opprobrium.
On Thursday morning, the Daily Mail devoted an entire page to an editorial taking umbrage, to put it mildly, at a Guardian cartoon suggesting the attacker at Finsbury Park mosque may have been indoctrinated by reading that paper and The Sun. I tweeted about that battle here.
But the issue that most interests me is that Facebook and Google have been getting it in the neck for not doing more to remove far-right material online glorifying the attack.
In recent months, political pressure has mounted on these two tech giants, as well as other smaller firms, to ensure that the internet is not a "safe space" for terrorists. That old political instinct that Something Must Be Done has kicked into overdrive.
At a recent press conference, Theresa May and Emmanuel Macron committed not only to working together on this issue, but also to introducing fines if companies didn't act faster to remove hateful material.
It is interesting that this political pressure has been re-applied in the light of an attack by a white man on Muslims. It suggests that the scope of material on the web that could meet with popular disapproval is vast.
Practical and philosophical problems
Will an anti-Semitic attack prompt demands that Facebook and Google act on anti-Semitism?
Could a rise in sectarian hatred in India prompt demands in that country for, say, Hindu nationalist content to be removed?
Perhaps so. Which means this is a good time to remind you of the complexity of this issue and why, though I certainly don't instinctively take the side of the most powerful companies in the world, it's important to be clear about the implications of turning our Something Must Be Done ire against them.
First of all, we have to separate the issue of encryption from the broader one about hateful content. At times, leading British politicians haven't seemed to grasp the difference.
Messaging apps like WhatsApp are attractive to terrorists because messages are encrypted end to end: the keys exist only on the sender's and recipient's devices, so security services can't easily intercept them. But if you undermine that protection by creating a so-called back door to the encryption, that is an invitation to all sorts of nasties - from foreign powers to cyber-criminals - to take advantage.
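To see why, it helps to look at a toy version of the idea. The sketch below is mine, not WhatsApp's actual system (which uses the far more elaborate Signal protocol); it uses Python and the PyNaCl library to show that only the two endpoints hold the keys, so anyone relaying the message - including the platform itself - sees only gibberish.

```python
# A minimal sketch of end-to-end encryption using PyNaCl
# (pip install pynacl). An illustration of the principle only,
# not WhatsApp's actual protocol.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Meet at noon")

# Any server relaying this message sees only the ciphertext.
# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'Meet at noon'

# A "back door" would amount to a third key that also decrypts -
# and any party that stole or leaked it could read every message
# on the service.
```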
On the issue of what to do about hateful material, the problems with clamping down on it are several, and both practical and philosophical.
Practically speaking, the sheer volume of content is impossible to manage, probably even with artificial intelligence. Some 350 million photos are posted on Facebook every day, and 400 hours of video are uploaded to YouTube every minute.
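To put those figures in perspective, here is a back-of-the-envelope calculation using the numbers quoted above, plus one assumption of my own: that checking footage takes a human reviewer roughly as long as the footage lasts.

```python
# Back-of-the-envelope numbers based on the figures quoted above.
photos_per_day = 350_000_000          # Facebook photo uploads per day
video_hours_per_minute = 400          # YouTube video uploaded per minute

photos_per_second = photos_per_day / (24 * 60 * 60)
video_hours_per_day = video_hours_per_minute * 60 * 24

# Assumption: one reviewer-hour per hour of footage, 8-hour shifts.
reviewers_needed = video_hours_per_day / 8

print(f"{photos_per_second:,.0f} photos per second")               # ~4,051
print(f"{video_hours_per_day:,} hours of video per day")           # 576,000
print(f"{reviewers_needed:,.0f} reviewers on shift, video alone")  # 72,000
```

On those assumptions, YouTube alone would need a workforce the size of a small city just to watch each day's uploads once.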
Moreover, internet content appears in multiple jurisdictions.
If my mate in Jamaica is a fascist, and uploads a violent video from his veranda in Kingston Town saying all non-Rastafarians must die, with a specific threat attached, some people might say that is a matter for the Jamaican authorities.
But if I download it and share it among my school mates in Tooting, London, is it a matter for the UK authorities too?
Censorship
Internet companies are also to a large extent protected by Section 230 of the Communications Decency Act, an American law from 1996 which says that social media users, rather than the platforms, are responsible for content. This legislation was designed to protect the free flow of information.
Companies like Google say they are doing lots to tackle extremism. See Kent Walker's piece earlier this week in the Financial Times, for instance.
As I've written before, Silicon Valley companies are terrified of legislation and have a mindset that promotes technological over regulatory solutions to social problems. In this, they differ from many Europeans.
And this points to a philosophical problem about asking the likes of Google and Facebook to police the internet more closely.
Do we really want private companies - which, critics argue, hold a global monopoly - to censor ever more of our lives?
These companies have completely reinvented our public domain in a very, very short space of time. Asking them now to do more to tackle extremism, whether Muslim, anti-Muslim or whatever, gives them a social role whose implications we may not like when we think them through.
Of course, with their immense power must come immense responsibility - from paying taxes properly to submitting themselves to greater public scrutiny (something they far too often run scared of).
But for the practical and philosophical reasons I have outlined, responding to the political imperative for something to be done about hateful online content is as complicated as it is vital.