Twitter and hate speech: What's the evidence?
Among the topics discussed during Elon Musk's interview with the BBC was the prevalence of hate speech and misinformation on the platform.
"Do you see a rise of hate speech?" Mr Musk said. "I don't."
He asked our reporter James Clayton for specific examples of hateful content.
When the reporter couldn't pinpoint individual messages, Mr Musk said: "You don't know what you're talking about… you just lied."
The exchange has prompted intense criticism of the BBC on Twitter itself, mostly - but certainly not exclusively - from right-wing and far-right accounts.
What evidence is there?
But both in-depth studies and anecdotal evidence suggest that hate speech has grown during Mr Musk's tenure.
Several fringe figures who were banned under the previous management have been reinstated.
They include Andrew Anglin, founder of the neo-Nazi Daily Stormer website, and Liz Crokin, one of the biggest propagators of the QAnon conspiracy theory.
Other lesser-known Twitter users have taken advantage of the new ownership. One account with a racial slur in its user name was able to get a blue checkmark. Another checkmark was bought by a neo-Nazi who tweets videos of himself reciting Mein Kampf, Hitler's autobiographical manifesto.
Anti-Semitic tweets doubled from June 2022 to February 2023, according to research from the Institute for Strategic Dialogue (ISD). The same study found that takedowns of such content also increased, but not enough to keep pace with the surge.
The ISD also found an increase of nearly 70% in Islamic State accounts - a problem that was once huge on Twitter, but had been reduced to a trickle by account bans.
The Center for Countering Digital Hate, a London-based campaign group, found that the use of slurs increased substantially after Mr Musk's takeover.
Our own reporting also provides some clues. The BBC analysed over 1,100 previously banned Twitter accounts that were reinstated under Mr Musk. A third appeared to violate Twitter's own guidelines. Some of the most extreme posted content depicting rape, or drawings showing child sexual abuse. Such content was also a scourge on Twitter for years before Mr Musk acquired the platform.
But a BBC investigation heard from Twitter insiders who expressed concern that the company is no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation.
What don't we know?
A few issues cloud the matter. One is that there is no blanket definition of hate speech under American law, which is generally much more permissive than the law in other countries because of the First Amendment to the US Constitution.
This, after all, is a country where in 1978 civil rights lawyers sued to defend the right of a neo-Nazi group to march through the Chicago suburb of Skokie, where many Holocaust survivors lived.
Mr Musk's free-speech views - which are mainstream in the United States - may have encouraged people who were worried about a ban to speak more freely. In other words, we don't know if the spike identified by researchers will last.
There is clearly still moderation happening on Twitter, and Mr Musk himself has robustly pushed back on the studies and investigations.
He's argued that he has taken a politically neutral line - that not only have right-wing accounts been reinstated, but also some left-wing accounts that were previously banned.
"This is not a right-wing takeover, but rather a centrist takeover," he tweeted.
And, he argues, his strategy is working. In December, he tweeted that hate speech was down by a third.
Researchers and journalists have focused on the most extreme content - not mere jokes or insults, but highly abusive language. Critics have pointed out, however, that Mr Musk's own definition of hate speech isn't clear.
And he recently ended free access to Twitter's application programming interface (API) - the data feed that researchers use to study the platform.
Without such data it will be hard to objectively study the issue going forward. Mr Musk has previously said he boosted transparency by making Twitter's algorithm open-source.
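To illustrate why that access matters, here is a minimal sketch of the kind of query researchers could run against Twitter's v2 "recent tweet counts" endpoint, which reports how many tweets matched a search term over the past seven days. The bearer token and search term are placeholders, paid credentials would now be required, and this is not the methodology of any of the studies cited above.

```python
# Minimal sketch: counting recent tweets that match a search query
# via Twitter's v2 API. Assumes a valid (now paid-tier) bearer token
# is available in the environment; the query term is a placeholder.
import os

import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # placeholder credential


def count_recent_tweets(query: str) -> int:
    """Return the number of tweets matching `query` over the past 7 days."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/counts/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": query, "granularity": "day"},
        timeout=30,
    )
    resp.raise_for_status()
    # The endpoint returns per-day buckets plus a total under "meta".
    return resp.json()["meta"]["total_tweet_count"]


if __name__ == "__main__":
    # Illustrative only: researchers would typically use a curated lexicon
    # of terms, not a single placeholder query.
    print(count_recent_tweets('"example term" -is:retweet lang:en'))
```

Studies like those cited above generally rely on running many such queries over time and comparing the trend before and after a given date, which is why losing free API access makes independent replication harder.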
Mr Musk said his efforts to delete bots - automated accounts - have decreased misinformation on Twitter since his takeover. And he cited the site's Community Notes feature, where users themselves can add context to tweets.
"My experience is there is less misinformation rather than more," he told our reporter.
Some outside experts disagree. An early study from NewsGuard, a company that tracks online misinformation, found that engagement with popular misinformation-spreading accounts spiked after Mr Musk's takeover.
In the week following his acquisition of Twitter, the most popular untrustworthy accounts saw an almost 60% increase in engagement in the form of likes and retweets, according to the study.
Science Feedback, another fact-checking organisation, found that misinformation "super spreaders" - accounts that consistently publish popular tweets containing links to known misinformation - have seen markedly increased engagement since Mr Musk's takeover.
Both organisations have been attacked by Mr Musk's supporters, as online fact-checking has become another arena fractured along political lines.
The BBC's own analysis found false anti-vax claims and the denial of the 2020 US election result among the sample of more than 1,000 reinstated accounts.
Mr Musk says that he's on the side of truth and believes that his strategy will make Twitter better in the long run.
"The acid test is people use the system and find it to be a good source of truth, or they don't," Mr Musk told our reporter. "And no system is going to perfect in its pursuit of the truth, but I think we can be the best, the least inaccurate."
Mr Musk said he prefers "ordinary people" to journalists as a source of information, at several points challenging the BBC's technology correspondent with questions of his own.
But Mr Musk himself was not fully accurate when describing the BBC's reporting. At one point he claimed that the BBC had not reported on Covid vaccine side effects.
In fact, the BBC has reported on proven, very rare side effects when they emerged.
Update 5 June 2023: This article has been amended to remove a description of where two fact-checking organisations sit on the political spectrum, and to instead explain that online fact-checking is frequently the subject of political contention.