Twitter tests 'misleading' post report button for first time

Image: Twitter logo on a screen with the silhouettes of people on their phones in front of it (source: Reuters)

Twitter is introducing a way to report posts as "misleading" for the first time.

Many of the large social media networks have been accused of not doing enough to fight the spread of disinformation during the Covid pandemic and US election campaigns.

Twitter's reporting function has never offered a clear option for such posts.

It said the new feature was only a test and would only be available in a few countries to begin with.

"Some people" in Australia, South Korea, and the United States will now see an option for "it's misleading" when trying to report a tweet, the tech giant said.

It also warned users that the system may not have a significant effect.

"We're assessing if this is an effective approach so we're starting small," the company said on its safety account., external

"We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work."

Twitter plans to eventually launch the feature in other countries around the world.

Currently, someone reporting misinformation must choose from options such as "it's suspicious or spam" or "it's abusive or harmful" - and then narrow that down to more specific sub-categories to make a report.

Because the options are so specific, it can often be unclear which one to use.

If I had a penny for every time someone messaged me asking why there's no option to report misinformation on Twitter, I'd be a very rich woman.

Since the start of the pandemic, pressure has mounted on social media sites to do more to combat a wave of harmful falsehoods that have spread online.

That includes unfounded conspiracy theories about Covid-19 and vaccines, as well as falsehoods surrounding last year's US election, which went on to inspire the riot at the US Capitol and saw US President Donald Trump's account suspended.

I've spent the past year-and-a-half covering the real-world impact of misleading posts online - scaring people off Covid jabs, destroying relationships, and provoking violence.

Some critics argue that the option to report misinformation should have been introduced months ago to help prevent this offline harm. But the question remains - what impact will this really have?

There are fears that the social media site will struggle to moderate the avalanche of reported content - including reports from those promoting falsehoods, who may flag accurate information as misleading.

Twitter has focused on issuing suspensions and bans to accounts which consistently spread harmful Covid-19 misinformation when they come to the company's attention.

It also began putting warning labels on such tweets in early 2020, announced a collaboration with news organisations as part of an attempt to debunk false information, and started a pilot scheme in January to allow a small number of people to submit "notes" about misleading content.

However, Twitter and other tech giants continue to be criticised for the spread of false information.

Chief executives have repeatedly appeared before US politicians to answer questions about their policies, while groups such as the Center for Countering Digital Hate have accused them of not doing enough to combat vaccine misinformation - among other forms of harm.