Social media warned to crack down on hate speech
The EU is urging social networks to be more proactive in both preventing and swiftly removing hate speech.
It has produced a list of guidelines, which include greater use of automation to stop removed content from being reposted and faster removal of flagged content.
Tech firms will be monitored by the EU in the coming months, the commission said.
One MEP said automation should not determine the suitability of content.
The commission, which said it might consider further regulation, also urges social platforms to work more closely with authorities and to invest more in automated tools for flagging content that incites hatred, violence and terrorism.
'Clear signal'
A European Commission research project carried out last year found that only 40% of hate speech was removed within 24 hours of being flagged.
"The commission has decided to thoroughly tackle the problem of illegal content online," said Mariya Gabriel, EU commissioner for the digital economy and society.
"The situation is not sustainable: in more than 28% of cases. It takes more than one week for online platforms to take down illegal content.
"Today we provide a clear signal to platforms to act more responsibly. This is key for citizens and the development of platforms."
The commission will complete its assessment by May 2018.
Automated 'errors'
However, the guidelines have received a mixed response.
MEP Julia Reda wrote in a blog post that increased use of automation was "an attack on our fundamental rights".
She also listed nine examples of automatic filter errors, including a video of cats purring which was incorrectly flagged as infringing the copyright of a record label.
"We can't let automatic filters be the arbiters over content disputes on the internet," she said.