Google, Facebook, Twitter face EU fines over extremist posts

Image caption: Draft EU regulation is being planned to force social media to act more swiftly over terror content (image: ISIS fighter; source: Reuters)

Google, Facebook and Twitter must remove extremist content within an hour or face hefty fines, the European Commission's president has said.

In his annual State of the Union address to the European Parliament, Jean-Claude Juncker said an hour was a "decisive time window".

In March, net firms were given three months to show they were acting faster to take down radical posts.

But EU regulators said too little was being done.

Under the Commission's proposal, content that the authorities flag as inciting or advocating extremism must be removed from the web within one hour. Net firms that fail to comply would face fines of up to 4% of their annual global turnover.

The proposal will need backing from EU member states as well as the European Parliament.

In response to the plans, Facebook said: "There is no place for terrorism on Facebook, and we share the goal of the European Commission to fight it, and believe that it is only through a common effort across companies, civil society and institutions that results can be achieved.

"We've made significant strides finding and removing terrorist propaganda quickly and at scale, but we know we can do more."

Image caption: Jean-Claude Juncker presented the proposal in his annual State of the Union speech (source: Reuters)

A spokesperson for YouTube added that the site "shared the European Commission's desire to react rapidly to terrorist content and keep violent extremism off our platforms".

"That's why we've invested heavily in people, technology and collaboration with other tech companies on these efforts."

Internet platforms will be required to develop new methods to police content, but it is unclear what form these would take.

"We need strong and targeted tools to win this online battle," said Justice Commissioner Vera Jourova.

While firms such as Google are increasingly relying on machine learning to root out issues, they also need a lot of human moderators to spot extremist content.
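To make that division of labour concrete, here is a minimal, hypothetical sketch of the pattern described above: an automated classifier scores each post, high-confidence cases are removed by the machine, and borderline cases are queued for human moderators. The names, thresholds and scores are illustrative assumptions, not any platform's actual system.

```python
# Hypothetical sketch of automated flagging plus human review.
# Thresholds and function names are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List

REMOVE_THRESHOLD = 0.95   # assumed: very high scores are removed automatically
REVIEW_THRESHOLD = 0.60   # assumed: mid-range scores go to human moderators

@dataclass
class ModerationPipeline:
    removed: List[str] = field(default_factory=list)
    human_queue: List[str] = field(default_factory=list)

    def triage(self, post_id: str, score: float) -> None:
        """Route a post by its classifier score (0 = benign, 1 = extremist)."""
        if score >= REMOVE_THRESHOLD:
            self.removed.append(post_id)        # machine removes outright
        elif score >= REVIEW_THRESHOLD:
            self.human_queue.append(post_id)    # a human moderator decides

pipeline = ModerationPipeline()
for post_id, score in [("a1", 0.99), ("b2", 0.72), ("c3", 0.10)]:
    pipeline.triage(post_id, score)
print(pipeline.removed, pipeline.human_queue)   # ['a1'] ['b2']
```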

The Commission will retain its voluntary code of conduct on hate speech, agreed with Facebook, Microsoft, Twitter and YouTube in 2016.

What is the scale of the problem and what is being done?

  • in 2017, Google said it would dedicate more than 10,000 staff to rooting out violent extremist content on YouTube

  • YouTube said staff had reviewed nearly two million videos for violent extremist content between June and December 2017

  • YouTube said more than 98% of such material was flagged automatically, with more than 50% of the videos removed having fewer than 10 views

  • industry members have worked together since 2015 to create a database of "digital fingerprints" of previously identified content to better detect extremist material. As of December 2017, it contained more than 40,000 such "hashes" (a minimal sketch of this hash-matching approach follows this list)

  • in 2017, Facebook claimed that 99% of all Islamic State and al-Qaeda-related content was removed before users had flagged it. The social network said that 83% of the remaining content was identified and removed within an hour

  • between August 2015 and December 2017, Twitter said that it had suspended more than 1.2 million accounts in its fight to stop the spread of extremist propaganda. It said that 93% were flagged by internal tools, with 74% suspended before their first tweet
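For illustration only, the sketch below shows how the shared "digital fingerprints" database mentioned in the list might be used: known files are hashed once, and each new upload is hashed and checked against the stored set. Production systems use perceptual hashes that survive re-encoding and cropping; the plain SHA-256 digest here is a simplifying assumption.

```python
# Minimal sketch of matching uploads against a shared hash database.
# SHA-256 stands in for the perceptual hashes real systems would use.

import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()

# Assumed shared database of hashes of previously identified content.
known_hashes = {fingerprint(b"previously identified extremist video bytes")}

def is_known_extremist(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches the shared database."""
    return fingerprint(upload) in known_hashes

print(is_known_extremist(b"previously identified extremist video bytes"))  # True
print(is_known_extremist(b"unrelated holiday video bytes"))                # False
```

A set lookup keeps the check constant-time per upload, which is part of why a shared hash list can scale across companies of very different sizes.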