Facebook reveals measures to remove terrorist content

Image caption: Facebook has been criticised for not doing enough to remove terror-related content (Getty Images)

Facebook has announced details of the steps it is taking to remove terrorist-related content.

The move comes after growing pressure from governments for technology companies to do more to take down material such as terrorist propaganda.

In a series of blog posts by senior figures and an interview with the BBC, Facebook says it wants to be more open about the work it is doing.

The company told the BBC it was using artificial intelligence to spot images, videos and text related to terrorism as well as clusters of fake accounts.

"We want to find terrorist content immediately, before people in our community have seen it," it said.

No safe space

The ability of so-called Islamic State to use technology to radicalise and recruit people has raised major questions for the large technology companies.

They have been criticised for running platforms used to spread extremist ideology and inspire people to carry out acts of violence.

Governments, and the UK in particular, have been pushing for more action in recent months, and across Europe talk has been moving towards legislation or regulation.

Image caption: MPs have said the government should consider making sites pay to help police what people post (Getty Images)

Earlier this week in Paris, the British prime minister and the president of France launched a joint campaign to ensure the internet could not be used as a safe space for terrorists and criminals.

Among the issues being looked at, they said, was creating a new legal liability for companies if they failed to remove certain content, which could include fines.

Facebook says it is committed to developing new ways to find and remove material - and now wants to do more than talk about it.

"We want to be very open with our community about what we're trying to do to make sure that Facebook is a really hostile environment for terror groups," Monika Bickert, director of global policy management at Facebook, told the BBC.

One criticism made by British security officials is that companies rely too heavily on others to report extremist content rather than proactively seeking it out themselves.

Facebook has previously announced it is adding 3,000 employees to review content flagged by users.

But it also says that more than half of the accounts it removes for supporting terrorism are ones it finds itself.

It says it is also now using new technology to improve its proactive work.

"We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook," the company says.

Image caption: France and the UK want to ensure terrorists have no safe spaces online (AFP)

Automatic analysis

One aspect of the new technology being described for the first time is image matching.

If someone tries to upload a terrorist photo or video, the systems check whether it matches previously identified extremist content, so that it can be stopped from going up in the first place.
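To make the idea concrete, here is a minimal, illustrative sketch of that kind of matching, assuming a pre-built set of fingerprints of previously removed images. Facebook has not published its actual system, and real deployments typically rely on perceptual hashes that survive resizing and re-encoding rather than the exact hash used here.

    import hashlib

    # Fingerprints of images previously removed as extremist content
    # (an assumed, pre-populated set for the purposes of this sketch).
    KNOWN_EXTREMIST_HASHES = set()

    def fingerprint(image_bytes: bytes) -> str:
        # Exact-match fingerprint; production systems generally use perceptual
        # hashing so that near-duplicates are also caught.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_block_upload(image_bytes: bytes) -> bool:
        # Stop the upload before it is published if it matches known content.
        return fingerprint(image_bytes) in KNOWN_EXTREMIST_HASHES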

A second area is experimenting with AI to understand text that might be advocating terrorism.

This involves analysing text that has previously been removed for praising or supporting a group such as IS, and trying to work out text-based signals that such content may be terrorist propaganda.

That analysis feeds an algorithm that is learning to detect similar posts.

Machine learning should mean that this process will improve over time.
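One common way to build such a system - offered here purely as a hedged sketch, not as Facebook's actual model - is to train a text classifier on posts previously removed for supporting a group such as IS, alongside ordinary posts, and then send high-scoring new posts to human reviewers.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_propaganda_classifier(removed_posts, allowed_posts):
        # removed_posts: text of posts previously taken down for praising or
        # supporting a terrorist group; allowed_posts: ordinary posts.
        texts = list(removed_posts) + list(allowed_posts)
        labels = [1] * len(removed_posts) + [0] * len(allowed_posts)
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(texts, labels)
        return model

    def flag_for_review(model, post_text, threshold=0.9):
        # Score a new post; high-scoring posts go to human reviewers, since
        # context still matters (news reporting versus glorification).
        return model.predict_proba([post_text])[0][1] >= threshold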

The company says it is also using algorithms to detect "clusters" of accounts or images relating to support for terrorism.

This will involve looking for signals such as whether an account is friends with a high number of accounts that have been disabled for supporting terrorism.
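A simple version of that signal - sketched here with assumed data structures, not Facebook's - is the proportion of an account's friends that have already been disabled for supporting terrorism.

    def disabled_friend_ratio(account_id, friends_of, disabled_accounts):
        # friends_of: mapping from account id to a set of friend ids.
        # disabled_accounts: ids already disabled for supporting terrorism.
        friends = friends_of.get(account_id, set())
        if not friends:
            return 0.0
        return len(friends & disabled_accounts) / len(friends)

    def accounts_to_review(friends_of, disabled_accounts, threshold=0.3):
        # Flag accounts whose friend lists are dominated by disabled accounts;
        # the 0.3 threshold is an arbitrary value chosen for illustration.
        return [a for a in friends_of
                if disabled_friend_ratio(a, friends_of, disabled_accounts) >= threshold]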

The company also says it is working on ways to keep pace with "repeat offenders" who create accounts just to post terrorist material and look for ways of circumventing existing systems and controls.
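One plausible approach - again only a sketch, with field names that are assumptions rather than Facebook's actual schema - is to compare the signals attached to a newly created account against those seen on accounts already removed.

    def looks_like_repeat_offender(new_account, banned_fingerprints):
        # new_account: dict of signals such as a hashed email address or device id.
        # banned_fingerprints: signal values seen on previously removed accounts.
        signals = {new_account.get("email_hash"), new_account.get("device_id")}
        signals.discard(None)
        return bool(signals & banned_fingerprints)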

"Our technology is going to continue to evolve just as we see the terror threat continue to evolve online," Ms Bickert told the BBC.

"Our solutions have to be very dynamic."

One of the major challenges in automating the process is the risk of taking down material relating to terrorism but not actually supporting it - such as news articles referring to an IS propaganda video that might feature its text or images.

Whereas any image of child sexual abuse is illegal and can be taken down, an image relating to terrorism - such as an IS member waving a flag - can be used to glorify an act in one context or be used as part of a counter-extremism campaign in another.

"Context is everything," Ms Bickert said.

Image caption: In February, Mark Zuckerberg said Facebook was looking to AI to help it police its site (AP)

Caught out

The company says its algorithms are not yet as good as people at understanding the context that helps distinguish between the different categories.

Facebook says it has grown its team of specialists so that it now has 150 people working on counter-terrorism specifically, including academic experts on counter-terrorism, former prosecutors, former law enforcement agents and analysts, and engineers.

Ms Bickert said: "We have to have people who can review it.

"I like to think of it as using the computers to do what computers do well and using people to do what people do well."

Challenges remain. A few minutes after creating an account in a made-up name, I was able to find complete versions of IS propaganda videos that included the beheading of Western hostages.

Critics argue that while the challenges of policing a site with two billion users may be enormous, the company makes billions of dollars from the content on its site and could devote more resources - and more of its best engineers - to dealing with the issue.

The company says it has begun focusing its "most cutting edge techniques" on combating the problem, and clearly now believes it needs to be seen to be acting.