Facebook adds 'blackface' photos to banned posts
Facebook has updated its rules to tackle posts containing depictions of "blackface" and common anti-Semitic stereotypes.
Its Community Standards now explicitly state such content should be removed if used to target or mock people.
The company said it had consulted more than 60 outside experts before making the move.
But one campaigner said she still had concerns about its wider anti-racism efforts.
'Deeply damaging'
"Blackface is an issue that's been around for decade, which is why it's surprising that it's only being dealt with now," said Zubaida Haque, interim director of the Runnymede Trust race-equality think tank.
"It's deeply damaging to black people's lives in terms of the hatred that's targeted towards them and the spread of myths, lies and racial stereotypes.
"We welcome Facebook's decision.
"But I'm not entirely convinced these steps are part of a robust strategy to proactively deal with this hatred as opposed to it being a crisis-led sort of thing."
Hate-speech policies
Facebook's rules have long included a ban on hate speech related to race, ethnicity and religious affiliation, among other characteristics.
But they have now been revised to specify:
- caricatures of black people in the form of blackface
- references to Jewish people running the world or controlling major institutions such as media networks, the economy or the government
The rules also apply to Instagram.
"This type of content has always gone against the spirit of our hate-speech policies," said Monika Bickert, Facebook's content policy chief.
"But it can be really difficult to take concepts... and define them in a way that allows our content reviewers based around the world to consistently and fairly identify violations."
Folk dancers
Facebook said the ban would apply to photos of people portraying Black Pete - a helper to St Nicholas who traditionally appears in blackface at winter festival events in the Netherlands.
And it might also remove some photos of English morris folk dancers who have painted their faces black.
However, Ms Bickert suggested other examples - including critical posts drawing attention to the fact a politician once wore blackface - might still be allowed once the policy comes into effect.
The announcement coincided with Facebook's latest figures on dealing with problematic posts.
The tech firm said it had deleted 22.5 million items of hate speech between April and June, up from 9.6 million in the previous quarter.
It said the rise was "largely driven" by improvements to its automated detection technology across several languages, including Spanish, Arabic, Indonesian and Burmese - implying that a large amount of such content had previously gone undetected.
Facebook acknowledged that it was still unable to measure the "prevalence of hate speech" on its platform - in other words, whether the problem is in fact worsening.
It already gives such a metric for other topics, including violent and graphic content.
But a spokesman said the company was hoping to start providing a figure later in the year. He also said the social network intended to start using a third-party auditor to check its numbers some time in 2021.
One campaign group said it suspected hate speech was indeed a growing problem.
"We have been warning for some time that a major pandemic event has the potential to inflame xenophobia and racism," said the Center for Countering Digital Hate (CCDH)'s chief executive Imran Ahmed.
'Inexcusable' numbers
Facebook's report also revealed that staffing issues caused by the pandemic had meant it took action on fewer suicide and self-harm posts - on both Instagram and Facebook.
And on Instagram, the same problem meant it took action on fewer posts in the category it calls "child nudity and sexual exploitation". Actions fell by more than half, from one million posts to 479,400.
"Facebook's inability to act against harmful content on their platforms is inexcusable, especially when they were repeatedly warned how lockdown conditions were creating a perfect storm for online child abuse at the start of this pandemic," said Martha Kirby from the NSPCC.
"The crisis has exposed how tech firms are unwilling to prioritise the safety of children and instead respond to harm after it's happened rather than design basic safety features into their sites to prevent it in the first place," she said.
However, on Facebook itself, the number of removals of such posts increased.