Facebook moderation firm Cognizant quits
Cognizant, a professional services firm, is to stop moderating "objectionable" content on behalf of Facebook and other internet firms.
It follows an investigation by The Verge into working conditions and the mental health of employees at an Arizona moderation centre run on behalf of the social network.
Cognizant also has workers in India, Europe and Latin America.
The decision is believed to result in around 6,000 job cuts.
The firm told the BBC: "We have determined that certain content work in our digital operations practice is not in line with our strategic vision for the company and we intend to exit this work over time. This work is largely focused on determining whether certain content violates client standards - and can involve objectionable materials.
"Our other content-related work will continue. In the meantime, we will honour our existing obligations to the small number of clients affected and will transition, over time, as those commitments begin to wind down. In some cases, that may happen over 2020, but some contracts may take longer."
In response, Facebook's Arun Chandra said: "We respect Cognizant's decision to exit some of its content review services for social media platforms.
"Their content reviewers have been invaluable in keeping our platforms safe - and we'll work with our partners during this transition to ensure there's no impact on our ability to review content and keep people safe."
Cognizant's sites in Tampa, Florida, and Phoenix, Arizona, will be affected but will remain operational until March next year. Facebook plans to increase staff at a centre in Texas, run by a different partner.
The social network currently has 15,000 content moderators working around the globe 24 hours a day in 20 review sites. They regularly view images and videos showing depraved content, including child sex abuse, beheadings, torture, rape and murder.
Hate 'megaphone'
The BBC's own investigation into content moderation found one former content moderator was so traumatised by the job she had done that she was unable to trust others and felt "disgusted by humanity".
Meanwhile, a new report from the human rights group Avaaz has highlighted a rise in hate speech aimed at Bengali Muslims in the Indian state of Assam. It found that posts it regarded as hate speech had been shared nearly 100,000 times and viewed at least 5.4 million times.
As of September, it found that Facebook had removed just 96 of the 213 posts and comments.
"Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam," it concluded.
Facebook said in response: "We have clear rules against hate speech, which we define as attacks against people on the basis of things like caste, nationality, ethnicity and religion, and which reflect input we received from experts in India.
"We take this extremely seriously and remove content that violates these policies as soon as we become aware of it. To do this, we have invested in dedicated content reviewers, who have local language expertise and an understanding of India's long-standing historical and social tensions."