Plans to use Facebook and Instagram posts to train AI criticised
Plans to use people's public posts and images on Facebook and Instagram to train artificial intelligence (AI) tools belonging to parent company Meta have been attacked by digital rights groups.
The social media giant has recently been informing UK and European users of the platforms that, under privacy policy changes taking effect on 26 June, their information can be used to "develop and improve" its AI products.
This includes posts, images, image captions, comments and Stories that users over the age of 18 have shared with a public audience on Facebook and Instagram, but not private messages.
Noyb, a European campaign group that advocates for digital rights, called Meta's processing of years' worth of user content on the sites an "abuse of personal data for AI".
It has filed complaints with 11 data protection authorities across Europe, urging them to take immediate action to halt the company's plans.
Meta said it was confident its approach complied with relevant privacy laws and was consistent with how other big tech firms used data to develop AI experiences across Europe.
In a blogpost published on 22 May, it said European user information would support a wider rollout of its generative AI experiences, in part by providing more relevant training data.
"These features and experiences need to be trained on information that reflects the diverse cultures and languages of the European communities," it said.
Tech firms have been rushing to find fresh, multiformat data to build and improve models that can power chatbots, image generators and other buzzy AI products.
Meta chief executive Mark Zuckerberg said on an earnings call in February that the firm's "unique data" would be key to its AI "playbook" going forward.
"There are hundreds of billions of publicly shared images and tens of billions of public videos," he told investors, also noting the firm's access to an abundance of public text posts in comments.
The company's chief product officer, Chris Cox, said in May that the firm already uses public Facebook and Instagram user data for its generative AI products available elsewhere in the world.
'Highly awkward'
The way in which Meta has informed people about the change in the use of their data has also been criticised.
Facebook and Instagram users in the UK and Europe recently received a notification or email about how their information will be used for AI from 26 June.
This says the firm is relying on legitimate interests as its legal basis for processing their data - meaning people essentially have to opt out by exercising their "right to object" if they do not want it to be used for AI.
Those wanting to do so can click the hyperlinked "right to object" text when opening the notification, which takes them to a form requiring them to explain how the processing would impact them.
The process has been criticised by Noyb, as well as by people online who say they have tried to opt out.
In a series of posts on X, one user described the process as "highly awkward".
Another voiced concern that having to fill in a form and explain the processing's impact on them might "dissuade" those who want to object from doing so.
"Shifting the responsibility to the user is completely absurd," said Noyb co-founder Max Schrems.
Mr Schrems is an Austrian activist and lawyer who has previously challenged Facebook's privacy practices.
He said Meta should have to ask users to consent and opt in, "not to provide a hidden and misleading opt-out form".
"If Meta wants to use your data, they have to ask for your permission. Instead, they make users beg to be excluded," he added.
Meta says the process is legally compliant and used by rivals.
According to its privacy policy, it will uphold objections and stop using a person's information unless it finds it has "compelling" grounds that outweigh their rights or interests.
But even if you do not have a Meta account, or successfully object, the company says it may still use some information about you for its AI products - such as if you appear in an image shared publicly by someone else on Facebook or Instagram.
"Meta is basically saying that it can use any data from any source for any purpose and make it available to anyone in the world, as long as it’s done via 'AI technology'" said Mr Schrems.
The Irish Data Protection Commission - which leads on ensuring Meta's compliance with EU data law due to its Dublin headquarters - confirmed to the BBC it has received a complaint from Noyb and is "looking into the matter".