Child sex abuse victims want Online Safety Bill strengthened
A group of victims of child sexual exploitation has written to the UK government asking for the Online Safety Bill to be strengthened.
They say tech companies must be made responsible for stopping child abuse on live-stream and video call platforms.
Charities say online predators are increasingly turning to live-streaming because most tech firms have no built-in software to detect or stop this type of abuse.
Currently, the bill does not specifically address live-streaming.
The bill does put a duty of care on tech platforms to tackle the distribution of child abuse materials.
The International Justice Mission, which helps victims of sexual exploitation, said: "The bill needs to go further in recognising that these platforms aren't just where abusive material is published, the platforms are used as tools to commit abuse."
The charity reports that half of the victims are 12 or under at the time of rescue, with some being less than a year old.
It said children are often abused in places like the Philippines "while remote offenders in places like the UK pay to direct and view the abuse in real time".
Abused by his uncle
Malone, not his real name, was eight when his uncle started abusing him in the Philippines.
"I was asked to get naked and they were filming me while I was undressing.
"My uncle would fetch me from school and he would ask me to take my clothes off and he would take pictures. It came to a point that he was threatening me to not tell anyone by showing me his gun."
After about two years Malone was rescued, and his uncle was sent to jail.
Paying £27 to view sexual abuse
The NSPCC wants the bill to be strengthened to give better protection to children.
In 2020, the Independent Inquiry into Child Sexual Abuse heard that the UK was the third largest consumer of live-streamed abuse in the world.
A recent report by the Australian Institute of Criminology found that offenders were paying on average about £27 to view the sexual abuse of children.
Ruby, not her real name, was 16 when she became a victim of live-streamed sexual exploitation in the Philippines.
After her parents died, she was tricked by traffickers who offered her a job.
They trapped her in a house and she was sexually abused live online.
"The times that I spent in that house were the darkest days of my life," she said.
"The memories of those things that I did there and those faces of those customers that I interacted with haunted me for years."
The National Crime Agency reports that between 550,000 and 850,000 people in the UK pose a risk to children at any one time.
Rob Jones, from the organisation, said: "Live-streamed abuse is an increasing problem that is getting worse.
"The only people that can tackle this problem are those working on the tech platforms where live-streaming abuse takes place.
"If you are creating an end-to-end platform, you can introduce technology that can prevent abuse."
'Proactive' technology
NSPCC research has shown that one in 20 children in the UK who live-streamed with someone they had not met face-to-face were asked to remove an item of clothing.
Andy Burrows, from the charity, said: "Live-streaming services expanded rapidly during the pandemic, but in a race to roll out products tech firms put growth before children's safety."
The charity wants the Online Safety Bill to force these companies to use "proactive" technology to stop live-stream abuse.
But creating these solutions isn't easy.
The technology has to be extremely accurate, work at the scale of social media platforms with billions of users, and satisfy privacy concerns.
Meta, Facebook's parent company, says it uses artificial intelligence (AI) to detect and prioritise video calls and live-streams that are likely to contain child sexual exploitation.
UK firm SafeToNet was one of five companies given £85,000 by the UK government in October to work on ways to stop the spread of child sexual abuse material on end-to-end encrypted online platforms.
It claims it is developing AI technology that can recognise when child exploitation is taking place "in real-time, in live-stream content" on mobile devices and "disable the camera".
Technology expert Prof Peter Sommer questions the suitability of this kind of technology.
He said: "Using machine learning as a means of detecting live-streamed videos of child sexual abuse is likely to remain very imprecise, much more so than for static photographs. How do you distinguish the grandparent talking to a grandchild wearing a small swimsuit?"
Prof Sommer points out that moderating this material quickly and effectively would be expensive.
Privacy campaigners have strongly objected to the introduction of this kind of technology.
Jim Killock, from the Open Rights Group, said: "Blanket monitoring everyone's video calls all of the time is a step too far.
"Removing or limiting encryption would open up video calls to access and exploitation by criminals, so would be an unacceptably dangerous step to take."
What is in the draft Online Safety Bill?
The Online Safety Bill is due to be debated in Parliament on Tuesday.
Among its proposals are:
- Regulator Ofcom would get powers to regulate social media sites
- Companies could be forced to have a duty of care for their users, including protecting them from legal but harmful content, such as abuse that does not cross the criminality threshold
- Companies that breach Ofcom rules could face fines of up to £18m
- Social media sites would have to moderate content from different political viewpoints equally and without discrimination
- Provisions would be introduced to tackle online scams, such as romance fraud and fake investment opportunities