Online Safety Bill: Crackdown on harmful social media content agreed


Peers have passed a controversial new law aimed at making social media firms more responsible for users' safety on their platforms.

The Online Safety Bill has taken years to agree and will force firms to remove illegal content and protect children from some legal but harmful material.

Children's charity the NSPCC said the law would mean a safer online world.

But critics argued it would allow a regulator and tech firms to dictate what may or may not be said online.

The nearly 300-page bill will also introduce new rules such as requiring pornography sites to stop children viewing content by checking the ages of users.

While the act is often spoken about as a tool for reining in Big Tech, government figures have suggested more than 20,000 small businesses will also have to comply.

Platforms will also need to show they are committed to removing illegal content including:

  • child sexual abuse

  • controlling or coercive behaviour

  • extreme sexual violence

  • illegal immigration and people smuggling

  • promoting or facilitating suicide

  • promoting self-harm

  • animal cruelty

  • selling illegal drugs or weapons

  • terrorism

New offences have also been included in the bill, including cyber-flashing and the sharing of "deepfake" pornography.

And the bill includes measures to make it easier for bereaved parents to obtain information about their children from tech firms.

Technology Secretary Michelle Donelan told the BBC the bill was "extremely comprehensive".

Asked when there would be evidence of tech firms changing their behaviour, she said: "We've already started to see that change in behaviour happening.

"As soon as this bill gains Royal Assent, the regulator will be working even more hand in hand with those social media platforms and you'll see them changing the way that they're operating", she added.

Long journey

The bill has had a lengthy and contentious journey to becoming law, beginning six years ago when the government committed to the idea of improving internet safety.

The idea that inspired the bill was relatively simple, scribbled down on the back of a sandwich packet by two experts, Prof Lorna Woods of the University of Essex and William Perrin of the charitable foundation Carnegie UK.

Prof Woods told the BBC that finally seeing it pass was "slightly unreal".

"I think when you're waiting for anything for a long time, there's always that sense of, 'Oh, it's here,'" she said.

But the complexity of the act concerns her; she fears big tech companies will challenge parts of it in court.

"I think maybe the complexity leads itself to that sort of challenge and that could delay the full coming into force of the regime."

Driving the bill have been the stories of those who have suffered loss and harm that they attribute to content posted on social media.

Online safety campaigner Ian Russell has told the BBC the test of the bill will be whether it prevents the kind of images his daughter Molly saw before she took her own life after viewing suicide and self-harm content on sites such as Instagram and Pinterest.

Imran Ahmed of the Center for Countering Digital Hate welcomed the passage of the bill, saying "too much tragedy has already befallen people in this country because of tech companies' moral failures".

But digital rights campaigners the Open Rights Group said the bill posed "a huge threat to freedom of expression with tech companies expected to decide what is and isn't legal, and then censor content before it's even been published".

Lawyer Graham Smith, author of a book on internet law, said the bill had well-meaning aims, but in the end it contained much that was problematic.

"If the road to hell is paved with good intentions, this is a motorway," he told the BBC.

He said it was "a deeply misconceived piece of legislation", and the threat it posed to legitimate speech was likely to be "exposed in the courts".

And popular messaging services such as WhatsApp and Signal have threatened to refuse to comply with powers in the bill that could be used to force them to examine the contents of encrypted messages for child abuse material.

However, following statements made by the government about these powers in the Lords, Meredith Whittaker, the president of Signal, said the company was "more optimistic than we were when we began engaging with the UK government".

But she added it was imperative that campaigners press for a public commitment that the "unchecked and unprecedented power" in the bill to undermine private communications would not be used.

Wikipedia has also said it can't comply with some of the requirements of the bill.

After Royal Assent, the baton will pass to the communications regulator, Ofcom, which will be largely responsible for enforcing the bill.

It will draw up codes of conduct that will provide guidance on how to comply with the new rules.

Firms that fail to comply can face fines of up to £18m or 10% of their global annual revenue, whichever is greater, and in some cases their executives could face imprisonment.

Dame Melanie Dawes, chief executive of Ofcom, called the bill's passage through parliament "a major milestone in the mission to create a safer life online for children and adults in the UK".

"Very soon after the Bill receives Royal Assent, we'll consult on the first set of standards that we'll expect tech firms to meet in tackling illegal online harms, including child sexual exploitation, fraud and terrorism," she added.

There is a lot staked on the success of the bill - not only the safety of children and adults, but also the UK's ambitions as a tech hub and possibly, if things go wrong, continued access to popular online services.

For Prof Woods the bill will be a success if social media companies and others are more responsive to user concerns.

"And maybe we won't have to see quite so much of the stuff we don't want to see in the first place. But I don't think we should expect perfection. Life's not perfect," she said.
