Social media: How do other governments regulate it?
The government is to outline new powers for the media regulator Ofcom to police social media.
The new powers are intended to make the companies protect users from harmful content, including material involving violence, terrorism, cyber-bullying and child abuse.
Companies will have to ensure that harmful content is removed quickly and take steps to prevent it appearing in the first place.
They had previously relied largely on self-governance. Sites such as YouTube and Facebook have their own rules about what is unacceptable and the way that users are expected to behave towards one another.
Self-governance
YouTube publishes a transparency report, which gives data on its removals of inappropriate content.
The video-sharing site, owned by Google, said 8.8 million videos were taken down between July and September 2019, with 93% of them removed automatically by machines, and two-thirds of those clips taken down before receiving a single view.
It also removed 3.3 million channels and 517 million comments.
Globally, YouTube employs 10,000 people to monitor and remove content, as well as to develop policy.
Facebook, which owns Instagram, told Reality Check it has more than 35,000 people around the world working on safety and security, and it also publishes statistics on its content removals.
Between July and September 2019 it took action on 30.3 million pieces of content, finding 98.4% of them before any users flagged them.
Previously, if illegal content such as "revenge pornography" or extremist material was posted on a social media site, it was the person who posted it, rather than the social media company, who was most at risk of prosecution. That may now change.
So if the UK has until now largely relied on social media platforms to govern themselves, what do other countries do?
Germany
Germany's NetzDG law came into effect at the beginning of 2018, applying to companies with more than two million registered users in the country.
They were forced to set up procedures to review complaints about content they were hosting, remove anything that was clearly illegal within 24 hours and publish updates every six months about how they were doing.
Individuals may be fined up to €5m ($5.6m; £4.4m) and companies up to €50m for failing to comply with these requirements.
The government issued its first fine under the new law in July 2019, ordering Facebook to pay €2m (£1.7m) for under-reporting illegal activity on its platforms in Germany. Facebook complained that the law lacked clarity.
European Union
The EU is considering a clampdown, specifically on terror videos.
Under the proposal, social media platforms would face fines if they failed to delete extremist content within an hour.
The EU also introduced the General Data Protection Regulation (GDPR), which sets rules on how companies, including social media platforms, store and use people's data.
It has also taken action on copyright. Its copyright directive makes platforms responsible for ensuring that copyright-infringing content is not hosted on their sites.
Previous legislation only required the platforms to take down such content if it was pointed out to them.
Member states have until 2021 to implement the directive into their domestic law.
Australia
Australia passed the Sharing of Abhorrent Violent Material Act in 2019, introducing criminal penalties for social media companies, including possible jail sentences of up to three years for tech executives and financial penalties worth up to 10% of a company's global turnover.
It followed the live-streaming of the New Zealand shootings on Facebook.
In 2015, the Enhancing Online Safety Act created an eSafety Commissioner with the power to demand that social media companies take down harassing or abusive posts. In 2018, the powers were expanded to include revenge porn.
The eSafety Commissioner's office can issue companies with 48-hour "takedown notices", and fines of up to 525,000 Australian dollars (£285,000). But it can also fine individuals up to A$105,000 for posting the content.
The legislation was introduced after the death of Charlotte Dawson, a TV presenter and a judge on Australia's Next Top Model, who killed herself in 2014 following a campaign of cyber-bullying against her on Twitter. She had a long history of depression.
Russia
A law that came into force in Russia in November 2019 gives regulators the power to switch off connections to the worldwide web "in an emergency", although it is not yet clear how effectively they would be able to do this.
Russia's data laws from 2015 required social media companies to store any data about Russians on servers within the country.
Its communications watchdog blocked LinkedIn and fined Facebook and Twitter for not being clear about how they planned to comply with this.
China
Sites such as Twitter, Google and WhatsApp are blocked in China. Their services are provided instead by Chinese providers such as Weibo, Baidu and WeChat.
Chinese authorities have also had some success in restricting access to the virtual private networks that some users have employed to bypass the blocks on sites.
The Cyberspace Administration of China announced at the end of January 2019 that in the previous six months it had closed 733 websites and "cleaned up" 9,382 mobile apps, although those are more likely to be illegal gambling apps or copies of existing apps being used for illegal purposes than social media.
China has hundreds of thousands of cyber-police, who monitor social media platforms and screen messages that are deemed to be politically sensitive.
Some keywords are automatically censored outright, such as references to the 1989 Tiananmen Square incident.
New words that are deemed sensitive are added to a long list of censored terms and are either temporarily banned or filtered out of social platforms.
This piece was originally published in April 2018 and has been updated to reflect the Ofcom proposals and more recent statistics.