Davos: Theresa May warns tech firms over terror content
Investors should put pressure on technology giants to respond more quickly to extremist content on social networks, the prime minister has said.
Theresa May told the World Economic Forum in Davos that investors should consider the social impact of the firms they have a stake in.
Social networks must stop providing a platform for terror, extremism and child abuse, she stressed.
Such content ought to be "removed automatically", Mrs May added.
"Earlier this month, a group of shareholders demanded that Facebook and Twitter disclose more information about sexual harassment, fake news, hate speech and other forms of abuse that take place on the companies' platforms," she said.
"Investors can make a big difference here by ensuring trust and safety issues are being properly considered - and I urge them to do so."
Mrs May told the BBC in Davos that while technology firms were already working with the government, much more still needed to be done.
"The tech companies can be a tremendous force for good in so many ways, but also we need to ensure that we're looking at those ways in which the internet and technology can be used by those who wish to do us harm," she said.
The prime minister also wants to see the stalwarts of the tech industry work together with start-ups to deal with the issue. Smaller platforms - such as the privacy-focused encrypted messaging app Telegram - are often used by terrorists, criminals and paedophiles.
"These companies have some of the best brains in the world," Mrs May said. "They must focus their brightest and best on meeting these fundamental social responsibilities."
Telegram has previously said it is "no friend of terrorists" and has blocked channels used by extremists.
Artificial intelligence
Last year Facebook announced several measures designed to improve the detection of illegal content on the network, including using artificial intelligence to spot images, videos and text related to terrorism, as well as clusters of fake accounts.
In November, the social network said that 99% of the al-Qaeda and so-called Islamic State material it removes is now detected by its own systems before any user reports it.
However, Facebook admitted it had to do more work to identify other terror and extremist groups.
"Tech firms may not always agree with government on the means, but there is no disagreement on the objective to make online platforms hostile environments for illegal and inappropriate content," said Julian David, chief executive of tech trade association techUK.
"Much has already been achieved by working in partnership with government and tech firms are committed to keep working to ensure the safety and security of their users."