Six ideas for making social media safe - could they work?

Image caption: Some have called for extra measures to keep children safe on social media (picture posed by model)

Can anything really be done to stop the deluge of online abuse?

The death of 14-year-old Hannah Smith has sparked calls for stricter controls on websites used by children.

She was taunted on Ask.fm, a Latvian-based website that lets users pose questions and receive answers, often anonymous, from respondents all over the world.

It was the content of those responses, Hannah's father said, that drove his daughter to take her own life.

Her death came at a time when the internet community at large was debating how best to deal with abuse - be it sexist, homophobic, racist or any of the multitude of other ways people cause offence online.

The BBC has been looking at some of the solutions put forward to assess which, if any, could be effective.

1. Add a report abuse button to everything

"Completely inadequate" was how Caroline Criado-Perez, a campaigner who called for women to be put on UK banknotes, described the system for reporting abuse on Twitter.

She, like many others, has called for an abuse button to be located on each tweet - meaning offensive statements can be flagged to moderators easily.

Currently, the complainant must fill out a web form - something that becomes very difficult if you are being deluged with abuse from hundreds of accounts.

Media caption: The BBC's Sian Lloyd speaks to Hannah's "devastated" friends

Twitter has said it will roll out the function - already available on some of its apps - to its whole system soon.

But some worry that co-ordinated attempts to report comments that certain groups disagree with could restrict robust debate.

In the case of Ask.fm, the abuse button was already there - available via a drop-down menu attached to every post. It is not clear whether Hannah ever used it.

Facebook has offered a report abuse button for some time, giving users the option to state why they deem a post offensive.

"Having an abuse button is certainly a start," says Arthur Cassidy, a media psychologist from knowthenet.org.uk.

"But it is by no means going to solve a problem. The turnaround time is far too slow - it's not adequate."

2. Get a machine to do it

Facebook's moderators number in the thousands, working in locations around the world. And yet it is never enough - things will always slip through.

One possible solution would be to train machines to spot potentially abusive messages and prevent them from being sent.

How? By using a pre-defined, but regularly updated, list of blocked words.

This is by no means a simple task. A recent study indicates that 80% of teens use internet slang, but only 30% of parents claim to have any idea what it means.

Automated moderation also risks getting it wrong - it is easy to program a machine to spot banned words, but considerably more difficult to teach it to understand context.

"Anything that relies on artificial intelligence won't work," says James Diamond, an e-safety expert who consults schools and social services.

"Teens generally choose words that are quite innocent. Even if you find out what the meaning is, you can't really block it as it will block a lot of legitimate use of the word."

He added: "Some sites would be loath to bring it in, because the moment they become more difficult to use, users would start leaving the site."

Teenager Lia, 13, has experienced this situation first-hand when using children's network Moshi Monsters: "I mentioned the show Dick 'n' Dom... it said I can't post that."
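A minimal sketch in Python shows why such a filter cuts both ways - the blocked words and the function here are hypothetical, invented purely for illustration, not taken from any real moderation system:

```python
# A naive word-list filter of the kind described above.
BLOCKED_WORDS = {"dick", "idiot"}  # pre-defined, regularly updated list

def is_allowed(message: str) -> bool:
    """Allow a message only if no word matches the blocked list."""
    words = message.lower().split()
    return not any(w.strip(".,!?'\"") in BLOCKED_WORDS for w in words)

print(is_allowed("you're such an idiot"))    # False: abusive, caught
print(is_allowed("I watched Dick 'n' Dom"))  # False: innocent, blocked anyway
print(is_allowed("you total muppet"))        # True: abusive slang, missed
```

Because the filter matches words rather than meaning, it blocks Lia's innocent message while letting unlisted insults straight through.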

3. Force people to use their real names

The vast majority of trolls hide behind a simple defence: anonymity.

By hiding their own identity, many believe they are immune to punishment - although police and technology firms are able to establish the real identities of trolls using various techniques.

Facebook is a firm believer in making people use their real names, and has even defended that policy in court. It says the policy means "people represent who they are in the real world".

Image caption: A real name system could be troublesome for children in later life, one mum argues

But a real-name policy on other networks is fraught with difficulty. On Twitter, which has no restrictions on names - unless you impersonate a celebrity - anonymity is a key feature, allowing activists and other targeted people to communicate freely.

In China, the government has mandated a real-name policy for the biggest sites, including the Twitter-like Sina Weibo. Anyone wanting to sign up must provide their national registration number.

But sites in the western world that ask for real names face a problem: users - particularly children - simply go elsewhere.

"It could put them off," says Lia. "Some people might not want parents to know what they're doing."

Furthermore, argues Lia's mother Joanne, forcing everyone to use real names raises even greater problems: "Future employers will Google your name - this is the world we live in.

"Our children are leaving that digital footprint as they grow up."

4. Get the police to do it

In just the past fortnight, three arrests have been made in connection with the posting of offensive tweets.

Having the police step in has its upsides - most obviously, it sends a message that what you do online has very real consequences offline.

Media caption: Starmer - "Chilling effect on free speech"

But in October last year, director of public prosecutions Keir Starmer said that if clear guidelines over when police should step in were not set, it could have a "chilling effect" on free speech online.

A freedom of information request by the BBC revealed that 1,700 disputes arising from social media made their way to the courts in 2012 - up 10% from 2011.

Perhaps most famously, in 2010, a man was arrested after posting a joke tweet saying he would blow up an airport. He was arrested by anti-terror police and convicted - only to have the decision overturned on appeal in 2012. The affair became known as the #twitterjoketrial.

"This is a grey area, it's something we need more clarification on," says media psychologist Mr Cassidy.

"The police tell me that they don't have the resources to deal with it."

5. Make social networks employ more moderators

According to Le Monde, Ask.fm employed a team of 50 moderators. Bigger sites, like Facebook, are coy with exact numbers - but are keen to stress that it's a large team, working around the clock.

Many websites, including the BBC, outsource moderation to external companies that specialise in the area.

Whatever the method, hiring more moderators is a good thing, argues Joanne.

"If a site is going to set itself up as a place for children or teenagers to gather," she says, "they have a certain responsibly to those children."

However, any legislative pressure on companies to hire bigger and better teams could prove ineffective. Ask.fm, for example, is based in Latvia, beyond the reach of UK law.

Some are also concerned that tight guidelines could stifle the ability of start-ups to build and grow.

In 2010, 50 million messages were being posted to Twitter every day, yet the company employed only around 300 people.

6. Teach people to be nicer to each other

Holly Seddon, who runs Quib.ly, a website for parents to discuss technology, says online bullying is a problem best tackled through parenting rather than technology.

"It's about changing the culture," she says.

"If you ban one website, another one will pop up. We're getting to a situation where we're blaming companies, but it's not about the technology - it's people's behaviour that needs to change."

At one school, teachers have put together a panel of "technology commissioners" drawn from each year group, tasked with discussing and learning about how technology is used at school.

"There should be more teaching of how to act online," says pupil Lia, "it has helped quite a lot."

But she adds: "It can't reach everyone, and not everyone really cares what the school says."

Follow Dave Lee on Twitter @DaveLeeBBC