AI used to target kids with disinformation

Media caption: Misinformation is spreading online via videos that particularly appeal to children

YouTube channels that use AI to make videos containing false "scientific" information are being recommended to children as "educational content".

BBC journalists on a team that investigates disinformation - information that is deliberately misleading and false - found more than 50 channels in more than 20 languages spreading disinformation disguised as STEM (science, technology, engineering and maths) content.

These include pseudo-science - information presented as scientific fact even though it is not based on proper scientific methods - as well as outright false information and conspiracy theories. Conspiracy theories are beliefs that some group is deliberately misleading the general public, typically for the benefit of a small, powerful group.

Examples of conspiracy theories include claims that pyramids can produce electricity, denial of human-caused climate change, and claims that aliens exist.

Our analysis shows YouTube recommends these "bad science" videos to children alongside legitimate educational content.

What's AI?

Artificial intelligence - or AI for short - is technology that enables a computer to think or act in a more 'human' way.

It does this by taking in information from its surroundings, and deciding its response based on what it learns or senses.

The term 'artificial intelligence' was first used in 1956.

In the 1960s, scientists were teaching computers how to mimic - or copy - human decision-making.

This developed into research on 'machine learning', in which computers were taught to learn for themselves and remember their mistakes, instead of simply copying.

More clicks, more money

Kyle Hill is a YouTuber and science educator with a huge number of young viewers. He started noticing these "bad science" videos cropping up in his feed a couple of months ago. He says his followers contacted him about recommended content that seemed legitimate, but was instead full of false information.

The creators appear to have stolen accurate content, manipulated it and republished it.

The videos focused on wild claims, with sensationalist commentary, catchy titles and dramatic images to draw viewers in. The more people watch the videos, the more revenue - that's money - the channels earn from the adverts shown alongside them.

YouTube also benefits from high-performing content, as it takes about 40% of the money made from advertising on a creator's channel.

The creators were also tagging their "bad science" videos as "educational content", meaning they were more likely to be recommended to children.

"Being the science man that I am," Kyle says, "I took that personally. These channels seemed to have identified the exact right thing to maximise views for the least amount of effort."

Image caption: We found that YouTube recommends these 'bad science' videos to children alongside legitimate educational content

How to spot fakes

The BBC's Global Disinformation Team found dozens of channels on YouTube producing this type of misleading material, in languages including Arabic, Spanish and Thai. Many of the channels have more than a million subscribers, and their videos often receive millions of views.

The channel creators publish content rapidly, with many posting multiple videos every day. To produce videos at such speed, the BBC journalists suspected, the creators were using generative AI programs. These are programs such as ChatGPT and Midjourney that can create new content from a prompt (e.g. 'a black cat wearing a crown'), rather than searching the internet for examples that already exist.

To test this theory, the journalists took videos from each channel and used AI detection tools and expert analysis to assess the likelihood that the footage, narration and script had been made using AI. The analysis showed that most of the videos used AI to generate text and images, to scrape (that is, extract) information from websites, and to manipulate material from real science videos. The result is content that looks factual but is mostly untrue.

Tips to avoid being misled by disinformation

If you want to avoid being caught out by disinformation, there are a few things you can look out for.

Ask yourself:

Has this claim been reported anywhere else? Genuinely surprising new information is typically shared widely and quickly, which means it will have been picked up by radio, TV or newspapers

Have you heard of the content creator who published the video, and how trustworthy are they? Remember, follower numbers and views don't always make someone a reliable source

Is the video a copycat designed to look like another genuine video?

If someone is speaking in the video, do they look and sound natural, or is it a computer-generated voice or image?

Does the content seem believable and does it fit in with what you already know and have been taught at school?

Image source: Reuters
Image caption: Pupils believed the AI-made content was accurate, before journalists told them it wasn't

'I enjoyed watching it'

To test whether the "bad science" videos would be recommended to children, the journalists created children's accounts on the main YouTube site. (All the children they spoke to said they used children's accounts rather than YouTube Kids.) After four days of watching legitimate science education videos, the BBC journalists were recommended the AI-made "bad science" videos too. If they clicked on them, more of the false-science channels were recommended.

As part of their experiment, the journalists then showed some of the recommended false science content to two groups of 10-12-year-olds - one in the UK and one in Thailand - to see whether the children would believe what they were watching. One video focused on UFO and alien conspiracies, which have been widely shown to be untrue. The other falsely claimed the Pyramids of Giza were used to generate electricity.

The children were convinced. "I enjoyed watching it," said one girl. "At the beginning, I wasn't sure aliens exist, but now I think they do."

Another child was impressed by the electric pyramids: "I didn't know people so long ago would be able to make electricity to use modern technology."

Some children in the group were able to pick out AI use in the videos. "I found it quite funny that they didn't even use a human voice, I thought it wasn't human," one child said.

When the journalists then explained that the videos had been made using AI and contained false information, the children were shocked.

"I'm actually really confused. I thought it was real," said one boy. Another said: "I would've probably believed it, if you hadn't told us it was fake."

Image caption: Teachers say the misleading videos appeal to children's natural curiosity and risk confusing them

Children likely to believe misinformation

Teachers say these videos play on children's natural curiosity about new, intriguing ideas, and risk confusing them about what is true.

"These videos do well because they are conspiratorial [suggesting they are exposing a secret most people don't know about]," says Professor Vicki Nash, Director of the Oxford Internet Institute. "We are all fascinated by things that run counter to what we're officially told, and children are more susceptible to this than adults."

Claire Seeley, a primary school teacher in the UK, agrees.

"Children will often take what they've seen as fact first and foremost. Only maybe when they're a little older, will they start to question it," she said.

Professor Nash also questioned whether hosting platforms, like YouTube, should be earning money from the adverts seen alongside these misleading videos: "The idea that YouTube and Google are making money off the back of adverts being served with pseudo-science news seems really unethical to me."

The BBC contacted a number of companies that produce this type of misleading AI content. One responded, saying their content was intended for "entertainment purposes". They denied targeting children and said they did not use AI "for the majority" of their scripts.

Image source: Reuters
Image caption: YouTube says it displays information panels that provide extra context for viewers about third-party content

Children's education at risk

YouTube said it recommends YouTube Kids for under-13s, which has a "higher bar" for the quality of videos that can be shown. It said it was committed to removing misinformation from its platforms and to providing families with a "safe and high-quality experience".

It also directed journalists to information panels which it says show additional context from third-party sources on conspiracy-related content. The BBC found those information panels were only present for a few of the videos across the 50 channels.

YouTube did not comment on questions about the advertising revenue received from views of those misleading videos.

As AI tools continue to improve, misleading content will become easier to create and harder to identify. Claire Seeley said she was worried, and urged teachers and parents to prepare for more misleading content.

"We don't have a really clear understanding of how AI-generated content is really impacting children's understanding. As teachers, we're playing catch up to try to get to grips with this."