Deepfake videos 'double in nine months'
- Published 26 September 2019
New research shows an alarming surge in the creation of so-called deepfake videos, with the number online almost doubling in the last nine months. There is also evidence that production of these videos is becoming a lucrative business.
And while much of the concern about deepfakes has centred on their use for political purposes, the evidence is that pornography accounts for the overwhelming majority of the clips.
The research comes from cyber-security company Deeptrace. Its researchers found 14,698 deepfake videos online, compared with 7,964 in December 2018.
They said 96% were pornographic in nature, often with a computer-generated face of a celebrity replacing that of the original adult actor in a scene of sexual activity.
While many of the subjects featured were American and British actresses, the researchers found that South Korean K-Pop singers were also commonly inserted into fake videos, highlighting that this is a global phenomenon.
The report does highlight the potential for deepfake technology to be used in political campaigns. But in the two cases it cites - in Gabon and Malaysia - the allegations that faked videos had been used turned out to be incorrect.
What seems clear, though, is that the real danger at the moment is the use of the technology in revenge porn and cyber-bullying.
Henry Ajder, head of research analysis at Deeptrace, says too much of the discussion of deepfakes misses the mark.
"The debate is all about the politics or fraud and a near-term threat, but a lot of people are forgetting that deepfake pornography is a very real, very current phenomenon that is harming a lot of women," he explains.
Fake images, real money
Deeptrace's very existence is evidence of how rapidly the deepfake phenomenon has become a concern for corporations and governments.
It describes its mission as protecting "individuals and organisations from the damaging impacts of AI-generated synthetic media".
The term deepfake was coined in a Reddit post in 2017, and the report explains that in just two years a whole industry has emerged to profit from the phenomenon.
Deeptrace found that the four leading deepfake-themed pornography websites, supported by advertising, had attracted 134 million views for their videos since February 2018.
Apps making it possible to create this material are now proliferating.
One, which allowed users to synthetically remove the clothes from still images of women, charged $50 (£40) to remove a watermark from each finished image.
Visits to the app's website surged after a critical article was written about it, and the owners then took the app down.
But the software is still out there, repackaged by others seeking to profit from it.
One independent expert noted that other software has also made it far easier than before to create fake videos.
"It's now become possible to create a passable deepfake with only a small amount of input material - the algorithms need smaller and smaller amounts of video or picture footage to train on," explained Katja Bego, principal researcher at innovation foundation Nesta.
"As the technology is advancing so rapidly, it is important for policymakers to think now about possible responses. This means looking at developing detection tools and raising public awareness, but also [to] consider the underlying social and political dynamics that make deepfakes potentially so dangerous."
The authors of the Deeptrace report also describe service portals - online businesses generating and selling deepfake videos.
One such portal required 250 photos of the target subject and two days of processing to generate a video. Deeptrace says the prices charged vary but can be as little as $2.99 per video.
Another report earlier this year by the Witness Media Lab, a collaboration between a human-rights organisation and Google, found that creating deepfake videos still requires some skill - but that is changing quickly.
The report says that, for now, simulating faces completely realistically still involves a significant team of people with specialised skills and technology.
But the lengthy process is being automated, allowing people without that specialist knowledge to make videos that may be less sophisticated but can be generated much faster.
Among the videos flagged with the deepfake hashtag on YouTube are some impressive examples of how the technology is being used by professional teams.
One video, in which The Shining suddenly features Jim Carrey in the Jack Nicholson role, was made by an artist called Ctrl Shift Face.
The anonymous creator helpfully warns on his channel: "Do not believe everything that you see on the internet, OK?"
Ctrl Shift Face's aim is to entertain rather than deceive. But there are obviously fears that such fakery could be used to sway an election campaign or whip up hatred against a particular group.
So far, however, there appear to be few, if any, instances of deepfakes succeeding in fooling people for malevolent purposes.
Now, as a business set up to protect organisations from this phenomenon, Deeptrace could have an interest in hyping the threat. And Ms Bego questioned whether deepfake-detection technology is the right approach.
"A viral video can reach an audience of millions and make headlines within a matter of hours," she explained.
"A technological arbiter telling us the video was doctored after the fact might simply be too little too late."
In any case, it appears that, in the short term, the real victims of the malicious use of deepfake videos will not be governments and corporations but individuals, most of them women.
It is unlikely that they will be able to afford to hire specialists to combat their abusers.