AI fake face website launched
Image caption: A selection of faces from the website (Image source: thispersondoesnotexist.com)

A software developer has created a website that generates fake faces, using artificial intelligence (AI).

Thispersondoesnotexist.com generates a new lifelike image each time the page is refreshed, using technology developed by chipmaker Nvidia.

Some visitors to the website say they have been amazed by how convincing some of the fakes are, although others are more obviously artificial.

And many of them have gone on to post some of the fake faces on social media.


In 2017, Nvidia developed a pair of adversarial AI programs: one to create the images and another to critique them.

The company later made these programs open source, meaning they are publicly accessible.
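Nvidia's actual system is far more sophisticated, but the adversarial idea itself can be sketched in a toy one-dimensional example: "real" data is just numbers near 5.0, the "generator" is a single parameter, and the "critic" is a logistic classifier. All of the numbers and names below are illustrative assumptions, not details from Nvidia's released code.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
theta = 0.0       # generator's single parameter: where it thinks real data lives
w, b = 0.1, 0.0   # critic parameters: d(x) = sigmoid(w*x + b), "probability x is real"
lr = 0.05         # learning rate (illustrative value)

for step in range(2000):
    real = 5.0 + random.gauss(0, 0.1)    # a sample of "real" data
    fake = theta + random.gauss(0, 0.1)  # the generator's attempt

    # Critic step: push d(real) toward 1 and d(fake) toward 0 (log-loss gradient).
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + b)
        grad = p - label          # derivative of log-loss w.r.t. the logit
        w -= lr * grad * x
        b -= lr * grad

    # Generator step: adjust theta so the critic is fooled, i.e. d(fake) -> 1.
    p = sigmoid(w * fake + b)
    theta -= lr * (p - 1.0) * w   # chain rule back through the fake sample
```

After training, `theta` drifts toward 5.0: the two programs improve by competing, which is the "create and then critique" dynamic the article describes, scaled down to a single number.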

Image caption: Not all faces on the website are convincingly human (Image source: thispersondoesnotexist.com)

Realistic fakes

As the quality of synthetic speech, text and imagery improves, researchers are encountering ethical dilemmas about whether to share their work.

Media caption: Why these faces do not belong to 'real' people

Last week, the Elon Musk-backed research group OpenAI announced it had created an artificially intelligent "writer".

But the San Francisco group took the unusual step of not releasing the technology behind the project publicly.

"It's clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse," the group said in a statement to AI blog Synced.