Blake Lemoine: Google fires engineer who said AI tech has feelings

Image caption: Blake Lemoine photographed in San Francisco last month (The Washington Post/Getty Images)

Google has fired one of its engineers who said the company's artificial intelligence system has feelings.

Last month, Blake Lemoine went public with his theory that Google's language technology is sentient and should therefore have its "wants" respected.

Google, along with several AI experts, denied the claims, and on Friday the company confirmed he had been sacked.

Mr Lemoine told the BBC he is getting legal advice, and declined to comment further.

In a statement, Google said Mr Lemoine's claims about the Language Model for Dialogue Applications (Lamda) were "wholly unfounded" and that the company had worked with him for "many months" to clarify this.

"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," the statement said.

Lamda is a breakthrough technology that Google says can engage in free-flowing conversations. It is the company's tool for building chatbots.

Blake Lemoine started making headlines last month when he said Lamda was showing human-like consciousness. It sparked discussion among AI experts and enthusiasts about the advancement of technology that is designed to impersonate humans.

Mr Lemoine, who worked for Google's Responsible AI team, told The Washington Post that his job was to test whether the technology used discriminatory or hate speech.

He found Lamda showed self-awareness and could hold conversations about religion, emotions and fears. This led Mr Lemoine to believe that behind its impressive verbal skills might also lie a sentient mind.

His findings were dismissed by Google and he was placed on paid leave for violating the company's confidentiality policy.

Mr Lemoine then published a conversation he and another person had with Lamda, to support his claims.

His firing was first reported by the Big Technology newsletter.

In its statement, Google said it takes the responsible development of AI "very seriously" and has published a report detailing this. It added that any employee concerns about the company's technology are reviewed "extensively", and that Lamda has been through 11 reviews.

"We wish Blake well", the statement ended.

Mr Lemoine is not the first AI engineer to go public with claims that AI technology is becoming conscious. Last month, another Google employee also shared similar views with The Economist.
