Google executive warns of face ID bias

Image caption: Google's Diane Greene spoke to the BBC on the sidelines of the company's annual cloud conference in San Francisco (Image source: Google)

Facial recognition technology does not yet have “the diversity it needs” and has “inherent biases”, a top Google executive has warned.

The remarks, from the firm’s director of cloud computing, Diane Greene, came after rival Amazon’s software wrongly identified 28 members of Congress, disproportionately people of colour, as police suspects.

Google, which has not opened its facial recognition technology to public use, was working on gathering vast amounts of data to improve its reliability, Ms Greene said.

However, she refused to discuss the company’s controversial work with the military.

“Bad things happen when I talk about Maven,” Ms Greene said, referring to a soon-to-be-abandoned project with the US military to develop artificial intelligence technology for drones.

After considerable employee pressure, including resignations, Google said it would not renew its contract with the Pentagon when it lapses sometime in 2019.

The firm has not commented on the deal since, other than to release a set of “AI principles” stating that it would not use artificial intelligence or machine learning to create weapons.

'Thinking really deeply'

On face recognition, there has been considerable concern among Silicon Valley workers and civil rights groups about the application of the emerging technology - particularly when it comes to law enforcement. Amazon’s Rekognition software, which allows clients to use Amazon’s AI technology to power facial recognition, was being used by at least two police forces in the US.

There are major misgivings about the accuracy and readiness of the technology, which has seen widespread, controversial use in China.

In the US, the misidentification of members of Congress was discovered by the American Civil Liberties Union (ACLU), which published its findings on Thursday. Amazon disputed the ACLU’s conclusions about its technology, saying the group had used the wrong settings.

Ms Greene said that while Google does use facial recognition to help users identify friends in pictures, its underlying technology was not open for public use.

"We need to be really careful about how we use this kind of technology,” she told the BBC.

"We're thinking really deeply. The humanistic side of AI - it doesn't have the diversity it needs and the data itself will have some inherent biases, so everybody's working to understand that."

She added: “I think everybody wants to do the right thing. I'm sure Amazon wants to do the right thing too. But it's a new technology, it's a very powerful technology.”

Google’s image recognition software has been offensively inaccurate in the past. In 2015, it identified a black couple as being “gorillas”. The firm apologised.

Two members of Congress have written to Amazon chief executive Jeff Bezos to raise the alleged issue with his company’s system.

Speaking of facial recognition more widely, the ACLU said: "Congress should enact a federal moratorium on law enforcement use of this technology until there can be a full debate on what - if any - uses should be permitted."

________

Follow Dave Lee on Twitter @DaveLeeBBC

Do you have more information about this or any other technology story? You can reach Dave directly and securely through encrypted messaging app Signal on: +1 (628) 400-7370