Powerful artificial intelligence ban possible, government adviser warns
Some powerful artificial general intelligence (AGI) systems may eventually have to be banned, a member of the government's AI Council says.
Marc Warner, also boss of Faculty AI, told the BBC that AGI needed strong transparency and audit requirements as well as more inbuilt safety technology.
And the next six months to a year would require "sensible decisions" on AGI.
His comments follow the EU and US jointly saying a voluntary code of practice for AI was needed soon.
Political connections
The AI Council is an independent expert committee which provides advice to government and leaders in artificial intelligence.
Faculty AI says it is OpenAI's only technical partner, helping customers safely integrate ChatGPT and OpenAI's other products into their systems.
The company's tools helped forecast demand for NHS services during the pandemic - but its political connections have attracted scrutiny.
Mr Warner added his name to a Center for AI Safety warning that the technology could lead to the extinction of humanity. And Faculty AI was among technology companies whose representatives discussed the risks, opportunities and rules needed to ensure safe and responsible AI with Technology Minister Chloe Smith, at Downing Street, on Thursday.
AI describes the ability of computers to perform tasks typically requiring human intelligence.
'Different rules'
"Narrow AI" - systems used for specific tasks such as translating text or searching for cancers in medical images - could be regulated like existing technology, Mr Warner said.
But AGI systems, a fundamentally novel technology, were much more worrying and would need different rules.
"These are algorithms that are aimed at being as smart or smarter than a human across a very broad domain of tasks - essentially, every task," Mr Warner added.
Humanity owed its position of primacy on this planet primarily to its intelligence, he said.
'Strong limits'
"If we create objects that are as smart or smarter than us, there is nobody in the world that can give a good scientific justification of why that should be safe," Mr Warner said.
"That doesn't mean for certain that it's terrible - but it does mean that there is risk, it does mean that we should approach it with caution.
"At the very least, there needs to be sort of strong limits on the amount of compute [processing power] that can be arbitrarily thrown at these things.
"There is a strong argument that at some point, we may decide that enough is enough and we're just going to ban algorithms above a certain complexity or a certain amount of compute.
"But obviously, that is a decision that needs to be taken by governments and not by technology companies".
'Competitive advantage'
Some say concerns around AGI are distracting from problems with existing technologies - bias in AI recruitment or facial-recognition tools, for example.
But Mr Warner said this was like saying: "'Do you want cars or aeroplanes to be safe?' I want both."
Others say too much regulation might make the UK less attractive to investors and stifle innovation.
But Mr Warner said the UK could find a competitive advantage in encouraging safety.
"My long-term bet is that actually, to get value out of the technology, you need the safety - in the same way to get value out of the aeroplane, you need the engines to work," he said.
'Too late'
The UK's recent White Paper on regulating AI was criticised for failing to set up a dedicated watchdog.
But Prime Minister Rishi Sunak has outlined the need for "guardrails" and said the UK could play "a leadership role".
On Wednesday, US Secretary of State Antony Blinken and European Union Commissioner Margrethe Vestager said voluntary rules were needed quickly.
The EU Artificial Intelligence Act, which will be among the first to regulate AI, is still going through legislative processes.
And Ms Vestager said it would take two to three years for different pieces of legislation to come into effect - "and we're talking about a technological acceleration that is beyond belief".
But industry and others would be invited to contribute to a draft voluntary code of conduct within weeks.
After a meeting of the fourth US-EU Trade and Technology Council, Mr Blinken said it was important to establish voluntary codes of conduct "open to" a "wide universe of countries... all likeminded countries".