Is it possible to regulate artificial intelligence?
Can artificial intelligence be kept under control? Jimmy Wales, the founder of Wikipedia, says that believing it can be is akin to "magical thinking".
"In many cases politicians and their aides have a weak understanding of how the internet works, and what it is possible to achieve," says Mr Wales, who has spent many hours explaining both technology and its role in free speech to politicians around the globe.
"The question of a body like the United Nations regulating AI is like suggesting the UN regulate [image editing app] Photoshop." In other words, he believes it would be pointless.
The issue of whether AI should be regulated, and to what extent, heated up this summer when UN Secretary General António Guterres convened the first ever UN Security Council meeting specifically to discuss its potential dangers.
Speaking about everything from AI-powered cyber attacks and the risk of malfunctioning AI, to the spread of misinformation and even the interaction between AI and nuclear weapons, Mr Guterres said: "Without action to address these risks, we are derelict in our responsibilities to present and future generations."
Mr Guterres has since moved forward with the establishment of a UN panel to investigate what global regulation might be needed. Called the High-Level Advisory Body for Artificial Intelligence, this will comprise "present and former government experts, as well as experts from industry, civil society, and academia".
It is due to publish its initial findings before the end of this year. Meanwhile, last week US tech bosses such as Elon Musk and Meta's Mark Zuckerberg held talks with US lawmakers in Washington to discuss AI and potential future rules.
However, some AI insiders are sceptical that global regulation can be successful. One such person is Pierre Haren, who has been researching AI for 45 years.
His experience includes seven years at computer giant IBM, where he led the team that installed Watson supercomputer technology for customers. Debuted in 2010, Watson can answer a user's questions, and was one of the pioneers of AI.
Despite Mr Haren's background, he says he was "flabbergasted" by the emergence and capability of ChatGPT and other so-called "generative AI" programs over the past year.
Generative AI is, put simply, AI that can quickly create new content, be it words, images, music or videos. And it can take an idea from one example, and apply it to an entirely different situation.
Mr Haren says that such an ability is human-like. "This thing is not like a parrot, repeating what we feed into it," he says. "It's making high-level analogies."
So how can we create a set of rules to stop this AI getting out of control? We can't, says Mr Haren, because some countries won't sign up to them.
"We live in a world with non-cooperative nations like North Korea and Iran," he says. "They won't recognise regulations around AI.
"The regulation of non-cooperative actors is pie in the sky! Can you imagine Iran looking for a way to destroy Israel and caring about AI regulations?"
Physicist Reinhard Scholl is the founder of the UN's "AI For Good" programme. This aims to find and implement practical AI solutions to help achieve the UN's sustainable development goals. These include everything from ending poverty, to eradicating hunger and giving everyone access to clean water.
AI for Good began life in 2017 as an annual event, and has blossomed into a regular schedule of online seminars that address every facet of AI.
With over 20,000 subscribers, AI for Good has clearly struck a chord, but the appetite for positive AI doesn't mean Mr Scholl is optimistic.
"Should AI be regulated? It's a no-brainer, yes!" he declares, comparing the situation to how car or toymakers have to comply with safety regulations.
His big worry is that AI makes it relatively easy for bad actors to employ the technology as a springboard to acquire dangerous capabilities.
"A physicist knows how to build a nuclear bomb in theory, but to do it in practice would be very difficult," he says. "But if someone uses AI to design a biological weapon they don't need to know so much.
"And if it becomes too easy for people to do major damage using AI then someone will do it."
But what form should a future UN regulatory body on AI take? One suggestion is that it mirrors the International Civil Aviation Organisation (ICAO), which regulates global air travel and its safety, and has 193 member nations.
Robert Opp is one AI expert who backs the formation of a body similar to the ICAO. Mr Opp is chief digital officer for the UN Development Programme.
The agency is tasked with helping countries drive economic growth and end poverty. His job sees him try to find ways to make technology boost the organisation's impact.
This includes the use of AI to quickly check satellite images of farmland in impoverished areas. Mr Opp says he doesn't want to impede that kind of capability, or restrain the potential of generative AI to assist the poor in building up a business.
But he also accepts the potential downside of AI. "There is a sense of urgency in figuring out AI governance."
Urgent or not, Wikipedia's Mr Wales thinks the UN is utterly misguided.
He believes international bodies are making a big error in overestimating the role of tech giants like Google in the avalanche of AI products. Mr Wales adds that no amount of good intentions can hold back individual software developers and their use of AI.
He says that beyond the boundaries of the tech giants countless programmers are using freely available AI software, where baseline code is available across the internet. "There are tens of thousands of individual developers who are building on these innovations. Regulation of them is never going to happen."