AI creators must study consciousness, experts warn
An open letter signed by dozens of academics from around the world calls on artificial-intelligence developers to learn more about consciousness, as AI systems become more advanced.
"It is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness," it says.
Most experts agree AI is nowhere near this level of sophistication.
But it is evolving rapidly, and some say developments should be paused.
Raise awareness
The term AI covers computer systems able to do tasks that would normally need human intelligence. This includes chatbots able to understand questions and respond with human-like answers, and systems capable of recognising objects in pictures.
Generative Pre-trained Transformer 4 (GPT-4), an AI system developed by ChatGPT chatbot creator OpenAI, can now pass the bar exam, the professional qualification for lawyers, although it still makes mistakes and can share misinformation.
But this is just one function of AI. AI products are being deployed in many sectors, including health research, marketing and finance.
Technology billionaire Elon Musk co-signed a recent letter saying further AI developments should be put on hold until effective safety measures could be designed and implemented.
And on Tuesday, his ex-wife, Talulah Riley, tweeted that artificial general intelligence (AGI) - AI capable of human-level intellectual tasks - needed "the equivalent of [environmental activist] Greta Thunberg" to raise awareness and encourage public debate.
The Association for Mathematical Consciousness Science (AMCS), which has compiled the open letter, titled "The responsible development of AI agenda needs to include consciousness research", said it did not have a view on whether AI development in general should be paused.
But it pushed for a greater scientific understanding of consciousness, how it could apply to AI and how society might live alongside it.
"The rapid development of AI is exposing the urgent need to accelerate research in the field of consciousness science," the letter says.
Its signatories include Dr Susan Schneider, a former NASA professor, as well as academics from universities in the UK, US and Europe.
Expressed feelings
Last year, a Google engineer was fired after claiming an AI system was sentient.
Blake Lemoine wrote that Google's large language model Lamda, which underpins its ChatGPT rival Bard, expressed feelings.
Google has maintained Lamda was doing exactly what it had been programmed to do - communicate in a human-like way.
But Google boss Sundar Pichai recently told US news platform CBS he did not "fully understand" how Bard worked.
The human mind was not fully understood either, he added, which is why the AMCS is calling for more research.
But there is as much excitement as nervousness around AI. It is the big buzzword in big tech, and investment money is pouring into AI-related projects.
Released in November, ChatGPT became an instant viral sensation and the populist "face" of AI, with millions of people trying it out.
Trained on vast amounts of text from the internet, it can give written answers to questions in a natural, human-like way.
Microsoft, which has invested heavily in OpenAI, says AI can take "the drudgery" out of mundane jobs such as office administration.
A recent report by Goldman Sachs suggests AI could replace the equivalent of 300 million full-time jobs.
And while the AI industry will create new human jobs, they are likely to require new skills.