More than 1,300 experts call AI a force for good
An open letter signed by more than 1,300 experts says AI is a "force for good, not a threat to humanity".
It was organised by BCS, the Chartered Institute for IT, to counter "AI doom".
Rashik Parmar, BCS chief executive, said it showed the UK tech community didn't believe the "nightmare scenario of evil robot overlords".
In March, tech leaders including Elon Musk, who recently launched an AI business, signed a letter calling for a pause in the development of powerful AI systems.
That letter suggested super-intelligent AI posed an "existential risk" to humanity. This was a view echoed by film director Christopher Nolan, who told the BBC that the AI leaders he spoke to saw the present time as "their Oppenheimer moment". J Robert Oppenheimer played a key role in the development of the first atomic bomb, and is the subject of Mr Nolan's latest film.
But the BCS sees the situation in a more positive light, while still supporting the need for rules around AI.
Richard Carter is a signatory to the BCS letter. Mr Carter, who founded an AI-powered startup cybersecurity business, feels the dire warnings are unrealistic: "Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We're just not in any kind of a position where that's even feasible".
Signatories to the BCS letter come from a range of backgrounds - business, academia, public bodies and think tanks - though none is as well known as Elon Musk, nor do any run major AI companies such as OpenAI.
Those the BBC has spoken to stress the positive uses of AI. Hema Purohit, who leads on digital health and social care for the BCS, said the technology was enabling new ways to spot serious illness, for example medical systems that detect signs of issues such as cardiac disease or diabetes when a patient goes for an eye test.
She said AI could also help accelerate the testing of new drugs.
Signatory Sarah Burnett, author of a book on AI and business, pointed to agricultural uses of the tech, from robots that use artificial intelligence to pollinate plants to those that "identify weeds and spray or zap them with lasers, rather than having whole crops sprayed with weed killer".
The letter argues: "The UK can help lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation".
By doing so, it says Britain "can become a global byword for high-quality, ethical, inclusive AI".
In the autumn, UK Prime Minister Rishi Sunak will host a global summit on AI regulation.
While the BCS may argue existential threats are sci-fi, some issues are just over the horizon or are already presenting problems.
Goldman Sachs has predicted that the equivalent of up to 300 million full-time jobs could be automated, and some companies have already said they will pause hiring in some roles as a result of AI.
But Mr Carter thinks AI - rather than replacing humans - will boost their productivity. In his own work he says ChatGPT is useful, but he is wary of putting too much trust in it, comparing it to a "very knowledgeable and very excitable 12-year-old".
He argues companies will always need to have humans involved in the workplace, to take responsibility if things go wrong: "If you take the human completely out of the loop, how do you manage accountability for some sort of catastrophic event happening?"
He, like other signatories, believes regulation will be needed to avoid the misuse of AI.
Ms Purohit says a motive for signing was the need for rules to "make sure that we don't just run off and create lots and lots of things without paying attention to the testing and the governance, and the assurance that sits behind it".