Labour outlines law to ban training AI chatbots to spread terror
Training AI to incite violence or radicalise the vulnerable would become an offence under a Labour government.
Labour's Yvette Cooper said current laws were not fit for purpose in the face of emerging cyber threats.
The shadow home secretary pointed to the recent case of a man who was encouraged by his AI chatbot "girlfriend" to attempt to assassinate the Queen.
The government is expected to update its counter-terrorism strategy this week.
In a speech to the Royal United Services Institute (Rusi), Ms Cooper urged the government to include action to tackle the deliberate misuse of Artificial Intelligence (AI) in particular.
She said: "Hateful extremist online forums and chatbots and algorithms that prey on vulnerable people have been a concern for some time - but generative AI takes that to a new level.
"Labour will take action to criminalise those who purposefully train chatbots to spout terrorist material.
"We will work with the intelligence community and law enforcement on ways to stop radicalising chatbots that are inciting violence or amplifying extremist views."
Taskforce
The shadow home secretary also called for a new law to ban hostile state-sponsored organisations like Wagner and Islamic State, "instead of trying and failing to use counter-terror legislation".
She said the threat from hostile states including China and Russia was so great that a parallel strategy targeting state actors was needed to run alongside counter-terrorism plans.
The increasingly blurred space between terrorism and conventional state threats meant the Foreign Office and Home Office must work together more closely to speed up decision-making, she said, and "remove traditional barriers and turf wars between departments".
Ms Cooper outlined a series of proposed new cross-government partnerships to address what she called a lack of leadership and collaborative working at the Home Office, which she said was leaving the UK vulnerable.
Highlighting how London is still "home to some of the worst money laundering scandals of the modern age", Ms Cooper said Labour would introduce a new taskforce to tackle economic threats, and strengthen enforcement around economic crime.
Blame game
Earlier this year, the government decided against introducing a new AI regulator, citing the need to "turbo charge" innovation.
But the decision means investigators remain broadly reliant on existing counter-terror legislation, which leaves a grey area over who is to blame when AI is involved in encouraging violence.
The government's independent reviewer of terrorism legislation, Jonathan Hall, highlighted potential legal issues last month.
He said: "It is unclear how legal culpability would be established in a scenario where an individual was radicalised (in part) by an AI system. The greater the role of machine decision-making, the less easy it is to establish the necessary mental element for criminal liability."
Laws should be kept under "close review", he added, especially if it becomes the case that "the internet is responsible for pulling ever-greater numbers of children and young people towards terrorist violence".
The Home Office has been approached for comment.