AI firm says its technology weaponised by hackers

Image: A laptop screen showing the Anthropic website, which reads "AI research and products that put safety at the frontier" (Getty Images)

US artificial intelligence (AI) company Anthropic says its technology has been "weaponised" by hackers to carry out sophisticated cyber-attacks.

Anthropic, which makes the chatbot Claude, says its tools were used by hackers "to commit large-scale theft and extortion of personal data".

The firm said its AI was used to help write code which carried out cyber-attacks, while in another case, North Korean scammers used Claude to fraudulently get remote jobs at top US companies.

Anthropic says it was able to disrupt the threat actors, and has reported the cases to the authorities and improved its detection tools.

Using AI to help write code has grown in popularity as the tech has become more capable and accessible.

Anthropic says it detected a case of so-called "vibe hacking", where its AI was used to write code which could hack into at least 17 different organisations, including government bodies.

It said the hackers "used AI to what we believe is an unprecedented degree".

They used Claude to "make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands".

The AI even suggested ransom amounts for the victims.

Agentic AI - where the tech operates autonomously - has been touted as the next big step in the space.

But these examples show some of the risks powerful tools pose to potential victims of cyber-crime.

The use of AI means "the time required to exploit cybersecurity vulnerabilities is shrinking rapidly", said Alina Timofeeva, an adviser on cyber-crime and AI.

"Detection and mitigation must shift towards being proactive and preventative, not reactive after harm is done," she said.

'North Korean operatives'

But it is not just cyber-crime that the tech is being used for.

Anthropic said "North Korean operatives" used its models to create fake profiles to apply for remote jobs at US Fortune 500 tech companies.

The use of remote jobs to gain access to companies' systems has been known about for a while, but Anthropic says using AI in the fraud scheme is "a fundamentally new phase for these employment scams".

It said AI was used to write job applications, and once the fraudsters were employed, it was used to help translate messages and write code.

Often, North Korean workers "are sealed off from the outside world, culturally and technically, making it harder for them to pull off this subterfuge," said Geoff White, co-presenter of the BBC podcast The Lazarus Heist.

"Agentic AI can help them leap over those barriers, allowing them to get hired," he said.

"Their new employer is then in breach of international sanctions by unwittingly paying a North Korean."

But he said AI "isn't currently creating entirely new crimewaves" and "a lot of ransomware intrusions still happen thanks to tried-and-tested tricks like sending phishing emails and hunting for software vulnerabilities".

"Organisations need to understand that AI is a repository of confidential information that requires protection, just like any other form of storage system," said Nivedita Murthy, senior security consultant at cyber-security firm Black Duck.
