Anthropic: Cybercriminals are leveraging Claude to carry out ransomware attacks on an unprecedented scale.

Despite what it describes as "complex" protective measures, AI company Anthropic says cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks.

In a "Threat Intelligence" report released on Wednesday, members of Anthropic's threat intelligence team, Alex Moix, Ken Lebedev, and Jacob Klein, shared several cases of criminals abusing the Claude chatbot, some of which demanded ransoms exceeding $500,000.

They found that the chatbot was used not only to give criminals technical advice but also to execute hacking attacks directly on their behalf through "Vibe Hacking," enabling attackers with only basic programming and encryption knowledge to carry out sophisticated operations.

In February of this year, blockchain security company Chainalysis predicted that cryptocurrency scams could see their worst year in 2025, as generative AI makes attacks more scalable and cheaper. Anthropic discovered that one hacker had been using Claude for "Vibe Hacking," stealing sensitive data from at least 17 organizations—including healthcare, emergency services, government, and religious institutions—with ransom demands ranging from $75,000 to $500,000 in Bitcoin.

The hacker trained Claude to assess stolen financial records, calculate appropriate ransom amounts, and draft customized ransom letters to maximize psychological pressure.

Although Anthropic later banned the attacker, the incident shows how AI allows even novice programmers to carry out cybercrime with "unprecedented" ease.

Anthropic also found that North Korean IT workers have been using Claude to forge convincing identities, pass technical coding tests, and even secure remote positions at Fortune 500 tech companies in the U.S. They also used Claude to prepare interview answers for these positions.

Anthropic stated that Claude was also used to perform technical work after the workers were hired, noting that these employment schemes were designed to funnel profits to the North Korean regime in defiance of international sanctions.

Earlier this month, a North Korean IT worker was counter-hacked, revealing that a six-person team shared at least 31 fake identities, acquiring everything from government ID documents and phone numbers to purchased LinkedIn and UpWork accounts in order to conceal their true identities and land cryptocurrency jobs.

One of the workers reportedly interviewed for a full-stack engineer position at Polygon Labs, while other evidence included scripted interview answers claiming work experience at the NFT marketplace OpenSea and the blockchain oracle provider Chainlink.

Anthropic stated that its new report aims to publicly discuss abuse incidents to assist the broader AI safety and security community and strengthen the industry's defenses against AI abusers.

The company noted that despite implementing "complex security and safety measures" to prevent the misuse of Claude, malicious actors continue to seek ways to circumvent these measures.

Related: The U.S. Commodity Futures Trading Commission (CFTC) integrates Nasdaq's financial monitoring tools to combat market manipulation.

Original: “Anthropic: Cybercriminals are leveraging Claude for ransom attacks on an unprecedented scale”

Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send proof of rights and identity to support@aicoin.com, and relevant staff of this platform will investigate.
