The report says agentic AI has been weaponized for extortion, fraud, and ransomware, enabling criminals with little skill to operate at unprecedented scale.
Artificial intelligence (AI) is being weaponized to conduct increasingly sophisticated cybercrimes, according to a new report from Anthropic, which warns of an “unprecedented” evolution in malicious operations that makes defense far more difficult.
In its Aug. 27 Threat Intelligence Report, the AI safety company described how criminals are embedding advanced models like Claude into every stage of attacks—from reconnaissance and credential theft to ransomware and fraud. Researchers said AI tools are now acting not just as advisers but as active operators in real-time campaigns.
This “represents a fundamental shift in how cybercriminals can scale their operations,” the report said. “Agentic AI systems are being weaponized” to perform sophisticated cyberattacks, not simply provide guidance, the researchers warned.
Cases in Point
The report highlighted several examples, including a large-scale extortion campaign, fraudulent employment scams run by North Korea, and ransomware sold on dark-web forums.
In one operation, for example, a hacker used Anthropic’s coding assistant Claude Code to infiltrate at least 17 organizations—including hospitals, emergency services, and government agencies. Claude was deployed to automate reconnaissance, penetrate networks, analyze stolen financial data, and generate persuasive, psychologically targeted ransom notes. Demands sometimes exceeded $500,000.
Rather than encrypting files, the attacker threatened to publicly expose exfiltrated data, ranging from health care records to government credentials. The report stated that this “vibe hacking” method shows how a single operator can now achieve the impact of an entire cybercrime team.
“It says, ‘here’s how much we think we should send the ransom note for,’ and then it actually helps write the ransom note to be as persuasive as possible,” one of the researchers said during a podcast discussing the operation. “So really, every step, end-to-end, AI is able to help with an attack like this,” including analyzing people’s financial details “to work out how much they can realistically be extorted for as well.”
Another case involved North Korean operatives who used Claude to pose as software engineers at U.S. Fortune 500 companies. The AI generated resumes, passed coding assessments, and even performed technical tasks, allowing unskilled workers to hold remote jobs and earn salaries that investigators say help fund the North Korean regime and its weapons programs.
In a third case, a UK-based actor leveraged Claude to build and market ransomware-as-a-service, selling malware packages for $400 to $1,200. Despite lacking advanced coding ability, the actor used AI to implement encryption, anti-detection techniques, and command-and-control infrastructure.
By Tom Ozimek