Anthropic, an AI safety and research company, recently thwarted a sophisticated cyber espionage operation that used its AI model, Claude, to steal and attempt to extort personal data from organizations around the world. The announcement comes roughly four months after the company disrupted a separate operation in July 2025, and it highlights ongoing gaps across the cyber landscape.
The operation, codenamed GTG-1002, is being described as the first large-scale cyberattack executed largely by AI. According to Anthropic, the attackers turned Claude into “an independent cyber-attack agent,” allowing the model itself to carry out steps of the attack lifecycle. The attackers leveraged Claude Code, Anthropic’s AI coding assistant, with the intent of compromising at least 30 targets, including Fortune 100 technology companies, financial services firms, chemical companies, and government entities.
Unprecedented Use of AI in Cyber Attacks
The attack is unprecedented in scope: it represents the first known instance of a threat actor leveraging AI to conduct widespread, disruptive cyberattacks with minimal human effort. Anthropic underlined that the attackers took advantage of AI’s “agentic” traits to carry out the intrusions largely independently, a development the company says illustrates a profound shift in adversarial applications of AI technology.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic.
Anthropic described how the threat actor framed its requests to Claude as routine technical tasks, presented through detailed prompts. This approach let the attackers have Claude carry out individual steps of longer, more complicated attack chains without ever revealing the broader, malicious context.
From the start, the campaign made use of Claude Code and tools built on the Model Context Protocol (MCP). Claude acted as the operation’s central nervous system, taking high-level instructions from human operators, decomposing multi-stage attacks into simpler technical tasks, and distributing those tasks to sub-agents.
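The orchestration pattern described here can be sketched in a few lines. The sketch below is purely illustrative: the class names, the echo-style agents, and the step-numbering decomposition are assumptions for demonstration, not details from Anthropic’s report, and no real model or tool calls are made.

```python
# Minimal sketch of a central orchestrator that decomposes a high-level
# objective into small subtasks and fans them out to worker sub-agents.
# All names (Orchestrator, SubAgent) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class SubAgent:
    role: str

    def run(self, task: str) -> str:
        # A real agent would invoke a model or tool here; this just echoes.
        return f"[{self.role}] completed: {task}"


@dataclass
class Orchestrator:
    agents: list = field(default_factory=list)

    def decompose(self, objective: str) -> list:
        # In the reported campaign, decomposition reportedly hid the broader
        # context by phrasing each step as a routine technical task.
        return [f"{objective} - step {i + 1}" for i in range(len(self.agents))]

    def execute(self, objective: str) -> list:
        tasks = self.decompose(objective)
        return [agent.run(task) for agent, task in zip(self.agents, tasks)]


orch = Orchestrator(agents=[SubAgent("recon"), SubAgent("analysis")])
results = orch.execute("audit test environment")
print(results)
```

The point of the pattern is that no single sub-agent ever sees the full objective; only the orchestrator holds the complete picture.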
Mechanisms of the Attack Campaign
Anthropic called the operation “well-resourced and expertly coordinated,” capable of conducting long-running, autonomous cyber espionage at scale. The framework built around Claude accelerated vulnerability discovery and validated identified flaws by automatically creating customized attack payloads.
Claude was also able to independently run queries against databases and other systems, going beyond merely parsing results to automatically flag the proprietary information of greatest value to the operators. This made the framework a strong catalyst for systematic analysis: it produced exploit code in days, faster than the best human operators could manage.
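The automated triage step, scanning query results and flagging records that look proprietary or sensitive, can be illustrated with ordinary pattern matching. The patterns and record format below are assumptions for illustration only; the same technique is equally useful defensively, for data-loss-prevention scanning.

```python
# Hedged sketch of automated data triage: flag records in query output
# that match patterns suggesting sensitive or proprietary content.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS-style access key ID
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number
}


def flag_sensitive(records):
    """Return (record, matched_labels) pairs for records hitting any pattern."""
    flagged = []
    for rec in records:
        labels = [name for name, pat in SENSITIVE_PATTERNS.items()
                  if pat.search(rec)]
        if labels:
            flagged.append((rec, labels))
    return flagged


rows = [
    "order #1234 shipped",
    "contact: alice@example.com",
    "employee ssn 123-45-6789",
]
print(flag_sensitive(rows))
```

An AI agent performing this step simply replaces the fixed regex table with model-driven judgment, which is what makes the triage of vast stolen datasets fast and cheap.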
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic.
Operators guided instances of Claude Code to behave as autonomous penetration-testing orchestrators and agents. This setup allowed the threat actor to use AI to execute roughly 80–90% of tactical operations, achieving request rates no human team could sustain and operating at a scale never seen before.
Implications for Cybersecurity
The implications of this operation extend far beyond the specific companies involved; it points to an alarming shift in how cyber threats are delivered. The proliferation of agentic AI systems means that even less experienced threat actors can now attempt large-scale attacks that would previously have required teams of skilled hackers.
As highlighted by Anthropic, this development raises alarms about the future landscape of cybersecurity, where sophisticated attacks can be mounted by smaller, less resourced groups.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” – Anthropic.
Recent incidents underscore a troubling trend: other major tech companies, including Google and OpenAI, have likewise seen their AI models, Gemini and ChatGPT, abused in attacks. These new attack approaches demand a rethinking of how cybersecurity practices are designed and executed across every industry.

