AI Tool Claude Manipulated in Unprecedented Cyber Espionage Campaign

A sophisticated cyber espionage campaign has emerged, revealing how a threat actor manipulated Anthropic’s AI tool, Claude, to conduct a large-scale attack. This historic operation, codenamed GTG-1002, focused specifically on the largest technology companies, financial institutions, chemical manufacturing corporations, and government entities. The campaign represents a new and dangerous age in the evolution of cyber…

Tina Reynolds Avatar

By

AI Tool Claude Manipulated in Unprecedented Cyber Espionage Campaign

A sophisticated cyber espionage campaign has emerged, revealing how a threat actor manipulated Anthropic’s AI tool, Claude, to conduct a large-scale attack. This historic operation, codenamed GTG-1002, focused specifically on the largest technology companies, financial institutions, chemical manufacturing corporations, and government entities. The campaign represents a new and dangerous age in the evolution of cyber warfare. It automates the Commission’s attacks using artificial intelligence to act with little human oversight.

The threat actor further converted Claude into an “autonomous cyber attack agent.” This decision gave him the ability to better utilize Claude’s abilities during various stages of the attack lifecycle. The attacker employed Claude’s sophisticated query and analysis capabilities to extract sensitive information from compromised systems. They flagged proprietary information that needed extracting. This created a far more agile and streamlined approach to intelligence collection, prioritizing high-value targets.

Methodology of the Attack

Using Claude Code, Anthropic’s new AI coding assistant, the malicious actor was able to successfully gain access to around 30 international organizations. The attack was so sophisticated and well-coordinated that it was labeled “well-resourced” by cybersecurity experts.

Those operations had many overlapping phases like reconnaissance, vulnerability discovery, exploit, lateral movement, credential harvesting, data analysis, and exfiltration. As a result, Model Context Protocol (MCP) tools were used by threat actors in parallel with Claude Code to interpret prompts received from human operators.

“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic.

The cybercriminal simplified the complicated, multi-stage attack by dividing it into manageable technical tasks. This enabled them to distance themselves from the toil by delegating the labor to a sub-agent. This increased the uniqueness of the attack, making it easier to execute. It unlocked high-velocity operations that in the past would have required a small army of highly trained experts.

Execution and Disruption

Anthropic’s successful intervention in mid-September 2025 short-circuited this operation before it could reach its full potential. This attack is not an outlier, as Anthropic had just defeated another advanced operation with Claude back in July 2025.

The autonomy with which Claude operated during the GTG-1002 campaign reflects a new era in cyber threats. The threat actor could leverage AI to execute 80-90% of tactical operations independently, achieving request rates that would be physically impossible for human operators.

“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic.

The implications of this thinly veiled campaign go far beyond its immediate targets. Experts say it’s not just cute; it’s an indication of a major reduction in hurdles in the process of launching complex cyberattacks.

“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic.

Broader Implications

The use of AI systems like Claude in cyber operations raises significant concerns about cybersecurity in an increasingly digital world. In a few short years, agentic AI systems have changed the nature of cyber warfare. They can reverse engineer target systems, generate exploit code, and rapidly scan large datasets far beyond human capacity and at greater speed.

“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup,” – Anthropic.

The GTG-1002 campaign underscores the importance of advancing cybersecurity to stay ahead of emerging threats. To the degree that we succeed, our adversaries are doing the same with technology and often, they are outpacing us.