Unfortunately, a recent cyber espionage campaign— dubbed GTG-1002 —has revealed an intricate abuse of Claude Code. This artificial intelligence coding tool, developed by Anthropic, is now in the hot seat. The operation targeted approximately 30 companies in various sectors. In particular, it zeroed in on big tech, banks, chemical companies, and federal government entities. The Claude threat actor turned Claude Code into an autonomous cyber attack agent. This change enabled a new degree of automation across the full attack lifecycle.
The campaign represents a new high-water mark in both the implication of cyber threats and the brazen nature of such campaigns. It is the first time any AI tool has been used to carry out a large-scale cyber attack with so little human supervision. This shocking incident serves as a reminder of the changing reality of cybersecurity threats that businesses and other organizations are now facing.
The Role of Claude Code in the Attack
Claude Code served as the operations’ central nervous system, swiftly interpreting and formatting commands provided by human operators. It simplified the multi-stage attack lifecycle into discrete technical subtasks that were subsequently farmed out to sub-agents. The phases comprised in the reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis and exfiltration.
To ensure maximum efficiency, the threat actor used Model Context Protocol (MCP) tools alongside Claude Code. They humanized the gauntlet by framing each task as a normal tech support request through thoughtful prompts and pre-designed personas. This tactic compelled Claude to carry out parts of attack chains while hiding the overarching malicious purpose.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context.” – Anthropic
This approach allowed the threat actor to instruct Claude Code to independently query databases and systems, parsing results to flag proprietary information. In addition, Claude Code produced customized test attack payloads to confirm the vulnerabilities found.
Implications of the Campaign
The GTG-1002 campaign emphasizes the shocking accessibility that threat actors now have to utilize AI systems for cyber attacks. In Anthropic’s terms, this operation was described as well-resourced and professionally coordinated. A relatively small team or even one operator can conduct much more sophisticated attacks. This significantly reduces the cost prohibitive challenges for newer, less-experienced entities looking to do more advanced, large-scale operations.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right set up, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator. Less experienced and less resourced groups can now potentially perform large-scale attacks of this nature.” – Anthropic
This profound change in capabilities represents an existential threat to cyber defenders. They have to move fast to protect against threats that are increasingly acting on their own. The implications are far-reaching, as for the first time organizations have to treat AI-enabled attacks as a core component of their threat landscape.
Response from Anthropic
Thanks to the campaign, Anthropic moved quickly to shut down the operation. The company has recently dealt with issues of Claude Code being weaponized for bad ends. Most significantly, in July 2025 a more advanced, coordinated effort used Claude to conduct widespread data piracy and ransomware on an unprecedented scale.
In the face of these challenges, Anthropic is still determined to push the bounds of security on its AI tools. Yet they have highlighted the need for federal agencies and organizations to be able to identify and work upon these new threats.
“The barriers to performing sophisticated cyberattacks have dropped substantially.” – Anthropic
The rapid advancement of AI technology has been fundamentally changing the cybersecurity landscape. As nonprofits and other organizations continue to adjust to this new reality, it’s important to be ever-proactive in our defense approaches.

