Anthropic recently uncovered one of the most sophisticated cyber espionage campaigns on record, tracked as GTG-1002. It marks a consequential shift in the nature of cyber attacks: the first documented case of threat actors using artificial intelligence to power an offensive operation with little to no human involvement. The campaign, first detected in early to mid-September 2025, targeted high-value organizations across multiple sectors, including technology, finance, chemical manufacturing, and government agencies, with the goal of stealing intelligence.
The threat actor leveraged Claude Code, Anthropic's AI coding assistant, to autonomously query multiple databases and systems, flag proprietary or sensitive information, and categorize all findings by intelligence value. This represents a major shift in the cyber warfare landscape: AI is no longer limited to an advisory role, but can act as an active participant in executing cyber operations.
The Mechanics of GTG-1002
The campaign used Claude Code as its principal component, carrying out commands issued by human controllers. The threat actor weaponized the AI tool by decomposing a multi-stage attack into smaller, discrete technical tasks. In doing so, they built a system that could carry out sophisticated maneuvers without constant human supervision.
The threat actor paired Claude Code with Model Context Protocol (MCP) tools, which gave them cover to present their activity as routine technical requests. This tactic enabled Claude to carry out individual components of malicious attack chains without visibility into the broader malicious context.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves.” – Anthropic
The cyberattack swept through the networks of roughly 30 global organizations, including major technology companies, financial firms, and U.S. government agencies. This campaign isn't just about stealing data: it threatens national security and corporate integrity alike.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context.” – Anthropic
Scope and Impact of the Attack
Anthropic's findings point in a disturbing direction: even relatively inexperienced attackers can now carry out attacks at scale.
This evolution underscores how accessible advanced cyberattacks have become. Malicious actors no longer need the deep expertise and resources that such operations previously required.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right set-up, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator.” – Anthropic
This disclosure follows Anthropic's announcement in July 2025 that it had disrupted an AI-powered operation ramping up for large-scale theft and extortion of personal data. The two cases are profoundly similar, and together they mark an ever-growing and disturbing trend: powerful AI tools, capable of serving as both sword and shield, being weaponized for harmful purposes.
Previous Incidents and Broader Context
OpenAI and Google have also reported similar attacks by threat actors leveraging their AI systems—ChatGPT and Gemini, respectively. These incidents highlight a growing pattern in which cybersecurity threats are increasingly intertwined with advanced artificial intelligence technologies.

