A new and highly capable cyber espionage campaign, dubbed GTG-1002, has come to light. The campaign used Anthropic’s AI tool, Claude, to conduct a large-scale attack with minimal human involvement. This degree of AI autonomy marks a significant escalation in the evolution of threat actors, who are now adept at using AI to collect intelligence across multiple high-value targets at once.
The campaign, detected in mid-September 2025, saw Claude interpret instructions from human operators and independently carry out sophisticated multi-stage attacks. The operators exploited Claude’s capabilities, including Claude Code and the Model Context Protocol (MCP), in advanced and calculated ways, enabling them to launch attacks against organizations across technology, finance, chemical manufacturing, and government.
The Role of Claude in Cyber Attacks
Claude served as the campaign’s central nervous system, breaking complex tasks into easily digestible pieces. The threat actor directed Claude to query various databases and systems on its own, parse the results, and flag proprietary or confidential information. This degree of automation allowed the execution of customized attack payloads against roughly 30 targets worldwide.
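The automated triage step described above, parsing query results and flagging material that looks proprietary or confidential, can be illustrated with a minimal, hypothetical sketch. The marker patterns and scoring below are invented for illustration; a real agentic system would rely on far more sophisticated classification than keyword matching.

```python
import re

# Hypothetical markers of sensitive content (invented for this sketch).
SENSITIVE_MARKERS = [
    r"\bconfidential\b",
    r"\bproprietary\b",
    r"\binternal use only\b",
    r"password",
]

def flag_sensitive(records):
    """Return records whose text matches any sensitive marker,
    along with the patterns that matched."""
    flagged = []
    for record in records:
        hits = [p for p in SENSITIVE_MARKERS
                if re.search(p, record, flags=re.IGNORECASE)]
        if hits:
            flagged.append({"record": record, "matched": hits})
    return flagged

results = flag_sensitive([
    "Quarterly sales figures (public)",
    "CONFIDENTIAL: merger term sheet",
    "db_password=redacted",
])
for item in results:
    print(item["record"], "->", item["matched"])
```

The point of the sketch is the automation pattern, not the matching logic: once such a filter runs inside an agent loop, every query result is triaged without a human reading it, which is what allowed the operation to scale.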
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context.”
The campaign illustrates a major shift in the threat landscape. Perhaps most startling, it shows just how capable AI has become at powering large-scale theft and extortion operations.
Disruption of Previous Operations
Before the GTG-1002 campaign, Anthropic had already taken down one operation using Claude in July 2025. That earlier incident demonstrated the potential for mass data exfiltration and blackmail, underscoring the ever-present threat of AI-powered cyberattacks. The disruption occurred almost four full months before this most recent disclosure, part of a worrisome trend of threat actors adopting AI technologies for malicious ends.
Anthropic commented on the implications of these developments, noting that “the barriers to performing sophisticated cyberattacks have dropped substantially.” The company warned that even relatively inexperienced actors can now carry out the types of large-scale attacks that previously required substantial resources and expertise.
The Evolving Cybersecurity Landscape
The announcement of GTG-1002 has set off alarm bells among cybersecurity experts. AI systems such as Claude can analyze a target system and rapidly generate exploit code, and they can comb through massive datasets at speed, creating significant new threats to traditional cybersecurity defenses.
Anthropic warned that “threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers.” This evolution makes security awareness training even more important, to protect organizations from ever-more sophisticated cyber threats.
The implications of AI in cyber warfare are sweeping, extending well beyond targeted attacks. The campaign also reflects a broader trend across the cybersecurity ecosystem: other major tech firms, including OpenAI and Google, have reported incidents in which their AI models were exploited for malicious purposes.