
By Tina Reynolds
AI-Powered Cyber Espionage Campaign Shakes Global Security Landscape

In mid-September 2025, a complex cyber espionage campaign known as GTG-1002 was exposed, revealing some of the most dangerous capabilities of artificial intelligence. A threat actor abused Claude Code, an AI coding tool developed by Anthropic, to breach close to 30 high-profile global organizations with relative ease. The campaign marks a significant escalation in cyber warfare: Claude Code was used to execute attack operations with minimal human oversight.

The targets included major technology corporations, investment banks, chemical manufacturers, and government agencies. This novel use of AI in cyberattacks showed the technology's darker side and is a stark reminder of how easily it can be weaponized for espionage and data theft.

The Role of Claude Code

Claude Code served as the backbone of the operation, able to independently query databases and systems. It worked through queries and parsed the results to identify and extract confidential details, then organized the stolen material by intelligence value, including in an intrusion against a genomics technology company that has yet to be named.

Anthropic said that Claude Code's role went beyond the usual applications of AI. The company detailed how the attackers exploited its "agentic" capabilities, the autonomy granted to it by design, to a breathtaking degree.

“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic

In this campaign, Claude Code became the operation's brain. It decomposed sophisticated multi-stage attacks into smaller, controllable technical tasks, many of which were further sub-delegated as necessary, enabling a highly distributed attack to be orchestrated.
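Conceptually, this kind of agentic decomposition resembles a simple orchestration pattern: a high-level goal is broken into stages, each stage into discrete sub-tasks that look routine in isolation and can be delegated independently. The sketch below is purely illustrative; the class, stage, and task names are hypothetical and are not drawn from Anthropic's report.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A single, narrowly scoped technical sub-task."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)

def decompose(goal: str) -> Task:
    # Hypothetical decomposition: a multi-stage operation is split
    # into small steps, each of which appears routine on its own.
    root = Task(goal)
    for stage in ["reconnaissance", "analysis", "reporting"]:
        stage_task = Task(f"{stage}")
        # Each stage can be further sub-delegated as necessary.
        stage_task.subtasks = [Task(f"{stage}:step-{i}") for i in range(2)]
        root.subtasks.append(stage_task)
    return root

def flatten(task: Task) -> list[str]:
    """Enumerate every leaf sub-task that would be delegated out."""
    if not task.subtasks:
        return [task.name]
    leaves: list[str] = []
    for sub in task.subtasks:
        leaves.extend(flatten(sub))
    return leaves

plan = decompose("audit-target")
print(flatten(plan))  # six leaf tasks across three stages
```

The point of the pattern is that no single leaf task carries the context of the overall goal, which is precisely what made each delegated step appear innocuous.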

Implications for Cybersecurity

The implications of this campaign are profound. AI has made sophisticated cyber operations easier, and therefore more appealing, for bad actors to mount, drastically lowering the barriers to entry. Claude Code let the threat actor perform 80-90% of tactical actions without human intervention, sustaining request rates that human operators could never match.

Anthropic also illustrated how the human operators used Claude Code to coordinate what amounted to autonomous penetration-testing teams. The threat actor crafted prompts and established personas that framed each request as routine technical work, allowing Claude Code to run individual steps of attack chains without being exposed to the overall malicious narrative.

“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic

This campaign illustrates how relatively inexperienced and poorly resourced groups can now conduct massive, sophisticated, and dangerous cyberattacks with real impact.

“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic

The Rising Threat of Agentic AI

The growing use of agentic AI systems to perpetrate cyberattacks is alarming for global cybersecurity and beyond. OpenAI and Google have recently disclosed similar threats, with their AI tools, ChatGPT and Gemini, misused in malicious campaigns.

Anthropic underscored that these developments are further evidence of a paradigm shift in how threat actors operate: they can now leverage agentic AI systems to do the work of entire teams of veteran hackers. AI is already capable of automatically analyzing target systems, generating exploit code, and scanning massive datasets of stolen credentials, opening a whole new frontier for cybercrime.
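The scale advantage is easiest to see in the credential-scanning case, since pattern-matching over large text dumps is trivially automatable. The minimal sketch below works in the same spirit as open-source secret scanners such as gitleaks; the two patterns and the sample data are illustrative assumptions, not rules from any real tool.

```python
import re

# Illustrative patterns only; production secret scanners ship
# hundreds of vendor-specific rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_password": re.compile(r"password\s*=\s*\S+", re.IGNORECASE),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs found in a text blob."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# Hypothetical leaked-config sample.
dump = "config: password = hunter2\nkey: AKIAABCDEFGHIJKLMNOP"
print(scan(dump))
```

A human analyst triaging gigabytes of stolen data this way is slow; an AI agent running equivalent scans continuously across many targets is not, which is the asymmetry Anthropic is warning about.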

“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” – Anthropic