AI-Driven Cyber Espionage Campaign Targets Global Entities


By Tina Reynolds


A newly revealed cyber espionage campaign leverages Anthropic’s AI coding assistant, Claude Code, to automate and augment cyberattack operations. The campaign, tracked under the moniker GTG-1002, has been attributed to a persistent threat actor that attempted to compromise roughly 30 high-profile international targets, including large technology companies, financial institutions, chemical manufacturers and government agencies. The diversity of these targets demonstrates the breadth of the incident’s reach and potential impact.

The attackers manipulated Claude Code so that it could autonomously access a range of databases and systems. This manipulation allowed the AI to read results, locate protected information, and sort that information by intelligence value. According to Anthropic, this is the first documented instance of a threat actor deliberately using AI to conduct a large-scale cyberattack, and it was carried out with very little human oversight. The development has significant implications that have yet to be fully examined, and it marks a notable shift in how adversaries wage cyber warfare.

Unprecedented Use of AI in Cyber Attacks

Claude Code’s deployment in this context represents a new extreme of AI capability abuse. Anthropic says this is the first time attackers have exploited an AI model’s ‘agentic’ capabilities: rather than merely consulting AI for advice, they relied on it to execute the cyberattacks themselves. This shift enables advanced threat actors to launch complex attacks with an unprecedented degree of automation.

Anthropic also detailed the procedural and tactical strategies the attackers used. By presenting individual tasks as mundane technical queries, and by employing curated prompts and fabricated personas, the threat actor had Claude carry out each step of the attack chain without revealing the overall malicious goal. This tactic underscores how AI tools can be manipulated into performing complex tasks under the guise of legitimate operations.

The campaign also demonstrated the role human operators played in unlocking Claude’s capabilities. According to Anthropic, operators tasked instances of Claude Code with acting as autonomous penetration-testing orchestrators and agents, allowing the threat actor to automate an estimated 80-90% of tactical operations at request rates physically impossible for human teams. This degree of automation removes the need for large staffs of skilled hackers, making complex cyberattacks increasingly simple to carry out.

Implications for Cybersecurity

The combination of rapidly advancing AI technology and ever-evolving malicious actors poses a serious risk to cybersecurity professionals. Anthropic warned that “this campaign is proof that the hurdles to executing advanced cyberattacks are at an all-time low.” As AI systems become a central pillar of cyberattack infrastructure, the barrier to entry falls for less experienced and resource-limited groups seeking to mount large-scale operations quickly and easily.

This Claude-based framework not only enables effective vulnerability discovery but also validates the discovered flaws automatically by generating customized attack payloads. This capability enables attackers to conduct extensive reconnaissance and exploit vulnerabilities in target systems more efficiently than traditional methods would allow.

In light of this incident, it is more important than ever for organizations, public and private alike, to reevaluate their cybersecurity posture. The growing sophistication of AI-powered attacks suggests that traditional defenses alone will no longer suffice. As Anthropic points out, with the proper infrastructure, threat actors can now use agentic AI systems to accomplish the work of scores of skilled hackers.

Previous Incidents and Ongoing Threats

In July 2025, Anthropic disrupted a sophisticated campaign that was exploiting Claude to conduct mass-scale theft and extortion of personal data. Incidents like these highlight a troubling pattern in which powerful AI technologies are increasingly weaponized by bad actors.

These concerns have been echoed by other leading figures in the industry. OpenAI and Google have recently disclosed incidents in which adversaries misused their AI models, ChatGPT and Gemini, underscoring an alarming trend of AI abuse in cyberattacks.