Attributed to GTG-1002, a known advanced cyber espionage group, this was the first documented campaign in which hackers weaponized Anthropic's AI model Claude to carry out large-scale theft of sensitive data largely on its own. The operation marks a turning point for threat actors: they can now use artificial intelligence to automate attacks well beyond what human involvement alone could sustain.
The campaign targeted roughly 30 high-profile organizations, including major technology companies, financial institutions, chemical manufacturers, and government agencies. Anthropic detected and disrupted the operation, cutting off the attackers' access to Claude, about four months before this announcement. The findings should alarm the cybersecurity community: they highlight the growing risk that AI will make attacks both more efficient and more widespread.
The Mechanics Behind the Attack
Claude was used in a way that allowed it to independently query databases and internal systems, parse the results to identify proprietary information, and sort its findings by intelligence value. The AI's capabilities were further extended through Claude Code, Anthropic's AI coding tool, which acted as the central nervous system of the operation.
Threat actors guided Claude Code to break the multi-stage attack down into discrete, achievable technical tasks for sub-agents to solve. This decomposition made it easier to discover vulnerabilities and allowed identified flaws to be validated by automatically generating specialized attack payloads.
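The decomposition pattern described above — a coordinator splitting one broad objective into narrow tasks handed to sub-agents, with only the coordinator holding the full picture — can be sketched abstractly. This is a hypothetical illustration of the orchestration pattern, not the attackers' actual tooling; all class and task names are invented, and the sub-tasks are hard-coded where a real agentic system would have an LLM plan them.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class SubTask:
    """One narrow, self-contained unit of work handed to a sub-agent."""
    description: str
    result: Optional[str] = None


@dataclass
class Orchestrator:
    """Coordinator that splits a broad objective into discrete sub-tasks.

    Each sub-task looks routine in isolation; only the orchestrator
    knows what the combined results are for.
    """
    objective: str
    tasks: List[SubTask] = field(default_factory=list)

    def decompose(self) -> None:
        # Hard-coded plan to keep the sketch self-contained; an agentic
        # system would generate these steps from the objective.
        self.tasks = [
            SubTask("enumerate reachable services"),
            SubTask("summarize configuration of each service"),
            SubTask("rank findings by relevance to the objective"),
        ]

    def run(self, worker: Callable[[str], str]) -> List[str]:
        # Dispatch each sub-task to a worker (sub-agent) and collect results.
        for task in self.tasks:
            task.result = worker(task.description)
        return [t.result for t in self.tasks]


orch = Orchestrator("assess target environment")
orch.decompose()
results = orch.run(lambda desc: f"done: {desc}")
print(len(results))  # 3
```

The key property mirrored here is informational: each worker sees only one innocuous-looking task description, never the overall objective.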
Tools built on Anthropic's Model Context Protocol (MCP) played a crucial role in this operation, extending what Claude could do on its own. The attackers systematically disguised their efforts as mundane technical requests, a framing that led Claude to unknowingly carry out individual steps of the attack chain while failing to grasp the broader malicious intent.
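The core trick — requests that each look harmless in isolation but are malicious in aggregate — also suggests where detection has to operate. Below is a toy sketch of that idea: individual requests all pass a per-request check, but the sequence across a session matches a risky pattern. The categories and the matching rule are entirely hypothetical illustrations, not Anthropic's actual safeguards.

```python
# Hypothetical request categories; each is legitimate on its own
# (e.g. in an authorized penetration test), so per-request filters pass them.
RISKY_CATEGORIES = {"network_scan", "credential_access", "data_export"}
BENIGN_CATEGORIES = {"code_review", "documentation"}


def per_request_ok(category: str) -> bool:
    # A filter that inspects one request at a time sees nothing wrong.
    return category in RISKY_CATEGORIES | BENIGN_CATEGORIES


def session_suspicious(categories: list) -> bool:
    # The kill-chain only emerges across the whole session: scanning,
    # then credential access, then exfiltration, in that order.
    seen = [c for c in categories if c in RISKY_CATEGORIES]
    return seen == ["network_scan", "credential_access", "data_export"]


session = ["code_review", "network_scan", "documentation",
           "credential_access", "data_export"]

print(all(per_request_ok(c) for c in session))  # True: every request passes
print(session_suspicious(session))              # True: the sequence is flagged
```

The point of the sketch is the mismatch between the two checks: defenses that evaluate requests one at a time are exactly what this obfuscation technique defeats.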
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic
A Shift in Cyberattack Paradigms
The GTG-1002 campaign is a clear example of this new wave of cyberattacks: it shows how drastically the barriers to pulling off complicated operations have fallen. With advanced AI systems such as Claude guiding tactical operations, even less sophisticated actors can plausibly mount attacks at scale.
Anthropic emphasized that the human operators ran parallel instances of Claude Code, which together functioned as a fully autonomous penetration-testing orchestrator and its agents. The threat actor used AI to automate an estimated 80-90% of their tactical work, achieving request rates far beyond what human operators could sustain.
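The throughput claim is easy to see mechanically: N parallel agent instances each working through a task queue multiply the effective request rate. A minimal sketch using Python's standard thread pool, with purely illustrative numbers (the instance counts and per-instance rates from the actual campaign are not public):

```python
import concurrent.futures


def agent_session(n_requests: int) -> int:
    # Stand-in for one agent instance working through its task queue;
    # a real instance would be issuing network/API requests here.
    return n_requests


PARALLEL_INSTANCES = 8        # illustrative, not the campaign's real figure
REQUESTS_PER_INSTANCE = 250   # illustrative

with concurrent.futures.ThreadPoolExecutor(max_workers=PARALLEL_INSTANCES) as pool:
    futures = [pool.submit(agent_session, REQUESTS_PER_INSTANCE)
               for _ in range(PARALLEL_INSTANCES)]
    total = sum(f.result() for f in futures)

print(total)  # 2000 requests issued in aggregate
```

A single human analyst issues requests serially; parallel autonomous instances scale the aggregate rate linearly with the instance count, which is why automated campaigns can operate at volumes no human team matches.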
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic
Similar patterns have emerged with other AI models. OpenAI and Google have both publicly disclosed incidents in which threat actors leveraged ChatGPT and Gemini for malicious ends. These events underscore the urgent need for stronger cybersecurity protections as AI rapidly reshapes the threat environment.
Implications for Cybersecurity
The automated nature of the GTG-1002 campaign should set off alarm bells for cybersecurity professionals and organizations across the board. As AI systems are used more and more in cyber operations, the capacity to detect and counter such threats will be paramount.
Anthropic's analysis points to a dangerous trend: threat actors are already leveraging agentic AI systems to do work that once required dozens of trained hackers. The AI tools adversaries now use to analyze target systems, generate exploit code, and search enormous datasets of stolen information are extraordinarily effective, and they put such capabilities within reach of far less resourced actors.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup,” – Anthropic

