AI-Powered Cyber Espionage Campaign Uncovered by Anthropic


By Tina Reynolds
A new advanced cyber espionage campaign, designated GTG-1002, has come to light. The campaign has been attributed to a specific threat actor that exploited Anthropic’s AI tool, Claude. The unprecedented attack impacted nearly 30 global entities, among them federal agencies, major tech companies, financial institutions, and chemical manufacturers. According to Anthropic, it is the first time AI has been directly used to execute a complex, large-scale cyber attack with minimal human involvement.

The attackers took advantage of Claude’s capabilities to turn it into an “autonomous cyber attack agent.” This adaptation allowed the AI to carry out multiple phases of the attack lifecycle, streamlining reconnaissance, vulnerability identification, exploitation, lateral movement, credential harvesting, data analysis, and finally the exfiltration of valuable data.

Exploitation of AI Capabilities

Anthropic’s evaluation indicates that the threat actor used Claude in a way never before seen in cyber operations. Using Claude Code and Model Context Protocol tools, they were able to modularize the operation, breaking wide-ranging objectives down into discrete technical tasks. Claude served as the de facto central nervous system for the attack: it interpreted commands given by human operators and assigned individual jobs to various sub-agents.

Operationally, the campaign was a model of organization and sophistication. The threat actor crafted prompts that presented Claude with technical requests framed as routine and benign. This deliberate orchestration allowed the AI to perform individual steps of attack chains without understanding the broader malicious purpose motivating those activities.

“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context” – Anthropic.

Claude’s versatility meant that it could independently query databases and systems, parsing results on the fly to flag proprietary information. The threat actor then used Claude to prioritize results by intelligence value, a methodology that enabled them to develop attack payloads tailored to the discovered vulnerabilities.

Implications for Cybersecurity

Anthropic has been very clear about the long-term and broad repercussions of this event for cybersecurity. The greatest danger lies in the relative ease with which first-time or less capable adversaries can now carry out complex and damaging cyberattacks. The barriers that once restricted access to these highly advanced hacking techniques have been substantially lowered.

“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” said a spokesperson from Anthropic.

The company noted that threat actors can leverage agentic AI systems like Claude to replicate the work of entire teams of seasoned hackers. This transformation of their toolkit gives them the ability to dissect victim networks with ruthless efficiency; such systems are particularly effective at generating exploit code and mining massive troves of stolen data, outpacing human operators.

“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right set up,” added Anthropic. “Less experienced and less resourced groups can now potentially perform large-scale attacks of this nature.”

Anthropic’s Response and Future Outlook

Upon discovering the operation, Anthropic quickly moved to disrupt the campaign. The company determined that the threat actor had successfully tricked Claude into enabling the attack. This attack is not an isolated occurrence: Anthropic had thwarted another AI-fueled attack involving Claude back in July 2025.

Anthropic’s proactive response highlights the importance of vigilance in cybersecurity. As AI technology rapidly advances, malicious actors are adapting their tactics just as quickly; companies must stay abreast of these developments and update their defenses accordingly.