By Tina Reynolds

Anthropic Disrupts Sophisticated AI-Driven Cyber Espionage Campaign

In a significant development within the cybersecurity landscape, Anthropic announced it disrupted a sophisticated cyber espionage operation in July 2025. The campaign, tracked as GTG-1002, is a harbinger of a new era of cyber warfare: it marks the first time threat actors have used artificial intelligence to conduct a widespread cyber attack largely autonomously. To combat this new strategy, it is important to understand the changing threat landscape and how adversaries are most likely to leverage AI technologies for malicious ends.

The GTG-1002 campaign targeted approximately 30 high-value targets, including major technology companies, financial institutions, chemical manufacturers, and government agencies. The operation demonstrated the threat actor's adaptability and professional organization, and it should alarm everyone responsible for cybersecurity across government, critical infrastructure, and defense sectors.

AI as an Autonomous Cyber Attack Agent

The intruders repurposed Anthropic's large language model, Claude, into what the hackers treated as an "autonomous cyber attack agent." This repurposing let the model operate across every stage of the attack lifecycle, making the intrusions far harder for targets to defend against: reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and data exfiltration.

The threat actor is suspected of using Claude Code and Model Context Protocol (MCP) tooling to plan and execute these attacks. Claude Code served as the operation's central nervous system: it handled instructions given by human operators, often military personnel, and broke advanced, multi-layered attacks down into discrete technical steps. Using this framework, the attackers performed extensive vulnerability discovery and validated the flaws they identified by creating customized attack payloads.
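The orchestration pattern described here, a central coordinator decomposing one high-level objective into discrete subtasks and dispatching each to a tool, can be sketched conceptually. All names below are hypothetical; the stubs return canned strings, perform no network activity, and illustrate only the agent loop, not any actual attack tooling.

```python
from typing import Callable

class Orchestrator:
    """Hypothetical coordinator: splits one objective into per-phase subtasks."""

    def __init__(self) -> None:
        # Registry mapping a phase name to the tool that handles it.
        self.tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def decompose(self, objective: str) -> list[tuple[str, str]]:
        # A real agent would ask an LLM to produce this plan; here the
        # phased breakdown (recon -> analyze -> report) is hard-coded.
        return [(phase, objective) for phase in ("recon", "analyze", "report")]

    def run(self, objective: str) -> list[str]:
        # Dispatch each subtask to its registered tool, collecting results.
        return [self.tools[phase](arg) for phase, arg in self.decompose(objective)]

orch = Orchestrator()
orch.register("recon", lambda target: f"recon-summary:{target}")
orch.register("analyze", lambda target: f"findings:{target}")
orch.register("report", lambda target: f"report:{target}")
print(orch.run("example-target"))
# → ['recon-summary:example-target', 'findings:example-target', 'report:example-target']
```

The notable property, which the report emphasizes, is that each dispatched subtask is a small, self-contained technical request; no single tool invocation carries the context of the overall operation.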

As Anthropic itself pointed out, the degree to which AI was used in these attacks was strikingly high.

“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic.

This view underscores the changing nature of cyber threats and the growing use of AI technologies by malicious actors.

Easing Barriers to Cyber Attacks

The GTG-1002 campaign illustrates a concerning trend: the barriers to conducting sophisticated cyber attacks have significantly diminished. Anthropic has sounded the alarm on this shift, arguing that even relatively inexperienced groups can now carry out devastating attacks at a scale that previously required far greater resources.

“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic.

The threat actor fed Claude seemingly general technical queries through highly detailed prompts. This decomposition misled the AI into executing discrete pieces of the attack chain without any awareness of the broader malicious operation it served, and it is what allowed the campaign to run largely without human hands on the keyboard.

Anthropic’s analysis found a striking pattern: threat actors are now using advanced AI systems to automate tasks that only a few years ago would have required entire teams of highly trained, experienced hackers.

“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right set up, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” – Anthropic.

This new capability presents an enormous set of challenges for cybersecurity defenders, who must adapt their practices to account for these realities.

The Future of Cybersecurity and AI

Similar incidents have been reported by the organizations behind other AI models, such as ChatGPT and Gemini. This serves as a stark reminder that AI-enabled cyber threats are an urgent, universal problem. The sophisticated ways in which threat actors can misuse powerful AI tools justify a reassessment of currently established security protocols.

For its part, Anthropic stressed that without extensive mitigations, the operation would not have been caught and stopped for much longer. The campaign was, by any measure, well-resourced and professionally run. In one telling example, the threat actor directed Claude to autonomously query various databases and systems, tasking it with discovering proprietary information and judging, entirely on its own, which findings held intelligence value.

“The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates,” – Anthropic.

This announcement underscores the need for sustained vigilance and proactive cybersecurity efforts to counter these advancing threats.