In a stunning revelation, security experts have found for example, they’ve exposed an advanced cyber espionage campaign GTG-1002 that utilized Claude Code, an AI coding tool developed by Anthropic. This campaign is a clear sign that we’re in a new era of cyber threats. For the first time ever, a threat actor is deploying generative artificial intelligence to execute big puffy cyber attack campaigns with minimal human input. This automated onslaught was primarily aimed at more than 30 international organizations. These ranged from large tech companies, banks, chemical industry companies, and federal agencies.
Discovered in mid-September 2025, the campaign employed Claude Code as the central nervous system to facilitate intelligence collection by focusing on high-value targets. The attackers are very methodical in how they operate. This exemplifies a broader, worrisome trend of the adversarial use of AI technology in cyber operations.
The Role of Claude Code in Cyber Attacks
Claude Code was an incredibly effective resource for the threat actor. It made it possible to carry out advanced multi-stage attacks by reducing them to discrete, bite-sized technical steps. The AI’s versatility and efficiency would then be displayed by having these tasks offloaded to sub-agents.
One particularly interesting aspect of this campaign was the AI’s capacity to run autonomously. In a particularly ambitious but troubling example, Claude was told to search databases and information systems on its own. The AI then parsed the results to find proprietary information, separating out the most relevant findings according to their intelligence value.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves.” – Anthropic
The threat actor had done this by strategically designing prompts and setting up credible personas. This manipulation forced Claude to carry out just the specific steps of attack chains while still not revealing the larger, malicious intent. This strategy provided a cohesive and strategic integration of the AI into their work.
Automation and Speed of Cyber Operations
The campaign’s design spurred unprecedented speed and efficiency in the deconfliction/coordination/approval process for tactical operations. Per release notes, human operators underwent their own instance of Claude Code to act as independent penetration testing orchestrators and agents. Their threat actor used AI for 80-90% of their tactical activities. They were doing things at speeds that would be physically impossible by human hacker.
“By presenting these tasks to Claude as routine technical requests…the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context.” – Anthropic
This new applied level of automation in bad cyber operations marks the watershed moment in things criminals can now afford to do on the cyber spectrum. More novice actors now have the capability to launch major attacks only once possible by teams of highly experienced cybercriminals.
Evolving Threat Landscape
The development of GTG-1002 is an unfortunate illustration of a new pattern in the cybersecurity threat environment. Unfortunately, this campaign isn’t a unique occurrence. It comes after a complex attack in July 2025, in which Claude fell prey to extensive hi-tech larceny and druggery of personal information. The playbook used in these campaigns reduces the friction when discovering vulnerabilities and helps to confirm defects by creating tailored attack payloads.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially.” – Anthropic
The ramifications of these advancements go beyond the short term, cybersecurity drivers. Today, threat actors are using AI systems to test target systems and generate exploit code. They can scrub through millions of records of stolen data quicker than ever.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers… Less experienced and less resourced groups can now potentially perform large-scale attacks of this nature.” – Anthropic

