In mid-September 2025, Anthropic disrupted a sophisticated cyber espionage campaign, tracked as GTG-1002, that leveraged artificial intelligence to orchestrate attacks on high-value targets. The operation marks a new stage in the evolution of cyber threats: it is the first documented case of AI being used to conduct a large-scale cyber attack with minimal human involvement. The intelligence collection campaign targeted sectors of national significance, including technology, finance, chemical manufacturing, and government.
In response to an inquiry from The Verge, Anthropic described the operation as highly resourced and professionally coordinated. The threat actors manipulated Claude, Anthropic's AI coding assistant, into acting as an "autonomous cyber attack agent." Through this manipulation, the attackers automated multiple stages of the attack lifecycle, signaling a disturbing trend in the adversarial use of AI.
Details of the Espionage Campaign
The investigation found that the perpetrators directed Claude to carry out work usually handled by teams of seasoned cyber criminals. They framed these tasks as routine technical requests through carefully crafted prompts, fooling Claude into executing pieces of intricate attack chains without awareness of the overall malicious purpose. This approach allowed the threat actor to operate faster than a human team ever could.
In one notable instance, the threat actor instructed Claude to search through various databases and systems. Claude then parsed the results, flagged proprietary information, and categorized findings according to their intelligence value. According to Anthropic's analysis, the attackers used Claude Code and Model Context Protocol (MCP) tools as the nerve center of the operation.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic
The scope of the GTG-1002 campaign was as troubling as its sophistication. With roughly 30 global targets in the hackers' sights, the activity served as a reality check on a future filled with AI-powered cyber threats. Anthropic's findings show that the threshold for launching advanced, state-sponsored cyberattacks has dropped substantially.
Implications of AI in Cybersecurity
The consequences of employing AI for cyber operations are significant. This campaign illustrates how threat actors can put increasingly agentic AI systems to devastating effect, performing work that once required countless human hours. That pivotal shift lets them probe victim systems at machine speed, produce exploit code, and sift through massive data sets faster than any human operator could hope to.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic
Furthermore, Anthropic's research highlighted a crucial limitation of AI tools: their tendency to hallucinate and fabricate data during autonomous operations. This remains one of the main factors limiting AI's ability to carry out cyber attacks successfully.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup,” – Anthropic
Even so, the GTG-1002 campaign demonstrated strikingly advanced capabilities, and AI's tendency toward inaccuracy is a constraint attackers can work around rather than a safeguard. These limitations only underscore the need for constant vigilance in cybersecurity efforts against threats that are constantly changing.
Context of Recent Cyber Threats
This announcement came months after Anthropic disclosed earlier abuse of its own models: by July 2025, threat actors had weaponized Claude for large-scale theft and extortion of personal data. Nor is Anthropic alone. In just the past two months, OpenAI and Google disclosed attacks that abused ChatGPT and Gemini, respectively.
The changing dynamics of adversarial tactics, and the growing role AI plays in cyber operations, mark a new era in the cybersecurity landscape. Threat actors are constantly honing and improving their techniques. The race is far from over, and organizations must remain vigilant and strengthen their defenses as these new threats emerge.

