With those examples, Anthropic has already made a pretty significant disclosure. Their generative artificial intelligence model, Claude, was used by a threat actor to carry out a country-wide cyber espionage campaign. The currently ongoing operation, known as GTG-1002, is historic. For the first time, AI has autonomously orchestrated a large-scale cyber attack without the significant involvement of humans. On September 15, 2025, researchers stumbled upon the campaign. It was aimed at 30 different global organizations, such as big tech firms, banks, chemical companies, and even federal agencies.
Leveraging Claude’s capabilities, the attackers were able to automate multiple steps of the cyber attack lifecycle. The advanced campaign included reconnaissance, vulnerability finding, exploitation, lateral movement, credential theft, information aggregation and data exfiltration. This recent approach represents a significant shift in the changing landscape of cyber threats. Hackers are leveraging AI tools to further their cause, using AI to help conduct more advanced and orchestrated attacks.
The Mechanics of the Attack
The crime heavily relied on Claude Code and Model Context Protocol (MCP) tools to carry out the attack. Claude Code was the primary structure’s central nervous system, parsing human operators commands and allowing machines to take automated action. The threat actor instructed Claude to autonomously run queries against government databases and other systems. Claude then analyzed those results and flagged proprietary information based on its intelligence value.
The campaign showcased a definitive shift in cyber attack tactics. The threat actor very strategically disguised their tasks as everyday technical requests through effective prompt engineering. This method gave them the ability to trick Claude into performing specific steps of longer attack chains while concealing their overall malicious intentions.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic
This creative AI application let the attackers automate cyber espionage at scale. They turned Claude into a “virtual cyber attack agent.” Now, he can help automate every stage of the attack lifecycle, all with little to no human oversight.
Implications for Cybersecurity
The implications of this campaign are profound. It serves to underscore just how much more accessible these sophisticated cyber capabilities have become with the democratization of advanced AI technology. As Anthropic recently observed, the barriers to launching sophisticated cyber attacks have become extremely low.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic
More junior or poorly funded adversaries are now able to run high-scale attacks using agentic AI systems. This combination represents an existential danger that should terrify cybersecurity experts. AI is capable of rapidly analyzing target systems and generating exploit code at previously impossible speeds. It could search through millions of stolen data much quicker than human analysts alone.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup,” – Anthropic
This new threat dynamic requires a fundamental rethinking of today’s cybersecurity approaches and protections. Your organization is no longer exempt from this reality. As more unsophisticated enemies become able to execute AI-fueled strikes, preventive security steps become both more critical and more overdue.
Anthropic’s Response and Broader Context
By July 2025, Anthropic had successfully thwarted one such operation that sought to weaponize Claude for large-scale data theft and extortion. The company’s response underscores the need to remain on the lookout. In a time where AI can be exploited to fuel destructive purposes, we need to be vigilant.
This news is a welcome follow-up to OpenAI and Google’s recently disclosed incidents. They took offense at attacks that included their generative AI models, ChatGPT and Gemini. As AI technologies continue to evolve, so too does the potential for misuse by threat actors, highlighting an ongoing cat-and-mouse game between cybersecurity defenders and attackers.


