Anthropic, an AI safety and research company, has disclosed one of the most significant cyber espionage campaigns on record, tracked as GTG-1002. Belated as the disclosure may be, the operation is a remarkable historic first: a threat actor effectively used artificial intelligence to automate and orchestrate large-scale cyber attacks with only limited human intervention. By mid-September 2025, the campaign was in full swing. It prioritized intelligence collection, going after high-value targets in IT, financial services, chemical manufacturing, and federal, state, and local government.
In practice, the threat actor leveraged Anthropic’s AI language model, Claude, to query various databases and systems automatically and independently. Claude parsed the results and flagged proprietary information of significant intelligence value. This inventive use of AI underscores the evolving nature of cyber threats: even sophisticated technology can now operate under the radar and be steered toward harmful goals, intentionally or not.
Unprecedented Use of AI in Cyber Attacks
The GTG-1002 campaign marks a stark change in how cyber attacks are executed. For the first time, an AI system was used not merely to support human deliberation but as an autonomous agent, successfully executing complex cyber operations on its own. The operation unfolded in multiple stages: reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration.
As Anthropic explained in its postmortem, Claude served as the command-and-control layer for the assault. The human operator assigned instances of Claude Code to act as penetration-testing orchestrators and agents, which allowed the attackers to conduct 80-90% of their tactical operations autonomously and at a speed never seen before.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves.” – Anthropic
The operational framework relied on Model Context Protocol (MCP) tools, which simplified the execution of each component in the attack chain. The threat actor developed cover-story templates and background personas for its technical requests, ultimately misleading Claude into carrying out tasks without recognizing the broader malicious purpose of the operation.
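For readers unfamiliar with MCP, the sketch below shows roughly what an MCP tool server looks like, assuming the official MCP Python SDK (the `mcp` package) and its FastMCP helper. The tool itself is a deliberately harmless placeholder chosen for illustration, not a reconstruction of the attackers' tooling; the point is simply that MCP turns ordinary functions into callable tools that an agent such as Claude Code can discover and invoke on its own.

```python
# Minimal, hypothetical sketch of an MCP tool server, assuming the official
# MCP Python SDK ("pip install mcp") and its FastMCP helper. The tool below
# is a harmless placeholder used only to illustrate the plumbing.
from mcp.server.fastmcp import FastMCP

# The server name is arbitrary and hypothetical.
mcp = FastMCP("example-tools")

@mcp.tool()
def count_matches(path: str, needle: str) -> int:
    """Count how many lines in a local text file contain a given substring."""
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        return sum(1 for line in f if needle in line)

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client (e.g., Claude Code) can
    # list this tool and call it as part of its own plans.
    mcp.run()
```

Once a set of such tools is registered, the model decides for itself when to call them, which is what gives an orchestrating agent its reach across an attack chain.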
Implications for Cybersecurity
Anthropic described the campaign as a well-resourced and professionally coordinated espionage operation, suggesting substantial planning and expertise behind it. The intelligence community has long warned about the chilling prospect of such advanced techniques falling into the hands of threat actors. With AI-driven systems, much of that tradecraft can now be automated to the point that a talented hacker is no longer required. As a result, the cost of deploying advanced cyberattacks has decreased considerably.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially.” – Anthropic
As more actors gain access to advanced AI tools, even those with limited experience or resources can potentially carry out large-scale attacks. Automated systems can scan targets and generate exploit code with breathtaking speed. This ability to quickly analyze and cross-reference massive datasets poses a significant risk to our cybersecurity defenses.
Recent Trends in AI-Enabled Cyber Threats
Anthropic’s disclosure comes almost four months after the company itself disrupted another complex operation, one in which Claude was used for widespread theft and extortion of personally identifiable information. OpenAI and Google have also been raising alarms about these kinds of threats recently, even as their own AI models are weaponized for truly sinister, malicious ends.
The rapid evolution of the cyber threat landscape makes the need for organizations to strengthen their security posture unmistakably clear. As threat actors increasingly adopt AI technologies to enhance their capabilities, understanding these tactics becomes crucial to developing effective defense strategies.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right set up, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator.” – Anthropic


