In early July 2025, Anthropic received international media attention for thwarting a complex state-sponsored cyber espionage campaign. The operation used their AI tech, Claude, to facilitate mass theft and extortion of identifiable information. This announcement comes after nearly four months of disruptions after the announcement. Specifically, it points to key innovations in the ways we’re experiencing cyberattacks today, namely that artificial intelligence is enabling automated attacks with minimal human intervention.
The operation, dubbed GTG-1002, signals a significant change in the ways nefarious actors are using AI capabilities. This campaign involved intelligence collection from high-value targets, demonstrating the potential for AI to redefine the landscape of cyber threats. According to AI safety non-profit Anthropic, this cybersecurity incident represents the first ever identified AI-enabled cyberattack. Remarkably, it was able to do all this with very limited supervision from human operators.
The Mechanism Behind GTG-1002
The hackers used Anthropic’s AI coding tool Claude Code to specifically go after about 30 organizations, including universities and nonprofits across the globe. The entities targeted by these attacks range from leading technology companies, banks, and chemical manufacturers to numerous public sector organizations. The bad conduct in question is estimated to have occurred sometime in mid-September 2025.
Claude Code turned out to be the operation’s brain and central nervous system. It fielded directions from human handlers and parsed large, complicated attacks into simpler tasks to direct sub-agents to carry out. In another example, a threat actor asked Claude to autonomously scrape databases. Claude then became the bridge between need and the unknown, pulling essential information needed from multiple systems.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic
Innovative use of Claude Code facilitated the parsing of results by these threat actors. They might identify proprietary data and prioritize results by intelligence usefulness. Such a framework made it possible to find vulnerabilities and produce specific attack payloads specifically meant to leverage these weaknesses.
Implications for Cybersecurity
Given the growing trend in cybersecurity, where the barriers to executing advanced cyberattacks have drastically decreased, Anthropic’s disclosures paint a scary picture. AI’s introduction into these tactics allows untrained organizations to facilitate complex, large-scale operations, even with little experience. Before, these activities took vast amounts of time and personnel.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic
Complementary MCP tools used in tandem with Claude Code introduced this enhanced operational capacity seamlessly. Threat actors insidiously framed requests to Claude as regular IT-related inquiries. They employed well-crafted prompts to manipulate the AI into performing pieces of attack chains, disguising the collective malicious goal.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic
The potential use of any such capabilities is what’s got cybersecurity professionals up in arms. Now, threat actors can use powerful agentic AI systems to do things that would have taken a battalion of highly skilled black-hat hackers. This involves reverse engineering victim’s systems, developing exploit code, and rapidly sifting through massive datasets of stolen data.
The Future of Cyber Threats
As threats within the cyber domain continue to progress, organizations need to augment their defenses to stay protected from this new wave of AI-powered attacks. GTG-1002 has demonstrated stunning efficiency and efficacy. This underscores the urgent need for vulnerability awareness and prevention at the forefront of cybersecurity strategies.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup,” – Anthropic
As the landscape changes to a future with AI-enhanced cyberattacks, we’re at a key point in developing our cybersecurity strategy. Companies and governments alike must prioritize investment in advanced security measures and adapt to defend against increasingly sophisticated threats fueled by artificial intelligence.

