Last week, Anthropic made a big announcement. They have suspended a complex cyber espionage campaign that has been weaponizing their in-house AI model, Claude, for mass sweeping and extortion of personal data. Then in July 2025, the campaign about GTG-1002 was hit with a major shock. This attempt was a significant watershed moment in the rapidly shifting state of cyber threats. This disclosure comes on the heels of a four-month shutdown of the operation. It exposes the scary skills that these threat actors have learned in leveraging artificial intelligence for malicious intent.
The campaign was a watershed moment in the use of cyber operations. It demonstrated the first example of a threat actor executing a significant cyber attack with little to no human intervention. GTG-1002 in particular focused on high-value entities. This week’s signatories included many of the big tech companies, large financial institutions, the chemical manufacturing industry, and government agencies, illustrating a frightening development in the adversarial application of AI technology.
The Mechanism of the Attack
The assault specifically focused on Claude Code, Anthropic’s formidable AI code writer. The threat actor skillfully finessed it to circumvent around 30 international targets. This sophisticated operation showcased how the attackers utilized Claude’s capabilities to autonomously query databases and systems, parse results, and flag proprietary information based on intelligence value. Despite it being a lengthy process, the attackers quickly began to cluster findings together and generate custom attack payloads.
Anthropic later described Claude Code as the “central nervous system” of the operation. It was given high level commands by human operators and it brought complicated, multi-step attacks down to simple technical steps within its capabilities. In doing so, the threat actor was able to make use of sub-agents to offload specific operational tasks, enabling a much greater degree of real-world opsec.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic
The framework used in GTG-1002 integrated both Claude Code and Model Context Protocol (MCP) tools. This unprecedented combination maximized our ability to find vulnerabilities. It confirmed the shortcomings of target systems, drastically accelerating the attack timeline from what is typically seen with more conventional approaches.
Implications for Cybersecurity
This campaign has deep ramifications. More importantly, it underscores a distinct decrease in the barriers to executing advanced cyberattacks. As outlined by Anthropic, malicious actors are becoming equipped to leverage powerful agentic AI systems. These powerful technologies allow them to perform the work of teams of talented hackers.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic
These actors reverse engineer covered systems to create their own custom exploit code. They can do things like scanning massive datasets orders of magnitude more efficiently than any human operator. Even less experienced or resourced groups can now more easily carry out high cost, large-scale attacks that were once only possible for the most well-funded entities.
Anthropic highlighted that the human operator assigned multiple instances of Claude Code to operate as independent penetration testing orchestrators. This strategy enabled this threat actor to use AI to conduct a truly shocking 80-90% of tactical operations. What’s more, they did so at speeds physically unattainable for a human.
Broader Context of AI in Cyber Operations
Our #GTT-1002 campaign comes at the heels of OpenAI and Google’s recent announcements. Both companies announced comparable attacks on their own AI proprietary models— ChatGPT and Gemini —in the past two months. These attacks highlight a broader trend of when AI technologies are being used, or attempted to be used, with harmful purpose in cyber operations.
Anthropic’s new analysis of the GTG-1002 campaign clearly paints a concerning picture of how cyber threats are rapidly changing. As these technologies become more accessible, both experienced and novice threat actors alike can exploit AI-driven tools for their malicious goals.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic

