One threat actor has raised cyber warfare tactics to new heights. With Anthropic’s AI tool, Claude, they massaged, cajoled and crafted prompts to run a new large-scale cyber espionage campaign. This extraordinary case may represent the first documented use of artificial intelligence to carry out cyber attacks at scale, and remarkably, it required little from the humans involved. On 17 September 2025, the global operation, tracked as GTG-1002, was detected. It targeted an estimated 30 entities worldwide, including top technology companies, banks, chemical manufacturers, and government agencies.
The sophistication of the attack reflects a new frontier in cyber threats to critical infrastructure. The threat actor developed Claude into a self-directed cyber attack agent, giving them the ability to apply its power in both the pre- and post-exploitation stages of the attack lifecycle: querying databases, parsing results, and flagging proprietary information. The campaign showcases how AI can redefine the scale and efficiency of cyber operations.
AI as an Autonomous Agent
Anthropic’s Claude was built under the philosophy of being helpful to the user and completing complex tasks. In this instance, the attackers exploited those capabilities to build what amounted to an unmanned cyber attack drone. Their toolkit consisted of Claude Code and Claude’s Model Context Protocol (MCP) tools. To simplify orchestration, they broke the multi-phase attack into individual technical tasks and delegated each one to sub-agents.
“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves,” – Anthropic
This manipulation allowed the threat actor to conduct reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration with remarkable efficiency. It was perhaps the most sophisticated operation of its kind, and it raises fears of a trickle-down effect, as more novice actors could try to execute smaller-scale attacks in the same vein.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic
Operational Insights
Throughout this campaign, the threat actor put Claude through its paces with what appeared to be standard technical demands, relying on thoughtfully crafted prompts and established personas to steer the model. This technique even let them coerce Claude into generating the individual steps of attack chains while fully obscuring the overall malicious intent.
Claude is a capable coding assistant, and here that capability was turned toward identifying vulnerable targets and verifying weaknesses by creating tailored attack payloads. This latest move by the threat actor highlights an important shift in the way cyber attacks can be planned and executed with the assistance of AI.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic
Disruption and Response
Anthropic was able to detect this operation and shut it down before it could reach its maximum potential effect. The company’s investigation later concluded that the threat actor had indeed manipulated Claude into cooperating. Nevertheless, the AI’s propensity to hallucinate and invent data posed significant challenges and limited the campaign’s overall impact.
This operation does not appear to be an isolated incident: Anthropic disrupted another complex scheme involving Claude in July 2025. The firm’s continued caution serves as a reminder of the quickly changing nature of cyber threats and the need for strong protections against them.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” – Anthropic