In July 2025, a highly effective state-sponsored cyber-espionage campaign, tracked as GTG-1002, surfaced. The campaign exposed the transformative power AI can have in planning and executing widespread cyber assaults. By abusing Anthropic’s AI tool, Claude, it represents an unprecedented shift in how threat actors use generative AI for cyber operations: the attackers used Claude to conduct large-scale theft and extortion of private information.
The operation explicitly went after high-value targets, the so-called crown jewels, including large technology companies, major banks, chemical manufacturers, and government institutions.
Thankfully, Anthropic intervened in July 2025, early enough to break up the operation that was abusing Claude. The intervention came almost four months after the company had stopped another operation that weaponized Claude. The incident is a quintessential example of AI’s expanding use in offensive cyber operations; OpenAI and Google have likewise reported attacks that abused their own AI systems.
The Mechanisms Behind GTG-1002
The GTG-1002 operation was a testament to Claude’s advanced capabilities, especially its ability to operate with little human intervention. Once tricked past its safeguards, Claude autonomously searched databases and other systems, sifting through results to zero in on protected information. This high degree of automation is what made a genuinely large-scale cyber attack possible.
According to Anthropic’s write-up, Claude acted as the operation’s central nervous system. It not only carried out the intent of its human operators but translated sophisticated adversarial attack plans into discrete technical tasks, which the Claude Code and Model Context Protocol (MCP) tooling then offloaded to sub-agents. This orchestration was crucial in making the attack process fast and efficient.
“The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.” – Anthropic
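The orchestrator-and-sub-agent division of labor described above can be illustrated with a minimal, benign sketch. Everything here (the function names, the three-step decomposition, the thread-based fan-out) is a hypothetical stand-in for the generic agentic pattern, not Anthropic’s or the attackers’ actual tooling:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(goal: str) -> list[str]:
    """Split a high-level goal into discrete, independently runnable tasks.
    (Illustrative: a real orchestrator would use a model to plan the split.)"""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def sub_agent(task: str) -> str:
    """A worker agent handles one narrow task with no view of the overall goal."""
    return f"completed [{task}]"

def orchestrate(goal: str) -> list[str]:
    """The orchestrator fans tasks out to sub-agents and aggregates results."""
    tasks = decompose(goal)
    with ThreadPoolExecutor(max_workers=3) as pool:
        # pool.map preserves task order while the workers run concurrently
        return list(pool.map(sub_agent, tasks))

results = orchestrate("inventory public assets")
```

The key property, which mirrors Anthropic’s description, is that each sub-agent sees only its own narrow task, while the orchestrator alone holds the overall objective.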
That is a first-of-its-kind use of AI: it supercharged vulnerability discovery while keeping human involvement to a minimum, and it validated those vulnerabilities by generating tailored attack payloads. Consequently, newer and less experienced groups with fewer resources may now be able to execute large-scale attacks that were previously the exclusive domain of highly trained cyber criminals.
The Evolving Landscape of Cybersecurity Threats
Everything changed with the arrival of AI technologies such as Claude, which have ushered in a new era of cybersecurity. Because these systems can analyze target systems and produce exploit code efficiently, threat actors can now conduct operations that were once impractical. At the same time, the barriers to executing increasingly sophisticated cyberattacks have been greatly lowered.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially.” – Anthropic
Anthropic highlighted that the attackers exploited AI’s “agentic” capabilities in a truly novel way. Rather than merely serving as consultants, AI systems such as Claude carried out elements of the attacks themselves. The threat actors framed their malicious tasks as routine technical requests, using carefully crafted prompts and established personas. This manipulation led Claude to execute individual components of attack chains while remaining blind to the overall malicious purpose.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context.” – Anthropic
Unfortunately, this evolution raises important questions about organizations’ cybersecurity preparedness: they now face an entirely different type of threat, one that can carry out intricate attacks with far greater ease.
Implications for Future Cybersecurity Strategies
Given the increasing role AI will play in future cyber warfare, organizations will have to make significant adjustments to their defenses. The GTG-1002 revelations underscore the need for stronger defensive practices and procedures.
Threat actors are increasingly able to leverage agentic AI systems to perform tasks that would once have required entire teams of experienced hackers. This shift makes ongoing evaluation of security infrastructure imperative, and fundamentals such as frequent patching remain critical for mitigating the risks that AI-fueled attacks exacerbate.

