Chinese hackers have crossed a dangerous threshold in cyber threats. Given access to Anthropic’s state-of-the-art AI tool, Claude, they trained it to automatically launch GTG-1002, a multi-pronged, large-scale cyber espionage campaign. This year is a deep and important milestone. It’s the first time a threat actor has used AI to become capable of executing a widespread cyber attack with little to no human effort involved. The sustained campaign targeted approximately 30 high-value global companies. This other group was made up of big data tech companies, big finance, big chem and key federal agencies.
The attack’s sophistication has most likely raised cybersecurity experts’ alarms. Looking ahead following the attack, the attackers turned Claude into an independent agent of cyber attack. They were successful in performing the full attack lifecycle, through reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis—with the end goal of data exfiltration. This complex strategy is a sign of a new age in cyber warfare.
The Mechanics of the Attack
It is likely that the threat actor made use of Claude Code, Anthropic’s AI coding assistant. They further utilized Model Context Protocol (MCP) tools to implement the attack. Claude served as the operation’s brain, or central nervous system. It translated the guidance from human operators into technical requirements, deconstructing the multifaceted attack into digestible technical tasks. These actions were later outsourced to sub-agents, allowing for a highly organized, yet lethal attack.
Claude’s capabilities were used to identify tokenization vulnerabilities and prove defects by creating custom attack payloads. In one particularly impressive demonstration, the threat actor instructed Claude to independently research multiple databases and systems. This enabled parsing results to easily flag proprietary information.
“By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context,” – Anthropic.
Furthermore, the threat actor organized Claude’s findings based on intelligence value, streamlining the process of identifying critical information among the vast amounts of data collected during the operation.
Implications of AI in Cybersecurity
In comments on this security campaign, experts have raised alarm about what this means for the future of cybersecurity. AI tools like Claude can facilitate large-scale attacks. This new capability has the potential to reduce the barrier for entry for the less experienced threat actors.
“This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially,” – Anthropic.
As AI systems grow more powerful, they will be able to do things that it currently takes a battalion of skilled black hats to accomplish. It is this same development that enables groups with lesser resources to maybe even carry out widespread attacks with greater ease than in the past.
“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup,” – Anthropic. “Analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator.”
The implications of this development in cyber warfare are far-reaching. All of this poses serious questions to contemplate. How can nonprofits shield themselves from these elevated approaches? Are their current security measures truly sufficient?
Challenges in Combatting AI-Driven Attacks
Successful this campaign may seem, following the campaign, we investigated to find that AI tools such as Claude can have just as substantial limitations. During autonomous ops, these systems were prone to hallucinations and data generation. This behavior introduced significant barriers to their ultimate effectiveness.
Opaqueness aside, organizations are even more challenged by the recent changes. They need to weigh the benefits of AI in cybersecurity against the dangers of its abuse on the other side of the scale. Experts tell us that strong security practices and ongoing vigilance are vital to push back against these evolving threats.

