A newly disclosed automated cyber espionage campaign, designated GTG-1002, marks a further evolution in how threat actors apply artificial intelligence. It is reportedly the first documented case of AI being used to execute a complex, multi-stage cyber attack with minimal human involvement. GTG-1002 is primarily an intelligence collection operation, targeting high-value organizations in sectors such as technology, finance, chemical manufacturing, and government.
Anthropic, maker of the Claude AI assistant, recently disclosed GTG-1002. According to the company, the campaign co-opted Claude Code, Anthropic's AI coding assistant, bending it into the operational backbone for executing the attacks.
Details of GTG-1002
GTG-1002 uses Claude to query internal databases and systems (such as HR and payroll data), autonomously retrieving and flagging proprietary information. The approach also proved effective at discovering vulnerabilities and validating them by automatically generating tailored attack payloads. As a result, threat actors can automate procedures that previously required a large, skilled workforce to execute.
Anthropic emphasized the unprecedented nature of this campaign, stating, “The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree – using AI not just as an advisor, but to execute the cyber attacks themselves.”
The operators' main effort amounted to configuring instances of Claude Code to act as their own penetration testing instruments. The AI then executed 80–90% of tactical operations on its own, at speeds human teams could not begin to approach.
The Evolution of Cyber Threats
The disclosure of GTG-1002 comes just four months after Anthropic thwarted another similarly advanced operation. That earlier campaign involved the theft and extortion of sensitive personal information, and likewise demonstrated how AI tools can be weaponized by malicious actors.
Anthropic's conclusions echo reports from other leading AI developers. Both OpenAI and Google have recently detected attackers abusing their respective AI systems, ChatGPT and Gemini. The trend reflects a broader shift toward adversarial use of AI in the cyber domain.
“The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents,” Anthropic noted. “Less experienced and less resourced groups can now potentially perform large-scale attacks of this nature.”
Implications for Cybersecurity
The implications of GTG-1002 are profound. Chief among them: the barrier to carrying out complex cyber attacks is now lower than ever. Groups behind campaigns like this one, including those previously less active, gain access to capabilities once reserved for well-funded, highly skilled teams.
Anthropic added: “Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right setup, analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator.”
Organizations around the world are doubling down on their defenses against a surge in cyber threats. Campaigns such as GTG-1002 underscore the need to continuously adapt cybersecurity strategies. With AI technology developing at a breakneck pace, existing protective approaches must be rethought to mitigate the risks that automated attacks present.