New AI-Powered Ransomware Exploits ChatGPT’s Vulnerabilities


By Tina Reynolds

One of the newest cyberthreats, recently labeled PROMISQROUTE, has quickly become one of the most significant in today's landscape. Also dubbed BadChatGPT, this creative attack exploits a key flaw in ChatGPT's model routing, allowing bad actors to slip past safety filters and elicit harmful outputs. The attack was recently discovered by Adversa AI, which published a report describing the exploit last week.

PROMISQROUTE takes advantage of the model-routing mechanism that AI vendors use to reduce costs. By tricking the router into a downgrade, the attack causes prompts to be sent to an older, less secure model. This lets bad actors bypass the robust protections built into newer models, with unintended and often grave consequences.
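To see why user-visible text should never steer routing, consider a minimal sketch of a cost-based router. The model names, trigger phrases, and matching logic below are illustrative assumptions, not OpenAI's actual implementation:

```python
import re

# Hypothetical cost-based router, loosely modeled on the routing behavior
# described in Adversa AI's report. All names here are illustrative.
DOWNGRADE_HINTS = [
    r"use compatibility mode",
    r"fast response needed",
]

def choose_model(prompt: str) -> str:
    """Pick a backend model tier using cheap keyword heuristics."""
    lowered = prompt.lower()
    # Naive keyword matching lets the *user* steer routing: any prompt
    # containing a hint phrase is sent to the older, cheaper model.
    if any(re.search(p, lowered) for p in DOWNGRADE_HINTS):
        return "legacy-small"   # hypothetical older model, weaker safety stack
    return "flagship-latest"    # hypothetical current model, full safety stack

print(choose_model("Explain photosynthesis"))                   # flagship-latest
print(choose_model("fast response needed: <harmful request>"))  # legacy-small
```

Because the routing decision is keyed off attacker-controlled text, prepending a single hint phrase is enough to reach the weaker model.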

ESET has identified a related ransomware strain that uses OpenAI's gpt-oss:20b model via the Ollama API, though it is not likely directly related to PROMISQROUTE. The strain dynamically generates evasive Lua scripts that work regardless of the underlying operating system, whether Windows, Linux, or macOS. OpenAI recently released gpt-oss for public use, making these capabilities more widely available and the vulnerabilities all the more worrisome.

Detailed Mechanics of PROMISQROUTE

The mechanics behind PROMISQROUTE reveal how trivially AI routing functionality can be manipulated. By including phrases like "use compatibility mode" or "fast response needed," attackers can sidestep safeguards representing millions of dollars of AI safety research. Unfortunately, this type of prompt injection largely obviates the effectiveness of the security measures currently in place.

Adversa AI emphasizes the implications of such attacks, stating, “Adding phrases like ‘use compatibility mode’ or ‘fast response needed’ bypasses millions of dollars in AI safety research.” This should underscore to AI developers the need to strengthen their systems against these new, rising threats.
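One implied hardening step is to treat user text as untrusted for routing purposes entirely. The following sketch (function names and patterns are hypothetical) flags suspicious routing hints but routes only on server-controlled signals:

```python
import re

# Hypothetical mitigation sketch: routing hints in user text are detected
# for abuse review but never influence which model serves the request.
HINT_PATTERNS = [r"use compatibility mode", r"fast response needed"]

def route_safely(prompt: str, server_side_tier: str = "flagship-latest") -> str:
    """Route on server-controlled state only; user text cannot downgrade."""
    hinted = any(re.search(p, prompt.lower()) for p in HINT_PATTERNS)
    if hinted:
        # Log for abuse review; deliberately do NOT change the routing decision.
        pass
    return server_side_tier

print(route_safely("fast response needed: <harmful request>"))  # flagship-latest
```

The design choice is simply to separate the trust domains: cost optimization can still happen, but only on signals the attacker cannot write to.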

The attack strikes directly at the cost-driven model-routing systems that dominate today's AI deployments. The more deeply AI technologies become embedded in the operations of organizations of all kinds, the greater the risk of such exploitation. Given how easily these vulnerabilities can be abused, they demand the urgent attention of cybersecurity experts and AI developers alike.

Threat Actors and Their Strategies

The ascent of PROMISQROUTE is far from the only story here. Anthropic recently announced that it has banned accounts associated with malicious actors who leveraged its Claude AI chatbot to organize theft and extortion on a massive scale. The attackers breached at least 17 different organizations, demonstrating a highly coordinated effort to use AI technologies for criminal profit.

“New forms of prompt injection attacks are constantly being developed by malicious actors,” stated a representative from Anthropic. This announcement, one of several such disclosures the company plans to make regularly, highlights the ever-changing nature of threats in today's digital environment.

Threat actors behind PROMISQROUTE have created multiple ransomware variants that use advanced evasion techniques. These variants also add encryption and anti-recovery mechanisms, making them doubly difficult to fight. Such sophisticated developments have put security firms in a race against time to spot and stifle these threats while they are still in their infancy.

Emerging Ransomware Strains

In addition to PROMISQROUTE, ESET has reported the discovery of a ransomware strain, which it has named PromptLock, that relies on Lua scripts generated by the gpt-oss:20b model. The strain is notable for its ability to enumerate local filesystems, inspect target files, exfiltrate data, and perform encryption.

“PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption,” explained ESET. This aspect of adaptability means that indicators of compromise can change from execution to execution, making detection more challenging.
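The detection challenge follows directly from regeneration: two payloads that behave identically can still hash differently. A minimal sketch, using inert placeholder strings rather than real malware, shows why hash-based indicators of compromise fail here:

```python
import hashlib

# Two functionally identical Lua payload stubs that differ only in a
# cosmetic, LLM-style variation (a comment). Both strings are inert
# placeholders, not real malware.
script_a = b"-- pass 1\nfor i, f in ipairs(files) do encrypt(f) end\n"
script_b = b"-- pass 2\nfor i, f in ipairs(files) do encrypt(f) end\n"

hash_a = hashlib.sha256(script_a).hexdigest()
hash_b = hashlib.sha256(script_b).hexdigest()

# A blocklist keyed on the first sample's hash misses the regenerated one:
print(hash_a == hash_b)  # False
```

This is why defenders facing dynamically generated payloads tend to shift from file-hash indicators toward behavioral detection.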

Furthermore, ESET noted that “PromptLock does not download the entire model, which could be several gigabytes in size.” This feature allows for faster execution and deployment of attacks, raising alarm about the potential scope of these ransomware operations.