Vulnerabilities in ServiceNow AI Agents Exposed by Second-Order Prompt Injection

Recent discoveries by AppOmni have disclosed alarming vulnerabilities in ServiceNow’s AI agents. These problems stem primarily from a method known as Second-Order Prompt Injection. This approach allows attackers to abuse the agent-to-agent discovery functionality of Now Assist. Consequently, they can more easily carry out unauthorized actions that can expose sensitive corporate data to the public….

Tina Reynolds Avatar

By

Vulnerabilities in ServiceNow AI Agents Exposed by Second-Order Prompt Injection

Recent discoveries by AppOmni have disclosed alarming vulnerabilities in ServiceNow’s AI agents. These problems stem primarily from a method known as Second-Order Prompt Injection. This approach allows attackers to abuse the agent-to-agent discovery functionality of Now Assist. Consequently, they can more easily carry out unauthorized actions that can expose sensitive corporate data to the public.

Attackers can employ the Second-Order Prompt Injection to successfully mislead AI agents. This manipulation gives them the ability to duplicate sensitive information, exfiltrate data, alter records, and escalate privileges. The root of the problem comes from the fact that we are deploying large language models (LLMs) such as Azure OpenAI and Now LLM. These models egregiously make agent discovery easier. Unfortunately, the default settings of most AI agents open the door to exploitation. These environments pre-define the agents’ teams through their settings.

Mechanism of Second-Order Prompt Injection

Second-Order Prompt Injection occurs when an AI agent consumes information the user did not directly input. This happens despite the fact that it is not the agent’s primary role to retrieve that data. One stand-out capability that extends the system’s powers is the agent discovery feature. It can inadvertently twist an innocent mission given to one actor into a dangerous operation.

AppOmni notes that through this method, “an attacker can redirect a benign task assigned to an innocuous agent into something far more harmful by employing the utility and functionality of other agents on its team.” This amplifies the disturbing possibility for abuse baked into these setups.

Organizations employing Now Assist’s AI agents should be on guard. Aaron Costello, a representative from AppOmni, underscored the importance of scrutinizing configuration settings: “If organizations using Now Assist’s AI agents aren’t closely examining their configurations, they’re likely already at risk.” The configuration options that govern cross-agent communications are key to understanding—and avoiding—these risks.

Implications for Corporate Data Security

The implications of Second-Order Prompt Injection are serious. As Aaron Costello pointed out, “When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems.” This new capability further highlights the need for organizations to be diligent about their deployment settings.

The default configuration options are at the heart of this vulnerability. These settings are part of the default LLM as well as tool setup options. They get into the channel-specific defaults, specifically where the agents are deployed. Without an understanding of these nuances, organizations may be unknowingly making themselves vulnerable to attack.