Salesforce have thus far positively patched a critical vulnerability dubbed ForcedLeak. This vulnerability represented a critical risk to any organization using the Agentforce platform that had Web-to-Lead functionality turned on. In this case, Noma Security’s exploitation of the vulnerability found on July 28, 2025. It has a CVSS score of 9.4, illustrating exactly how catastrophic it could be. In reality, bad actors have figured out a complicated five-step process to defraud, overwhelm, and abuse the system. This opens the door for them to mine sensitive customer data.
The ForcedLeak vulnerability is an instance of the EchoLeak primitives class. This new breed of security threat has quickly become a major threat to generative artificial intelligence (GenAI) systems. The incident serves as a grim reminder of the urgent need for proactive security measures and responsible governance in AI technologies. Sasi Levi, the security research lead at Noma, emphasized the importance of addressing these vulnerabilities. By taking these actions, organizations can further protect themselves against potential data breaches.
Understanding the ForcedLeak Process
Increased surface area Through a precisely planned five-step process that can be started by an attacker, this vulnerability exists. It is initiated once a criminal submits a Web-to-Lead form with harmful directives. On an internal employee’s lead sourcing through everyday AI prompts, Agentforce triggers both valid and behind-the-scenes actions Author Indra K.
Using triggers and API calls, the system automatically queries the CRM to retrieve sensitive lead information. Next, it exfiltrates that information to an attacker-controlled domain, concealing it within a PNG image. This intriguing form of data exfiltration showcases the profound dangers of indirect prompt injection. Malicious prompts can indirectly make their way into AI’s deployment through third-party data inputs.
“By exploiting weaknesses in context validation, overly permissive AI model behavior, and a Content Security Policy (CSP) bypass, attackers can create malicious Web-to-Lead submissions that execute unauthorized commands when processed by Agentforce.” – Noma
Salesforce’s Response to ForcedLeak
To mitigate this vulnerability, in addition to other security measures, Salesforce has introduced a URL allowlist mechanism. To address this measure, here’s how to prevent outputs generated by Agentforce and Einstein AI agents from being sent to untrusted URLs. The company stated,
“Our underlying services powering Agentforce will enforce the Trusted URL allowlist to ensure no malicious links are called or generated through potential prompt injection.” – Salesforce
Salesforce has been able to regain the expired domain that attackers had used before them. This move further demonstrates their dedication to protecting customer information and regaining trust in their service.
Broader Implications for AI Security
The ForcedLeak incident should be a wake-up call about the vulnerabilities that can arise within AI systems. As noted by Sasi Levi, “The ForcedLeak vulnerability highlights the importance of proactive AI security and governance.” The changing world of cyber threats means that staying a step ahead and being proactive with security measures is crucial.
Itay Ravia, another expert in the field, reflected on the broader implications of such vulnerabilities:
“When Aim Labs disclosed EchoLeak (CVE-2025-32711), the first zero-click AI vulnerability enabling data exfiltration, we said that this class of vulnerability was not isolated to Microsoft.” – Itay Ravia
Ravia notes that most other agent platforms currently are susceptible to the same attacks. Such vulnerability arises from their poor intelligence on dependencies and absence of crucial protective infrastructures. He stated,
“In our investigations it has become quite clear that many other agent platforms are also susceptible. ForcedLeak is a subset of these same EchoLeak primitives. These vulnerabilities are endemic to RAG-based agents and we will see more of them in popular agents due to poor understanding of dependencies and the need for guardrails.” – Itay Ravia