OpenClaw Security Flaws Expose AI Agents to Malicious Exploits


By Tina Reynolds

OpenClaw, an open platform for AI agents, recently shipped with serious security vulnerabilities. The flaws would allow malicious web pages to take over AI agents running on a user's local machine. The project's maintainers found and fixed the bugs quickly, but affected builds were exposed to a high-severity flaw in their WebSocket handling, including the execution of untrusted code. Security experts are raising red flags about the dangers of deploying OpenClaw in unsecured environments, and they recommend users follow a few simple practices today to minimize these risks.

The vulnerabilities were made public in February 2026, and OpenClaw provided a critical patch in under 24 hours. Version 2026.2.25, released on February 26, addressed 11 vulnerabilities, including CVE-2026-25593, CVE-2026-24763, CVE-2026-25157, and CVE-2026-25475, all rated moderate to high in severity. The speed of the response underscores how urgent the situation was.

Details of the Vulnerabilities

The most impactful vulnerability lets attackers use forged WebSocket requests to connect to OpenClaw’s locally running agents, giving them unauthorized access to, and control over, the agents’ AI capabilities. A related log-poisoning vulnerability let attackers append arbitrary data to log files through WebSocket connections on TCP port 18789.
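A common mitigation for this class of bug is strict Origin validation on the local WebSocket endpoint, since browsers attach a page’s origin to cross-site WebSocket handshakes. The sketch below is illustrative only: the allow-list, function name, and port are assumptions, not OpenClaw’s actual code.

```python
from urllib.parse import urlparse

# Hypothetical allow-list for a local agent gateway (not OpenClaw's real config).
TRUSTED_ORIGINS = {"http://localhost:18789", "http://127.0.0.1:18789"}


def is_trusted_origin(origin):
    """Reject WebSocket upgrades whose Origin header is missing or untrusted.

    Browsers send the connecting page's origin during the handshake, so a
    strict allow-list blocks drive-by connections from malicious web pages.
    """
    if not origin:
        return False
    parsed = urlparse(origin)
    if parsed.scheme not in ("http", "https"):
        return False
    return origin in TRUSTED_ORIGINS
```

A gateway applying a check like this would refuse the cross-origin handshake before any agent command could be delivered.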

Oasis Security drove home the importance of this issue: the flaw lives in OpenClaw’s core framework and requires no third-party plugins or user-installed add-ons. “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented,” stated a representative from Oasis Security.

Eye Security has similarly sounded the alarm over this vulnerability, and it underscored the most important point: an AI agent may accidentally interpret harmful text as operational data.

“If the injected text is interpreted as meaningful operational information rather than untrusted input, it could influence decisions, suggestions, or automated actions,” – Eye Security.
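A defensive pattern against this failure mode is to never let untrusted text reach an agent’s decision path unmarked. The helper below is a hypothetical sketch of that idea; the delimiter format is an assumption, not anything Eye Security or OpenClaw prescribes.

```python
def quarantine_untrusted(text):
    """Wrap untrusted input so an agent pipeline can't mistake it for instructions.

    Strips control characters (which enable log-line forgery, e.g. injecting a
    fake line via an embedded newline) and wraps the payload in explicit
    delimiters so downstream logic can treat it strictly as data.
    """
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in " \t")
    return "<untrusted-input>" + cleaned + "</untrusted-input>"
```

The key property is that injected newlines cannot start a fresh, official-looking log entry, and the delimiters keep the payload labeled as untrusted throughout the pipeline.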

Recommendations for Secure Deployment

Researchers advise against deploying OpenClaw outside of controlled test environments. For the strongest isolation, run it on dedicated virtual machines or separate physical systems. Microsoft has issued advisories warning users about potential credential exposure and system compromise from unsecured OpenClaw deployments. The Microsoft Defender Security Research Team stated,

“Because of these characteristics, OpenClaw should be treated as untrusted code execution with persistent credentials.”

They further noted that it is “not appropriate to run on a standard personal or enterprise workstation.”
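One concrete hardening step consistent with these advisories is making sure the gateway never binds beyond the loopback interface, since a bind to 0.0.0.0 or a LAN address would expose the agent port (such as the 18789 mentioned above) to other machines. The check below is a generic sketch, not OpenClaw-specific code.

```python
import ipaddress


def is_loopback_bind(host):
    """Return True if a bind address keeps the service off the network.

    Only loopback addresses (127.0.0.0/8, ::1) or the "localhost" name are
    considered safe; anything else would accept connections from other hosts.
    """
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Not a literal IP; accept only the conventional loopback hostname.
        return host == "localhost"
```

A deployment script could run this against the configured bind address and refuse to start the gateway otherwise.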

Several recent discoveries have also surfaced trojanized OpenClaw builds, along with 71 malicious ClawHub skills that imitate legitimate cryptocurrency tools and redirect funds. In response, security experts stress the importance of immediate patching and frequent reviews of the access granted to AI agents.

The Evolving Threat Landscape

As AI technologies spread rapidly through enterprise environments, the nature of the security concerns is evolving just as quickly. AI security company Straiker Foundation recently completed a reverse-engineering analysis of 3,505 ClawHub skills and found a substantial number of malicious ones among them. Researchers Yash Somalkar and Dan Regalado detailed how one of these, the Bob-p2p-beta skill, works: it poses as a generic AI agent on social networks, using agent-focused design techniques.

“From that position, it promotes its own malicious skills directly to other agents, exploiting the trust that agents are designed to extend to each other by default,” said Somalkar and Regalado. They characterized this as a supply chain attack augmented by social engineering tactics.
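Supply-chain attacks of this kind are typically countered by verifying artifact integrity before installation rather than trusting peer recommendations. The snippet below sketches SHA-256 hash pinning for downloaded skills; the manifest format, skill name, and pinned value are hypothetical, since ClawHub’s real distribution format isn’t described here.

```python
import hashlib

# Hypothetical pinned digests for vetted skills (illustrative values only;
# this example pin is the SHA-256 of the bytes b"hello").
PINNED_SHA256 = {
    "price-feed": "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}


def verify_skill(name, blob):
    """Accept a skill only if its SHA-256 digest matches the pinned value."""
    expected = PINNED_SHA256.get(name)
    if expected is None:
        return False  # unknown skills are rejected, never trusted by default
    return hashlib.sha256(blob).hexdigest() == expected
```

Pinning inverts the default-trust model the researchers describe: a skill promoted by another agent is rejected unless its exact bytes were previously vetted.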

These findings have implications that extend well beyond any single system. They point to a wider trend in which long-standing classes of vulnerabilities meet AI-specific attack surfaces. As Endor Labs recently noted, the practice of security analysis must shift with these changes.

“As AI agent frameworks become more prevalent in enterprise environments, security analysis must evolve to address both traditional vulnerabilities and AI-specific attack surfaces,” – Endor Labs.