Docker was recently faced with an urgent security flaw in its Ask Gordon AI. This vulnerability would have potentially allowed bad actors to run arbitrary code and access confidential information. Cybersecurity firm Noma Labs recently discovered a vulnerability within Docker — dubbed DockerDash. This new vulnerability represents an incredibly dangerous risk to users because it exploits the way Ask Gordon parses container metadata. Whether in the cloud, command-line interface (CLI), or desktop application, the vulnerability is widespread.
The vulnerability as found by Pillar Security and disclosed responsibly to Docker. The flaw was explained by researchers as a trust boundary violation during parsing of Docker image metadata. An attacker can exploit this vulnerability by publishing a Docker image that contains weaponized LABEL instructions within the Dockerfile.
How the Vulnerability Works
Once the right malicious Docker image is pushed, Ask Gordon AI inspects the image metadata and reads all LABEL fields. The system as it stands today has no ability to distinguish truly helpful metadata from dangerous instructions. This oversight is a golden gateway for attackers to hijack the assistant to gain unauthorized access and manipulate sensitive data.
Sasi Levi, security research lead at Noma Labs, explained the mechanics of the attack:
“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools.”
This vulnerability is beyond serious. As a result, it causes high-impact remote code execution across cloud and CLI environments. In addition, for desktop applications, it could cause widespread, high-impact data exfiltration.
Mitigation Efforts
Docker has added response to mitigations for the vulnerabilities described above. They later released version 4.50.0 of Ask Gordon AI, which successfully patched the prompt injection vulnerability. This change is one of several intended to improve the security of users who use Docker Desktop as well as the users of the Docker CLI. The experts’ call for an immediate path to zero-trust validation. This change is important and necessary for addressing the new generation of attacks on AI models.
He elaborated on how existing systems fail to recognize harmful inputs:
“The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat.”
This points to a significant vulnerability in our existing security practices. If we do not fix it, we cannot hope to be better protected from similar threats in the future.
“MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction.”
The development of DockerDash is a timely reminder that, even with the most advanced generative AI systems, there are still fundamental barriers to securing AI systems. And as for-profits rapidly incorporate AI into their business practices, so too do the opportunities for exploitation. The DockerDash vulnerability is a perfect example of this, as it shows that even trusted input sources can hide malicious payloads capable of altering an AI’s execution path.
The Future of AI Security
The discovery of DockerDash serves as a reminder of the ongoing challenges in securing AI systems. As organizations increasingly integrate AI into their operations, the potential for exploitation grows. The DockerDash vulnerability exemplifies how trusted input sources might conceal malicious payloads that can manipulate an AI’s execution path.
To safeguard against such risks, Levi noted:
“It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model.”

