Cybersecurity engineers have perhaps made the most important discovery of the digital age. In the process, they’ve discovered a new attack technique named EchoLeak that poses a serious threat to Microsoft 365 (M365) Copilot users. This zero-click artificial intelligence (AI) vulnerability allows malicious actors to exfiltrate highly sensitive data with zero user interaction required. The technique plays to M365 Copilot’s approach for retrieving and ranking data. This leverages internal document access privileges to inject personal information.
EchoLeak works by hiding malicious payloads inside otherwise inert content like meeting notes or email threads. The vulnerability is only triggered when a user inadvertently visits a malicious site set up by an attacker. So it is when they open a hostile email. It’s why attackers have adopted this sneaky technique to distort the AI system’s results. They can access and manipulate sensitive proprietary internal data without the user’s conscious knowledge or intention.
EchoLeak’s implications are nothing short of terrifying. Its unique ability to open the door to massive data breaches can lead to devastating impacts on individuals and companies.
How EchoLeak Operates
Another notable aspect of the EchoLeak attack technique is its use of zero-click activation. EchoLeak’s approach is unique from other phishing campaigns. It triggers immediately when a user encounters a harmful prompt buried in markdown-formatted text, without any user input at all required.
The attacker sends a seemingly innocuous email to an employee’s Outlook inbox, embedding the payload designed to exploit the AI system’s capabilities. This payload is what’s parsed by the retrieval-augmented generation (RAG) engine that powers M365 Copilot.
In practice, when a user poses a business-related question to Copilot, the attack occurs as the AI mixes untrusted input with sensitive data. This mixing is referred to as a scope violation. Consequently, the LLM (large language model) leaks sensitive information to the adversary.
“As a zero-click AI vulnerability, EchoLeak opens up extensive opportunities for data exfiltration and extortion attacks for motivated threat actors,” – Aim Security.
The RAG engine’s unique ability to introduce external sources into its context increases the threat this vulnerability poses. Attackers know that they can take advantage of this capability by hiding malicious instructions inside content that looks benign. As a result, they’re able to pull information from private Microsoft Teams and SharePoint URLs without needing a single user click first.
The Attack Sequence
The EchoLeak attack sequence starts when an employee opens an email with the malicious payload. Instead, the moment the user previews (or worse, opens) the email, the action starts. M365 Copilot’s RAG engine is primed and functional, waiting for the user to ask it a question.
When this interaction occurs, a scope violation occurs. Consequently, the AI unintentionally combines untrusted, attacked input with sensitive internal data in context within its LLM. This process results in the inadvertent exfiltration of private data back to the adversary.
“The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context – and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations.” – Aim Security.
This technique highlights an important nuance about the misconception around tool poisoning attacks. According to Simcha Kosman, “While most of the attention around tool poisoning attacks has focused on the description field, this vastly underestimates the other potential attack surface.” He further emphasizes that “every part of the tool schema is a potential injection point, not just the description.”
Implications and Concerns
Cybersecurity specialists are sounding warnings about what the implications of EchoLeak could be. This technique puts at risk the privacy of each user’s data. It poses a risk to organizations using M365 Copilot for sensitive business operations.
Jaroslav Lobacevski notes that there exists “a disconnect between the browser security mechanism and networking protocols,” which can be exploited by attackers. This disconnect allows attackers to leverage long-lived connections to pivot from an external phishing domain to target internal servers effectively.
As organizations race to adopt AI tools into their workflows, knowing about a vulnerability such as EchoLeak is of utmost importance. Invariant Labs researchers highlight that “the issue contains a payload that will be executed by the agent as soon as it queries the public repository’s list of issues.”
The rise of such sophisticated attack techniques raises questions about current security measures in place for AI systems and how organizations can better protect themselves against threats that take advantage of these vulnerabilities.