CometJacking: A Malicious Link Transforms AI Browser Into a Data Thief

A recently uncovered cyber threat, dubbed CometJacking, exploits vulnerabilities in the new Comet AI browser. Making it exceedingly simple for attackers to steal sensitive user data with just a single click. Unlike the previous prompt injection attack, this one plays out through a surprisingly tricky URL that once clicked, executes harmful actions in the browser….

Tina Reynolds Avatar

By

CometJacking: A Malicious Link Transforms AI Browser Into a Data Thief

A recently uncovered cyber threat, dubbed CometJacking, exploits vulnerabilities in the new Comet AI browser. Making it exceedingly simple for attackers to steal sensitive user data with just a single click. Unlike the previous prompt injection attack, this one plays out through a surprisingly tricky URL that once clicked, executes harmful actions in the browser. Victims are often unawares that their information has been compromised. Another example of a malicious URL deceives the AI into executing concealed commands that seize data from their accounts, such as Gmail.

The attack uses Base64-encoding to obfuscate exfiltrated data, evading current data loss prevention measures. This complex new way to take data out of a network poses a grave danger. It puts AI browser security in peril, particularly in enterprise environments where sensitive data is entangled.

Understanding CometJacking

Moving in five steps, CometJacking’s attack starts when the victim clicks on a maliciously-designed URL. This URL includes a collection parameter, which embeds malicious instructions directly into the Comet browser’s AI. Avoid doing a real-time web search. Rather than directly asking the user what they need, it goes digging through its memory and pulls personal data from all of the users’ accounts.

When the secret command is enabled, any personal information that the user inputs is immediately collected and retained, ranging from emails and calendar meetings to connector information. The attacker ends up getting all the encoded data at their controlled callback URI. In this form of theft, the transaction goes through seamlessly, leaving the user none the wiser.

“CometJacking shows how a single, weaponized URL can quietly flip an AI browser from a trusted co-pilot to an insider threat.” – Michelle Levy, Head of Security Research at LayerX

The Role of Base64-Encoding

Base64-encoding is an attack, but it’s not a binary attack. This technique does not only hide the stolen data, it allows attackers to avoid specific data exfiltration detections that would normally be in place. These trivial obfuscation methods demonstrate how quickly arbitrary security measures can be circumvented. This is critically important when employing cutting-edge technology, such as AI browsers.

AI-powered browsers are quickly proliferating in corporate environments. Security developers should incorporate security-by-design principles into their applications.

“This isn’t just about stealing data; it’s about hijacking the agent that already has the keys. Our research proves that trivial obfuscation can bypass data exfiltration checks and pull email, calendar, and connector data off-box in one click.”

As cyber threats continue to advance, AI browsers are becoming the next big frontier for bad actors. Or Eshed, co-founder and CEO of the enterprise platform LayerX, claims that these platforms are a new enterprise cybersecurity frontier. Businesses need to be vigilant and be proactive in their defense approach. This will enable them to better protect sensitive information from emerging threats such as CometJacking.

A New Battleground for Cybersecurity

The effects from this attack have far reaching consequences. It highlights the immediate requirement for robust security measures that protect against threats not just to webpage material, but to memory storage and agent instructions in AI.

The implications of this attack are profound, highlighting the necessity for robust security protocols that address not only webpage content but also memory access and agent prompts within AI systems.