Google has announced a significant upgrade to its Chrome web browser. The update introduces a suite of new, agentic AI-based security features designed to better protect users from attackers, proactively defending against malicious efforts to exfiltrate sensitive content or hijack user intent. The enhancements reflect Google’s ongoing effort to create a safer online environment in the face of growing cybersecurity threats.
The new features directly address indirect prompt injection: attacks in which adversaries plant instructions in web content to mislead the system into taking actions the user never intended or consented to. Google categorizes the risks from these injections into three main types: executing unauthorized actions, exfiltrating confidential data, and bypassing existing security mitigations. To defend against them, the company has introduced safeguards including the new User Alignment Critic and a new gating function.
Introduction of the User Alignment Critic
The User Alignment Critic lies at the heart of Google’s new security framework, introducing a powerful new layer of scrutiny over the browser’s agentic actions. The system evaluates the AI agent’s choices immediately after the planning phase, re-validating every proposed action before it is executed.
“The User Alignment Critic runs after the planning is complete to double-check each proposed action,” – Google.
The critic’s job is to keep the agent on task: it checks whether each proposed action aligns with the user’s goals. If an action is judged misaligned, the critic can veto it, cutting off the pathway to a potential security breach before it ever happens.
“Its primary focus is task alignment: determining whether the proposed action serves the user’s stated goal. If the action is misaligned, the Alignment Critic will veto it,” – Google.
By subjecting the agent to this additional layer of oversight, Google aims to build user trust in its popular Chrome browser while blocking attempts to compromise data integrity.
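To make the flow concrete, here is a minimal sketch in Python of how a post-planning alignment check could be wired in. The class and method names (ProposedAction, AlignmentCritic, review) and the toy alignment heuristic are illustrative assumptions; Google has not published Chrome’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical types: Chrome's real internals are not public.
@dataclass
class ProposedAction:
    description: str    # e.g. "click the 'Transfer funds' button"
    target_origin: str  # origin the action would affect

class AlignmentCritic:
    """Runs after planning is complete and double-checks each proposed action."""

    def __init__(self, user_goal: str):
        self.user_goal = user_goal

    def is_aligned(self, action: ProposedAction) -> bool:
        # Toy heuristic standing in for the real task-alignment judgment,
        # which would be model- or rule-based.
        off_task = "transfer funds" in action.description.lower()
        return not off_task or "transfer" in self.user_goal.lower()

    def review(self, action: ProposedAction) -> bool:
        if self.is_aligned(action):
            return True   # aligned with the user's stated goal: allow execution
        print(f"Vetoed: {action.description!r} does not serve {self.user_goal!r}")
        return False      # misaligned: block execution before it happens

# Usage: the agent only executes actions the critic approves.
critic = AlignmentCritic(user_goal="find the cheapest flight to Lisbon")
action = ProposedAction("click the 'Transfer funds' button", "https://bank.example")
if critic.review(action):
    print("executing action")
```

The design point mirrored here is the one Google describes: the check runs after planning and before execution, so a misaligned step never reaches the page.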
Implementing Deterministic Safeguards
In addition to the User Alignment Critic, Google has heeded guidance, such as that from the UK’s National Cyber Security Centre (NCSC), on the importance of deterministic safeguards: security designs built on non-AI protections that limit what the system is allowed to do, rather than relying solely on keeping malicious content away from AI models.
“Design protections need to therefore focus more on deterministic (non-LLM) safeguards that constrain the actions of the system, rather than just attempting to prevent malicious content reaching the LLM,” – David C, NCSC technical director for Platforms Research.
To this end, Google has introduced a gating function that classifies the origins involved in a task into two categories: read-only and read-writable. Read-only origins permit Google’s Gemini AI model to consume content without making any changes, while read-writable origins allow interaction through typing or clicking.
“This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins,” – Google.
The gating function operates entirely outside of untrusted web content, ensuring that the AI agent draws only from approved, vetted data sources.
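Google has only described the read-only/read-writable split at a high level, but a deterministic gate of that shape might look roughly like the sketch below. The specific origin lists and function names (may_read, may_write, may_pass_data) are assumptions for illustration.

```python
from urllib.parse import urlparse

# Hypothetical per-task policy. The real gate runs in deterministic browser
# code, outside the reach of untrusted web content.
READ_ONLY_ORIGINS = {"https://reviews.example.com"}
READ_WRITABLE_ORIGINS = {"https://shop.example.com"}

def origin_of(url: str) -> str:
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}"

def may_read(url: str) -> bool:
    """Content from these origins may be consumed by the model."""
    return origin_of(url) in READ_ONLY_ORIGINS | READ_WRITABLE_ORIGINS

def may_write(url: str) -> bool:
    """Typing and clicking are only allowed on read-writable origins."""
    return origin_of(url) in READ_WRITABLE_ORIGINS

def may_pass_data(from_url: str, to_url: str) -> bool:
    """Data read during the task may only flow on to writable origins."""
    return may_read(from_url) and may_write(to_url)

print(may_write("https://reviews.example.com/post"))        # False: read-only origin
print(may_pass_data("https://reviews.example.com/item",
                    "https://shop.example.com/checkout"))   # True: allowed flow
print(may_pass_data("https://evil.example.net/page",
                    "https://shop.example.com/checkout"))   # False: origin not in the task's set
```

Because checks like these are plain allow-list lookups rather than model judgments, a prompt injection cannot talk its way past them.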
Enhanced Security with Financial Incentives
Beyond strengthening Chrome’s security through new technology, Google is also encouraging ethical hacking. The company has announced it will pay up to $20,000 for successful demonstrations that break the new security perimeters, an initiative intended to help researchers and security professionals identify vulnerabilities before malicious actors can exploit them.
Nathan Parker of the Chrome security team described how the User Alignment Critic helps refine the agent’s behavior.
“When an action is rejected, the Critic provides feedback to the planning model to re-formulate its plan, and the planner can return control to the user if there are repeated failures,” – Nathan Parker.
These feedback loops allow the agent’s decision-making to improve over time while giving users more agency to make informed choices about how the technology acts on their behalf.
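Parker’s description of the loop could be sketched as follows. The planner and critic stubs, the retry limit, and the hand-back-to-user mechanics are assumptions made purely to illustrate the re-plan/veto cycle he describes.

```python
from dataclasses import dataclass
from typing import Optional

MAX_RETRIES = 3  # assumed limit; the actual threshold has not been published

@dataclass
class Action:
    description: str

class StubPlanner:
    """Stand-in planner that re-formulates its proposal when given veto feedback."""
    def propose(self, goal: str, feedback: Optional[str]) -> Action:
        if feedback is None:
            return Action("click the 'Transfer funds' button")  # off-task first attempt
        return Action(f"search for {goal!r}")                   # revised, on-task plan

class StubCritic:
    """Stand-in critic that vetoes anything mentioning fund transfers."""
    def review(self, action: Action) -> bool:
        return "transfer funds" not in action.description.lower()

def run_step(planner, critic, goal: str) -> Optional[Action]:
    """Plan, check alignment, and give up after repeated vetoes."""
    feedback = None
    for _ in range(MAX_RETRIES):
        action = planner.propose(goal, feedback)
        if critic.review(action):
            return action                       # approved: safe to execute
        feedback = f"{action.description!r} was vetoed as off-task"
    return None                                 # repeated failures: return control to the user

result = run_step(StubPlanner(), StubCritic(), "find the cheapest flight to Lisbon")
if result:
    print("execute:", result)
else:
    print("hand control back to the user")
```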

