Researchers Uncover Significant Flaws in Google Gemini AI Suite

In fact, many have recently discovered critical security gaps in Google’s Gemini AI suite. These defects directly impact its Cloud Assist, Search Personalization model, and Browsing Tool elements. These bugs allow adversaries to misuse cloud resources and access sensitive users’ data. These disclosures deepen troubling questions about security practices in the emerging world of artificial…

Tina Reynolds Avatar

By

Researchers Uncover Significant Flaws in Google Gemini AI Suite

In fact, many have recently discovered critical security gaps in Google’s Gemini AI suite. These defects directly impact its Cloud Assist, Search Personalization model, and Browsing Tool elements. These bugs allow adversaries to misuse cloud resources and access sensitive users’ data. These disclosures deepen troubling questions about security practices in the emerging world of artificial intelligence.

The major problem is with the Gemini Cloud Assist. It has a serious prompt injection vulnerability that allows malicious actors to manipulate cloud-based services like these. This flaw allows attackers to reconstruct logs in real time from raw data, thus exposing sensitive information as they access it. Or they could obfuscate bad prompts inside an innocuous User-Agent header. This tactic being inside of an HTTP request drastically raises the risk of exploitation.

Gemini Cloud Assist has vulnerabilities that affect multiple other Google services. These are just some of the ones so far— Cloud Function, Cloud Run, App Engine, Compute Engine, Cloud Endpoints, Cloud Asset API, Cloud Monitoring API, Recommender API. As organizations continue to adopt these tools to help manage the growing complexity of their cloud adoption, the consequences of these types of major flaws can be catastrophic.

Risks Associated with Search Personalization

It is outdated and has a search-injection vulnerability that requires critical attention. This vulnerability makes it possible for malicious actors to inject prompts that can coerce the generative AI chatbot into producing harmful outputs. Attackers have quickly turned to this vulnerability to leak users’ saved passwords. They moreover obtained location tracking data via JavaScript manipulation of search history on Chrome.

To make matters worse, the model clearly lacks the capability to tell apart actual user queries from prompts injected by third party players. This agency oversight leaves a dangerous security gap, allowing for the disclosure of individuals’ sensitive private information without their knowledge or consent.

“One impactful attack scenario would be an attacker who injects a prompt that instructs Gemini to query all public assets, or to query for IAM misconfigurations, and then creates a hyperlink that contains this sensitive data,” – Matan

Vulnerabilities in Browsing Tool

The new Gemini Browsing Tool was recently discovered to contain a similar, albeit indirect, prompt injection vulnerability. This vulnerability can allow attackers to exfiltrate users’ saved identities and location data in real-time to a server controlled by the attacker. Exploiting this flaw might seem cumbersome, but by exploiting the internal calls that Gemini makes to summarize web page content, we got to improving our workflow.

The shortcomings lie in three separate parts of the Gemini suite. Beyond the implications for national security, their concerns speak to a broader imperative regarding the rapid advancement of artificial intelligence. The Gemini Trifecta is a case study in how we turn AI into an attack vehicle. It’s more than just being an aspirational target.

“The Gemini Trifecta shows that AI itself can be turned into the attack vehicle, not just the target. As organizations adopt AI, they cannot overlook security,” – Matan

Implications for AI Security Practices

The vulnerabilities mentioned above in Gemini’s architecture show the glaring need for increased security standards in AI use and implementation. As CodeIntegrity points out, “An agent with broad workspace access can chain tasks across documents, databases, and external connectors in ways RBAC never anticipated.”

This creates a new and huge threat surface. Sensitive data and actions can get exfiltrated or abused via intricate multi-step automated workflows. The results should remind us that when technology advances, the security controls that oversee that technology should advance with it.