Critical Vulnerabilities Exposed in AI Platforms: A Deep Dive into Base44 and Grok 4 Flaws

Now, artificial intelligence is moving faster than anyone expected which means even more dangerous vulnerabilities are coming to light. This has led to growing panic among security experts and developers. A new report from cloud security technology company Wiz has highlighted some of these alarming shortcomings on Base44, an AI-powered Vibe coding platform. It stands…

Tina Reynolds Avatar

By

Critical Vulnerabilities Exposed in AI Platforms: A Deep Dive into Base44 and Grok 4 Flaws

Now, artificial intelligence is moving faster than anyone expected which means even more dangerous vulnerabilities are coming to light. This has led to growing panic among security experts and developers. A new report from cloud security technology company Wiz has highlighted some of these alarming shortcomings on Base44, an AI-powered Vibe coding platform. It stands to illustrate the vulnerabilities in xAI’s Grok 4 model. Creative methods took advantage of these weaknesses. Attackers leveraged jailbreaking techniques known as Echo Chamber and Crescendo to break through safety guardrails that stop dangerous answers.

The report explains how all safety protocols for their Grok 4 model were ignored leading to the elicitation of harmful outputs. At the same time, threat actors breached Meta’s Llama Firewall. They accomplished this by writing their prompts in non-English languages or using commonly known obfuscation methods. Base44 is actually vulnerable because of misconfigurations. Impact This vulnerability leaks two authentication related endpoints, letting an attacker quickly access private application with resource owner password credentials grant type.

The Base44 Vulnerability

Wiz’s research identified a critical security flaw in Base44. It provided a way for basically anyone to register for private applications just by knowing an “app_id” value. This numeric value is not proprietary information. You can observe it in the application’s URL, but it’s found in the manifest.json file directory.

“The vulnerability we discovered was remarkably simple to exploit — by providing only a non-secret ‘app_id’ value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform,” – Wiz

As of July 9, 2025, a major vulnerability was responsibly disclosed. Wix, the parent company of Base44, wasted little time and released an official fix in less than 24 hours. This prompt response to ChatGPT demonstrates the need for companies developing highly interactive and generative AI platforms to ensure strong security measures.

Exploiting AI Models: Grok 4 and Google Gemini

These vulnerabilities weren’t just a matter of bypassing Grok 4 in the most basic ways. Attackers were able to trick OpenAI’s ChatGPT into revealing legitimate Windows product keys using a numbers game. Google Gemini for Workspace was bent to produce email executive summaries. These summaries were loaded with harmful instructions obscured with HTML and CSS tricks.

“After confirming our email address, we could just login via the SSO within the application page, and successfully bypass the authentication,” – Gal Nagli

The toxic flow analysis (TFA) method have been implemented to bolster agentic systems. It acts as a mostly proactive approach to shield from Model Control Protocol (MCP) exploits. TFA’s primary purpose, however, is to foresee the worst possible attack scenarios. It takes advantage of a poor understanding of an AI system’s capabilities and the dangers of misconfiguration.

Gemini CLI looks to have a toxic combination of non-wayland handling. It suffers from the same things – improper validation of context files, prompt injection vulnerabilities, and a dangerous user experience (UX). This unfortunate combination created the potential for silent shadow execution of malicious commands when auditing untrusted code.

“Instead of focusing on just prompt-level security, toxic flow analysis pre-emptively predicts the risk of attacks in an AI system by constructing potential attack scenarios leveraging deep understanding of an AI system’s capabilities and potential for misconfiguration,” – Invariant Labs

The Evolving Landscape of AI Security

As these security vulnerabilities continue to play out, the experts understand that more security has to be built into AI platforms. The problem is that AI technology is evolving much faster than the security protocols can keep up with. Therefore, developers need to think about security from the ground up.

“The AI development landscape is evolving at unprecedented speed,” – Nagli

Designing security into the foundational structure is key. It focuses on how we maximize the transformative potential of AI while keeping enterprise data safe—not how we layer security as an afterthought. As we navigate a digital landscape filled with evolving threats, organizations need to stay one step ahead and beyond vulnerability management.

“Attackers may find and extract OAuth tokens, API keys, and database credentials stored on the server, granting them access to all the other services the AI is connected to,” – Knostic