Witness AI has made serious waves with its announcement of $58 million in funding. The investment underscores the urgency of addressing AI security amid growing fears of rogue artificial intelligence agents. The company has carved out a defensible position at the infrastructure layer and posted remarkable growth: in the past 12 months alone, it grew annual recurring revenue (ARR) by more than 500% and quintupled its employee headcount.
The funding round coincides with the introduction of new agentic AI security protections, a clear confirmation of escalating demand for effective defenses against the harms generative AI models can produce. Witness AI, a portfolio company of Ballistic Ventures, builds guardrails that monitor interactions between users and AI systems to ensure those models operate safely.
Rick Caccia, co-founder and CEO of Witness AI, said, “The urgency for these types of security measures couldn’t be greater. People are building these AI agents that take on the authorizations and capabilities of the people that manage them, and you want to make sure that these agents aren’t going rogue, aren’t deleting files, aren’t doing something wrong.”
Barmak Meftah, a partner at Ballistic Ventures, pointed to a notable convergence: agentic AI usage is surging across enterprises just as misaligned or ill-intentioned agents threaten to create an entirely new class of security risks.
“AI safety and agentic safety is so huge,” Meftah stated, illustrating the urgency of developing advanced security measures in this rapidly evolving field.
Meftah recounted a scenario in which an enterprise employee encountered a rogue AI agent, underscoring the real-world stakes of these security concerns. He explained that Witness AI deliberately targeted a part of the problem that even large model providers like OpenAI would struggle to absorb, a strategic choice that distinguishes the company in a crowded field.
“We purposely picked a part of the problem where OpenAI couldn’t easily subsume you,” Meftah explained.
Caccia provided further insight into the competitive landscape. He emphasized that Witness AI doesn’t just compete directly with other AI model developers, but rather it’s going head-to-head with legacy security companies.
“So it means we end up competing more with the legacy security companies than the model guys. So the question is, how do you beat them?”
Industry experts have echoed this call to action on AI defense. Lisa Warren at GQR predicts the market for AI security software will skyrocket to between $800 billion and $1.2 trillion by 2031; both optimistic estimates and cautious projections point to an immense opportunity for investment and growth in the sector.
Witness AI aims to reduce the risks of deploying generative AI models by building in sound guardrails. Its technology operates at the infrastructure level, continuously evaluating user interactions with AI models to proactively block harmful actions and safeguard data from exploitation and breaches.
The company’s growth has taken on new relevance as organizations across the globe grow increasingly dependent on AI technologies. As these systems are integrated into day-to-day operations, concerns about security and ethical use have spiked.
As Rebecca, a senior transportation reporter at TechCrunch, reports, Witness AI’s focus on protecting AI interactions is what sets it apart in the fiercely competitive tech landscape. As enterprises continue to embrace increasingly complex AI-powered solutions, the imperative for stronger security has never been greater.

