Anthropic Unveils Claude Code Security to Enhance Software Reliability


By Lisa Wong

Anthropic recently released Claude Code Security, a new tool designed to make code reviews more secure. The launch is a direct response to growing demand for trustworthy, predictable software development. In a real-world test conducted over a two-week period, Anthropic’s AI model Claude identified 22 vulnerabilities in the Firefox web browser. The result underscores the need for powerful code analysis tools: software complexity keeps rising, and developers have to stay ahead of it.

Claude Code Security delivers deeper security analysis than alternative tools. It leverages a multi-agent architecture in which multiple agents operate in parallel, each scanning the codebase through a different lens. Style checks have their place, but these agents look primarily for logical errors rather than stylistic issues. The most dangerous vulnerabilities are surfaced first, letting engineers focus their code-quality efforts where they matter most.
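Anthropic has not published implementation details, but the parallel-agent pattern described above can be sketched in miniature. Everything here is hypothetical: the agent functions, the severity scheme, and the toy string checks stand in for real LLM-driven analysis.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical agents: each scans the same source with a different lens and
# returns (severity, description) findings. Severity 1 = most critical.
def check_auth_logic(source):
    # Illustrative logic-error check: a condition that always succeeds.
    if "== True or True" in source:
        return [(1, "condition always true in auth check")]
    return []

def check_bounds(source):
    # Illustrative check for an index used without a length guard.
    if "[i + 1]" in source and "len(" not in source:
        return [(2, "possible off-by-one index without length check")]
    return []

def review(source, agents):
    # Run every agent over the codebase in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(source), agents)
    findings = [f for agent_findings in results for f in agent_findings]
    # Sort so the most dangerous vulnerabilities come first.
    return sorted(findings, key=lambda f: f[0])

snippet = "if user.is_admin == True or True:\n    grant_access()"
for severity, note in review(snippet, [check_auth_logic, check_bounds]):
    print(severity, note)  # 1 condition always true in auth check
```

The key design point is that the agents are independent, so adding a new lens is just adding a function to the list; prioritization happens once, after all findings are merged.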

Since its launch, Claude Code Security has quickly captured the imagination of the enterprise sector. The tool’s run-rate revenue has surpassed $2.5 billion, reflecting its utility and the rising demand for advanced code review solutions. We spoke with Cat Wu, head of product at Anthropic, to learn about the new offering’s implications for large-scale enterprises.

“We made the commitment that we’re just going to work on algorithmic logic errors,” Wu stated. “This way we’re catching the highest-priority things to fix.” That precision-focused effort accelerates the development pipeline, allowing software engineers to catch significant errors before they are merged into the codebase.

This multi-agent architecture increases the overall security of the system while reducing the friction of software development. Engineers using Claude Code Security report a significant reduction in roadblocks when developing new features. That matters more than ever, Wu said, as enterprises need this capability to speed up their development cycles and become more agile.

“As engineers build with Claude Code, the friction of building a new feature is lower,” Wu explained. “In addition, they’re experiencing a much greater demand for code review. We think that this new feature will give enterprises the superpower to build infinitely faster. Moreover, they’ll find many more bugs, deeply hidden bugs they would otherwise have missed.”

Claude Code Security also adopts a token-based pricing model: the price is determined by the complexity of the code under review, averaging roughly $15 to $25 per review. For now, the service is in research preview for Claude for Teams and Claude for Enterprise customers.
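To see how complexity-driven, token-based pricing produces a per-review range like the one quoted, consider this back-of-the-envelope sketch. The per-token rate is an assumption for illustration only; Anthropic has not published the actual rates.

```python
# Placeholder rate, NOT Anthropic's published price: chosen only so the
# arithmetic lands in the article's quoted $15-$25 range.
ASSUMED_RATE_PER_1K_TOKENS = 0.05  # dollars per 1,000 analyzed tokens

def estimated_review_cost(token_count):
    """More complex code means more tokens analyzed, hence a higher price."""
    return token_count / 1000 * ASSUMED_RATE_PER_1K_TOKENS

# Under this assumed rate, reviews analyzing 300k-500k tokens would cost:
print(estimated_review_cost(300_000))  # 15.0
print(estimated_review_cost(500_000))  # 25.0
```

The point is simply that cost scales linearly with the volume of code analyzed, so a sprawling service will cost more to review than a small utility.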

The growing enterprise adoption of Claude Code Security reflects a need for code review solutions that has never been greater. Wu was adamant that the demand is driven by an immense market push: the biggest companies on the planet, from Uber to Salesforce to Accenture, are already deploying Claude Code deeply into their tech stacks and operations.

That’s how fast Anthropic’s enterprise business is growing: enterprise subscriptions have quadrupled since the start of the year. The trend signals strong market acceptance of AI-driven tools for improving software reliability and security.