Anthropic Challenges DOD with Lawsuits Amidst Launch of AI Code Review Tool


By Lisa Wong

Anthropic, an AI company recognized for its innovative products, has recently filed two lawsuits against the Department of Defense (DOD) after the agency designated it a supply chain risk. The suits come at a pivotal moment for the company, which has been riding extraordinary momentum in its enterprise business. They aim to contest the DOD's classification and are expected to strengthen Anthropic's position as it continues to expand its operations.

With the release of Code Review, Anthropic has taken an important step forward. The new AI-powered tool improves software development proactively, catching bugs before they enter the codebase. Code Review is part of Anthropic's product suite, Claude Code, which has been a tremendous success since launch, with run-rate revenue exceeding $2.5 billion. Code Review will initially be available to Claude for Teams and Claude for Enterprise users as a limited research preview.

As Anthropic’s head of product Cat Wu stresses, Code Review is tailored to the company's largest enterprise users. Early adopters include heavyweights such as Uber, Salesforce, and Accenture, all of which already use Claude Code and now need help keeping up with the sharply increased volume of pull requests the software generates.

“This product is very much targeted towards our larger scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer amount of [pull requests] that it’s helping produce,” – Cat Wu

The introduction of Code Review responds to growing market demand for better code analysis. Wu noted that the tool provides automated, high-level security analysis while giving engineering leads the flexibility to customize more sophisticated checks that enforce their own internal best practices. That flexibility matters as organizations work to raise their code quality standards.

“[Code Review] is something that’s coming from an insane amount of market pull,” – Cat Wu
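The customizable checks Wu describes might look something like the following minimal sketch. To be clear, the `Check` class, the `run_checks` helper, and the diff-scanning rule below are hypothetical illustrations of the idea, not Anthropic's actual interface:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    """A hypothetical team-defined review check: a name plus a
    predicate that returns a list of findings for a given diff."""
    name: str
    predicate: Callable[[str], list]

def run_checks(diff: str, checks: list) -> dict:
    """Apply each custom check to a proposed diff and collect findings."""
    return {c.name: c.predicate(diff) for c in checks}

# Example rule an engineering lead might enforce: no print-based
# debugging in newly added lines (lines starting with "+").
no_debug_prints = Check(
    name="no-debug-prints",
    predicate=lambda diff: [
        line for line in diff.splitlines()
        if line.startswith("+") and "print(" in line
    ],
)

findings = run_checks("+print('debug')\n-old_line", [no_debug_prints])
```

The point of the pattern is that rules live alongside the team's own conventions, so each organization can encode the standards it actually cares about.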

Claude Code employs a multi-agent architecture in which multiple agents work in parallel, each examining the codebase from a different perspective. This improves efficiency and shifts the emphasis of review toward finding logical errors rather than merely polishing style. The cost of using Code Review scales with the complexity of the code, typically ranging from $15 to $25 on average.
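The parallel, multi-perspective pattern can be sketched roughly as follows. The agent names, the stand-in heuristics, and the use of threads here are illustrative assumptions, not Anthropic's implementation; real agents would each call a model rather than match strings:

```python
from concurrent.futures import ThreadPoolExecutor

# Each "agent" reviews the same code from one perspective.
def security_agent(code: str) -> list:
    return ["possible unsafe eval"] if "eval(" in code else []

def logic_agent(code: str) -> list:
    return ["bare except hides errors"] if "except:" in code else []

def style_agent(code: str) -> list:
    return ["line too long"] if any(len(l) > 100 for l in code.splitlines()) else []

def review(code: str) -> list:
    """Run all perspectives in parallel and merge their findings."""
    agents = [security_agent, logic_agent, style_agent]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = pool.map(lambda agent: agent(code), agents)
    return [finding for findings in results for finding in findings]

snippet = "try:\n    eval(user_input)\nexcept:\n    pass"
print(review(snippet))  # findings from security and logic perspectives
```

Because each perspective runs independently, shallow stylistic checks no longer crowd out the deeper logical and security review.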

Wu noted that as developers build with Claude Code, they face less friction in shipping entirely new features, which in turn has driven growing demand for more robust code review processes. The trend reflects a broader shift toward AI-powered tooling across the software development landscape.

“As engineers develop with Claude Code, they’re seeing the friction to creating a new feature [decrease], and they’re seeing a much higher demand for code review. So we’re hopeful that with this, we’ll enable enterprises to build faster than they ever could before, and with much fewer bugs than they ever had before,” – Cat Wu

Alongside Code Review, Anthropic has released Claude Code Security, which provides deeper security-focused analysis. The offering is designed to address the complexities and vulnerabilities that can arise throughout the software development life cycle. Claude has already proven itself in the wild, finding 22 vulnerabilities in Firefox over a two-week span.

Anthropic is not running from the DOD's designation. At the same time, the company is poised to capitalize on its growing enterprise business, which has seen subscriptions quadruple since the beginning of the year. The ongoing legal disputes may ultimately reinforce Anthropic's commitment to innovation and its role as a key player in the AI landscape.