
Anthropic has introduced Code Review to Claude Code, a new feature that performs deep, multi-agent code reviews designed to catch bugs that human reviewers often miss, the company said.
Introduced March 9, Code Review is available as a research preview for Claude for Teams and Claude for Enterprise customers. When run on a pull request, Code Review dispatches a team of agents that search for bugs in parallel, verify candidate bugs to filter out false positives, and rank confirmed bugs by severity, according to Anthropic. The result appears in the pull request as a single, high-signal overview comment, plus in-line comments for specific bugs. The average review takes around 20 minutes, Anthropic said.
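The dispatch-verify-rank pipeline described above can be illustrated with a minimal sketch. This is not Anthropic's implementation: Code Review's agents are model-driven, while the agents, the `Finding` type, and the severity scale here are hypothetical stand-ins that show the shape of the workflow.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Finding:
    line: int
    message: str
    severity: int  # hypothetical scale: 1 (low) to 3 (high)

# Hypothetical specialist agents: each scans the diff for one bug class.
def unchecked_error_agent(diff: list[str]) -> list[Finding]:
    return [Finding(i, "error silently swallowed", 3)
            for i, line in enumerate(diff) if "except: pass" in line]

def todo_agent(diff: list[str]) -> list[Finding]:
    return [Finding(i, "unresolved TODO", 1)
            for i, line in enumerate(diff) if "TODO" in line]

def verify(finding: Finding, diff: list[str]) -> bool:
    # Stand-in for the verification pass that filters false positives:
    # here it only confirms the flagged line index exists in the diff.
    return 0 <= finding.line < len(diff)

def review(diff: list[str]) -> list[Finding]:
    agents = [unchecked_error_agent, todo_agent]
    # 1. Dispatch all agents over the diff in parallel.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda agent: agent(diff), agents)
    findings = [f for batch in batches for f in batch]
    # 2. Verify findings to drop false positives, then 3. rank by severity.
    verified = [f for f in findings if verify(f, diff)]
    return sorted(verified, key=lambda f: -f.severity)

diff = ["x = compute()", "try: x.save()", "except: pass  # TODO handle"]
for f in review(diff):
    print(f.severity, f.message)
```

In this sketch the most severe verified finding surfaces first, mirroring the ranked, high-signal summary Code Review posts to the pull request.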
Anthropic has been running Code Review internally for months. On large pull requests (more than 1,000 lines changed), 84% of reviews surface findings, averaging 7.5 issues. On small pull requests of fewer than 50 lines, the rate drops to 31%, averaging 0.5 issues. Anthropic has found that its engineers mostly agree with what Code Review surfaces, marking less than 1% of findings as incorrect.

