AI coding platforms are growing rapidly. Codex, Claude, and other tools are generating code faster than ever, but one bottleneck remains: reviewing the large volume of machine-generated code. To address this, Anthropic, the leading AI company, has launched Code Review, a multi-agent system integrated into its coding platform, Claude Code. It's an AI reviewer that catches bugs before they reach production.
Rise of Vibe Coding and the Need for an AI Code Review Platform
Vibe coding, the now-common practice of generating code from natural language prompts, has accelerated development, but it has also introduced security risks, bugs, and poorly understood code. The main aim of the Code Review tool is to help development teams manage the massive volume of software generated with AI coding assistants; traditional manual review can no longer keep pace.
Cat Wu, Anthropic's Head of Product, explained the motivation behind the tool: Claude Code is seeing a lot of growth, particularly in the enterprise, and one question the company keeps hearing from leaders is how to make sure the many pull requests Claude Code produces are reviewed efficiently.
The feature is targeted mainly at large-scale organizations, such as Salesforce, Uber, and Accenture, that already use Claude Code.
How Does Anthropic's Code Review Tool Work?
The Code Review tool automatically analyzes pull requests: the code changes developers submit for approval before they are merged into the codebase. It integrates with GitHub, picks up new pull requests automatically, leaves comments that flag potential errors, and suggests fixes.
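To make the GitHub integration concrete, here is a minimal sketch of how a review bot might construct a comment for GitHub's public REST API endpoint for pull request review comments (`POST /repos/{owner}/{repo}/pulls/{number}/comments`). The endpoint shape is GitHub's; the function, repository, and finding below are hypothetical and do not come from Anthropic's implementation:

```python
import json

# Hypothetical helper: build the JSON payload for a pull-request review
# comment, as accepted by GitHub's REST API. No network call is made here.
def build_review_comment(commit_sha: str, path: str, line: int, body: str) -> dict:
    return {
        "commit_id": commit_sha,   # the commit the comment applies to
        "path": path,              # file within the pull request's diff
        "line": line,              # line in the diff to attach the comment to
        "side": "RIGHT",           # comment on the new version of the file
        "body": body,              # the reviewer's message
    }

payload = build_review_comment(
    "abc123", "src/app.py", 42,
    "Critical: unsanitized input reaches a SQL query.",
)
print(json.dumps(payload, indent=2))
```

In practice the payload would be sent with an authenticated HTTP POST; building it separately, as above, keeps the formatting logic easy to test.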
The following are the color indicators:
- Red: Critical Issues
- Yellow: Potential Concerns
- Purple: Issues related to historical or existing code
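The severity tiers above could be modeled with a simple data structure. This is a hypothetical sketch of how such findings might be represented, not Anthropic's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    # Hypothetical mapping of the color indicators to severity levels
    CRITICAL = "red"      # critical issues
    POTENTIAL = "yellow"  # potential concerns
    HISTORICAL = "purple" # issues in historical or existing code

@dataclass(frozen=True)
class Finding:
    path: str
    line: int
    severity: Severity
    message: str

finding = Finding("app.py", 42, Severity.CRITICAL,
                  "SQL query built from unsanitized input")
print(finding.severity.value)  # -> "red"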
Anthropic's new tool uses multiple agents that analyze code simultaneously from different perspectives. A coordinating agent then combines the results, removes duplicate findings, and focuses on the most critical issues for developers to fix.
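The coordination step described above can be sketched in miniature: several specialist "agents" review the same diff, and a coordinator merges their findings, drops duplicates, and sorts the most critical issues first. All function names and findings here are invented for illustration; Anthropic has not published its implementation:

```python
from typing import Callable

# Hypothetical agents: each reviews the same diff from one perspective
# and returns (location, severity, message) tuples, 1 = most severe.
def security_agent(diff: str) -> list[tuple[str, int, str]]:
    return [("app.py:10", 1, "possible SQL injection")]

def correctness_agent(diff: str) -> list[tuple[str, int, str]]:
    return [("app.py:10", 1, "possible SQL injection"),   # duplicate finding
            ("app.py:25", 2, "off-by-one in loop bound")]

def coordinate(diff: str,
               agents: list[Callable[[str], list[tuple[str, int, str]]]]
               ) -> list[tuple[str, int, str]]:
    """Merge all agents' results, dedupe identical findings,
    and surface the most critical issues first."""
    merged = {f for agent in agents for f in agent(diff)}  # set removes duplicates
    return sorted(merged, key=lambda f: f[1])              # lower number = more severe

findings = coordinate("...", [security_agent, correctness_agent])
# The duplicate SQL-injection finding is reported only once.
```

Running agents in parallel and reconciling their outputs this way trades extra compute for broader coverage, which is consistent with the per-review pricing discussed below.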
Even with this degree of automation, Anthropic emphasizes that human checks remain the final step. Engineering teams can configure additional checks, and developers still approve or reject pull requests after reviewing the AI's suggestions.
How About Availability and Pricing?
Code Review is currently available as a research preview for Claude Code Team and Enterprise customers. According to Anthropic, each review will cost roughly $15 to $25, depending on code complexity and the amount of computing power the multi-agent system requires.
AI is changing the software development process entirely. From simply writing code, it is now expanding into reviewing code for clarity, accuracy, and productivity. Reviewing the massive volume of AI-generated code no longer has to be a bottleneck!
At WisdomPlexus, we keep you informed about the latest news across the tech landscape. Visit our website now!