I built Magpie because I was tired of AI code reviewers being too "nice."
Most AI tools just say "LGTM" or nitpick formatting. To fix this, Magpie uses an adversarial approach: it spawns two different AI agents (e.g., a Security Expert and a Performance Critic) and forces them to debate your changes.
They don't just list bugs; each agent has to defend its findings against the other's attacks until they reach a consensus. A hallucinated issue rarely survives cross-examination, which cuts down on both phantom bugs and lazy rubber-stamp approvals.
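Conceptually, the debate is a bounded loop: each agent sees the diff plus its opponent's latest argument, and the loop ends when both commit to the same verdict. Here's a minimal TypeScript sketch of that idea; the types and names are illustrative, not Magpie's actual internals.

```typescript
type Verdict = "approve" | "reject" | "undecided";

interface Agent {
  name: string; // e.g. "Security Expert", "Performance Critic"
  // Given the diff and the opponent's latest argument, return a rebuttal and a verdict.
  respond(
    diff: string,
    opposingArgument: string | null
  ): Promise<{ argument: string; verdict: Verdict }>;
}

async function debate(
  diff: string,
  a: Agent,
  b: Agent,
  maxRounds = 5
): Promise<Verdict> {
  let lastB: string | null = null;
  for (let round = 0; round < maxRounds; round++) {
    const turnA = await a.respond(diff, lastB); // A argues first, seeing B's last rebuttal
    const turnB = await b.respond(diff, turnA.argument); // B rebuts A's fresh argument
    lastB = turnB.argument;
    // Consensus: both agents land on the same non-"undecided" verdict.
    if (turnA.verdict !== "undecided" && turnA.verdict === turnB.verdict) {
      return turnA.verdict;
    }
  }
  return "undecided"; // round budget exhausted: show the transcript rather than fake a verdict
}
```

The round cap is the design choice that matters: without it, two stubborn models really can argue forever, and when it's hit, handing a human the full transcript beats inventing a conclusion.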
Features:
Adversarial Debate: Watch Claude and GPT-4o fight over your code.
Local & CI: Works on local files or GitHub PRs.
Model Agnostic: Supports OpenAI, Anthropic, and Gemini.
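To make the model-agnostic part concrete, here is a hypothetical configuration showing how a reviewer pairing might be wired up. The field names are assumptions for illustration, not Magpie's real config schema.

```typescript
// Illustrative config only: field names are assumptions, not Magpie's schema.
const reviewConfig = {
  agents: [
    { role: "Security Expert", provider: "anthropic", model: "claude-3-5-sonnet" },
    { role: "Performance Critic", provider: "openai", model: "gpt-4o" },
  ],
  // Point at local files for pre-commit runs, or at a GitHub PR in CI.
  target: { kind: "local", path: "./src" },
  maxRounds: 5,
};
```

The intent is that swapping vendors only means changing a provider/model pair, since every backend would sit behind the same agent interface sketched above.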
The Experiment: This is also an experiment in "coding without coding." I didn't manually write a single line of TypeScript for this project; the entire repo was built with Claude Code.
I'd love to hear your feedback—especially if you manage to make the models get into an infinite argument.