Unlike typical SaaS bots, AI Review doesn’t send your code anywhere. It runs entirely within your infrastructure, using whichever LLM or VCS you prefer.
Works out of the box with:
- Any LLM provider: OpenAI, Claude, Gemini, Ollama, OpenRouter, or your own inference server
- Any VCS: GitHub, GitLab, Bitbucket, Gitea
- Any environment: Docker, GitHub Actions, GitLab CI, Jenkins, or a local machine
Setup takes 15–30 minutes. Once running, it automatically analyzes diffs, posts inline comments and summaries, and even generates AI-powered replies inside your PRs/MRs.
Example GitLab CI job:

```yaml
ai-review:
  image: nikitafilonov/ai-review:latest
  script:
    - ai-review run
  variables:
    LLM__PROVIDER: "OPENROUTER"
    LLM__META__MODEL: "anthropic/claude-3.5-sonnet"
    VCS__PROVIDER: "GITLAB"
    VCS__PIPELINE__MERGE_REQUEST_ID: "$CI_MERGE_REQUEST_IID"
```
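Since configuration is driven by environment variables, the same setup should carry over to GitHub Actions. A minimal sketch; the triggers and container usage are standard, but the exact variable name for the pull-request number is my assumption (check the docs for the real one):

```yaml
# .github/workflows/ai-review.yml
name: AI Review
on:
  pull_request:

jobs:
  ai-review:
    runs-on: ubuntu-latest
    container: nikitafilonov/ai-review:latest
    steps:
      - run: ai-review run
        env:
          LLM__PROVIDER: "OPENROUTER"
          LLM__META__MODEL: "anthropic/claude-3.5-sonnet"
          VCS__PROVIDER: "GITHUB"
          # Hypothetical variable name; consult the docs for the actual one
          VCS__PIPELINE__MERGE_REQUEST_ID: ${{ github.event.pull_request.number }}
```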
It’s built around modular prompts and review contracts — you can tune depth, style, or rules for each project.
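To give a feel for what "tuning depth, style, or rules" could look like, here is a purely illustrative per-project config. Every key and value below is hypothetical (I have not confirmed the actual schema; the real format is in the docs):

```yaml
# Hypothetical per-project review config; real schema lives in the docs
review:
  depth: "thorough"      # e.g. quick | standard | thorough
  style: "concise"       # tone of the review comments
  rules:
    - "Flag new public functions that lack tests"
    - "Prefer early returns over deeply nested conditionals"
```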
In short, AI Review is a plug-and-play AI reviewer for real-world teams — no tokens, no vendor lock-in, no data leaves your infra.
- GitHub: https://github.com/Nikita-Filonov/ai-review - Docs & examples: https://github.com/Nikita-Filonov/ai-review/tree/main/docs - Feedback: https://t.me/nikitafilonov
Feedback welcome — I’d love to hear how others handle AI code review in enterprise CI/CD setups.