Hi HN,
I'm sharing BAZINGA, a framework that applies professional software engineering practices to AI development.
The observation: AI coding tools generate code without the safeguards we require from human developers. No mandatory code review. No security scanning. No test coverage requirements.
BAZINGA addresses this by coordinating multiple AI agents that follow a professional workflow:
## The Workflow
1. PM analyzes requirements
2. Developer implements + writes tests
3. Security scan runs (mandatory)
4. Lint check runs (mandatory)
5. Tech Lead reviews code (independent reviewer)
6. Only code that passes every gate and the review completes
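In outline, the gate sequence looks like this (a minimal sketch, not BAZINGA's actual implementation; the `Change` type, `run_pipeline`, and the stand-in gates are all hypothetical):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Change:
    diff: str
    # Every gate result is recorded, pass or fail (audit trail).
    log: List[Tuple[str, bool, str]] = field(default_factory=list)

# A gate inspects a change and returns (passed, reason).
Gate = Callable[[Change], Tuple[bool, str]]

def run_pipeline(change: Change, gates: List[Tuple[str, Gate]]) -> bool:
    """Run every gate in order; the change completes only if all pass."""
    for name, gate in gates:
        passed, reason = gate(change)
        change.log.append((name, passed, reason))
        if not passed:
            return False  # mandatory gate: stop at the first failure
    return True

# Stand-in gates for the real security scan / lint / review steps.
gates = [
    ("security", lambda c: ("eval(" not in c.diff, "no eval() calls")),
    ("lint",     lambda c: (len(c.diff) > 0, "non-empty change")),
]
```

The point of the structure is that "completes" is defined by the pipeline, not by any single agent.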
## Key Principles
*Separation of concerns:* Writers don't review their own code. Developer agent writes, Tech Lead agent reviews. Same principle as human teams.
*Mandatory quality gates:* Security scanning, lint checking, and coverage analysis run on every change. Not optional.
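A gate of this kind can be as simple as "the tool's exit code decides" (a sketch, assuming the usual convention that scanners and linters like bandit and ruff exit non-zero when they report findings):

```python
import subprocess

def tool_gate(cmd: list) -> bool:
    """Run a scanner/linter; the gate passes iff it exits 0.

    bandit, ruff, eslint, etc. follow this convention:
    a non-zero exit code means findings were reported.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

# e.g. tool_gate(["ruff", "check", "src/"]) or
#      tool_gate(["bandit", "-r", "src/", "-q"])
```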
*Structured problem-solving:* Complex issues get formal analysis:
- Root Cause Analysis (5 Whys)
- Architectural Decision Records
- Security Issue Triage
- Performance Investigation
*Audit trail:* Every decision logged with reasoning. Full traceability for compliance.
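An append-only JSON-lines log is one simple way to get that traceability (a sketch; the field names are illustrative, not BAZINGA's actual schema):

```python
import json
import time

def log_decision(path: str, agent: str, decision: str, reasoning: str) -> None:
    """Append one decision record per line (JSON Lines): easy to grep and diff."""
    record = {
        "ts": time.time(),
        "agent": agent,
        "decision": decision,
        "reasoning": reasoning,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```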
## What It Catches
- SQL injection, XSS, auth vulnerabilities (via bandit, npm audit, gosec, etc.)
- Code style violations (via ruff, eslint, golangci-lint, etc.)
- Missing test coverage (via pytest-cov, jest, etc.)
- Architectural concerns (via Tech Lead review)
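For example, the classic SQL injection pattern that bandit flags (check B608) versus the parameterized form that passes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

name = "alice'; DROP TABLE users; --"

# Flagged (bandit B608): SQL built by string interpolation invites injection.
# conn.execute(f"SELECT name FROM users WHERE name = '{name}'")

# Passes: a parameterized query treats the input as data, not SQL.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```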
## Quick Start
```
uvx --from git+https://github.com/mehdic/bazinga.git bazinga init my-project
```
MIT licensed. Works with Claude Code.
## Technical Approach
Built on ideas from Google's Agent Development Kit (ADK) and Anthropic's context engineering guidance. Uses role-based separation with six layers of drift prevention to keep each agent in its designated role.
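One cheap layer of drift prevention is a post-hoc output check: the reviewer role must never emit code changes, and the developer role must never self-approve. (Purely illustrative; the markers and role names here are hypothetical, not BAZINGA's actual six layers.)

```python
ROLE_FORBIDDEN = {
    # Each role is barred from actions that belong to another role.
    "tech_lead": ["<file_edit>", "git commit"],   # reviewers don't write code
    "developer": ["APPROVED", "REJECTED"],        # writers don't self-approve
}

def check_drift(role: str, output: str) -> bool:
    """Return True if the agent's output stays inside its role."""
    return not any(marker in output for marker in ROLE_FORBIDDEN.get(role, []))
```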
GitHub:
https://github.com/mehdic/bazinga
Happy to discuss the engineering approach or answer questions about multi-agent coordination.