Over the past few months, we’ve been building Everdone — an AI-powered engineering workflow platform.
We initially launched with:

- CodeDoc (AI-generated code documentation)
- CodeReview (structured issue detection + tracking)
Today we’ve added two more services:

- CodeSecurity — iterative application security review
- CodePerformance — structured performance improvement workflow
Why we built CodeSecurity

Most security tools generate a report and stop there.
In practice, teams:

- Fix a few issues
- Forget the rest
- Don’t re-verify properly
We designed CodeSecurity as an iterative loop instead of a one-off scan:

- Connect GitHub
- Select a PR or branch
- AI reviews for real, exploitable vulnerabilities
- Engineers fix
- Re-run → AI verifies whether issues are actually resolved
Issues are tracked with:

- Severity (High/Medium/Low)
- File + line numbers
- Concrete suggested fixes
- Status workflow (Open → In Progress → Resolved → Closed/Rejected)
- Full verification history (rough sketch after this list)
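To make that concrete, here is a minimal sketch of the shape of one tracked finding and the re-verify pass. All names here are illustrative, not our actual schema or API:

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        OPEN = "open"
        IN_PROGRESS = "in_progress"
        RESOLVED = "resolved"
        CLOSED = "closed"
        REJECTED = "rejected"

    @dataclass
    class Finding:
        severity: str                 # "high" | "medium" | "low"
        file: str
        line: int
        suggestion: str               # concrete suggested fix
        status: Status = Status.OPEN
        history: list = field(default_factory=list)  # one entry per verification run

    def reverify(findings, still_present):
        # Re-run the review; `still_present` is a hypothetical callable
        # that re-inspects one finding and reports whether it remains.
        for f in findings:
            if f.status in (Status.CLOSED, Status.REJECTED):
                continue
            present = still_present(f)
            f.history.append({"still_present": present})
            f.status = Status.OPEN if present else Status.RESOLVED

The point of keeping the full history is that "fixed" is a claim the next run can check, not a checkbox.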
It behaves more like a managed security workflow than a static analyzer.
Why we built CodePerformance

Performance reviews often happen reactively (after something slows down in prod).
CodePerformance focuses on material runtime impact:

- Algorithmic inefficiencies
- N+1 queries (quick example after this list)
- Blocking I/O
- Memory pressure
- Concurrency bottlenecks
- Event-loop blocking (Node), GIL issues (Python), etc.
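As a quick example, the classic N+1 shape is one query for the parent rows plus one extra query per row. The sqlite3 setup here is only to keep the snippet self-contained:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE books (author_id INTEGER, title TEXT);
    """)

    # N+1: one query for the authors, then one more query per author.
    authors = db.execute("SELECT id, name FROM authors").fetchall()
    books_by_author = {
        author_id: db.execute(
            "SELECT title FROM books WHERE author_id = ?", (author_id,)
        ).fetchall()
        for author_id, _name in authors
    }

    # Fix: a single round trip with a join, grouped in memory.
    rows = db.execute(
        "SELECT a.id, b.title FROM authors a "
        "LEFT JOIN books b ON b.author_id = a.id"
    ).fetchall()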
Same loop: Find → Fix → Re-run → Verified.
Current platform

Everdone now includes:

- CodeDoc
- CodeReview
- CodeSecurity
- CodePerformance
Pricing:

- First 200 files free
- $0.05 per file per review (early access pricing)
- Unlimited users
- No contracts
Usage-based only.
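A quick worked example, assuming the 200 free files are simply deducted first:

    FREE_FILES = 200
    PRICE_PER_FILE = 0.05  # USD, early access pricing

    def review_cost(files: int) -> float:
        # Files beyond the free allowance are billed per review.
        return max(0, files - FREE_FILES) * PRICE_PER_FILE

    print(review_cost(1000))  # 800 billable files -> 40.0 (USD)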
We also have live demos on public OSS repos if anyone wants to explore without signing up.
We’re trying to build “Work as a Service” — AI systems that fit into real engineering workflows rather than replacing them or generating static reports.
Would love feedback from other founders or engineering teams.
Happy to answer anything.
— Vinit