Hi HN — we built Revieko for a problem we kept running into in long-lived repos: code review often catches local correctness, but structural drift can still slip through “reasonable” PRs.
Revieko learns a repo-specific baseline from commit history and adds a PR comment with a short risk summary, top hotspots, and per-file actions. The idea is to help reviewers start with the few lines most likely to matter instead of reading everything with equal attention.
It’s meant for cases where tests still pass but structure shifts: boundary drift, hidden coupling, newly introduced state, growing control-flow complexity, or other changes that are abnormal for that repo.
This is not a linter, not a security scanner, and not a generic LLM reviewer. It focuses on structural deviation relative to the repo’s own baseline, and the default mode warns rather than blocks.
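To make "deviation relative to the repo baseline" concrete, here is a toy sketch of the general idea, not Revieko's actual algorithm: track some structural metric per file across history (cyclomatic complexity is used here purely as a stand-in), then score a PR's new value by how far it sits from that file's historical distribution.

```python
import statistics

def deviation_score(history, new_value):
    """z-score of new_value against a file's historical metric values.

    Hypothetical illustration only: any real tool would use richer
    structural signals than a single scalar metric.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (new_value - mean) / stdev

# Toy history: complexity of one file across past commits.
history = [4, 5, 4, 6, 5]
score = deviation_score(history, 12)
print(round(score, 2))  # far above this file's norm -> worth a look
```

A "warn, not block" mode then just means the score is surfaced as a PR comment rather than a failing check.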
Current flow is simple: install the GitHub App, baseline builds automatically, and then every PR gets a focused attention map. There’s also a demo path on the page if you just want to inspect the output quickly.
I’d especially love feedback on two things:
1. Does this problem feel real in teams maintaining long-lived repos?
2. Is the output specific enough to be useful in PR review without becoming noise?