It observes the strings your code emits at runtime (logs, summaries, model output, metrics text) and flags cases like:
• “Q3 revenue will reach $10M”
• “The model predicts churn will drop to 3%”
It doesn’t analyze source code, judge correctness, or block execution. It simply records where in your code those outputs are emitted, so developers can review them before shipping.
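To make that concrete, here is a rough sketch of the kind of check involved. This is not the actual @assertion-runtime API; the names (`observe`, `Finding`, `PREDICTIVE_PATTERNS`) and the console.log wrapping are my illustration of the mechanism the tool describes: scan emitted strings against a few forward-looking patterns and record the call site.

```ts
// Illustrative sketch only — not the real @assertion-runtime API.
// Idea: scan emitted strings for forward-looking claims and record
// where in the code they were emitted.

const PREDICTIVE_PATTERNS: RegExp[] = [
  /\bwill (reach|drop|grow|hit)\b/i,
  /\bpredicts?\b/i,
  /\bforecasts?\b/i,
];

interface Finding {
  text: string;    // the emitted string that matched
  pattern: string; // which pattern matched
  stack?: string;  // call site, so the emission can be reviewed later
}

const findings: Finding[] = [];

function observe(text: string): void {
  for (const pattern of PREDICTIVE_PATTERNS) {
    if (pattern.test(text)) {
      findings.push({ text, pattern: String(pattern), stack: new Error().stack });
      return; // record at most one finding per emission
    }
  }
}

// Example: wrap console.log so every logged string is observed.
const originalLog = console.log;
console.log = (...args: unknown[]) => {
  observe(args.map(String).join(" "));
  originalLog(...args);
};

console.log("Q3 revenue will reach $10M"); // recorded as a finding
```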
I kept the scope deliberately narrow:

• no static analysis
• no AI inference
• no policy engine
• no blocking
• no required backend
It behaves like a linter for runtime output: quiet when nothing looks risky, and limited to a short, bounded report when something does.
There are Node.js and Python SDKs plus a CLI. You run it locally or in CI; it emits nothing unless it finds something worth looking at.
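Continuing the sketch above, the “emits nothing unless it finds something” behavior would look roughly like this. The `report` function and the non-zero exit-code convention are my assumptions for illustration, not documented behavior; check the package docs for the actual interface.

```ts
// Hypothetical end-of-run report: print nothing when clean,
// dump the findings and fail the process when something was flagged.
function report(): void {
  if (findings.length === 0) return; // quiet path: no output at all
  for (const f of findings) {
    console.error(`[assertion-runtime] flagged: "${f.text}" (matched ${f.pattern})`);
  }
  process.exitCode = 1; // non-zero so CI can surface it
}
```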
npm (Node): https://www.npmjs.com/package/@assertion-runtime/sdk-node
Curious whether others have hit this problem, or if this feels unnecessary.