The problem it solves: AI agents are great at execution. Give them a spec, they ship code. But nobody checks whether the spec should exist. I spent 15 years in enterprise consulting watching teams build the wrong thing, and AI tools have made that failure mode faster and cheaper to reach - which means more of it, not less.
nSENS sits between "I have an idea" and "Let's build it." It has four main components:
1. Phase-gated validation (P0-P5) with hard kill gates. If an idea fails a gate, it dies. No sunk-cost negotiation.
2. Adversarial review using four independent personas that attack your idea from different angles (business viability, evidence quality, complexity, problem intensity). Each produces numerical impact scores. In the test suite, a "Smart Fleet Energy Optimizer" enters at 0.87 confidence and exits at 0.43 with a DON'T BUILD recommendation: 11 issues found in under a minute.
3. Cognitive bias detection - 51 biases cataloged with pattern-matching rules. The system scans reasoning for indicators of biased thinking (confirmation bias, automation bias, sunk cost, anchoring, etc.) and flags them before the decision gets made.
4. A Prolog digital twin - this is the weird part. The framework includes a Prolog-based model that catches behavioral anti-patterns. Originally built to compensate for my own ADHD executive dysfunction (catching "starting project #4 when 3 are active" or "building when you should be selling"), but the pattern generalizes to any decision-maker profile. Prolog because formal logic doesn't hallucinate. When your validation layer is an LLM evaluating an LLM, you've added a confidence score to a confidence score.
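To make point 1 concrete, here's a minimal sketch of the kill-gate mechanics. All names (phases, check signatures, verdict strings) are hypothetical, not the framework's actual API; the point is the shape: a single gate failure terminates the run with no override path.

```python
# Hypothetical sketch of hard kill gates -- phase names and checks are
# illustrative, not nSENS's real API.
PHASES = ["P0", "P1", "P2", "P3", "P4", "P5"]

def run_validation(idea, gate_checks):
    """Run phases in order; one gate failure kills the idea outright."""
    for phase in PHASES:
        passed, reason = gate_checks[phase](idea)
        if not passed:
            # Hard kill: no appeal, no sunk-cost negotiation.
            return {"verdict": "KILLED", "phase": phase, "reason": reason}
    return {"verdict": "PROCEED", "phase": "P5", "reason": "all gates passed"}

# Toy gate: P2 requires at least 3 pieces of demand evidence.
checks = {p: (lambda idea: (True, "ok")) for p in PHASES}
checks["P2"] = lambda idea: (
    len(idea.get("evidence", [])) >= 3,
    "insufficient demand evidence",
)

print(run_validation({"evidence": ["one interview"]}, checks))
# -> {'verdict': 'KILLED', 'phase': 'P2', 'reason': 'insufficient demand evidence'}
```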
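For point 2, one way the persona scoring could work (the persona names match the angles listed above, but the discounting math here is invented, not the framework's actual formula):

```python
# Illustrative adversarial-review step: each persona contributes issues with
# numerical impact scores, and confidence is discounted accordingly.
# The multiplicative scheme is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class Issue:
    persona: str       # which reviewer raised it
    description: str
    impact: float      # 0..1, how much this issue undercuts the idea

def adversarial_review(confidence, issues):
    """Discount entry confidence multiplicatively by each issue's impact."""
    for issue in issues:
        confidence *= (1.0 - issue.impact)
    return round(confidence, 2)

issues = [
    Issue("evidence_quality", "no customer interviews", 0.25),
    Issue("business_viability", "unclear buyer", 0.20),
    Issue("complexity", "hardware dependency", 0.15),
]
print(adversarial_review(0.87, issues))  # confidence drops as issues accumulate
```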
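For point 3, a sketch of the pattern-matching shape. The real catalog lives in YAML; these indicator phrases and rule names are made up for illustration:

```python
# Hypothetical bias-detection rules: each bias maps to regex indicators
# scanned against the reasoning text. Phrases here are invented examples.
import re

BIAS_RULES = {
    "sunk_cost": [r"already (spent|invested|built)", r"can'?t stop now"],
    "confirmation_bias": [r"as expected", r"proves (my|our) point"],
    "automation_bias": [r"the (model|ai) says", r"the tool recommended"],
}

def detect_biases(reasoning_text):
    """Return names of biases whose indicator patterns appear in the text."""
    lowered = reasoning_text.lower()
    return [
        bias
        for bias, patterns in BIAS_RULES.items()
        if any(re.search(p, lowered) for p in patterns)
    ]

print(detect_biases("We already invested six months, and the model says it will work."))
# -> ['sunk_cost', 'automation_bias']
```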
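And for point 4: the framework expresses these anti-patterns as Prolog rules, but a Python paraphrase (with hypothetical names) shows the deterministic shape — facts plus a rule that either fires or doesn't, with no LLM in the loop:

```python
# Python paraphrase of one behavioral anti-pattern rule of the kind the
# Prolog twin encodes. Function name and limit are illustrative.
def antipattern_too_many_projects(active_projects, new_project, limit=3):
    """Fires when starting another project while `limit` are already active."""
    if len(active_projects) >= limit:
        return (f"BLOCKED: starting '{new_project}' with "
                f"{len(active_projects)} projects already active")
    return None  # rule doesn't fire

print(antipattern_too_many_projects(
    ["cli-tool", "saas-mvp", "blog-engine"], "project-4"))
# -> BLOCKED: starting 'project-4' with 3 projects already active
```

In the actual framework this lives in Prolog so the rule either proves or fails; there's no probabilistic middle ground to hallucinate in.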
Everything is in Python and YAML. The bias catalog and persona definitions are in YAML so you can fork and adapt them for your own domain without touching the engine. Decision audit trail in JSONL. MCP server integration for Claude-native usage.
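A sketch of what the JSONL audit trail can look like — one decision per line, append-only. The field names here are assumptions, not the framework's actual schema:

```python
# Hedged sketch of a JSONL decision audit log; record fields are invented.
import json
import io
from datetime import datetime, timezone

def append_decision(stream, record):
    """Write one decision as a single JSON line (append-only audit log)."""
    stream.write(json.dumps(record) + "\n")

log = io.StringIO()  # stand-in for a file opened in append mode
append_decision(log, {
    "ts": datetime.now(timezone.utc).isoformat(),
    "idea": "smart-fleet-energy-optimizer",
    "phase": "P2",
    "verdict": "DONT_BUILD",
    "confidence": 0.43,
})
lines = log.getvalue().splitlines()
print(json.loads(lines[0])["verdict"])  # each line round-trips independently
# -> DONT_BUILD
```

JSONL keeps the trail greppable and streamable: tools can tail it and parse each line on its own without loading the whole history.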
Happy to answer questions about the Prolog angle, bias catalog, or anything else.