We kept running into the same problem: when an AI system makes a consequential decision, there's no neutral way to verify what it actually saw and did. Logs are editable. Screenshots lie. "Trust us" doesn't hold up in regulated industries or litigation.
So we built SENTINEL. Three products, one protocol:
*SENTINEL Score* -- unjailbreakable safety layer. Text in, float out (0.0-1.0). Nothing else crosses the wire. No tokens, no text, no PHI. Can't be jailbroken because there's nothing to jailbreak. HIPAA environments, military, credit bureaus -- they need a number, not a chatbot.
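Rough sketch of the caller's side of that contract; the endpoint path and response field below are placeholders for illustration, not the documented API:

```python
import requests

def sentinel_score(text: str, api_key: str) -> float:
    # Hypothetical endpoint and payload shape. The interface is the point:
    # text goes in, one float comes back -- no tokens, no text, no PHI.
    resp = requests.post(
        "https://sentinel.biotwin.io/v1/score",  # illustrative path
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    score = float(resp.json()["score"])  # e.g. {"score": 0.03}
    assert 0.0 <= score <= 1.0
    return score
```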
*SENTINEL Proof* -- neutral ground for AI disputes. Every evaluation ZK-proofed, timestamped, committed on Base mainnet. Neither vendor nor client controls the record. When it goes to court, the transaction hash is the evidence. AI companies could build this themselves -- they won't. "You're storing our conversations?" is a PR disaster. A neutral third party solves that.
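Roughly what dispute-time verification looks like from either side. The commitment scheme here (keccak256 of the record, carried in calldata) is an assumption for illustration, and checking the ZK proof itself is a separate step:

```python
from web3 import Web3

# Public Base mainnet RPC -- any node works, which is the point.
w3 = Web3(Web3.HTTPProvider("https://mainnet.base.org"))

def matches_onchain(record_bytes: bytes, tx_hash: str) -> bool:
    # Recompute the commitment locally and look for it in the
    # transaction's calldata. Hypothetical scheme; the real contract
    # layout may differ.
    tx = w3.eth.get_transaction(tx_hash)
    digest = Web3.keccak(record_bytes)  # commitment = keccak256(record)
    return digest in bytes(tx["input"])
```

If either party alters the record after the fact, the recomputed digest stops matching the one on-chain -- that's what makes the transaction hash usable as evidence.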
*AEGIS* -- AI security that closes the loop. 72B-parameter model. Finds vulnerabilities, writes the patch. Not a report. Scan, identify, patch, verify.
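The shape of that loop, with stubs standing in for the model; every name here is illustrative, not the AEGIS interface:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    issue: str

def scan(repo: str) -> list[Finding]:
    return [Finding("app/auth.py", "hardcoded credential")]  # stub scanner

def write_patch(finding: Finding) -> str:
    # Stub: the model drafts a concrete diff, not a writeup.
    return f"--- a/{finding.file}\n+++ b/{finding.file}\n@@ ... @@"

def verified(finding: Finding, patch: str) -> bool:
    return bool(patch)  # stub: re-scan/re-test with the patch applied

def close_the_loop(repo: str) -> list[str]:
    shipped = []
    for finding in scan(repo):        # scan
        patch = write_patch(finding)  # identify, write the fix
        if verified(finding, patch):  # verify the fix actually holds
            shipped.append(patch)     # only verified patches ship
    return shipped
```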
8 contracts live on Base mainnet. Free API key at sentinel.biotwin.io -- no credit card, no sales call.
We use SENTINEL Proof inside Bio-Twin (AI drug safety pre-screening) to make AI decisions defensible for pharma labs and attorneys. Last week's $2.75M Disney settlement with the California AG is a concrete example of why this matters.
Happy to talk on-chain architecture, zkML proof generation at scale, or EU AI Act compliance.