I built a minimal demo of runtime epistemic governance for LLMs.
The script calls an upstream model, then applies an admissibility layer before returning the answer.
For high-risk actionable claims (e.g., pediatric drug dosages), the layer refuses to return the output and logs:
decision (pass_through: false)
rule triggered
divergence from baseline
prompt fingerprint (stable hash)
This is not prompt engineering; it is post-generation enforcement at inference time.
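For concreteness, here is a stripped-down sketch of the shape of the wrapper. The names here (call_upstream, call_baseline, RULES, the Jaccard-style divergence) are placeholders for illustration, not the repo's actual API; the real rule set and divergence metric live in the repo.

```python
import hashlib
import json
import re

# Illustrative patterns only; the actual admissibility rules are in the repo.
DOSE = re.compile(r"\b\d+(\.\d+)?\s*(mg|mcg|ml)\b", re.IGNORECASE)
PEDIATRIC = re.compile(r"\b(child|children|pediatric|infant)\b", re.IGNORECASE)

RULES = [
    # Rule fires when a numeric dose and a pediatric context co-occur in the output.
    ("pediatric_dosage", lambda text: bool(DOSE.search(text) and PEDIATRIC.search(text))),
]

def fingerprint(prompt: str) -> str:
    """Stable hash of the prompt for the audit log (the raw prompt is not stored)."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]

def divergence(candidate: str, baseline: str) -> float:
    """Toy divergence: Jaccard distance between the token sets of candidate and baseline."""
    a, b = set(candidate.lower().split()), set(baseline.lower().split())
    return 1.0 - len(a & b) / max(len(a | b), 1)

def govern(prompt: str, call_upstream, call_baseline):
    """Call the upstream model, then decide whether its output may pass through."""
    candidate = call_upstream(prompt)   # the answer the user would otherwise see
    baseline = call_baseline(prompt)    # reference completion used for divergence

    for rule_id, triggered in RULES:
        if triggered(candidate):
            record = {
                "pass_through": False,
                "rule": rule_id,
                "divergence": round(divergence(candidate, baseline), 3),
                "prompt_fingerprint": fingerprint(prompt),
            }
            print(json.dumps(record))   # stands in for the structured audit log
            return "Refused: high-risk actionable claim withheld.", record

    record = {"pass_through": True, "prompt_fingerprint": fingerprint(prompt)}
    print(json.dumps(record))
    return candidate, record

if __name__ == "__main__":
    # Stubbed upstream call; a real deployment would call the model API here.
    stub = lambda p: "Give the child 250 mg of amoxicillin every 8 hours."
    answer, log = govern("pediatric amoxicillin dose?", stub, lambda p: "See a pediatrician.")
    print(answer)   # refusal text; the log record shows pass_through: false
```

The point of the structure is that the check sees the model's actual output, not the prompt, so the decision and its audit record are produced after generation on every call.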
Repo:
https://github.com/milarien/aurora-governor-demo
Example refusal run:
https://github.com/milarien/aurora-governor-demo/tree/main/d...
I’m interested in technical critique: does this qualify as enforceable runtime governance, or is it just guardrail filtering?