This is a safety-first causal engine I've been working on.
The goal is not to discover more causal relations, but to prevent AI systems from acting when causal signals are unstable, biased, or unsafe.
The engine:

- runs multi-pass causal analysis
- allows safe abstention (no insight is a valid outcome)
- includes stability and release-gate tests
- is designed to block decisions by default unless causality is robust (see the sketch below)
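To make the block-by-default gating concrete, here is a minimal sketch of what a release gate like this could look like. Everything in it is illustrative and assumed, not the engine's actual API: the names (`GateDecision`, `bootstrap_effects`, `release_gate`), the stability criteria (sign consistency and relative spread across bootstrap passes), and the thresholds are all placeholders.

```python
# Hypothetical sketch of a block-by-default causal release gate.
# Names, criteria, and thresholds are assumptions, not the engine's real API.
from dataclasses import dataclass
import numpy as np

@dataclass
class GateDecision:
    allowed: bool
    reason: str

def bootstrap_effects(data: np.ndarray, estimator, n_passes: int = 50,
                      seed: int = 0) -> np.ndarray:
    """Multi-pass analysis: re-estimate the causal effect on
    bootstrap resamples of the data."""
    rng = np.random.default_rng(seed)
    n = len(data)
    effects = []
    for _ in range(n_passes):
        sample = data[rng.integers(0, n, size=n)]
        effects.append(estimator(sample))
    return np.asarray(effects)

def release_gate(effects: np.ndarray,
                 min_sign_consistency: float = 0.95,
                 max_rel_spread: float = 0.25) -> GateDecision:
    """Block by default; allow only if the effect is stable across passes."""
    mean = effects.mean()
    if mean == 0.0:
        return GateDecision(False, "abstain: no measurable effect")
    # Fraction of passes agreeing with the mean effect's sign.
    sign_consistency = float(np.mean(np.sign(effects) == np.sign(mean)))
    # Spread of the estimates relative to the mean effect size.
    rel_spread = float(effects.std() / abs(mean))
    if sign_consistency < min_sign_consistency:
        return GateDecision(False, f"abstain: sign flips across passes "
                                   f"({sign_consistency:.0%} agreement)")
    if rel_spread > max_rel_spread:
        return GateDecision(False, f"abstain: unstable magnitude "
                                   f"(relative spread {rel_spread:.2f})")
    return GateDecision(True, "causal signal stable across passes")

# Toy usage: a noisy effect estimate that may or may not pass the gate.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.5, scale=2.0, size=200)
print(release_gate(bootstrap_effects(data, estimator=np.mean)))
```

The real criteria are presumably richer (sensitivity to confounders, distribution shift, and so on), but the shape is the point: every path that does not affirmatively pass returns an abstention, so "no insight" is the default outcome rather than a failure mode.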
I’m sharing this mainly to get technical feedback on:

- the safety assumptions
- the stability criteria
- whether this approach makes sense for AI agents / enterprise systems
Happy to answer any technical questions.