The problem I kept hitting in production systems: AI pipelines are observable, but not reproducible. After an incident, models, routing logic, retries, or fallbacks may have changed — logs alone don’t let you replay what actually happened.
v1.3.0 introduces a runtime determinism core:
Write-ahead log (append-only JSONL) written before side effects (first sketch below)
Crash-safe recovery and deterministic replay (fails loud on divergence; second sketch below)
Runtime execution contracts (timeouts, retries, cost ceilings)
Side-effect classification to prevent unsafe retries or fallbacks (third sketch below)
CLI-first inspection (list / show / trace / replay / diff)
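To make the write-ahead idea concrete, here's a minimal Python sketch of logging an attempt before the side effect runs. It is not IntentusNet's actual API; `wal_append`, the record fields, and `WAL_PATH` are all illustrative:

    import json
    import os
    import time
    import uuid

    WAL_PATH = "wal.jsonl"  # hypothetical location, not the project's real layout

    def wal_append(record: dict) -> None:
        """Append one JSON line and fsync so the record survives a crash."""
        with open(WAL_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record, sort_keys=True) + "\n")
            f.flush()
            os.fsync(f.fileno())  # durable before we touch the outside world

    def call_tool(tool, args: dict):
        entry_id = str(uuid.uuid4())
        # Record the intent first: if we crash after this line, recovery sees
        # an attempt with no matching result and can decide what to do.
        wal_append({"id": entry_id, "event": "attempt", "tool": tool.__name__,
                    "args": args, "ts": time.time()})
        result = tool(**args)  # the side effect itself
        # Record the outcome so replay can serve it without re-executing.
        wal_append({"id": entry_id, "event": "result", "value": result,
                    "ts": time.time()})
        return result

Writing the attempt before the effect is what makes recovery possible: a result line can never appear without its attempt line, so every partial execution is visible in the log.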
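Second, one way to read "fails loud on divergence": during replay, each recorded attempt is compared against what the pipeline tries to do now, and any mismatch raises instead of silently re-executing. Again a sketch under assumed names (`ReplayDivergence`, the log format from the previous snippet), not the shipped implementation:

    import json

    class ReplayDivergence(Exception):
        """Raised when the live pipeline disagrees with the recorded log."""

    def load_wal(path: str) -> list[dict]:
        with open(path, encoding="utf-8") as f:
            return [json.loads(line) for line in f if line.strip()]

    class Replayer:
        def __init__(self, records: list[dict]):
            self.attempts = [r for r in records if r["event"] == "attempt"]
            self.results = {r["id"]: r["value"]
                            for r in records if r["event"] == "result"}
            self.cursor = 0

        def call_tool(self, tool_name: str, args: dict):
            recorded = self.attempts[self.cursor]
            self.cursor += 1
            # Fail loud: same tool, same args, same order, or stop.
            if recorded["tool"] != tool_name or recorded["args"] != args:
                raise ReplayDivergence(
                    f"step {self.cursor}: log has "
                    f"{recorded['tool']}({recorded['args']}), pipeline now "
                    f"wants {tool_name}({args})")
            # Serve the logged result; no side effect is re-executed.
            return self.results[recorded["id"]]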
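Third, how classification can gate retries under a contract: only effects declared pure or idempotent are eligible for automatic retry, and a cost ceiling bounds total spend across attempts. The `Effect` enum and `Contract` fields here are my guesses at the shape, not the project's actual types:

    import enum
    import time
    from dataclasses import dataclass

    class Effect(enum.Enum):
        PURE = "pure"                      # no side effect; always retryable
        IDEMPOTENT = "idempotent"          # repeating converges to one state
        NON_IDEMPOTENT = "non_idempotent"  # e.g. "send email", "charge card"

    @dataclass
    class Contract:
        timeout_s: float = 10.0
        max_retries: int = 3
        max_cost_usd: float = 0.50  # ceiling across all attempts

    def run_with_contract(tool, args: dict, effect: Effect, contract: Contract,
                          cost_per_call_usd: float = 0.01):
        # Non-idempotent effects get exactly one attempt; retrying them risks
        # duplicate writes, so the failure is surfaced instead.
        retries = 0 if effect is Effect.NON_IDEMPOTENT else contract.max_retries
        spent = 0.0
        for attempt in range(retries + 1):
            if spent + cost_per_call_usd > contract.max_cost_usd:
                raise RuntimeError("cost ceiling reached; refusing another attempt")
            spent += cost_per_call_usd
            try:
                return tool(**args)  # a real version would also enforce timeout_s
            except Exception:
                if attempt == retries:
                    raise
                time.sleep(2 ** attempt)  # simple backoff between safe retries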
It’s not a planner or agent framework, and not a replacement for MCP — it focuses purely on execution semantics around tools (including MCP-style tools).
Quick try (run from repo root):
git clone https://github.com/Balchandar/intentusnet
cd intentusnet
pip install -e .
python -m examples.deterministic_routing_demo.demo --mode with
python -m examples.deterministic_routing_demo.demo --mode mcp
Docs (architecture, guarantees, demos): https://intentusnet.com
MIT licensed, open source: https://github.com/Balchandar/intentusnet
I’d really value feedback from people building real systems:
What guarantees do you expect from deterministic replay in practice?
How do you handle retries and side effects safely in AI pipelines?