The core idea: neural intelligence (LLMs) should be a capability, not an authority. Symbolic control always governs. Neural inference is optional, validated, and can be disabled gracefully.
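To make the principle concrete, here is a minimal sketch (my own illustration, not the repo's implementation; all names are hypothetical) of neural inference as an optional capability gated by a symbolic validator, with a deterministic fallback when the neural layer is disabled or its output is rejected:

```python
import re

def symbolic_fallback(prompt: str) -> str:
    # Deterministic rule-based answer used whenever neural output
    # is rejected or the neural layer is absent.
    return "UNAVAILABLE"

def validate(output: str) -> bool:
    # Symbolic gate: accept only outputs matching an allowed schema
    # (here, a toy uppercase-token pattern).
    return bool(re.fullmatch(r"[A-Z_]+", output))

def govern(prompt: str, neural=None) -> str:
    # Neural inference is optional; its result never bypasses validation.
    if neural is not None:
        candidate = neural(prompt)
        if validate(candidate):
            return candidate
    return symbolic_fallback(prompt)
```

The key property: removing the `neural` argument degrades behavior but never breaks it, so the symbolic path remains the authority.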
Today I'm sharing Phase 0 – a minimal Python skeleton proving that coordination, state transitions, and audit trails work before any AI is introduced:
- SM_HUB: async message bus between modules
- EL_MEM: SQLite persistence and audit trail
- SM_SYN: explicit state machine (INIT → STABILIZING → INTERACTIVE)
- Neural processing: intentionally absent at this stage
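As a rough sketch of how SM_SYN and EL_MEM might fit together (this is my assumption of the shape, not the repo's code; only the three state names come from the post), an explicit transition table plus a SQLite audit log looks like:

```python
import sqlite3

# Declared transition table: anything not listed is illegal.
TRANSITIONS = {
    "INIT": {"STABILIZING"},
    "STABILIZING": {"INTERACTIVE"},
    "INTERACTIVE": set(),
}

class StateMachine:
    def __init__(self, db: str = ":memory:"):
        self.state = "INIT"
        self.conn = sqlite3.connect(db)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS audit (src TEXT, dst TEXT)"
        )

    def transition(self, dst: str) -> None:
        # Reject transitions outside the declared table.
        if dst not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {dst}")
        # Persist the transition before committing it in memory,
        # so the audit trail never lags the live state.
        self.conn.execute("INSERT INTO audit VALUES (?, ?)", (self.state, dst))
        self.conn.commit()
        self.state = dst

    def history(self):
        return list(self.conn.execute("SELECT src, dst FROM audit"))
```

Because every transition is both whitelisted and logged, replaying the `audit` table reconstructs exactly how the system reached its current state.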
The full architecture spec (1200+ lines) covers SLOs, lock models, degradation policies, circuit breakers, and operating cycles.
Repo: https://github.com/Jmc-arch/elia-governed-hybrid-architectur...
Looking for feedback on:
- Is the governance model viable for production systems?
- Biggest architectural blind spots?
- Best first domain to prototype: medical, monitoring, agents?
Happy to answer any questions.