Most AI governance discussion is policy and process. This is an
architectural pattern: the database owns the process, the LLM is
a stateless semantic call, and compliance falls out as a side effect
— not extra work.
The core: an orchestrator (a state machine, not an LLM) reads the
current step from DB config, assembles the prompt, executes one
constrained LLM call, stores the full input/output, and advances.
The LLM never decides what happens next. The DB does.
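A minimal sketch of that loop, assuming a simple `steps`/`runs`/`calls` schema and a stubbed `call_llm` helper (all names are illustrative, not from the repo):

```python
import json
import sqlite3

def call_llm(prompt: str) -> str:
    # Stand-in for any provider call; the orchestrator only sees text in, text out.
    return json.dumps({"answer": "ok"})

def run_step(db: sqlite3.Connection, run_id: int) -> bool:
    """One governed step: read config, make one LLM call, store I/O, advance."""
    row = db.execute(
        "SELECT r.current_step, s.prompt_template FROM runs r "
        "JOIN steps s ON s.id = r.current_step WHERE r.id = ?",
        (run_id,),
    ).fetchone()
    if row is None:
        return False                      # run is finished
    step_id, template = row
    output = call_llm(template)           # the LLM never picks the next step
    db.execute(                           # full input/output stored for audit
        "INSERT INTO calls (run_id, step_id, prompt, output) VALUES (?, ?, ?, ?)",
        (run_id, step_id, template, output),
    )
    db.execute(                           # advance: next step comes from DB config
        "UPDATE runs SET current_step ="
        " (SELECT next_step FROM steps WHERE id = ?) WHERE id = ?",
        (step_id, run_id),
    )
    db.commit()
    return True

# Demo: a two-step process defined entirely in DB rows.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE steps (id INTEGER PRIMARY KEY, prompt_template TEXT, next_step INTEGER);
    CREATE TABLE runs (id INTEGER PRIMARY KEY, current_step INTEGER);
    CREATE TABLE calls (run_id INTEGER, step_id INTEGER, prompt TEXT, output TEXT);
    INSERT INTO steps VALUES (1, 'Extract fields from: {doc}', 2);
    INSERT INTO steps VALUES (2, 'Summarise: {fields}', NULL);
    INSERT INTO runs VALUES (1, 1);
""")
while run_step(db, 1):
    pass
```

The `calls` table now holds every prompt and response in order, which is the whole audit trail.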
What you get for free:
- EU AI Act — every LLM call has a defined input, a defined output
contract, and is TDD-testable with golden examples. Process
definition and data lineage are explicit in the DB.
- GDPR — data evolution is field-level and mapped: you know exactly
what personal data exists, which step created it, and how to
delete it cleanly.
- Auditability — every prompt, every response, and every decision
is reconstructable from DB state at any point in time.
- LLM-agnostic — swap the provider, nothing changes. The governance
model is in the DB, not the model.
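The "defined output contract" point can be made concrete with a golden-example test; the contract fields and sample data here are hypothetical, not from the repo:

```python
import json

# Hypothetical output contract for one step: required fields and their types.
CONTRACT = {"title": str, "seniority": str, "skills": list}

def validate(output_json: str) -> dict:
    """Reject any LLM output that doesn't satisfy the step's contract."""
    data = json.loads(output_json)
    for field, typ in CONTRACT.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"contract violation on {field!r}")
    return data

# Golden example: a frozen output checked into the test suite like any fixture.
GOLDEN_OUTPUT = '{"title": "Data Engineer", "seniority": "senior", "skills": ["sql"]}'
parsed = validate(GOLDEN_OUTPUT)
```

Because the contract lives next to the step definition, swapping the LLM provider changes nothing about the test.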
The methodology maps directly from how Claude Code's config cascade
works — same separation of concerns, applied to application design.
The repo has the 6 constitutional principles, a bridge table, and a
full worked example: a hiring engine (01.toldi.io) where a JD PDF
becomes structured DB rows through a governed code+LLM pipeline,
then drives a candidate interview step by step.
Interactive architecture diagram in the repo. The pattern is the
point — not the hiring engine.
Curious: has anyone actually shipped something EU AI Act compliant?
What did the architecture look like?