I’m experimenting with an AI-native runtime for long-lived CRUD/workflow/dashboard apps where the LLM proposes changes, but the backend compiles them before anything is applied. The big pain isn’t first-generation code; it’s the second and third iterations, where “valid JSON” still breaks the system over time.
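At a high level the loop is: the model proposes a patch, a compile step runs the checks below, and nothing is applied unless the diagnostics come back clean. A minimal TypeScript sketch of that gate (all types and names hypothetical):

```ts
// Minimal sketch (all names hypothetical): an LLM-proposed change only lands
// if it survives a compile step that runs structural + semantic checks first.
type AppModel = { entities: Record<string, Record<string, string>> };
type Proposal = { summary: string; patch: (m: AppModel) => AppModel };
type Diagnostic = { severity: "error" | "warning"; message: string };

function compile(current: AppModel, proposal: Proposal): { next: AppModel; diagnostics: Diagnostic[] } {
  const next = proposal.patch(structuredClone(current));
  const diagnostics: Diagnostic[] = [];
  // link-check, migration planning, RBAC classification, binding checks go here
  return { next, diagnostics };
}

function apply(current: AppModel, proposal: Proposal): AppModel {
  const { next, diagnostics } = compile(current, proposal);
  if (diagnostics.some(d => d.severity === "error")) {
    throw new Error(`rejected '${proposal.summary}': ` + diagnostics.map(d => d.message).join("; "));
  }
  return next; // only reached when the compile step is clean
}
```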
What we’re enforcing so far (opinionated + compiler-style; rough sketches of each check follow the list):
No broken joins when entities evolve (canonicalized model + link/type-check; cross-refs must remain valid)
Repeatable migrations across tenants (migration preview + deterministic plan; append-only history; rollback path)
No RBAC drift: any permission change that widens scope is classified as semantic-breaking and requires explicit acknowledgement
No data drift for UI bindings: components bind to named datasets/metrics (semantic IDs) rather than ad-hoc queries; contract mismatches fail fast or require mapping
Semantic diffs surfaced (meaningful change summaries) instead of raw JSON diffs
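For the join/cross-ref invariant, the link-check pass over the canonical model is roughly this (shapes are made up, but the idea is that every cross-reference must still resolve after an entity evolves):

```ts
// Hypothetical canonical model: every cross-reference names an entity + field.
type Field = { name: string; type: string };
type Entity = { name: string; fields: Field[] };
type Ref = { from: string; toEntity: string; toField: string };
type Model = { entities: Entity[]; refs: Ref[] };

// Link-check: after an entity evolves, every ref must still resolve to an
// existing entity/field; otherwise the proposed change is rejected.
function linkCheck(model: Model): string[] {
  const errors: string[] = [];
  const byName = new Map(model.entities.map((e): [string, Entity] => [e.name, e]));
  for (const ref of model.refs) {
    const target = byName.get(ref.toEntity);
    if (!target) {
      errors.push(`${ref.from}: dangling reference to entity '${ref.toEntity}'`);
      continue;
    }
    if (!target.fields.some(f => f.name === ref.toField)) {
      errors.push(`${ref.from}: '${ref.toEntity}.${ref.toField}' no longer exists`);
    }
  }
  return errors;
}
```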
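For repeatable migrations, the planner always emits the same ordered plan for the same model diff, history is append-only, and every step carries its inverse so rollback is mechanical. Sketch, again with hypothetical names:

```ts
// Hypothetical migration planner: deterministic ordered steps, append-only
// history, and every step declares its inverse for the rollback path.
type Step = { up: string; down: string }; // e.g. SQL or an abstract op
type Plan = { id: string; steps: Step[] };

const history: Plan[] = []; // append-only; never rewritten in place

function preview(plan: Plan): string {
  return plan.steps.map((s, i) => `${i + 1}. ${s.up} (rollback: ${s.down})`).join("\n");
}

function commit(plan: Plan): void {
  history.push(plan); // applied per tenant from the same immutable plan
}

function rollback(planId: string): Step[] {
  const plan = history.find(p => p.id === planId);
  if (!plan) throw new Error(`unknown plan ${planId}`);
  return [...plan.steps].reverse().map(s => ({ up: s.down, down: s.up }));
}
```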
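The RBAC check is deliberately simple: anything granted after the change that wasn’t granted before counts as widening. Sketch, assuming grants normalize to “resource:action” strings:

```ts
// Hypothetical permission model: a role maps to a set of "resource:action"
// grants. Any grant present after but not before widens scope and is flagged
// as semantic-breaking, requiring explicit acknowledgement before apply.
type Grants = Set<string>;

function widens(before: Grants, after: Grants): string[] {
  return [...after].filter(g => !before.has(g));
}

const before: Grants = new Set(["invoice:read"]);
const after: Grants = new Set(["invoice:read", "invoice:delete"]);
const added = widens(before, after);
if (added.length > 0) {
  console.log("semantic-breaking: widened permissions ->", added);
  // block apply until a human acknowledges the widening
}
```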
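UI components declare which columns they use against a named dataset contract, so a mismatch is detectable before anything renders. Something like:

```ts
// Hypothetical binding contract: a component binds to a dataset by semantic ID
// and declares the columns/types it needs; any mismatch fails fast.
type DatasetContract = { id: string; columns: Record<string, string> }; // column -> type
type Binding = { componentId: string; datasetId: string; uses: Record<string, string> };

function checkBinding(binding: Binding, datasets: Map<string, DatasetContract>): string[] {
  const ds = datasets.get(binding.datasetId);
  if (!ds) return [`${binding.componentId}: dataset '${binding.datasetId}' not found`];
  return Object.entries(binding.uses)
    .filter(([col, expected]) => ds.columns[col] !== expected)
    .map(([col, expected]) => `${binding.componentId}: expects ${col}:${expected}, dataset has ${ds.columns[col] ?? "nothing"}`);
}
```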
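And the semantic diff just walks the canonical model before/after and emits change events a human can review, rather than a raw JSON patch:

```ts
// Hypothetical semantic diff: compare the canonical model before/after and
// emit meaningful change summaries instead of a raw JSON diff.
type Canon = { entities: Record<string, Record<string, string>> }; // entity -> field -> type

function semanticDiff(before: Canon, after: Canon): string[] {
  const changes: string[] = [];
  for (const [entity, fields] of Object.entries(after.entities)) {
    if (!(entity in before.entities)) { changes.push(`added entity '${entity}'`); continue; }
    for (const [field, type] of Object.entries(fields)) {
      const prev = before.entities[entity][field];
      if (prev === undefined) changes.push(`added field '${entity}.${field}: ${type}'`);
      else if (prev !== type) changes.push(`changed type of '${entity}.${field}' from ${prev} to ${type}`);
    }
  }
  for (const entity of Object.keys(before.entities)) {
    if (!(entity in after.entities)) changes.push(`removed entity '${entity}' (breaking)`);
  }
  return changes;
}
```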
This feels less like “no-code” and more like solving a lifecycle problem: making schema evolution and invariants a first-class workflow, almost a domain OS for vertical SaaS. It’s not really AI no-code so much as no-drift.
Questions:
Which invariants are worth enforcing at runtime vs. in CI/evals? (metric semantics, workflow stages, permission scope, etc.)
For “semantic drift”, what’s a robust pattern for representing meaning so it’s machine-checkable? (semantic IDs + versioning? fingerprints? one strawman sketch after the questions)
Any prior art / systems that got this right (schema evolution workflows, semantic diffing, policy normalization)?
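On the fingerprint question, one strawman I’ve been playing with (hypothetical metric shape, Node’s crypto for hashing): hash only the parts that define a metric’s meaning, and treat a changed hash under a stable ID as drift that needs a version bump or an explicit mapping.

```ts
import { createHash } from "node:crypto";

// Hypothetical metric definition: a stable semantic ID plus a fingerprint over
// the meaning-defining parts (source, filter, aggregation). Same ID + changed
// fingerprint = semantic drift.
type MetricDef = { id: string; version: number; source: string; filter: string; aggregation: string };

function fingerprint(m: MetricDef): string {
  const meaning = JSON.stringify({ source: m.source, filter: m.filter, aggregation: m.aggregation });
  return createHash("sha256").update(meaning).digest("hex").slice(0, 12);
}

const v1: MetricDef = { id: "metric.active_users", version: 1, source: "events", filter: "type = 'login'", aggregation: "count_distinct(user_id)" };
const v2: MetricDef = { ...v1, filter: "type IN ('login','sso_login')" };
if (fingerprint(v1) !== fingerprint(v2)) {
  console.log("semantic drift on", v1.id, "- require version bump or explicit mapping");
}
```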
Looking for patterns and failure modes from people who’ve shipped long-lived systems.