Our loop is always the same: a UI flow breaks → backend says “spec is outdated” → someone spends 30–60 minutes in DevTools figuring out what the server actually returned → then we argue over whether the backend, the spec, or the frontend should change.
I want drift to be caught during normal dev/staging usage while clicking through the app. Real traffic, real accounts, real data. Not just CI tests validating what we already expect.
If your team handles this well, what works?
Do you do runtime validation (client/proxy) that logs schema mismatches with enough context to fix fast (operationId, request id, response body/diff)? Gateway enforcement? Contract tests that actually reflect reality? Something else boring but reliable?
Also, what’s the slow part for you when drift happens: mapping to the right operation, getting a repro across envs/accounts, or turning a DevTools session into a clean ticket/PR?
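For what it’s worth, here is a toy sketch of the “runtime validation with enough context to fix fast” idea, in pure Python. The schema slice, the `getUser` operationId, the field names, and the report shape are all illustrative assumptions, not a real library; in practice you’d load the real spec and hook this into a client or proxy:

```python
import json

# Toy slice of an OpenAPI response schema, keyed by operationId
# (operation and fields are made up for illustration).
SCHEMAS = {
    "getUser": {
        "required": ["id", "email"],
        "properties": {
            "id": {"type": "integer"},
            "email": {"type": "string"},
            "plan": {"type": "string", "enum": ["free", "pro"]},
        },
    }
}

PY_TYPES = {"integer": int, "string": str, "boolean": bool, "object": dict}

def validate_response(operation_id, request_id, body):
    """Compare a live response body against the spec; return a drift report or None."""
    schema = SCHEMAS[operation_id]
    mismatches = []
    for field in schema.get("required", []):
        if field not in body:
            mismatches.append(f"missing required field '{field}'")
    for field, rule in schema.get("properties", {}).items():
        if field not in body:
            continue
        value = body[field]
        if not isinstance(value, PY_TYPES[rule["type"]]):
            mismatches.append(
                f"'{field}' expected {rule['type']}, got {type(value).__name__}")
        elif "enum" in rule and value not in rule["enum"]:
            mismatches.append(f"'{field}' has unknown enum value {value!r}")
    if not mismatches:
        return None
    # Everything needed to act on the drift in one record:
    # operation, request id, what mismatched, and a response snippet.
    return {
        "operationId": operation_id,
        "requestId": request_id,
        "mismatches": mismatches,
        "snippet": json.dumps(body)[:200],
    }

report = validate_response("getUser", "req-123",
                           {"id": "42", "plan": "enterprise"})
```

Here `report` would flag the missing `email`, the `id` that came back as a string, and the unknown `plan` enum value, all tied to one request id, which is roughly the ticket-ready context the post asks about.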
ngalaiko•1h ago
losalah•1h ago
I mean the spec and the live API behavior fall out of sync (often because implementation changes land first and the spec lags, or vice versa). The first time we notice is when a real UI flow breaks and someone has to spelunk through DevTools to see what the server actually returned (missing fields, nullability changes, new enum values, shape differences…).
So spec-diff tools like Vacuum help once you’re comparing two OpenAPI files, but my pain is earlier: catching “spec vs reality” from normal dev/staging usage (real accounts + data) and getting an actionable report (which operation, what mismatch, request id/response snippet) before it turns into a broken UI + an hour of debugging.
ngalaiko•32m ago
openapi is really meant to either be generated from code, or the server code is meant to be generated from the openapi spec