I built this because I was tired of writing brittle regex hacks to fix trailing commas and missing brackets in OpenAI outputs. Wrapping standard `JSON.parse()` in a try/catch loop and dropping the data (or wasting tokens forcing a re-prompt) felt incredibly inefficient for production apps.
This is a middleware reliability layer (available on npm/pip) that intercepts broken LLM JSON, auto-repairs the syntax, and enforces a strict JSON Schema before it ever hits your backend logic.
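For anyone wondering what "enforcing a strict JSON Schema" looks like in practice, here's a rough stdlib-only sketch of the structural check the layer runs before data reaches your handlers. To be clear, `enforce_schema` and its violation-list return shape are made up for illustration, not the library's real API:

```python
def enforce_schema(obj: dict, schema: dict) -> list[str]:
    """Minimal structural check against a JSON-Schema-like dict:
    required keys and primitive types only. Returns a list of
    violations; an empty list means the object passed."""
    errors = []
    # Map JSON Schema type names to Python runtime types.
    types = {"string": str, "number": (int, float), "boolean": bool,
             "object": dict, "array": list}
    for key in schema.get("required", []):
        if key not in obj:
            errors.append(f"missing required key: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in obj and not isinstance(obj[key], types[spec["type"]]):
            errors.append(f"wrong type for {key}: expected {spec['type']}")
    return errors
```

A real validator would also handle nested objects, enums, and string formats; the point is just that failures come back as data you can act on instead of an exception you have to parse.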
It also returns a "confidence score" indicating how heavily the AST had to be mutated to force it into a valid object, so you can programmatically reject severe hallucinations.
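To illustrate the repair-plus-scoring idea, here's a simplified Python sketch. The name `repair_json`, the specific repairs, and the flat 0.2-per-mutation penalty are all hypothetical, not the library's internals — the actual engine works on a full AST rather than regexes — but it shows the principle: every mutation needed to coerce the text into valid JSON docks the confidence score.

```python
import json
import re

def repair_json(raw: str) -> tuple[object, float]:
    """Best-effort repair of common LLM JSON faults.
    Returns (parsed_object, confidence in [0, 1]), where each
    mutation applied during repair lowers the confidence."""
    text = raw.strip()
    mutations = 0

    # Strip the markdown code fences LLMs often wrap JSON in.
    if text.startswith("```"):
        text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
        mutations += 1

    # Drop trailing commas before a closing brace/bracket
    # (naive: a real parser would skip commas inside strings).
    text, n = re.subn(r",\s*([}\]])", r"\1", text)
    mutations += n

    # Append closers for any unbalanced braces/brackets,
    # ignoring characters inside string literals.
    opens = {"{": "}", "[": "]"}
    stack, in_str, prev = [], False, ""
    for ch in text:
        if ch == '"' and prev != "\\":
            in_str = not in_str
        elif not in_str:
            if ch in opens:
                stack.append(opens[ch])
            elif ch in ("}", "]") and stack and stack[-1] == ch:
                stack.pop()
        prev = ch
    if stack:
        text += "".join(reversed(stack))
        mutations += len(stack)

    obj = json.loads(text)
    confidence = max(0.0, 1.0 - 0.2 * mutations)
    return obj, confidence
```

So `repair_json('{"items": [1, 2, 3,], "ok": true')` fixes the trailing comma and the missing `}` (two mutations) and comes back with a reduced score, which is the signal you'd threshold on to reject badly mangled outputs.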
The heavy AST parsing and schema compilation are offloaded to a RapidAPI backend, so the client/server thread stays unblocked.
Would love any feedback on the code, or to hear about the weirdest formatting edge cases you've seen LLMs fail on so I can add them to the repair engine's test suite.