npm: https://www.npmjs.com/package/jsonchunk
If you're building on top of LLMs with structured output, you've hit this: the model streams JSON token by token, but JSON.parse throws on anything incomplete. So you either wait for the full response (bad UX) or write hacky string recovery logic to show partial results.
jsonchunk is a tiny tolerant recursive descent parser that extracts the best-effort object from whatever has arrived so far. It handles common mid-stream cases: strings cut mid-escape, objects with keys but no values yet, trailing commas, partial numbers, nested structures, etc.
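To illustrate the general idea (a hand-rolled sketch for illustration, not jsonchunk's actual implementation): parse as far as the input allows, and when the buffer ends mid-value, keep whatever completed so far.

```typescript
// Minimal sketch of a tolerant recursive-descent parser -- NOT jsonchunk's
// real code. Truncated strings are kept best-effort, dangling keys and
// partial literals are dropped, trailing commas are skipped.
function tolerantParse(input: string): unknown {
  let i = 0;
  const ws = () => { while (i < input.length && /\s/.test(input[i])) i++; };

  function value(): unknown {
    ws();
    if (input[i] === "{") return object();
    if (input[i] === "[") return array();
    if (input[i] === '"') return str();
    return primitive();
  }

  function str(): string {
    i++; // skip opening quote
    let out = "";
    while (i < input.length) {
      const c = input[i++];
      if (c === '"') return out;          // complete string
      if (c === "\\") {
        if (i >= input.length) break;     // cut mid-escape: drop the escape
        const e = input[i++];
        out += e === "n" ? "\n" : e === "t" ? "\t" : e;
      } else out += c;
    }
    return out; // input ended mid-string: best effort
  }

  function primitive(): unknown {
    const start = i;
    while (i < input.length && !/[\s,\]}]/.test(input[i])) i++;
    const tok = input.slice(start, i);
    if (tok === "true") return true;
    if (tok === "false") return false;
    if (tok === "null") return null;
    if (tok === "") return undefined;
    const n = Number(tok);
    return Number.isNaN(n) ? undefined : n; // partial literal like "tru": skip
  }

  function object(): Record<string, unknown> {
    i++; // skip '{'
    const obj: Record<string, unknown> = {};
    while (i < input.length) {
      ws();
      if (input[i] === "}") { i++; break; }
      if (input[i] === ",") { i++; continue; } // tolerates trailing commas
      if (input[i] !== '"') break;             // truncated or malformed key
      const key = str();
      ws();
      if (input[i] !== ":") break;             // key arrived, value hasn't
      i++;
      ws();
      if (i >= input.length) break;
      const v = value();
      if (v !== undefined) obj[key] = v;
    }
    return obj;
  }

  function array(): unknown[] {
    i++; // skip '['
    const arr: unknown[] = [];
    while (i < input.length) {
      ws();
      if (input[i] === "]") { i++; break; }
      if (input[i] === ",") { i++; continue; }
      const before = i;
      const v = value();
      if (v !== undefined) arr.push(v);
      if (i === before) break;                 // no progress: stop
    }
    return arr;
  }

  return value();
}
```

jsonchunk's real `parse()` presumably covers more cases (unicode escapes, exponents, precise recovery rules), but the shape of the recovery logic is similar: every parse function has a "buffer ran out" exit that returns what it has instead of throwing.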
```
parse('{"name": "Ali')            // { name: "Ali" }
parse('{"users": [{"id": 1}, {')  // { users: [{ id: 1 }] }
```
It's typed as `DeepPartial<T>` so your editor knows any field might not be there yet. The API is dual-mode: a stateless `parse()` for one-shot use, and a push-based `createParser()` / `parseStream()` for piping streams.
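For anyone unfamiliar with the pattern, `DeepPartial<T>` makes every field optional at every depth. One common definition (jsonchunk's may differ in details, e.g. special-casing arrays) looks like:

```typescript
// A common DeepPartial definition -- possibly not jsonchunk's exact one,
// but the effect is the same: any field at any depth may still be missing
// while the stream is incomplete.
type DeepPartial<T> = T extends object
  ? { [K in keyof T]?: DeepPartial<T[K]> }
  : T;

interface User {
  name: string;
  address: { city: string; zip: string };
}

// Valid mid-stream state: address has started arriving, zip hasn't.
const partial: DeepPartial<User> = { address: { city: "Oslo" } };
```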
Tested with a fuzz suite that splits every test fixture at every possible byte position and verifies the final parse is correct. Also threw Anthropic API responses at it with deeply nested objects, escape sequences, unicode, etc.
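That byte-split strategy is easy to replicate. A hypothetical harness (not the library's actual test code) might look like:

```typescript
// Hypothetical byte-split fuzz harness, sketched from the description above:
// every prefix of the fixture must parse without throwing, and the complete
// input must agree with strict JSON.parse.
function fuzzSplit(fixture: string, parse: (s: string) => unknown): void {
  for (let i = 0; i <= fixture.length; i++) {
    parse(fixture.slice(0, i)); // a tolerant parser never throws on a prefix
  }
  const full = JSON.stringify(parse(fixture));
  const strict = JSON.stringify(JSON.parse(fixture));
  if (full !== strict) {
    throw new Error(`final parse mismatch for fixture: ${fixture}`);
  }
}
```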
There are a few similar libraries, but most are untyped, SAX-style, or tied to larger frameworks. The goal here was a tiny dependency-free primitive designed specifically for modern Web Streams and TypeScript.
Would love feedback, especially from anyone who's dealt with this problem in production.