Roni from India, ex-Google Summer of Code (GSoC) @ Internet Archive, full-stack dev.
got frustrated watching json bloat my openai/claude bills by 50%+ on redundant syntax, so i built ZON over a few weekends. it's a zero-overhead notation that compresses payloads ~50% vs json (692 tokens vs 1,300 on gpt-5-nano benchmarks) while staying 100% human-readable and lossless.
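to see where that kind of saving comes from, here's a toy sketch (my own illustration, not ZON's actual spec or API): for an array of uniform objects, json repeats every field name and its quotes/braces on every row, while a header-once layout pays for the keys a single time.

```typescript
// Toy illustration of why stripping repeated json syntax shrinks payloads.
// NOT the real ZON format — just the general idea of header-once encoding.
type Row = Record<string, string | number>;

function compactEncode(rows: Row[]): string {
  const keys = Object.keys(rows[0]);
  // keys appear once as a header instead of on every row
  const header = keys.join(",");
  const body = rows
    .map((r) => keys.map((k) => String(r[k])).join(","))
    .join("\n");
  return header + "\n" + body;
}

const rows: Row[] = [
  { id: 1, name: "ada", role: "eng" },
  { id: 2, name: "lin", role: "ops" },
  { id: 3, name: "sam", role: "pm" },
];

const json = JSON.stringify(rows);
const compact = compactEncode(rows);
console.log(json.length, compact.length); // compact is well under half the chars
```

fewer characters generally means fewer tokens, though the exact ratio depends on the tokenizer and the data shape — see the benchmarks link below for real numbers.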
Playground -> https://zonformat.org/playground ROI calculator -> https://zonformat.org/savings
<2kb typescript lib with 100% test coverage. drop-in for openai sdk, langchain js/ts, claude, llama.cpp, streaming, and zod schemas: validates llm outputs at runtime with zero extra cost.
Benchmarks -> https://zonformat.org/#benchmarks
try it: npm i zon-format or uv add zon-format, then encode/decode in <10s (code in readme). full site with benchmarks: https://zonformat.org
github → https://github.com/ZON-Format
harsh feedback on perf, edge cases, or api very welcome. if it saves you a coffee's worth of tokens, a star would be awesome
let's make llm prompts efficient again
usefulposter•7h ago
If you're going to use an LLM, can you at least reformat this post to not sound like @sama?