I’m part of the team at Vidai, based in Scotland, and today we’re open-sourcing VidaiMock.
If you’ve built anything with LLM APIs, you know the drill: testing streaming UIs or SDK resilience against real APIs is slow, burns through your credits, and is hard to reproduce reliably. We tried existing mock servers, but most of them just return static JSON. They don’t test the "tricky" parts: the actual wire format of an OpenAI SSE stream, Anthropic’s EventStream, or how your app handles 500ms of TTFT (Time to First Token) followed by sudden network jitter.
We needed something better to build our own enterprise gateway (Vidai.Server), so we built VidaiMock.
What makes it different?
- Physics-Accurate Streaming: It doesn't just dump text. It emulates the exact wire format and per-token timing of major providers, so you can test your loading states and streaming UI/UX exactly as they’d behave in production (see the client sketch after this list).
- Zero Config / Zero Fixtures: It’s a single ~7MB Rust binary. No Docker, no DB, no API keys, and zero external fixtures to manage. Download it, run it, and it just works.
- More than a "Mock": Unlike tools that just record and replay static data (VCR) or intercept browser requests (MSW), VidaiMock is a standalone Simulation Engine. It emulates the actual network protocol (SSE vs EventStream).
- Dynamic Responses: Every response is a Tera template. You aren't stuck with static strings: you can reflect request data, generate random content, or use complex logic to make your mock feel alive (a template sketch follows this list).
- Chaos Engineering: You can inject latency, malformed responses, or dropped requests via request headers (X-Vidai-Chaos-Drop). Perfect for testing your retry logic; there's a retry sketch after this list.
- Fully Extensible: Add new providers or mock your own internal APIs by dropping in a YAML config and a Tera (Jinja2-like) J2 template.
- High Performance: Built in Rust, it handles very high request rates (tested up to 51,000 RPS on an M4 Mac Pro).
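To make the streaming point concrete, here's roughly what it looks like from the client side. This is a minimal sketch, not lifted from our docs: the port (3000), endpoint path, and model name are assumptions, and the API key can be anything since nothing is billed.

```python
# Minimal sketch: point the official OpenAI SDK at the mock and stream.
# ASSUMPTIONS: base_url/port are illustrative; check the docs for the
# mock's actual listen address. Any api_key works.
import time
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # hypothetical mock address
    api_key="not-a-real-key",
)

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4o",  # the mock emulates the wire format, not the model
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

first_token_seen = False
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if not first_token_seen:
            # TTFT is observable because the mock paces tokens realistically.
            print(f"TTFT: {time.perf_counter() - start:.3f}s")
            first_token_seen = True
        print(chunk.choices[0].delta.content, end="", flush=True)
```

Because the SSE framing and pacing match the real provider, your spinners, token-append logic, and cancellation paths get exercised exactly as they would in production.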
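And here's the shape of a chaos test against the drop header. Only X-Vidai-Chaos-Drop is named above; the header value "1" and its exact semantics are assumptions in this sketch, so treat them as placeholders for whatever the docs specify.

```python
# Sketch of a retry test: have the mock drop the first attempt, then assert
# the client-side retry recovers. ASSUMPTIONS: the URL is illustrative and
# the header value "1" is a guess at the real drop semantics.
import requests

MOCK_URL = "http://localhost:3000/v1/chat/completions"  # hypothetical address
PAYLOAD = {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}

def call_with_retries(max_attempts: int = 3) -> requests.Response:
    last_err: Exception | None = None
    for attempt in range(max_attempts):
        try:
            return requests.post(
                MOCK_URL,
                json=PAYLOAD,
                # Drop only the first attempt so the retry path is exercised.
                headers={"X-Vidai-Chaos-Drop": "1"} if attempt == 0 else {},
                timeout=5,
            )
        except requests.exceptions.RequestException as err:
            last_err = err  # dropped/failed connection: try again
    raise last_err

assert call_with_retries().status_code == 200
```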
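To ground the template point: the context variable names below (request.body and friends) are invented for illustration, not the real schema, but conceptually a dynamic response is just a Tera template that reflects the incoming request.

```jinja
{# Hypothetical Tera template: the context field names are guesses, not the
   real schema. Tera's now(), date, and range() built-ins are real. #}
{
  "id": "mock-{{ now() | date(format="%s") }}",
  "echoed_model": "{{ request.body.model }}",
  "choices": [{% for i in range(end=3) %}
    {"index": {{ i }}, "text": "canned chunk {{ i }}"}{% if not loop.last %},{% endif %}
  {% endfor %}]
}
```

Swap the static strings for request reflection or random generators and the mock stops feeling canned.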
Why are we open-sourcing it? It’s been our internal testing engine for a while, and we realized the community is still struggling with mock infrastructure that feels "real" enough to catch streaming bugs before they hit production.
We’re keeping it simple: Apache 2.0 license.
Links:
Home: https://vidai.uk
GitHub: https://github.com/vidaiUK/VidaiMock
Docs: https://vidai.uk/docs/mock/intro/
I’d love to hear how you’re currently testing your LLM integrations and whether this solves a pain point for you. I'll be around to answer any questions!
Sláinte, The Vidai Team (from rainy Scotland)