When I started building AI/LLM features, I kept running into the same class of reliability problems, except harder to reason about: multiple providers, model quirks, intermittent failures, retries and fallbacks, and a constant question of "what actually happened?" Observability was the recurring pain point. I wanted something that didn't feel like a black box, especially once you're running real workloads and latency or error rates spike for reasons that aren't obvious.
So I started building the tool I wished I had: an open-source LLM gateway / proxy in Go.
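To make that concrete, here's a minimal Go sketch of the retry-then-fallback loop (with per-attempt logging) that a gateway like this centralizes. It's an illustration of the problem shape, not the project's actual API - Provider and completeWithFallback are hypothetical names:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // Provider is a hypothetical stand-in for one upstream LLM API.
    type Provider struct {
        Name string
        Call func(ctx context.Context, prompt string) (string, error)
    }

    // completeWithFallback tries each provider in order, retrying each one
    // once before falling back to the next, and logs latency and errors per
    // attempt so "what actually happened?" has an answer.
    func completeWithFallback(ctx context.Context, providers []Provider, prompt string) (string, error) {
        var errs []error
        for _, p := range providers {
            for attempt := 1; attempt <= 2; attempt++ {
                start := time.Now()
                out, err := p.Call(ctx, prompt)
                fmt.Printf("provider=%s attempt=%d latency=%s err=%v\n",
                    p.Name, attempt, time.Since(start), err)
                if err == nil {
                    return out, nil
                }
                errs = append(errs, fmt.Errorf("%s: %w", p.Name, err))
            }
        }
        return "", errors.Join(errs...)
    }

    func main() {
        // Two fake providers: the first always fails, the second succeeds.
        flaky := Provider{Name: "provider-a", Call: func(ctx context.Context, _ string) (string, error) {
            return "", errors.New("rate limited")
        }}
        stable := Provider{Name: "provider-b", Call: func(ctx context.Context, prompt string) (string, error) {
            return "echo: " + prompt, nil
        }}
        out, err := completeWithFallback(context.Background(), []Provider{flaky, stable}, "hello")
        fmt.Println(out, err)
    }

The point of a gateway is to own this logic in one place, instead of every caller reimplementing it and losing the trail of what happened.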
I fell into Go mostly for practical reasons: high concurrency and throughput without fighting the runtime, and a strongly typed codebase that stays pleasant as it grows. Over time it turned into something more personal - I've found my home in Go, and this project is where I've been putting that energy.
Open source is a deliberate choice here. I come from payments and ecommerce, where trust isn't a tagline - it's operational. People need to understand what's happening under the hood, and they need to be able to verify it. I've been building software for ~15 years, and I wanted to contribute something real back to the communities that taught me how to build reliable systems.
Repo: https://github.com/ongoingai/gateway
Feedback, criticism, "you're doing this wrong," feature ideas, weird edge cases you're hitting - all welcome. If you've built anything similar (AI infra, gateways, proxies, high-throughput Go services), I'd especially love to hear what you'd consider non-negotiable for something like this.
Cheers, Nathan @ OngoingAI