I built an open-source AI Gateway that sits between your apps and LLM providers

https://github.com/DatanoiseTV/aigateway
2•sylwester•2h ago

Comments

sylwester•2h ago

I needed a way to give different clients (apps, users, teams) their own API keys and LLM provider settings without deploying separate proxies. Most solutions required complex setup or were tied to specific providers.

What it does:
- Single OpenAI-compatible endpoint (/v1/chat/completions)
- Each client gets their own API key with independent config:
  - Backend provider (Gemini, OpenAI, Anthropic, Ollama, LM Studio, etc.)
  - Upstream API key
  - Base URL override (for local models)
  - Default model
  - Model whitelist
  - System prompt injection
  - Rate limits & token quotas
- Built-in admin dashboard with real-time stats via WebSocket
- Auto-discovers models from backends (Ollama, LM Studio)

The key insight: all provider configuration is per-client, not global. A client using LM Studio never touches your OpenAI quota. A client with a Gemini key stays isolated.

Tech: Go, SQLite, chi router, WebSocket for live dashboard updates.

Demo: admin UI shows live request streaming, token usage, model breakdown.

Would love feedback on the architecture - especially the per-client provider model. Is this useful for others running multiple LLM backends?

toomuchtodo•2h ago
Self-hosted OpenRouter.ai?
sylwester•1h ago
Yes, kind of, with a few additions.
toomuchtodo•1h ago
Great work.
sylwester•1h ago
Thank you. Looking forward to your feedback. Any wishes? I want to implement model rerouting next.