I built UnifyRoute because I kept running into the same problem: rate limits, quota exhaustion, and provider outages were breaking my LLM-powered apps at the worst times.
UnifyRoute is a self-hosted gateway that sits in front of your LLM providers (OpenAI, Anthropic, etc.) and handles routing, failover, and quota management automatically — with a fully OpenAI-compatible API, so you don't change a single line of your existing code.
What it does:

- Drop-in OpenAI-compatible API (`/chat/completions`, `/models`, etc.)
- Tier-based routing: define which providers to try and in what order
- Automatic failover when a provider fails or hits quota
- Web dashboard to manage providers, credentials, and usage
- Self-hosted — your API keys never leave your infrastructure
- Works with any tool that supports OpenAI's API (LangChain, LlamaIndex, etc.)
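To illustrate the drop-in claim, here is a minimal sketch of sending an OpenAI-style chat completion request through the gateway instead of to a provider directly. It uses only the Python standard library; the `/v1` path prefix and the port are assumptions based on the quick start (your deployment may expose the API on a different port than the dashboard).

```python
import json
import urllib.request

# Hypothetical endpoint: OpenAI-compatible route on a local UnifyRoute
# instance. Adjust host/port/path to match your deployment.
GATEWAY = "http://localhost:6565/v1/chat/completions"

# Standard OpenAI chat payload -- UnifyRoute decides which provider
# actually serves it, per your tier configuration.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    GATEWAY,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with a running gateway
```

Because the request shape is unchanged from OpenAI's API, existing clients only need their base URL repointed at the gateway.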
Quick start (Docker):

```
git clone https://github.com/unifyroute/UnifyRoute.git
cd UnifyRoute && cp sample.env .env
./unifyroute setup && ./unifyroute start
# Dashboard at http://localhost:6565
```
It's open source under Apache 2.0.
Happy to answer questions about the architecture or design decisions.