Free models on NVIDIA NIM and OpenRouter now hit S-tier on SWE-bench Verified (60%+), but availability is a coin flip. Models get rate-limited, go dark, or spike to 5s+ latency with no notice. I was editing JSON configs by hand every time one died mid-session.
frouter is a terminal TUI that pings all free models in parallel every 2 seconds and shows live latency, uptime, and health. You pick one, it writes the config for OpenCode or OpenClaw and launches it. When a model dies, you switch in seconds instead of minutes.
How ranking works: models are sorted by availability first, then SWE-bench tier (S+ through C), then rolling average latency. Arena Elo is shown as a secondary signal. All ranking data is bundled and updated with releases; latency and uptime are measured live per session.
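The sort described above (availability first, then tier, then rolling latency) can be sketched as a single comparator. This is an illustrative sketch, not frouter's actual code; the names (`ModelStats`, `rankModels`, `TIER_ORDER`) are assumptions:

```typescript
// Illustrative ranking comparator: available models first,
// then by SWE-bench tier (S+ best), then by rolling avg latency.
type Tier = "S+" | "S" | "A" | "B" | "C";

interface ModelStats {
  id: string;
  available: boolean;   // responded to the most recent ping
  tier: Tier;           // bundled SWE-bench tier, updated with releases
  avgLatencyMs: number; // rolling average, measured live per session
}

const TIER_ORDER: Tier[] = ["S+", "S", "A", "B", "C"];

function rankModels(models: ModelStats[]): ModelStats[] {
  return [...models].sort((a, b) => {
    if (a.available !== b.available) return a.available ? -1 : 1;
    const tierDiff = TIER_ORDER.indexOf(a.tier) - TIER_ORDER.indexOf(b.tier);
    if (tierDiff !== 0) return tierDiff;
    return a.avgLatencyMs - b.avgLatencyMs;
  });
}
```

Keeping availability as the primary key means a dead S+ model never outranks a live A-tier one, which matches the tool's purpose.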
Some specifics:

- Pings are real completion requests, not just health checks
- Vim-style keybinds, tier filtering (T to cycle S+ → C), search with /
- frouter --best prints the top-ranked model ID to stdout for scripting/CI
- First-run wizard handles API keys; existing configs are backed up before overwrite
- Two providers currently: NVIDIA NIM (nvapi- keys) and OpenRouter (sk-or- keys), both free tier
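To make "pings are real completion requests" concrete, here is a minimal sketch of what such a probe could look like against OpenRouter's chat completions endpoint. The function names and the injectable call are illustrative assumptions, not frouter's internals:

```typescript
// Time an arbitrary completion call; the call is injected so the
// timing logic can be tested without hitting the network.
async function measureLatencyMs(call: () => Promise<void>): Promise<number> {
  const start = performance.now();
  await call();
  return performance.now() - start;
}

// A real one-token completion against OpenRouter (hypothetical wrapper;
// requires Node 18+ for global fetch). A non-2xx response is treated
// as the model being unavailable.
async function openRouterPing(apiKey: string, model: string): Promise<void> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      max_tokens: 1,
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
}
```

A one-token request exercises the full inference path (auth, routing, generation), so it catches models that answer health checks but stall on actual completions.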
Install and run:

    npx frouter-cli
MIT licensed, TypeScript, 59 commits. Would appreciate feedback on model coverage, provider support, or ranking methodology — especially if you're using free models for agentic coding and have opinions on what's missing.
https://github.com/jyoung105/frouter