Enter Litmus. I'm pitching it as "specification testing" for LLMs. You define test cases (input prompt -> expected output JSON), along with your system prompt and the structured output format (a JSON Schema). All of this gets chucked at OpenRouter, and you get some nice terminal output summarising the test results (with a per-field breakdown for any failing cases) so you can see how well the model performed.
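To give a rough idea of the shape of things, a spec bundles the system prompt, the schema, and a handful of prompt-to-expected-JSON cases. The field names below are illustrative rather than the actual Litmus format, so check the README for the real structure:

```json
{
  "_note": "Illustrative sketch only; field names may differ from the real Litmus spec format.",
  "system_prompt": "Extract the event details from the user's message.",
  "schema": {
    "type": "object",
    "properties": {
      "title":    { "type": "string" },
      "date":     { "type": "string" },
      "location": { "type": "string" }
    },
    "required": ["title", "date", "location"]
  },
  "cases": [
    {
      "input": "Team offsite at the lakeside cabin on March 3rd",
      "expected": {
        "title": "Team offsite",
        "date": "March 3rd",
        "location": "lakeside cabin"
      }
    }
  ]
}
```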
Although it's framed as an LLM testing tool, it also serves as a model comparator. You can pass the `--model` CLI argument multiple times to run the test cases against multiple models, and a comparison table is generated at the end of the output for evaluating latency, throughput, token usage, and accuracy (tests passed vs. failed).
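In practice that looks something like the following. The repeated `--model` flag is the real bit; the spec-file argument and the OpenRouter model IDs are just example values, not taken from the Litmus docs:

```sh
# Run the same spec against three models and get a comparison table at the end.
# Exact invocation shape is illustrative; only the repeated --model flag is confirmed.
litmus --model openai/gpt-4o-mini \
       --model anthropic/claude-3.5-haiku \
       --model meta-llama/llama-3.1-8b-instruct \
       spec.json
```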
The GitHub README contains a full example of the test report output Litmus produces.
With this, I've managed to whittle down the system prompt for my side-project to the point where the accuracy is acceptable and it doesn't burn an exorbitant number of tokens. I've also found out, through model comparison, that I didn't need anywhere near as large a model as I had originally envisioned.
You can grab it on GitHub as a single-file, zero-dependency executable (written in Go). Admittedly, I've not tested the pre-built binaries that are created via GitHub Actions, but there's no reason why they shouldn't work.
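If the pre-built binaries don't pan out for your platform, building from source should be the usual Go affair. The repo path below is a placeholder, not the actual URL:

```sh
# Placeholder repository path; substitute the actual GitHub repo.
git clone https://github.com/<user>/litmus
cd litmus
go build -o litmus .
```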