https://nitter.net/badlogicgames/status/1929312803799576757#...
Now that I had full control over the requests Claude Code makes, the next obvious step was to make Claude Code talk to other LLM endpoints, which led to claude-bridge. Here are a few fun little test runs with GPT-4.1, Gemini 2.5 Flash, Grok 3, and Llama 4.
https://nitter.net/badlogicgames/status/1930090999004443049#...
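To make the bridging idea concrete, here is a minimal sketch (not the actual claude-bridge implementation) of a local proxy that accepts Anthropic Messages API requests from Claude Code and forwards them to an OpenAI-compatible chat completions endpoint. The endpoint URL, model name, port, and the text-only translation are all assumptions for illustration; a real bridge also has to map tool definitions, tool_use/tool_result blocks, and streaming.

```ts
// Hypothetical sketch: point Claude Code at this proxy (e.g. via ANTHROPIC_BASE_URL)
// and it rewrites Anthropic-style requests into OpenAI-style ones.
// Placeholder endpoint/model; text blocks only, no tools, no streaming.
import http from "node:http";

const TARGET_URL = "https://api.openai.com/v1/chat/completions"; // placeholder
const TARGET_MODEL = "gpt-4.1";                                   // placeholder

http.createServer(async (req, res) => {
  if (req.method !== "POST" || !req.url?.startsWith("/v1/messages")) {
    res.writeHead(404).end();
    return;
  }

  // Collect the Anthropic Messages API body sent by Claude Code.
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const anthropicReq = JSON.parse(Buffer.concat(chunks).toString());

  // Naive translation: assumes a string system prompt and text content blocks.
  const messages = [
    ...(typeof anthropicReq.system === "string"
      ? [{ role: "system", content: anthropicReq.system }]
      : []),
    ...anthropicReq.messages.map((m: any) => ({
      role: m.role,
      content: typeof m.content === "string"
        ? m.content
        : m.content.filter((b: any) => b.type === "text").map((b: any) => b.text).join("\n"),
    })),
  ];

  const upstream = await fetch(TARGET_URL, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: TARGET_MODEL, messages, max_tokens: anthropicReq.max_tokens }),
  });
  const completion = await upstream.json();

  // Wrap the reply back into an Anthropic-shaped response so Claude Code can parse it.
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({
    id: completion.id,
    type: "message",
    role: "assistant",
    model: anthropicReq.model,
    content: [{ type: "text", text: completion.choices?.[0]?.message?.content ?? "" }],
    stop_reason: "end_turn",
    usage: { input_tokens: 0, output_tokens: 0 },
  }));
}).listen(3000, () => console.log("bridge listening on http://localhost:3000"));
```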
There are, of course, limitations, as outlined in the README.md of the repo linked in this submission. But it's a fun little tool for quickly testing agentic coding capabilities and tool use in real-world scenarios through Claude Code's fantastic interface. Gemini fared pretty well; the rest, not so much.
Nothing came close to Sonnet and Opus, which I attribute in part to Claude Code's prompts being tailored to the capabilities of Anthropic's models. Claude Code's pricing is also really hard to beat for this token-heavy use case.