While working with different LLMs in Go at Marble (an open-source fraud & compliance automation platform), we got tired of juggling the provider-specific SDKs and request/response structs for OpenAI, Anthropic, Cohere, etc. This made switching providers, or even just trying a new model, more work than it should be.
To fix this, we built `llmberjack`. It’s a lightweight, no-frills Go library that provides a single, common interface over multiple LLM backends.
It's *not* a complex framework like LangChain. There are no agents, chains, or magic. It does just one thing: provide unified, typed request builders so you can swap providers with a one-line config change while still taking advantage of provider-specific bells and whistles. The goal is to avoid vendor lock-in and keep application code clean and simple.
Here’s a quick example:
```go
// Initialize provider clients (error handling elided for brevity).
gpt, err := openai.New(openai.WithApiKey("..."))
gemini, err := aistudio.New()

// One client, multiple backends: swapping the default provider or
// model is a one-line change here.
llm, err := llmberjack.New(
	llmberjack.WithDefaultProvider(gpt),
	llmberjack.WithProvider("gemini", gemini),
	llmberjack.WithDefaultModel("gpt-5"),
)

// Structured output: the response is unmarshaled into this type,
// constrained by the JSON schema derived from its tags.
type Output struct {
	LightColor string `json:"text" jsonschema_description:"Color of the traffic light" jsonschema:"enum=red,enum=yellow,enum=green"`
}

resp, _ := llmberjack.NewRequest[Output]().
	WithText(llmberjack.RoleUser, "What is the traffic light color indicating for cars to stop?").
	Do(context.Background(), llm)

obj, _ := resp.Get(0)
fmt.Println("Traffic light is", obj.LightColor)
```
It's open source and the code is straightforward. We'd love feedback or contributions.
Check it out on GitHub: https://github.com/checkmarble/llmberjack