It handles model-specific idiosyncrasies across popular families like GPT, Gemini, Llama, Qwen, and others. This includes dropping unsupported fields, renaming deprecated ones, normalizing structures, and generally cleaning inputs so they conform to each provider's or model's stricter expectations.
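As a rough sketch of what such normalization involves (the field names and rules here are hypothetical examples, not the library's actual API or rule set):

```python
# Illustrative only: hypothetical normalization rules, not divyam-llm-interop's API.
DROP_FIELDS = {"logit_bias"}  # example of a field some providers reject outright
RENAME_FIELDS = {"max_tokens": "max_completion_tokens"}  # deprecated -> current name


def normalize_request(request: dict) -> dict:
    """Drop unsupported fields and rename deprecated ones before dispatch."""
    cleaned = {}
    for key, value in request.items():
        if key in DROP_FIELDS:
            continue  # silently strip fields the target model would reject
        cleaned[RENAME_FIELDS.get(key, key)] = value
    return cleaned


raw = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Hi"}],
    "max_tokens": 256,
    "logit_bias": {"50256": -100},
}
print(normalize_request(raw))
```

A real implementation keys these rules off the target model family, since what counts as "unsupported" or "deprecated" differs per provider.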
The library also converts between the OpenAI Chat Completions format and the newer Responses API format, enabling modern clients to interoperate seamlessly with older APIs or third-party models.
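To illustrate the kind of mapping such a conversion performs, here is a minimal, hand-written sketch translating a Chat Completions request body into the Responses request shape (this is not the library's code, and a complete converter must also handle tools, multimodal content, streaming options, and response bodies):

```python
# Illustrative sketch, not divyam-llm-interop's implementation.
def chat_to_responses(chat_request: dict) -> dict:
    """Map an OpenAI Chat Completions request to the Responses request shape."""
    converted = {"model": chat_request["model"]}
    input_items = []
    for message in chat_request.get("messages", []):
        if message["role"] == "system":
            # The Responses API carries system guidance in a top-level
            # `instructions` field instead of a system message.
            converted["instructions"] = message["content"]
        else:
            input_items.append(
                {"role": message["role"], "content": message["content"]}
            )
    converted["input"] = input_items  # Responses uses `input`, not `messages`
    if "max_tokens" in chat_request:
        # The output-length cap is named differently in the Responses API.
        converted["max_output_tokens"] = chat_request["max_tokens"]
    return converted
```

The reverse direction, plus round-tripping response objects, is where most of the real complexity lives.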
The primary use cases are LLM routers that transparently redirect requests to different models for cost or performance optimization, and AI frameworks that expose a unified LLM interface while supporting multiple underlying providers.
GitHub: https://github.com/Divyam-AI/divyam-llm-interop