The library intentionally exposes only functionality that is common across providers; provider-specific parameters are deliberately out of scope.
Libraries such as LangChain offer many integrations but tend to rely on deep abstraction layers, heavy use of kwargs, and complex code that can be difficult to customize.
Features:
- Sync and async APIs
- LLM calls: invoke and stream (temperature, reasoning level)
- Response metadata: answer, token usage, stop reason
- RAG documents: retrieval, reranking, embeddings
- Chat history: conversation store
- Common error handling across providers
- Providers: OpenAI, Anthropic, Google, AWS
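To make the feature list concrete, here is a minimal sketch of what a provider-agnostic invoke/stream interface with common response metadata could look like. The class and method names (`LLMClient`, `LLMResponse`, `FakeProvider`) are illustrative assumptions, not the library's actual API:

```python
# Hypothetical sketch of a provider-agnostic LLM interface.
# Names and signatures are assumptions for illustration only.
from dataclasses import dataclass
from typing import Iterator, Protocol


@dataclass
class LLMResponse:
    # Response metadata common across providers.
    answer: str
    input_tokens: int
    output_tokens: int
    stop_reason: str


class LLMClient(Protocol):
    # Single-shot call exposing only cross-provider parameters.
    def invoke(self, prompt: str, temperature: float = 1.0) -> LLMResponse: ...
    # Streaming call yielding text chunks.
    def stream(self, prompt: str, temperature: float = 1.0) -> Iterator[str]: ...


class FakeProvider:
    """In-memory stand-in used only to demonstrate the interface shape."""

    def invoke(self, prompt: str, temperature: float = 1.0) -> LLMResponse:
        answer = f"echo: {prompt}"
        return LLMResponse(
            answer=answer,
            input_tokens=len(prompt.split()),
            output_tokens=len(answer.split()),
            stop_reason="end_turn",
        )

    def stream(self, prompt: str, temperature: float = 1.0) -> Iterator[str]:
        # Stream the same answer word by word.
        for word in self.invoke(prompt, temperature).answer.split():
            yield word


client: LLMClient = FakeProvider()
resp = client.invoke("hello world", temperature=0.2)
print(resp.answer, resp.stop_reason)
```

Because every provider is adapted to the same `invoke`/`stream` shape, swapping OpenAI for Anthropic (or a fake in tests) requires no call-site changes.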
Retry logic is left to the user (see the README). Agent functionality is not yet supported.
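Since retries are left to the user, a minimal retry wrapper with exponential backoff is one straightforward approach. The `TransientLLMError` class below is a placeholder assumption; substitute whichever common error type the library actually raises for retryable failures:

```python
# A user-side retry wrapper with exponential backoff and jitter.
# TransientLLMError is a placeholder (assumption), standing in for the
# library's real common error type for retryable failures.
import random
import time


class TransientLLMError(Exception):
    """Placeholder for a provider-agnostic transient error (assumption)."""


def with_retries(call, max_attempts=3, base_delay=0.5):
    """Call `call()`, retrying on transient errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except TransientLLMError:
            if attempt == max_attempts:
                raise
            # Backoff doubles each attempt: base, 2*base, 4*base, ... plus jitter.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))


# Usage: a flaky call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientLLMError("rate limited")
    return "ok"


result = with_retries(flaky, base_delay=0.01)
print(result)  # prints "ok" after two retries
```

Keeping retries outside the library lets users pick their own policy (backoff schedule, retry budget, which stop reasons count as retryable) rather than inheriting a one-size-fits-all default.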