Right now, every provider has a different request/response format, which makes integration painful:

- Parsing responses is inconsistent
- Switching models requires custom wrappers
- Error handling and metadata vary wildly
OLLS defines a simple, extensible JSON spec for both inputs (prompts, parameters, metadata) and outputs (content, reasoning, usage, errors). Think of it as OpenAPI for LLMs: portable, predictable, and provider-agnostic.
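To make that concrete, here is a rough sketch of what a unified request/response envelope could look like. Every field name below (`model`, `messages`, `parameters`, `usage`, etc.) is my own illustration of the idea, not the actual OLLS schema; see the repo for the real draft formats.

```python
import json

# Hypothetical OLLS-style input envelope: prompt, parameters, and metadata
# live in one predictable structure regardless of provider.
request = {
    "model": "any-provider/any-model",
    "input": {
        "messages": [{"role": "user", "content": "Hello"}],
        "parameters": {"temperature": 0.7, "max_tokens": 256},
        "metadata": {"request_id": "abc-123"},
    },
}

# Hypothetical output envelope: content, reasoning, usage, and errors
# always appear in the same places, so parsing code is written once.
response = {
    "output": {
        "content": "Hi there!",
        "reasoning": None,
        "usage": {"input_tokens": 5, "output_tokens": 4},
        "error": None,
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```

With envelopes like these, switching providers becomes a translation layer at the edges instead of custom wrappers scattered through application code.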
GitHub repo: https://github.com/julurisaichandu/open-llm-specification — example input/output formats, goals, and roadmap. Looking for contributors, feedback, and real-world use cases!
Let's build a unified LLM interface. Contribute ideas or join the discussion!