Request formats vary across providers (prompt structure, temperature, top_p, etc.), and response formats differ too (metadata, reasoning, error handling). If you want to switch providers or build cross-LLM tooling, you end up writing a custom adapter for every one of them.
Open LLM Specification (OLLS) is a community-driven, open effort to standardize both inputs and outputs for LLMs.
Example:
```json
// Standardized input
{
  "model": "gpt-4o",
  "task": "question_answering",
  "prompt": "What is the capital of France?",
  "parameters": { "temperature": 0.7 }
}

// Standardized output
{
  "content": "The capital of France is Paris.",
  "metadata": { "tokens_used": 123, "confidence": 0.95 }
}
```
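To make the shapes concrete, here's the same example expressed as TypeScript types. This is just a sketch: the field names come from the JSON above, and the required/optional split is my own assumption rather than anything the spec has settled on.

```typescript
// Sketch of the standardized shapes as TypeScript types.
// Field names mirror the JSON example; the required/optional split
// is an assumption, not a ratified part of OLLS.
interface OLLSRequest {
  model: string;                 // provider-specific model id
  task?: string;                 // e.g. "question_answering"
  prompt: string;
  parameters?: {
    temperature?: number;
    top_p?: number;
  };
}

interface OLLSResponse {
  content: string;
  metadata?: {
    tokens_used?: number;
    confidence?: number;
  };
}
```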
The goal is vendor-neutral interoperability, making it easier to:

- Switch between LLM providers without rewriting code (sketched below)
- Parse outputs consistently
- Build universal middleware & tooling
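As a rough illustration of the first point, an adapter layer could translate one OLLS request into each provider's native payload. This is a hypothetical sketch reusing the OLLSRequest type from above; the payload shapes follow OpenAI's chat-completions and Anthropic's messages APIs as publicly documented, but treat the details as assumptions and check current docs.

```typescript
// Hypothetical adapters: one OLLS request in, provider payloads out.
function toOpenAI(req: OLLSRequest) {
  return {
    model: req.model,
    messages: [{ role: "user", content: req.prompt }],
    temperature: req.parameters?.temperature,
  };
}

function toAnthropic(req: OLLSRequest) {
  return {
    model: req.model,
    max_tokens: 1024, // Anthropic requires this; the value here is arbitrary
    messages: [{ role: "user", content: req.prompt }],
    temperature: req.parameters?.temperature,
  };
}
```

With matching adapters on the response side, application code would only ever see OLLS shapes.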
We’re looking for contributors! Anyone can join the discussion, suggest fields, or share real-world needs.
Repo: https://github.com/julurisaichandu/open-llm-specification
Would love feedback from developers working with multiple LLMs. What fields/structures do you think should be mandatory vs optional?
Let’s make LLMs easier to work with across providers.