By using Ollama as a backend provider, Polymcp can coordinate MCP servers and models with minimal configuration. This lets you focus on building agents instead of wiring infrastructure.
from polymcp.polyagent import PolyAgent, OllamaProvider

agent = PolyAgent(
    llm_provider=OllamaProvider(model="gpt-oss:120b"),
    mcp_servers=["http://localhost:8000/mcp"],
)

result = agent.run("What is the capital of France?")
print(result)
What this enables:
• Clean orchestration: Polymcp manages the MCP servers while Ollama handles model execution.
• Same workflow, everywhere: run the same setup on your laptop or in the cloud.
• Flexible model choice: works with models like gpt-oss:120b, Kimi K2, Nemotron, and others supported by Ollama (see the sketch below).
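For example, switching models is just a matter of changing the provider argument. This is a minimal sketch that mirrors the constructor shown above; the model tag and MCP server URL are illustrative, and it assumes the model has already been pulled in Ollama.

from polymcp.polyagent import PolyAgent, OllamaProvider

# Same setup as above, with a different Ollama model tag swapped in.
# "nemotron" is an illustrative tag; any model available in your local
# Ollama install should work the same way.
agent = PolyAgent(
    llm_provider=OllamaProvider(model="nemotron"),
    mcp_servers=["http://localhost:8000/mcp"],
)

print(agent.run("Which tools can you call?"))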
The goal is to provide a straightforward way to experiment with and deploy LLM-powered agents without extra glue code.
Would love feedback or ideas on how you’d use this.