I'm Vincenzo, and I'm working on PolyMCP, an open-source framework that not only exposes Python functions as AI-callable MCP tools but also lets you orchestrate agents across multiple MCP servers.
The idea: instead of rewriting code or wrapping every function with a special SDK, you can:

1. Publish your existing Python functions as MCP tools automatically
2. Spin up a UnifiedPolyAgent that coordinates multiple MCP servers
3. Ask your agent to perform complex workflows spanning different tools
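To make step 1 concrete, here's what "publishing a plain function as a tool" boils down to conceptually: deriving a tool schema from the function's signature and docstring. This is a plain-Python sketch of the underlying mechanism, not PolyMCP's actual API — all names here are illustrative:

```python
import inspect
from typing import get_type_hints

def tool_schema(fn):
    """Derive a minimal MCP-style tool description from a plain function.

    Illustrative only: shows the general idea of auto-publishing
    functions as tools, not PolyMCP's real implementation.
    """
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_map = {int: "integer", float: "number", str: "string",
                bool: "boolean", list: "array"}
    params = {name: {"type": type_map.get(tp, "object")}
              for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def compute_total(prices: list, tax_rate: float) -> float:
    """Sum prices and apply a tax rate."""
    return sum(prices) * (1 + tax_rate)

schema = tool_schema(compute_total)
```

The payoff is that existing functions need no decorators or SDK wrappers: their signatures already carry enough information to generate the tool listing an LLM sees.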
Here’s a quick example in Python:
```python
from polymcp.polyagent import UnifiedPolyAgent, OpenAIProvider

agent = UnifiedPolyAgent(
    llm_provider=OpenAIProvider(model="gpt-4o-mini"),
    mcp_servers=[
        "http://localhost:8000/mcp",
        "http://localhost:8001/mcp",
    ],
    verbose=True,
)

answer = agent.run("Read sales data, compute totals, then summarize.")
print(answer)
```
Or in TypeScript, combining HTTP-based and stdio-based MCP tools:
```typescript
import { UnifiedPolyAgent, OpenAIProvider } from 'polymcp-ts';

const agent = new UnifiedPolyAgent({
  llmProvider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  }),
  mcpServers: ['http://localhost:3000/mcp'],
  stdioServers: [{ command: 'npx', args: ['@playwright/mcp@latest'] }],
  verbose: true,
});

await agent.start();
const answer = await agent.run('Collect data and summarize.');
console.log(answer);
```
Use cases:
• Aggregate data from multiple internal services and scripts
• Build AI copilots that span different tools and languages
• Automate multi-step operational workflows
• Prototype agents that interact with production systems safely
Works with OpenAI, Anthropic, and Ollama models, including local deployments.
GitHub links:
• Core & Agent: https://github.com/poly-mcp/PolyMCP
• Inspector: https://github.com/poly-mcp/PolyMCP-Inspector
• SDK Apps: https://github.com/poly-mcp/PolyMCP-MCP-SDK-Apps
I’d love feedback from anyone exploring agent orchestration or building multi-tool AI pipelines.