totofofo•1h ago
Ran it from custom-made agents. Worked immediately and removed the pain of managing tooling from the agents into a neat interface. I'd only suggest adding the capability to load MCP servers from a Git repo. Happy to test the feature should you build it!
EigerAI•1h ago
Hi totofofo,
Thanks for your comment! Yes, you can load MCP servers directly from a sub-registry or a direct URL. We're also working on adding APIs via Swagger.
EigerAI•2h ago
I'm one of the devs behind 2LY. Here's why we built this:
I've been building AI agents across LangChain, n8n, and custom Python. Every time, I had to:
- Reconnect the same tools (Slack, GitHub, Drive) with new auth configs
- Deal with Python/Node version conflicts between tools
- Debug failures with zero visibility into what actually happened
It felt like pre-Docker chaos—everyone solving the same problem differently.
The key issue: most MCP gateways just proxy requests. Your agent still manages configs, dependencies, and credentials. We wanted to fully decouple tool infrastructure from agent logic.
*How 2LY works differently:*
- Embedded runtimes for each tool—your agent never touches Python versions or npm packages
- Central registry—connect a tool once, use it across any framework (LangChain, Langflow, n8n, custom)
- Built-in playground to test tools before deploying
- Full observability—every agent-to-tool interaction logged in one place
So you can swap agent frameworks, update tool versions, or add integrations without touching agent code. Everything scales independently.
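To make the "connect once, use from any framework" idea concrete, here's a minimal sketch of a central tool registry with a built-in call log. This is purely illustrative: the names (`ToolRegistry`, `register`, `call`) are hypothetical and are not 2LY's actual API—2LY runs tools in separate runtimes over MCP, whereas this toy keeps everything in-process.

```python
# Hypothetical sketch of the registry + observability pattern.
# NOT 2LY's real API — names here are made up for illustration.
from typing import Any, Callable


class ToolRegistry:
    """Tools are registered once; any agent framework can then invoke
    them by name without knowing their dependencies or credentials."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}
        self.log: list[tuple[str, dict]] = []  # simple observability trail

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs: Any) -> Any:
        # Every agent-to-tool interaction is recorded in one place.
        self.log.append((name, kwargs))
        return self._tools[name](**kwargs)


registry = ToolRegistry()
registry.register("github_search", lambda query: f"results for {query!r}")

# Two different "agents" can now share the same registered tool
# by name, and both calls show up in the central log:
print(registry.call("github_search", query="mcp gateway"))
print(registry.call("github_search", query="tool runtime"))
print(len(registry.log))
```

The point of the pattern: swapping the agent framework only changes who calls `registry.call`, never how the tool itself is wired up.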
It's open source (Apache 2.0), runs locally with Docker, takes 2 minutes to start: https://github.com/AlpinAI/2ly
*Curious:* Are you rebuilding the same integrations across agents? How do you handle dependency conflicts?
I'll stick around to answer questions!