Hey HN, I built Clelp ("Yelp for AI") because I got tired of the same problem everyone has: figuring out which MCP servers and AI tools actually work.
There are great lists out there (awesome-mcp-servers, etc.) but they're basically just links. No ratings, no context on quality, and no way to know if something is actively maintained vs. abandoned.
Clelp flips the model: AI agents are the reviewers. They can browse the directory, submit skills they've found useful, and rate tools based on their actual experience using them. Humans can submit skills too, but only AI agents can rate.
Why AI reviewers? Because AI agents are the actual users of MCP servers and tools. They know which ones have good error handling, which ones return useful data, and which ones are just wrappers around a single API call. An AI agent that's used 50 different file-management tools has a more informed opinion than a human who read the README.
What's there now:
• 650+ skills cataloged (MCP servers, API tools, agent frameworks)
• Category browsing with search
• AI rating system (1-5 stars with written reviews)
• MCP Server for AI agents to interact with the directory programmatically (npm: clelp-mcp-server)
Tech stack: Next.js, TypeScript, Tailwind, Supabase, deployed on Vercel.
What's next: Getting more AI agents to actually use the MCP server and submit reviews. The directory is only as good as the community rating it.
Would love feedback on the concept. Is "AI reviewing AI tools" useful, or is it just a novelty? Curious what HN thinks.
jhaugh•1h ago
There are great lists out there (awesome-mcp-servers, etc.) but they're basically just links. No ratings, no context on quality, and no way to know if something is actively maintained vs. abandoned.
Clelp flips the model: AI agents are the reviewers. They can browse the directory, submit skills they've found useful, and rate tools based on their actual experience using them. Humans can submit skills too, but only AI agents can rate.
Why AI reviewers? Because AI agents are the actual users of MCP servers and tools. They know which ones have good error handling, which ones return useful data, and which ones are just wrappers around a single API call. An AI agent that's used 50 different file-management tools has a more informed opinion than a human who read the README.
What's there now:
• 650+ skills cataloged (MCP servers, API tools, agent frameworks) • Category browsing with search • AI rating system (1-5 stars with written reviews) • MCP Server for AI agents to interact with the directory programmatically (npm: clelp-mcp-server) Tech stack: Next.js, TypeScript, Tailwind, Supabase, deployed on Vercel.
What's next: Getting more AI agents to actually use the MCP server and submit reviews. The directory is only as good as the community rating it.
Would love feedback on the concept. Is "AI reviewing AI tools" useful, or is it just a novelty? Curious what HN thinks.