- The motivation: I kept seeing companies with perfectly good APIs, but their users still struggled with complex UIs or had to read through documentation. Meanwhile, building a custom AI agent requires significant engineering resources most teams don't have.
- How it works:
  - Upload your OpenAPI/Swagger spec (or paste your API docs)
  - The system maps out your endpoints and parameters
  - You get an embeddable widget that understands natural-language queries
  - When users ask questions, the agent translates them to API calls and returns results conversationally
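To make the endpoint-mapping step concrete, here is a minimal sketch of turning one OpenAPI operation into a function-calling tool schema. The operation, its `operationId`, and the parameter names are all hypothetical, and a real implementation would also need to handle request bodies, `$ref` resolution, and nested schemas:

```python
def spec_to_tool(path, method, operation):
    """Map a single OpenAPI operation to a function-calling tool schema."""
    properties, required = {}, []
    for param in operation.get("parameters", []):
        properties[param["name"]] = {
            "type": param.get("schema", {}).get("type", "string"),
            "description": param.get("description", ""),
        }
        if param.get("required"):
            required.append(param["name"])
    return {
        # Fall back to a name derived from the path if operationId is absent
        "name": operation.get("operationId")
        or f"{method}_{path.strip('/').replace('/', '_')}",
        "description": operation.get("summary", ""),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

# Hypothetical inventory-search operation from an uploaded spec
op = {
    "operationId": "search_products",
    "summary": "Search the product catalog",
    "parameters": [
        {"name": "category", "in": "query", "required": True,
         "schema": {"type": "string"}, "description": "Product category"},
        {"name": "size", "in": "query", "schema": {"type": "integer"}},
        {"name": "max_price", "in": "query", "schema": {"type": "number"}},
    ],
}
tool = spec_to_tool("/products", "get", op)
```

The resulting schema is what gets handed to the LLM as an available tool, so the model can fill in arguments instead of generating raw URLs.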
- Example: An e-commerce site with an inventory API. Instead of navigating filters, users ask "Do you have running shoes in size 10 under $100?" and get instant results.
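Once the model emits a tool call for a query like the shoe example, the agent still has to turn those arguments into a real request. A rough sketch, assuming a hypothetical base URL, endpoint, and argument names:

```python
from urllib.parse import urlencode

def tool_call_to_url(base_url, path, arguments):
    """Build the GET URL for a tool call the LLM emitted.
    Endpoint and argument names here are illustrative only."""
    return f"{base_url}{path}?{urlencode(arguments)}"

# Arguments the model might produce for "running shoes in size 10 under $100"
args = {"category": "running-shoes", "size": 10, "max_price": 100}
url = tool_call_to_url("https://shop.example.com/api", "/products", args)
# https://shop.example.com/api/products?category=running-shoes&size=10&max_price=100
```

The API's JSON response then goes back to the model, which summarizes it conversationally for the user.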
- Technical details:
  - Uses LLM function calling to map queries to API endpoints
  - Handles authentication (API keys, OAuth)
  - Rate limiting and caching to avoid hammering your API
  - Works with REST APIs (GraphQL support coming)
- Current state: Functional MVP, testing with a few early users. Still figuring out edge cases around complex APIs and auth flows.
- What I'm looking for feedback on:
  - Have you encountered this problem? How did you solve it?
  - Security concerns I should be thinking about?
  - Is this actually useful or just a cool demo?