Our open-source repo is at https://github.com/inconvoai/inconvo (`npx inconvo@latest dev` to run locally), and we also have a cloud version (https://app.inconvo.ai) :)
Here’s a quick demo of Inconvo: https://youtu.be/sbDTAbVG-WQ
We built Inconvo to solve a specific problem: building a chat-with-data demo is easy, but shipping to production for real customers is much harder.
We previously did a Launch HN (https://news.ycombinator.com/item?id=44984096). Since then, we’ve open-sourced the core: the entire data agent, semantic layer, and database connectors — under Apache 2.0, so developers can run it locally end-to-end, inspect the code, and contribute.
For a chat-with-data build, most teams start by showing an LLM the schema and letting it generate SQL. LLMs are great at SQL, so it's easy to get a demo working quickly, but production is where it breaks: you need to hide certain tables/columns, handle role-based access to data, and give the model a better understanding of your business logic and how it maps to the data. Some teams try to solve these issues with prompts, but prompts are just suggestions to a probabilistic system that can emit arbitrary queries. For production, you need a set of guardrails and checks to make sure the system behaves to your specification at all times.
Inconvo flips the approach: constrain the agent up front instead of trying to rein it in after. You define a semantic layer as the contract (approved tables/columns, metrics, filters/enums, join paths, tenancy rules). At runtime, the model outputs a structured query plan in the form of an IL (intermediate language), not SQL. We validate the IL deterministically against the semantic layer contract, and only then compile and execute it, so tenant scoping, field restrictions, and the like are enforced by deterministic code on every request.
Here’s a simplified example of the structured query plan (IL) the model emits:
``` { "table": "reviews", "where": [{ "organisation_id": { "equals": 1 } }], "operation": "groupBy", "operationParameters": { "joins": [{ "table": "products", "joinPath": "reviews.product", "joinType": "inner" }], "groupBy": ["products.title"], "avg": { "columns": ["reviews.rating"] }, "orderBy": { "function": "avg", "column": "reviews.rating", "direction": "desc" }, "limit": 1 } } ```
In a nutshell, Inconvo data agents have a smaller set of safe moves they combine creatively, instead of an infinite surface area to validate after the fact.
There is a real tradeoff here: letting the model generate SQL is maximally flexible and works well for systems with technical humans in the loop who understand the data (e.g. internal BI). Inconvo is more constrained by design so that behavior is enforceable for customer-facing use cases.
Because Inconvo is designed to be embedded in customer-facing products, we’re focused on the developer experience. Applications call Inconvo agents from their backend via our API or TypeScript SDK, and responses come back as typed, structured JSON (text, table, or chart as a Vega-Lite spec) so they can be rendered directly in the UI. In multi-agent setups, e.g. an application-wide agent, it also fits nicely as a specialist chat-with-data sub-agent for the orchestrator.
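As a rough sketch of the integration shape, a backend call looks something like the TypeScript below. The endpoint path, request fields, and response fields here are placeholders for illustration, not our documented API or SDK:

```
// Hypothetical types and endpoint for illustration only; the real API/SDK shapes live in the docs.
type AgentResponse =
  | { type: "text"; content: string }
  | { type: "table"; columns: string[]; rows: unknown[][] }
  | { type: "chart"; spec: Record<string, unknown> }; // Vega-Lite spec

// Placeholder base URL: point it at your local Inconvo instance or your cloud deployment.
const INCONVO_URL = process.env.INCONVO_URL ?? "http://localhost:3000";

async function askDataAgent(question: string, tenantId: number): Promise<AgentResponse> {
  const res = await fetch(`${INCONVO_URL}/v1/agent`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.INCONVO_API_KEY}`,
      "Content-Type": "application/json",
    },
    // Tenant context is set by your backend, never taken from the end user's message.
    body: JSON.stringify({ question, context: { organisationId: tenantId } }),
  });
  if (!res.ok) throw new Error(`Inconvo request failed: ${res.status}`);
  return (await res.json()) as AgentResponse;
}

// Usage (e.g. inside a request handler):
//   const answer = await askDataAgent("Which product has the highest average rating?", 1);
//   if (answer.type === "chart") renderVegaLite(answer.spec);
```

The point of the typed response is that the frontend can switch on `type` and render text, a table, or a chart without parsing free-form model output.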
We’re really happy to show this to you all. Thanks for reading. Please let us know your thoughts and questions in the comments.