I just spent the last few weeks building a database for agents.
Over the last year I built PostHog AI, the company's business analyst agent, where we experimented with giving agents raw SQL access to PostHog databases vs. exposing tools/MCPs. Needless to say, SQL wins.
I left PostHog 3 weeks ago to work on side-projects. I wanted to experiment more with SQL+agents.
I built an MVP exposing business data through DuckDB + annotated schemas, and ran a benchmark with 11 LLMs (from Kimi 2.5 to Claude Opus 4.6) answering business questions with either 1) per-source MCP access (e.g. one Stripe MCP, one Hubspot MCP) or 2) my annotated SQL layer.
My solution consistently reached 2-3x the accuracy (correct vs. incorrect answers), used 16-22x fewer tokens per correct answer, and was 2-3x faster. Benchmark in the repo!
The insight is that tool calls/MCPs/raw APIs force the agent to join information in-context. SQL does that natively.
What I have today:
- 101 connectors (SaaS APIs, databases, file storage) that sync to Parquet via dlt, locally or in your S3/GCS/Azure bucket
- DuckDB as the query engine: cross-source JOINs work natively, plus guardrails for safe mutations / reverse ETL
- After each sync, a Claude agent annotates the schema: table descriptions, column docs, PII flags, relationship maps
It works with all major agent frameworks (LangChain, CrewAI, LlamaIndex, Pydantic AI, Mastra), and local agents like Claude Code, Cursor, Codex and OpenClaw.
I love dinosaurs and the domain was available, so it's called Dinobase.
It's not bug-free, and I'm here to ask for feedback and for major holes in the project I can't see, because the results seem almost too good. Thanks!
Kappa90•1h ago
There are 75 questions, divided into 5 use-case groups: revenue ops, e-commerce, knowledge bases, devops, support.
I then generated a synthetic dataset with data mimicking APIs ranging from Stripe to Hubspot to Shopify to Zendesk.
I expose all the data through Dinobase vs. one MCP per source (e.g. one MCP for Stripe data, one for Hubspot data, etc.).
I tested this with 11 models, ranging from Kimi 2.5 to Claude Opus 4.6.
Finally there's an LLM-as-a-judge that decides if the answer is correct, and I log latency and tokens.
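For anyone curious what that measurement loop could look like, here's a stripped-down sketch of a harness (all names and signatures are my own assumptions, not from the repo): answer_fn is the agent under test returning an answer plus its token spend, and judge_fn stands in for the LLM-as-a-judge call:

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class RunResult:
    question: str
    correct: bool
    latency_s: float
    tokens: int

def run_benchmark(questions, answer_fn: Callable, judge_fn: Callable):
    """Run each (question, expected) pair through the agent and the judge,
    logging latency and token usage per question."""
    results = []
    for question, expected in questions:
        t0 = time.perf_counter()
        answer, tokens = answer_fn(question)
        latency = time.perf_counter() - t0
        verdict = judge_fn(question, expected, answer)
        results.append(RunResult(question, verdict, latency, tokens))
    return results

def summarize(results):
    correct = [r for r in results if r.correct]
    accuracy = len(correct) / len(results)
    # Tokens per *correct* answer: total spend over correct count, so
    # tokens burned on wrong answers still count against the setup.
    tokens_per_correct = sum(r.tokens for r in results) / max(len(correct), 1)
    return accuracy, tokens_per_correct
```

Dividing total tokens by correct answers (rather than all answers) is what makes the "tokens per correct answer" metric penalize setups that burn context on wrong answers.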