I built QueryVeil because I was tired of two things: (1) uploading data to third-party tools, and (2) AI tools that just translate English to one SQL query and call it done.
QueryVeil is an AI data analyst that actually investigates. When you ask "why did revenue drop last month?", it doesn't just run one query — it plans an approach, runs multiple queries, self-corrects when it hits errors, and builds a report with its findings. Like a junior analyst who happens to live in your browser tab.
Everything runs client-side:
- *DuckDB WASM* for SQL execution — your data never leaves your machine

- *WebLLM* for local AI (Llama via WebGPU) — no API keys, no server costs

- *LangGraph agent* for multi-step investigations with tool use
What it actually does:
- Drop in CSV, Excel, JSON, or Parquet files (or connect to Postgres, MySQL, BigQuery)

- Get an instant data brief — row counts, column profiles, anomaly detection, data quality warnings — before you ask anything

- Ask questions in plain English. The AI agent runs multiple queries, self-corrects SQL errors (up to 3 retries), and generates charts automatically

- Proactive insights: correlation detection, outlier flagging, duplicate detection, temporal gap analysis — runs automatically on every new table

- Four modes: Chat, SQL editor (with schema-aware autocomplete), Jupyter-style notebooks (with cell references and variables), and a drag-and-drop report builder

- Share reports and notebooks via public links, embed them, or schedule email delivery

- Command palette (Cmd+K) for quick actions
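To make the proactive-insights idea concrete: the correlation pass boils down to computing a Pearson coefficient for each numeric column pair and flagging pairs above a threshold. A stdlib-only sketch of that shape — the function names and the 0.8 cutoff are illustrative, not QueryVeil's actual implementation:

```typescript
// Pearson correlation coefficient between two equal-length numeric columns.
function pearson(x: number[], y: number[]): number {
  const n = x.length;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    cov += (x[i] - mx) * (y[i] - my);
    vx += (x[i] - mx) ** 2;
    vy += (y[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// Flag every numeric column pair whose |r| meets the threshold.
function correlatedPairs(
  cols: Record<string, number[]>,
  threshold = 0.8,
): [string, string, number][] {
  const names = Object.keys(cols);
  const flagged: [string, string, number][] = [];
  for (let i = 0; i < names.length; i++) {
    for (let j = i + 1; j < names.length; j++) {
      const r = pearson(cols[names[i]], cols[names[j]]);
      if (Math.abs(r) >= threshold) flagged.push([names[i], names[j], r]);
    }
  }
  return flagged;
}
```

The other insight passes (outliers, duplicates, temporal gaps) follow the same pattern: a cheap per-table scan that surfaces findings without being asked.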
Free tier: local AI (WebLLM/Ollama), unlimited files, all four modes, auto-insights. Pro ($19/mo or $190/yr): 12+ cloud models via OpenRouter (Claude, GPT-4o, Gemini, DeepSeek, Llama, etc.), database connections, sharing, scheduled reports. 14-day free trial.
Technical details:

- Nuxt 4, Vue 3, Pinia, TailwindCSS

- DuckDB WASM handles millions of rows in the browser

- LangGraph StateGraph with ReAct loop — the agent has tools for SQL execution, schema inspection, column stats, and creating notebooks/reports

- Self-correction: when SQL fails, the error + schema context goes back to the AI for auto-fix

- WebLLM runs Llama-3.2-3B via WebGPU — zero server cost for the free tier

- Ollama support for people who prefer running models locally

- Server-side: Supabase (auth + Postgres), Stripe billing, OpenRouter proxy with model allowlist
Try the demo instantly — no signup, no email: https://app.queryveil.com/demo
It loads sample ecommerce data, auto-profiles it, shows proactive insights, and lets you chat or write SQL. Everything runs in your browser.
Landing page: https://www.queryveil.com
Solo developer, would love feedback — especially on the agent behavior and whether the proactive insights are useful or noisy.