The Problem: Every time I needed to query CSV or Parquet files, or even just play with SQL, I had to either (a) spin up a Jupyter notebook, (b) use the CLI, or (c) upload the data to a hosted service.
Friction at every step: far too much just to load a CSV or test a bit of SQL while studying.
The Solution: DuckDB's WASM runtime lets you run SQL analysis entirely client-side. Load CSV/JSON/Parquet files from disk or a URL, write SQL, get results instantly. Data stays on your machine.

What It Does:
- SQL editor with autocomplete & syntax highlighting
- Import CSV, JSON, Parquet, Arrow (local files or remote URLs)
- Query history, keyboard shortcuts, theme toggle
- Persistent storage via OPFS (data survives browser refresh)
- Optional: connect to external DuckDB servers
- One-liner Docker deployment or Node 20+ dev server
Technical Details:
- DuckDB compiled to WASM; query execution happens in-browser
- OPFS-backed persistence
- Apache 2.0 licensed
- Runs on Chrome 88+, Firefox 79+, Safari 14+
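For anyone curious what the in-browser execution path looks like, here is a minimal sketch using the public @duckdb/duckdb-wasm API (an illustration of the mechanism, not Duck-UI's actual code): fetch a WASM bundle, instantiate it inside a web worker, open a connection, and run SQL against it.

  import * as duckdb from '@duckdb/duckdb-wasm';

  // Pick a WASM bundle served from jsDelivr and start it in a web worker.
  const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
  const worker = new Worker(URL.createObjectURL(
    new Blob([`importScripts("${bundle.mainWorker!}");`], { type: 'text/javascript' })
  ));
  const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker);
  await db.instantiate(bundle.mainModule, bundle.pthreadWorker);

  // All query execution happens client-side; no data leaves the browser.
  const conn = await db.connect();
  const result = await conn.query(`SELECT 42 AS answer`);
  console.log(result.toArray()); // Arrow table -> plain JS row objects
  await conn.close();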
Use Cases:
- Learning SQL without setting up a database
- Ad-hoc data exploration (CSV → SQL in seconds)
- Quick prototyping before shipping to production
- Privacy-conscious workflows (no data leaves your browser)
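To make the "CSV → SQL in seconds" case concrete, here is a hedged sketch of an ad-hoc exploration helper (again using the generic duckdb-wasm API, not Duck-UI internals; the helper name is made up, and `db`/`conn` would be the objects created in the sketch above):

  import type { AsyncDuckDB, AsyncDuckDBConnection } from '@duckdb/duckdb-wasm';

  // Hypothetical helper: preview a CSV the user just picked or dropped in.
  async function exploreCsv(db: AsyncDuckDB, conn: AsyncDuckDBConnection, file: File) {
    // Expose the file's contents to DuckDB under a virtual file name...
    await db.registerFileText(file.name, await file.text());
    // ...then query it with plain SQL; column names and types are inferred.
    const preview = await conn.query(
      `SELECT * FROM read_csv_auto('${file.name}') LIMIT 20`
    );
    console.table(preview.toArray());
  }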
GitHub: https://github.com/ibero-data/duck-ui
Live Demo: https://demo.duckui.com
Quick Start: docker run -p 5522:5522 ghcr.io/ibero-data/duck-ui:latest
Would love feedback on: (1) use cases I'm missing, (2) performance bottlenecks you hit, and (3) features that would make this your default SQL scratchpad.