The engine is genuinely powerful:
- Full multi-statement analysis tracks lineage across CTEs, subqueries, temp tables, and cross-file references
- Column-level lineage with expression decomposition (not just "column A depends on table B")
- Backward inference — figures out SELECT * columns from downstream usage even without schema
- Type inference with dialect-aware compatibility checking
- Handles lateral column aliases, COPY/UNLOAD statements, table renames, and other edge cases that trip up simpler parsers
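To make the edge-case claim concrete, here's a toy illustration (mine, not from the project) of the kind of pattern that defeats naive parsers: a lateral column alias referenced from the same SELECT list, inside a CTE:

```sql
-- Illustrative only: a lateral column alias (net) reused in the same
-- SELECT list, wrapped in a CTE. Simple parsers lose the chain here.
WITH orders_enriched AS (
    SELECT
        o.id,
        o.amount - o.discount AS net,   -- expression-level lineage: net <- amount, discount
        net * 0.19            AS tax    -- lateral alias: tax depends on net
    FROM orders o
)
SELECT id, net + tax AS total
FROM orders_enriched;
```

Column-level lineage should resolve `total` back to `orders.amount` and `orders.discount` through two alias hops, which is exactly what "expression decomposition" has to get right.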
It's fast:
- Rust core compiled to WebAssembly — analyzes hundreds of files in milliseconds
- Graph layout runs in web workers so the UI never blocks
- Serve mode watches your SQL directory and re-analyzes on save with 100ms debounce
Multi-dialect, multi-format:
- PostgreSQL, Snowflake, BigQuery, DuckDB, Redshift, MySQL, and more
- Native dbt/Jinja support with ref(), source(), config(), var() — not regex hacks
- Exports to Mermaid, JSON, Excel, CSV, HTML reports, or DuckDB for further analysis
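For dbt users, template-aware parsing means models can be analyzed as written. A minimal example using dbt's standard macros (generic names, not from any particular project):

```sql
-- A typical dbt model using dbt's own ref()/source()/config() macros.
{{ config(materialized='table') }}

SELECT
    c.customer_id,
    SUM(o.amount) AS lifetime_value
FROM {{ source('shop', 'raw_orders') }} o
JOIN {{ ref('stg_customers') }} c USING (customer_id)
GROUP BY c.customer_id
```

Resolving `ref()` and `source()` to the actual upstream models and tables is what separates template-aware lineage from regex extraction over rendered SQL.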
Multiple interfaces:
- Web app at flowscope.pondpilot.io — interactive graphs, drag-and-drop
- CLI — `flowscope -d snowflake -f mermaid *.sql` for CI/CD pipelines
- Serve mode — `flowscope --serve --watch ./models --open` gives you a full local web UI in a single 15MB binary
- TypeScript/React packages — embed the engine or visualization in your own tools
The CLI can introspect live databases (`--metadata-url postgres://...`) for accurate wildcard expansion against your actual schema.
Reusable Rust core and React components. Zero data egress. Apache-2.0 licensed.
VS Code extension in the works.
GitHub: https://github.com/pondpilot/flowscope
Try it: https://flowscope.pondpilot.io
What SQL patterns does your lineage tooling struggle with? I'm curious what edge cases I should tackle next.