The idea started simple: I wanted to do batch image processing entirely in the browser without uploading anything to a server. Most existing tools were either cloud-based (privacy concerns, slow for large batches) or desktop apps that were hard to automate and combine into workflows.
A big source of inspiration was macOS Automator. I really liked how you could chain small actions into repeatable workflows, and I wanted something similar for image processing on the web.
At first it was just basic operations like crop, resize, compress, and format conversion. Then I kept adding things like:
face mosaic
face-centered cropping
background removal
old photo restoration
Somewhere along the way it became a full pipeline system that can either run individual steps or chain multiple steps automatically.
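To make the chaining concrete, the core abstraction is roughly this (a simplified sketch: the real steps operate on image buffers, and `PipelineStep` / `runPipeline` are illustrative names, not the actual API — the "buffer" here is just a number array so the sketch stays self-contained):

```typescript
// A step takes an input buffer and produces a new one.
// In the real tool these are crop / resize / compress / etc. over pixel data.
type Buffer = number[];
type PipelineStep = (input: Buffer) => Promise<Buffer>;

// Run steps sequentially, feeding each step's output into the next.
async function runPipeline(input: Buffer, steps: PipelineStep[]): Promise<Buffer> {
  let current = input;
  for (const step of steps) {
    current = await step(current);
  }
  return current;
}

// Two toy "steps" chained the way resize → compress would be.
const double: PipelineStep = async (b) => b.map((x) => x * 2);
const dropOdd: PipelineStep = async (b) => b.filter((x) => x % 2 === 0);
```

Running a single step is just a pipeline of length one, which is what lets the UI offer both individual operations and full workflows on top of the same code path.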
Everything runs locally in the browser—no server-side processing, no uploads, no data tracking.
Technical notes (where I’m unsure)
CPU-heavy operations run in WebAssembly
Some steps are GPU-accelerated via WebGL
Most processing happens off the main thread with OffscreenCanvas + Web Workers
A few ML-ish tasks use Transformers.js in the browser
Next.js is mostly just a UI shell, deployed on Vercel
It works, but I’m not sure this is the “right” architecture long-term. Some issues I’ve run into:
Memory usage grows fast when chaining multiple steps over large batches
Cleaning up intermediate buffers feels fragile
Safari behaves very differently from Chromium-based browsers
Not sure if Next.js is overkill since everything critical is client-side
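For reference, the cleanup pattern I keep gravitating toward looks something like this (a sketch only: `Closable` stands in for ImageBitmap/OffscreenCanvas-style resources that need an explicit `close()`, and the names are mine, not a real API):

```typescript
// Anything holding pixel data that must be released explicitly,
// e.g. ImageBitmap.close() or freeing a wasm-side allocation.
interface Closable {
  close(): void;
}

// Run a chain of steps, closing each intermediate result as soon as the
// next step has produced its output. The catch path is the important bit:
// a step throwing mid-chain is exactly where the leaks tend to come from.
async function runAndRelease<T extends Closable>(
  input: T,
  steps: Array<(b: T) => Promise<T>>,
): Promise<T> {
  let current = input;
  try {
    for (const step of steps) {
      const next = await step(current);
      if (current !== input) current.close(); // caller owns the original input
      current = next;
    }
    return current;
  } catch (err) {
    if (current !== input) current.close(); // release the last intermediate on failure
    throw err;
  }
}
```

This keeps at most two buffers alive at a time per image, but it's exactly the part that feels fragile: every step has to agree on the ownership convention, and one step caching a reference breaks it silently.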
Questions for HN
If anyone here has built heavy client-side tools, I’d love your thoughts on:
How to structure long-running pipelines without memory leaks
Patterns for cancellation / progress reporting without spaghetti code
Whether keeping everything browser-only makes sense
Any obvious architectural smells I’m missing
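For context on the cancellation/progress question, this is roughly the shape I'm experimenting with (a sketch: `AbortSignal` is the standard API, but `onProgress` and the step types are hypothetical names of mine):

```typescript
type Step = (input: number[]) => Promise<number[]>;

interface RunOptions {
  signal?: AbortSignal;                                // standard cancellation token
  onProgress?: (done: number, total: number) => void;  // hypothetical callback
}

// Check the signal between steps so a long batch can bail out early,
// and report progress after each completed step.
async function runCancellable(
  input: number[],
  steps: Step[],
  { signal, onProgress }: RunOptions = {},
): Promise<number[]> {
  let current = input;
  for (let i = 0; i < steps.length; i++) {
    if (signal?.aborted) throw new Error("pipeline aborted");
    current = await steps[i](current);
    onProgress?.(i + 1, steps.length);
  }
  return current;
}
```

The catch is that `abort()` only takes effect between steps here; actually interrupting a step mid-flight (say, inside a wasm call) is the part I haven't solved cleanly, which is why I'm asking.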
I’ve built it out into a usable tool mainly so I can test it against real workloads.
I’m still iterating, so brutally honest feedback is very welcome.