Instead of wiring node graphs or writing long prompts, you import 1–3 reference images and run visual “abilities” directly on canvas. Brood keeps the workflow image-first and model/provider-flexible.
What it does now:
- Single-image actions: diagnose, recast, variations, background edits, crop
- Two-image actions: combine, swap DNA, bridge, argue
- Ambient intent discovery: background intent classification while you edit, surfaced as subtle visual nudges
- Reproducible runs: artifacts + events written to disk for traceability
Why I built it:
I wanted a faster “think with images” loop for product/creative workflows without heavy graph setup, while still keeping reproducibility and provider routing first-class.

Tech:
- macOS desktop app (Tauri)
- Python engine + CLI
- Multi-provider model routing
- Open source (Apache-2.0)
Repo:
https://github.com/kevinshowkat/brood

I’d love feedback on:
- where this beats/loses to node-based tools
- highest-value workflows to prioritize next
- what would make this a daily tool for you