I'm a designer/PM who got tired of the same ritual every time I need to benchmark a competitor: open their product, take screenshots one by one, paste them into Miro, draw arrows by hand, add notes... It takes 2-3 hours just to document a flow I could analyze in 20 minutes if the capture were automated.
I'm building a tool called BenchCanvas. You paste a URL, it crawls the product with a real browser, captures every screen, detects the navigation paths between them, and outputs an interactive canvas with the full flow diagram. From there you can annotate, run AI analysis (heuristics, UX patterns, improvement suggestions), and share with your team.
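For the technically curious: the capture step is conceptually just a BFS over pages, recording a screenshot per page (node) and a navigation edge per link. A minimal sketch in Python, with a toy in-memory fetcher standing in for the real headless browser — all names here are illustrative, not BenchCanvas's actual implementation:

```python
from collections import deque

def crawl(start, fetch, max_pages=50):
    """BFS from `start`. `fetch(url)` -> (screenshot_path, [linked_urls]).
    Returns (screens, edges): the nodes and navigation paths that
    would feed the flow diagram."""
    screens, edges = {}, deque()
    queue = deque([start])
    while queue and len(screens) < max_pages:
        url = queue.popleft()
        if url in screens:          # already captured this screen
            continue
        shot, links = fetch(url)    # real version: headless browser
        screens[url] = shot
        for target in links:
            edges.append((url, target))
            if target not in screens:
                queue.append(target)
    return screens, list(edges)

# Toy "site" standing in for a live product with three screens
SITE = {
    "/": ("home.png", ["/pricing", "/features"]),
    "/pricing": ("pricing.png", ["/"]),
    "/features": ("features.png", ["/pricing"]),
}
screens, edges = crawl("/", lambda u: SITE[u])
print(sorted(screens))             # ['/', '/features', '/pricing']
print(("/", "/pricing") in edges)  # True
```

In practice `fetch` would wrap something like Playwright (navigate, screenshot, extract same-origin links), and the `max_pages` cap plus the visited check keep the crawl from looping on cyclic navigation.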
A few things I'm genuinely unsure about and would love input on:
- Is the manual screenshot pain real for others, or is this just my workflow?
- Would you trust an automated capture enough to show it to stakeholders, or would you always redo it manually?
- The hard part technically is authenticated flows (post-login screens). Is that a dealbreaker if v1 only covers public pages?
If this sounds useful, I have a waitlist, but mostly I'm here for honest feedback before I build further.
davidmartinsu•1h ago