People love posting screenshots of new UI components, performance benchmarks, or "before/after" improvements.
But almost no one shares debug output, architecture warnings, or the list of bugs a tool just caught.
A few reasons seem obvious:
- Privacy: Hard to share findings without exposing internal code
- Professional image: Posting bugs feels like admitting mistakes, even when the tool is doing exactly what it should
- Negativity bias: "Look what's broken" vs "Look what I built"
I've seen this firsthand while building a diagnostic tool: people use it, contribute fixes, and star it, but public discussion is almost nonexistent. The bugs it catches are real; nobody wants to talk about them publicly.
This makes me wonder: are bug-finding tools structurally disadvantaged for discoverability compared to generative/productive tools?
If you've worked on or maintained a diagnostic/linting/debugging tool, I'd love to hear what's worked for you:
- Demo strategies that don't expose user code?
- Health scores or badges that make findings shareable? (rough sketch of what I mean at the end of this post)
- Benchmark formats that feel positive rather than critical?
- Something else entirely?
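For concreteness on the health-score idea above, here's a rough sketch of what I have in mind. Everything in it is hypothetical: the `Finding` fields, the `SEVERITY_WEIGHTS`, the 0-100 scoring, and the shields.io badge are illustrations, not the output of any real tool. The only point is that nothing but aggregate counts and a score ever leaves the machine, never file paths or code.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Finding:
    severity: str  # e.g. "critical", "warning", "info"
    category: str  # e.g. "race-condition", "n+1-query", "dead-code"
    path: str      # stays local; never included in the shared output


# Hypothetical weights for a 0-100 "health score" (higher is healthier).
SEVERITY_WEIGHTS = {"critical": 10, "warning": 3, "info": 1}


def health_score(findings: list[Finding]) -> int:
    """Map weighted finding counts onto a 0-100 scale."""
    penalty = sum(SEVERITY_WEIGHTS.get(f.severity, 1) for f in findings)
    return max(0, 100 - penalty)


def shareable_summary(findings: list[Finding]) -> str:
    """Markdown summary with only aggregate numbers -- no paths, no code."""
    score = health_score(findings)
    color = "green" if score >= 80 else "orange"
    # shields.io static badge: /badge/<label>-<message>-<color>, "/" escaped as %2F
    badge = f"https://img.shields.io/badge/health-{score}%2F100-{color}"
    by_category = Counter(f.category for f in findings)
    lines = [f"![health]({badge})", "", f"**Health score:** {score}/100", ""]
    lines += [f"- {cat}: {n} finding(s)" for cat, n in by_category.most_common()]
    return "\n".join(lines)


if __name__ == "__main__":
    demo = [
        Finding("critical", "race-condition", "src/worker.py"),
        Finding("warning", "n+1-query", "src/orm/loader.py"),
        Finding("warning", "dead-code", "src/legacy/util.py"),
    ]
    print(shareable_summary(demo))
```

The idea is that a summary like this is safe to paste into a README or a social post, so the "look what it caught" story doesn't have to expose anyone's code.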