Over the last two months, we’ve been speaking with dozens of engineering leaders through the Dev Leaders Lunch Club and our startup (Flea) about how AI is really used in software development.
One thing kept coming up: adoption looks high — but impact is uneven. A few developers are shipping faster and writing better code with AI, while others barely use it or don’t trust it.
So we built a structured AI Use-Case Survey to explore what’s actually happening inside engineering teams. It looks at things like:
where across the SDLC (planning → coding → reviews → docs → learning) AI is genuinely helping,
what’s blocking broader adoption, and
how AI is changing roles, responsibilities, and developer experience.
The survey is anonymous by default (no tracking, no marketing). Teams that run it can optionally group results internally if they want to learn from their “AI ahead-of-the-curve” developers — one CTO even said this could help them identify their internal AI champions.
We’re sharing it here because we’d love to hear how others are approaching this:
How do you track or understand AI’s real impact in engineering work?
What signals would you find valuable to benchmark across teams?
Any pitfalls we should avoid when analyzing the results?
You can preview it here: https://helloflea.com/aisurvey
(If you click through, no data is stored unless you explicitly start a team instance.)
We’re genuinely curious about what this community thinks — whether this is the right way to capture what’s changing in how developers work with AI, or if there’s a better approach entirely.