I’ve seen this before with non-AI development. Usually it comes down to skill: people haven’t yet learned how to decompose a big problem into smaller, concrete steps. With some guidance, they improve, and we go from two big "8s" to shipping a few 2s and 3s of value over time.
But with AI system development, I’m not sure if this is the same issue or if the nature of the work really is different. The engineers argue that it’s harder to "shrink" AI work into predictable, incremental pieces because outcomes are probabilistic, not deterministic, and that the pieces can’t simply be broken apart because they depend on one another contextually.
So I’m curious:
- Are others experiencing this shift?
- Is this just a new version of the same problem-decomposition skill?
- Or is building AI systems genuinely a different game that we need to recalibrate expectations around?
- And either way, how have you adapted your process to handle these larger, less predictable tickets?
Would love to hear what’s working for people.