The primary value of measuring cycle time is precisely that it captures end-to-end process inefficiencies, variability, and bottlenecks, rather than individual effort. This systemic perspective is fundamental in Kanban methodology, where cycle time and its variance are commonly used to forecast delivery timelines.
Yes! Waiting for responses from colleagues, slow CI pipelines, inefficient local dev processes, other teams constantly breaking things and affecting you, someone changing JIRA yet again, someone's calendar being full, stakeholders not being available to clear up questions around requirements, poor internal documentation, spiraling testing complexity due to microservices, etc. The list is endless.
It's borderline cruel to take cycle time and use it to measure and judge the developer alone.
- Dev gets a bug report.
- Dev finds problem and identifies fix.
- Dev has to get people to review the PR. Oh, BTW, CI takes 5-10 minutes just to tell them whether their change passes everything, despite the fact that tests are only written for new code and overall coverage is only 20-30%.
- Dev has to fill out a document to deploy to even Test Environment, get it approved, wait for a deployment window.
- Dev has to fill out another document to deploy to QA Environment, get it approved, wait for a deployment window.
- Dev has to fill out another document for Prod, get it approved....
- Dev may have to go to a meeting to get approval for PROD.
That's the *happy* path, mind you...
... And then the Devs are told they are slow rather than the org acknowledging their processes are inefficient.
I've seen sensitive apps that needed an approval to go to prod, but it was async and didn't require a meeting!
But generally, when I'm evaluating cycle efficiency, it's much better to look at everything around the team instead. It's also a good way to improve things for everyone across the space, because fixing shared process problems helps other teams too.
Assume you're in a team where work is distributed uniformly, not one where the faster person only picks up small items.
So it does tell you something. You also nicely avoided the condition I gave you, which is that the team picks up similar tickets and one person doesn't just pick up the easy ones. Assume there's a team lead who isn't blind.
Cycle time measures how long it takes for a unit of work (usually a ticket) to move from initiation to completion within a team's workflow. It is a property of the team / process, not individuals. It can be used to generate statistical forecasts for when a number of tasks are likely to be completed by the team process.
For most teams, actual programming or development tasks usually represent only a small portion—often less than 20%—of the total cycle time. The bulk of cycle time typically results from process inefficiencies like waiting periods, bottlenecks, handoffs between team members, external dependencies (such as waiting for stakeholder approval or code review), and other friction points within the workflow. Because of this, many Kanban-based forecasting methods don't even attempt to estimate technical complexity. They focus instead on historical cycle time data.
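For illustration, here is a minimal sketch of that style of forecasting in Python. The historical numbers, the 85th percentile, and the one-ticket-in-flight assumption are all mine, not from the paper or any specific tool:

```python
import random

# Hypothetical history: days from "started" to "done" for past tickets.
historical_cycle_times = [3, 5, 4, 12, 2, 7, 9, 4, 6, 15]

def forecast_days(n_tasks, trials=10_000, percentile=0.85):
    """Monte Carlo forecast: resample past cycle times to estimate how
    long n_tasks will take, assuming one ticket in flight at a time."""
    totals = sorted(
        sum(random.choice(historical_cycle_times) for _ in range(n_tasks))
        for _ in range(trials)
    )
    return totals[int(percentile * trials)]

# "10 more tickets will be done within N days in 85% of simulated runs."
print(forecast_days(10))
```

Note that nothing in this forecast looks at how hard any individual task is; it only resamples what the process historically delivered.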
For example, consider a development task estimated to take a developer only two days of actual programming. If the developer has to wait on code reviews, deal with shifting priorities, or coordinate with external teams, the total cycle time from task initiation to completion might end up taking two weeks. Here, focusing on the individual’s performance misses the bigger issue: the structural inefficiencies embedded within the workflow itself.
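In Kanban terms that's a flow efficiency of about 20%: active time divided by total cycle time. Using the numbers from the example above (reading "two weeks" as ten working days):

```python
active_days = 2     # actual programming
cycle_days = 10     # initiation to completion, two working weeks
flow_efficiency = active_days / cycle_days
print(f"flow efficiency: {flow_efficiency:.0%}")  # 20%; the rest is waiting
```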
Even if tasks were perfectly and uniformly distributed across all developers—a scenario both unlikely and probably undesirable—this fact would remain. The purpose of measuring cycle time is to identify and address overall process problems, not to evaluate individual contributions.
If you're using cycle time as an individual performance metric, you're missing the fundamental point of what cycle time actually measures.
If this research is aimed at web-dev, sure I get it. I only read the intro. Software happens outside of webdev a lot, like a whole lot.
I didn't see these talked about much in the paper at a glance. Highly recommend Reinertsen's The Principles of Product Development Flow here instead.
In terms of resources, Will Larson's An Elegant Puzzle hits on some of these themes and is very readable. However, he doesn't show much of his work, as it were. It's more like a series of blog posts, whereas Reinertsen's book is more like a textbook. You could also just read a queuing theory textbook and try to generalize from it (and that's where you'll read plenty about high-capacity queues, for example).
To be serious with the recipient, I actually multiply by 3.
What I can't understand is why my intuitive guess is always wrong. Even when I break down the parts (GUI is 3 hours, Algorithm is 20 hours, getting some important value is 5 hours), why does it end up taking 75 hours when the parts only sum to 28?
Sometimes I finish within ~1.5x my original intuitive time, but that is rare.
I even had a large project which I threw around the 3x number, not entirely being serious that it would take that long... and it did.
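Spelling out the arithmetic from those comments (the 3x factor is the commenter's rule of thumb, not anything rigorous):

```python
parts = {"GUI": 3, "Algorithm": 20, "important value": 5}  # hours, from above
naive_hours = sum(parts.values())    # 28: the intuitive bottom-up sum
padded_hours = naive_hours * 3       # 84: the 3x rule, near the actual ~75
print(naive_hours, padded_hours)
```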
* Research scrollbar implementation options. (note, time box to x hours).
* Determine number of lines in document.
* Add scrollbar to primary pane.
* Determine number of lines presentable based on current window size.
* Determine number of lines in document currently visible.
* Hide scrollbar when the whole document fits on screen (document size <= number of displayed lines).
* Verify behavior when we reach the end of the document.
* Verify behavior when we scroll to the top.
When you decompose a task, it's also important to figure out which steps you don't understand well enough to estimate. The unpredictability of these steps is what blows your estimate, and the more of them there are, the less reliable your estimate will be.
If it's really important to produce an accurate estimate, then you have to figure out the details of these unknowns before you begin the project.
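One way to make that concrete is a PERT-style three-point estimate per step, where the poorly understood steps get wide ranges and visibly dominate the spread of the total. The step names and numbers below are hypothetical, loosely modeled on the scrollbar breakdown above:

```python
import math

# (optimistic, likely, pessimistic) hours per step
steps = {
    "research scrollbar options": (2, 4, 16),  # poorly understood: wide range
    "count lines in document": (1, 1, 2),
    "add scrollbar to pane": (2, 3, 5),
    "hide/show and edge cases": (2, 4, 8),
}

total_mean = 0.0
total_var = 0.0
for name, (o, m, p) in steps.items():
    mean = (o + 4 * m + p) / 6   # PERT expected value
    var = ((p - o) / 6) ** 2     # PERT variance
    total_mean += mean
    total_var += var             # variances add if steps are independent
    print(f"{name:28} mean={mean:5.2f}h  sd={math.sqrt(var):.2f}h")

print(f"total: {total_mean:.1f}h +/- {math.sqrt(total_var):.1f}h (one sd)")
```

Run it and the research step alone accounts for most of the total spread, which is the point: if the estimate matters, pin down the unknowns first.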
That sounds like a particularly poor measure; it might even be negatively correlated. I've worked on teams that were highly aligned on principles, style, and understanding of the problem domain (they got there by deep collaboration) and had few comments on PRs. I've also seen junior devs go without support and be faced with a deluge of feedback come review time.
The results actually make sense then. If you look at Fig 7: the cycle time explodes as the number of PR comments goes up. This seems like a symptom that the developers weren't aligned before the PR.
It gels with my personal experience where controversial changes in a PR gum up the works and trigger long comment threads and meetings.
* Fig 2b: the cycle time drops slightly around June and July. I have no idea why this is, but it's amusing.
* Fig 3: more coding days has sharply diminishing returns on cycle time. E.g., from eyeballing the graph, a 3x increase in the number of days per week spent coding (from 2 days to 6 days) only improves cycle time by ~25%.
* Fig 7: more comments on a PR means vastly slower cycle time. I can personally attest to this as some controversial PRs that I've participated in triggered a chain reaction of meetings and soul searching.