https://blog.pragmaticengineer.com/yes-you-should-estimate/ > https://news.ycombinator.com/item?id=27006853
https://josephmate.github.io/PowersOf2/ Complexity Estimator
https://earthly.dev/blog/thought-leaders/ > https://news.ycombinator.com/item?id=27467999
https://jacobian.org/2021/may/20/estimation/ > https://news.ycombinator.com/item?id=27687265
https://tomrussell.co.uk/writing/2021/07/19/estimating-large... > https://news.ycombinator.com/item?id=27906886
https://www.scalablepath.com/blog/software-project-estimatio...
https://estinator.dk/ > https://news.ycombinator.com/item?id=28104934
https://news.ycombinator.com/item?id=28662856 How do you do estimates in 2021?
https://web.archive.org/web/20170603123809/http://www.tuicoo... Always Multiply Your Estimates by π > https://news.ycombinator.com/item?id=28667174
https://lucasfcosta.com/2021/09/20/monte-carlo-forecasts.htm... > https://news.ycombinator.com/item?id=28769331
https://tinkeredthinking.com/index.php?id=833 > https://news.ycombinator.com/item?id=28955154
https://blog.abhi.se/on-impact-effort-prioritization > https://news.ycombinator.com/item?id=28979210
https://www.shubhro.com/2022/01/30/hacks-engineering-estimat...
https://www.paepper.com/blog/posts/monte-carlo-for-better-ti...
https://drmaciver.substack.com/p/task-estimation-101 > https://news.ycombinator.com/item?id=32177425
https://morris.github.io/e7/#?t=
https://stevemcconnell.com/17-theses-software-estimation-exp...
https://www.doomcheck.com/ > https://news.ycombinator.com/item?id=34440872
https://github.com/kimmobrunfeldt/git-hours
https://pm.stackexchange.com/questions/34768/why-are-develop... > https://news.ycombinator.com/item?id=35316808
https://erikbern.com/2019/04/15/why-software-projects-take-l... > https://news.ycombinator.com/item?id=36720573
https://news.ycombinator.com/item?id=42173575
https://www.thecaringtechie.com/p/8-guaranteed-ways-to-annoy... > https://news.ycombinator.com/item?id=43146871
20-25 years later, this is still where we are at :)
But accurate estimate ranges are a super-power for businesses if they can trust them. Never understood why that's not demanded more... oh yeah, it's work :p
But if you are experienced enough, then with some Fermi-style math you will produce good-enough estimates most of the time.
Another way to say this is that an estimate becomes a commitment to not learn.
Re-planning is seen as failure by [management].
Re-planning is what happens when you learn that a previous assumption was not correct.
You should be encouraging learning, as this is THE cornerstone of software development.
This is the exact problem, and why it is so extremely difficult to communicate about the subject.
Another way to say "learn from previous [failures]" is "bureaucracy". We did something wrong last time, so we need a rule about that failure.
It SHOULD NOT be seen as failure, we SHOULD NOT add bureaucracy, and simple rules CANNOT fix the issue.
What should happen: Line managers and developers should be able to communicate about what they are learning as they do the work of software engineering. Developers should be able to re-plan their approach to problem solutions.
This simply will not work if there is not strong trust between upper management/[the money] and the engineering/execution side of the organization.
----
The meta-problem is that you have to see things go wrong many times in many ways before you really understand what is going on, and by that point ... well everything gets very tiresome.
emphasis added
You can make good estimates, but it takes extra time researching and planning. So you spend cycles estimating instead of maximizing throughput, and to reduce risk the plan is usually padded, so you lose extra time there per Parkinson's law. IME a (big) SW company prefers to spend all these cycles, even though technically it is irrational (that's why we don't do it in operating systems).
Only if your company operates in a vacuum, without investors or customers
From your description, SCRUM could work just as well.
Don't get me wrong, I'm a fan of Kanban; it's awesome for visually presenting where the bottlenecks are for tasks, but estimation isn't a feature of it.
But SCRUM, where people hold a sprint planning meeting, may be more what you're thinking of?
SCRUM implies sprints where you agree in advance what will actually be pulled into the sprint and delivered by the team, so spillovers are not really expected or wanted.
So, I think the issue is whether it is routine workflow work with well-tested historical timelines or not.
Nevertheless, estimates are needed at some level of granularity. When you order something on Amazon, wouldn't you like an estimate of when the item will be delivered?
Even if coding work can't be estimated, the overall project requires estimation. Someone needs to commit to timelines and come under pressure. Distributing that pressure is only fair.
Even that doesn’t work because the time taken isn’t just about similarity to other work, it’s about how your new feature interacts with the current state of the codebase which is not the same as when the similar feature was implemented before.
Ultimately, it’s a complexity problem that’s borderline impossible for our feeble human brains to properly understand. And we consistently misunderstand the nature of that complexity.
I didn't say that at all. But it is an extreme form of abstract complexity, different from the more tangible, physical complexity you might have in, say, a jet engine. That is not to say a jet engine isn't complex; it is, and obviously so. But in many ways I think most people don't fully grasp the extreme complexity of a medium-to-large software project because, on the surface, we've apparently captured a lot of it behind reusable libraries and the like. The reality is much more nuanced: most software engineering does not follow a technique of provably sound composition of components, so complexity emerges as a sum of its parts. That accumulated complexity creeps up on a development team and surfaces as bugs, security issues, difficulty in planning, difficulty in maintenance, etc.
My singular argument is: reduce complexity, improve predictability.
> But still, estimates can't be avoided.
I also didn't state that estimates can be avoided. I ran a tech company for 20 years and I needed to be able to tell everyone else in the company when things are likely to be done. That's just a fact of life.
When teams do need strong estimates, then the best way I know is doing a project management ROPE estimate, which uses multiple perspectives to improve the planning.
https://github.com/SixArm/project-management-rope-estimate
R = Realistic estimate. This is based on work being typical, reasonable, plausible, and usual.
O = Optimistic estimate. This is based on work turning out to be notably easy, or fast, or lucky.
P = Pessimistic estimate. This is based on work turning out to be notably hard, or slow, or unlucky.
E = Equilibristic estimate. This is based on success being 50% likely, such as for critical chains and simulations.
https://en.wikipedia.org/wiki/Three-point_estimation
https://projectmanagementacademy.net/resources/blog/a-three-...
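For illustration, the ROPE values above map naturally onto the classic three-point (PERT) formula described in the linked articles. This is a minimal sketch with invented numbers; `pert_estimate` is a hypothetical helper, not part of the ROPE repo:

```python
# Three-point (PERT) estimation: weight the realistic case 4x,
# then derive a standard deviation from the optimistic/pessimistic spread.
def pert_estimate(optimistic, realistic, pessimistic):
    mean = (optimistic + 4 * realistic + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Invented numbers for one task: O=2, R=5, P=14 days.
mean, sd = pert_estimate(2, 5, 14)
print(f"expected: {mean:.1f} days, sigma: {sd:.1f} days")
# expected: 6.0 days, sigma: 2.0 days
```

Reporting the mean together with the sigma is what makes this a range rather than a single number.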
And yeah, they can be useful when you must put numbers to things and can decompose the work into familiar-enough tasks that your estimation points are informed and plausible. Unfortunately, in a field often chasing originality/innovation, with tools that turn over far too often, that can be a near-impossible criterion to meet.
I've found giving probabilistic estimates to be hopeless to communicate effectively, even if you assume the possible outcomes are normally distributed, which they aren't.
Tasks that are easiest to estimate are tasks that are predictable, and repetitive. If I ask you how long it'll take to add a new database field, and you've added a new database field 100s of times in the past and each time they take 1 day, your estimate for it is going to be very spot-on.
But in the software world, predictable and repetitive tasks are also the kinds of tasks that are most easily automated, which means the time it takes to perform those tasks should asymptotically approach 0.
But if the predictable tasks take 0 time, how long a project takes will be dominated by the novel, unpredictable parts.
That's why software estimates are very hard to do.
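One way to make that concrete is a Monte Carlo sketch in the spirit of the forecasting links above (all task parameters here are invented): routine tasks get narrow distributions, novel ones get fat-tailed lognormals, and the project total's spread ends up driven almost entirely by the novel work.

```python
import random

def sample_project(tasks, trials=10_000):
    """Monte Carlo: draw a lognormal duration per task, sum them,
    and report the median (P50) and 90th-percentile (P90) totals."""
    totals = sorted(
        sum(random.lognormvariate(mu, sigma) for mu, sigma in tasks)
        for _ in range(trials)
    )
    return totals[trials // 2], totals[int(trials * 0.9)]

tasks = [
    (0.0, 0.2),  # routine: ~1 day, tight spread
    (0.0, 0.2),  # routine: ~1 day, tight spread
    (1.6, 0.9),  # novel: ~5 days median, fat right tail
]
p50, p90 = sample_project(tasks)
print(f"P50 ~ {p50:.1f} days, P90 ~ {p90:.1f} days")
```

Run it and the gap between P50 and P90 is almost entirely attributable to the single novel task, which is the point of the comment above.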
Needless to say, we always underestimate. Or overestimate. Best case, we use an overestimated task as buffer for the more complex ones.
And it has been years.
Giving estimations based on complexity would at least give a clear picture.
I honestly don’t know what the PO and TL gains with this absurd obscenity.
One of the hardest problems with estimating for me is that I mostly do really new tasks that either no one wants to do because they are arduous, or no one knows how to do yet. Then I go and do them anyway. Sometimes on time, mostly not. But everybody working with me already knows that it may take long, but I will achieve the result. And in rare instances other developers ask me how I managed to find the bug so fast. This time I was doing something I had never done before in my life, and I missed some code dependencies that needed changing when I was reviewing how to do that task.
We knew that couldn't possibly be right, so we doubled the estimate and tripled the team, and ended up with 6 people for 18 months - which ended up being almost exactly right.
[1]: We were moving from a dead/dying framework and language onto a modern language and well-supported platform; I think we started out with about 1MLOC on the old system and ended up with about 700K on the rewrite.
software is only tougher to estimate if incompetent people (the vast majority of the industry, like 4+ million) are doing the estimating :)
You can guess what happens next, which is that around week 8 the business is getting pretty angry that their 4-week project is taking twice as much time as they thought, while the engineering team has encountered some really nasty surprises and is worried they'll have to push to 24 weeks.
First, people asking for estimates know they aren't going to get everything they want, and they are trying to prioritize which features to put on a roadmap based on the effort-to-business-value ratio. High impact with low effort wins over high impact high effort almost every time.
Second, there's a long tail of things that have to be coordinated in meat space as soon as possible after the software launches, but can take weeks or months to coordinate. Therefore, they need a reasonable date to pick- think ad spend, customer training, internal training, compliance paperwork etc.
"It is impossible to know" is only ever acceptable in pure science, and that is only for the outcome of the hypothesis, not the procedure of conducting the experiment.
This isn't true, just desired, and is one of the main roots of the conflict here. OF COURSE you would like to start selling in advance and then have billing start with customers the instant the "last" PR is merged. That isn't a realistic view of the software world though, and pretending it is while everyone knows otherwise starts to feel like bad faith. Making software that works, then having time to deploy it, make changes from early feedback, and fix bugs is important. THEN all the other business functions should start the can't-take-back parts of their work that need to coordinate with the rest of the world. Trying to squeeze some extra days from the schedule is a business bet you can make, but it would be nice if the people taking this risk were the ones who had to crunch or stay up all night or answer the page.
Trying to force complicated and creative work into a fake box just so you can make a gantt chart slightly narrower only works on people a couple times before they start to resent it. 10x that if management punishes someone when that fantasy gantt chart isn't accurate and 100x that if the one punished is the person who said "it's impossible to know" and then was forced into pretending to know instead of the person doing the forcing.
And he really only used them in comparison to estimates for other tasks, not to set hard deadlines for anything.
But still we are much better at estimating complexity
Time estimates usually tend to be overly optimistic. I don’t know why. Maybe the desire to please the PO, or the fact that we never seem to take into account factors such as having a bad day, interruptions, and context switches.
T-shirt sizes or even story points are way more effective.
The PO can later translate them to time after the team reaches a certain velocity.
I have been developing software for over twenty years, I still suck at giving time estimates.
I've been in planning sessions where someone confidently declared something would take half a day, was surprised when I suggested it would take longer than that (since they were basically saying "this'll be finished mid-afternoon today")... and was still working on it like 3 weeks later.
I like to think of this as 'pragmatic agile': for sure break it down into tasks in a backlog, but don't get hung up on planning it out to the Nth degree because then that becomes more waterfall and you start to lose agility.
There are marketing campaigns that need to be set up, users to be informed, manuals to be written. Sales people want to sell the new feature. People thinking about roadmaps need to know how many new features they can fit in a quarter.
Development isn't the only thing that exists.
As well as the actual development work that will result, which isn't known yet at the time of estimation.
As you say, worthwhile software is usually novel. And to justify our expense, it needs to be valuable. So to decide whether a project is worth doing, we're looking at some sort of estimate of return on investment.
That estimate will also, at least implicitly, have a range. That range is determined by both the I and the R. If you don't have a precise estimate of return, making your estimate of investment more precise doesn't help anything. And I've never seen an estimate of return both precise and accurate; business is even less certain than software.
In my opinion, effort put into careful estimates is almost always better put into early, iterative delivery and product management that maximizes the information gained. Shipping early and often buys much clearer information on both I and R than you can ever get in a conference room.
Of course all of this only matters if running an effective business is more important than managerial soap opera and office politics. Those often require estimates in much the same way they're required from Star Trek's engineers: so the people with main character syndrome have something to dramatically ignore or override to prove their dominance over the NPCs and material reality.
But that is rarely how it works. In the dozens of different projects across ten or twelve companies I’ve had insight into, “doing Agile” is analogous with “we have a scrum master, hold stand ups, and schedule iterations” while the simple reality is “Agilefall.”
You might be speaking a little more broadly than I am interpreting.
This is what "agile" is: https://agilemanifesto.org/
More specific methodologies that say they are agile may use concepts like estimates (story points or time or whatever), but even with Scrum I've never run into a Scrum-imposed "deadline". In Scrum the sprint ends, yes, but sprints often end without hitting all the sprint goals and that, in conjunction with whatever you were able to deliver, just informs your backlog for the next sprint.
Real "hard" deadlines are usually imposed by the business stakeholders. But with agile methods the thing they try to do most of all isn't manage deadlines, but maximize pace at which you can understand and solve a relevant business problem. That can often be just as well done by iteratively shipping and adjusting at high velocity, but without a lot of time spent on estimates or calendar management.
This is so good.
This is an interesting assumption. I’d argue that the overwhelming majority of software is the most boring LoB CRUD apps you can imagine, and not novel at all. Yet, people need to estimate the tasks on these projects as well.
If something is truly boring in software, it gets turned into a library or a tool for non-programmers to use. Our value is always driven by the novelty of the need.
And no, people don't need to estimate the tasks. My dad did LoB apps in the 1970s to the 1990s. E.g., order entry and shop floor management systems for office furniture factories. His approach was to get something basic working, see how it worked for the users, and then iteratively improve things until they'd created enough business advantage and/or cost savings to move on. Exploratory, iterative work like that can at best be done with broad ballpark estimates.
I grant that people want estimates. But that is usually about managerial fear of waste and/or need for control. But I think there are better ways to solve those problems.
[1] e.g., https://en.wikipedia.org/wiki/DBase
Everything you said could apply to a new bridge, building, pharmaceutical compound, or anything else that is the result of a process with some known and some unknown steps.
No one in that industry is giving estimates based on developing brand new drugs - they're giving estimates related to manufacturing lead times, unalterable physics time lines, and typical time to navigate administrative tasks which are well known and generally predictable (but also negotiable: regulations have a human on the other end). All of this after they have a candidate drug in hand.
Same story with bridge building, basically: no one puts an estimate on coming up with a brand new bridge design: they're well-understood, scalable engineering constructions which are mostly gated by your ability to collect the data needed to use them - i.e. a field survey team etc. - and, once again, regulatory processes and accountability.
"Everything"? So
> predictable and repetitive tasks are also the kinds of tasks that are most easily automated, which means the time it takes to perform those tasks should asymptotically approach 0.
Also applies to bridges? Bridges require a ton of manual human input at every stage of construction, regardless of how predictable and repetitive the work is. With software, we can write software to make those tasks disappear. I've yet to see the bridge that can build itself.
2. You can estimate the duration of each step of a process, regardless of how much human involvement is required.
We have a fundamental failure to communicate, what we're doing. The game project managers and finance believe we're all playing is a regression towards the mean, where everything is additive and rounds up to nice consistent formulaic sums. Whereas software development follows power law distributions. A good piece of software can deliver 10x, 100x, or 1000x the cost to produce it (ex: distribution, cost of delivering another copy of software is near 0 once written). You don't get that sort of upside with many other investments. Finance is happy with an NPV 8% above what they invest in. This means that when software people talk, everything they say sounds foreign, and everyone assumes it's because of jargon. It's not. The fish don't know they're swimming in water. When the fisherman comes, everyone is caught off guard.
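A toy simulation of that mismatch (distribution choice and all numbers invented for illustration): draw project returns from a fat-tailed Pareto distribution and compare the "typical" project to the portfolio average.

```python
import random

random.seed(1)
# Power-law-ish returns: most projects return a little,
# a handful return enormously.
returns = sorted(random.paretovariate(1.3) for _ in range(1000))

median = returns[500]
mean = sum(returns) / len(returns)
top_share = sum(returns[-10:]) / sum(returns)  # top 1% of projects

print(f"median return {median:.1f}x, mean return {mean:.1f}x")
print(f"top 1% of projects carry {top_share:.0%} of total return")
```

The mean sits well above the median because a few outliers dominate the total, which is exactly why additive, regression-to-the-mean thinking misreads a software portfolio.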
So we get what the author talks about
> The estimates stopped being estimates. They became safety railings against being held accountable for unreasonable expectations.
We. Pad. Like. Crazy. Yes this is inefficient. Some project managers recognize this. We get theory of constraints. But rather than cull the layers of hierarchy that lead to the padding in the first place, all the blame for failure goes back to developers. Get hit on the head enough and you will stop acting in good faith and pad to save your ability to feed and clothe yourself.
The agile solution of incremental value delivery is a compromise, and can produce good outcomes for functional changes. But agile has unacceptable failure modes when working on infrastructure and satisfying system constraints. Agile can work okay for programmers, but it's not a solution for engineers. Acknowledging, owning, and managing risk is more scalable, but you have to have leaders who acknowledge that they exist and have the maturity to take on that responsibility.
If we're just making up numbers, why don't you just make it up yourself and save the developers the trouble?
Say you have problem A to solve. Then either one of those is true:
1) it has been solved before; then, by virtue of software having a zero cost of copying (contrary to, say, a nail, a car, or a bridge), there is no actual problem to be solved.
2) it hasn't been solved before, ergo it is a new problem, and thus at any moment you may turn a stone and discover something that was not foreseen (whether they are rabbits, yaks, bikesheds, dragons, or what have you eldritch horrors) and thus of unknown cost.
Any task that cannot be obviously fit into one or the other can nonetheless be split into an assembly of both.
Thus any attempt at estimates is as futile as gambling to win, tasks are only ever done when they're done, and "successful estimators" are kings of retconning.
It's all make-believe.
Anybody with any business experience has already isolated themselves from the certainty of software project failure, where "failure" is a euphemism for "late." So it doesn't matter if software can't be estimated.
This can be nerve-wracking to a beginner, but one gets used to it over time.
Making mistakes over and over again. And adding a lot of time buffers just to be safe
Missing estimates isn't unique to software; it's common across all engineering fields.
86% is more than "occasionally".
Two elements (the first quite obvious, the second not really) seem to be particularly common in overruns:
- the bigger the project, the likelier the overrun. Small road projects tend to be overestimated; complex rail projects are virtually always way underestimated; megaprojects are never close to budget.
- the lengthier the planning and pre-construction phase, the likelier the overrun. This is particularly interesting because it's counterintuitive: you would expect that the more analysis is done, the more accurate the estimates, but experience tells us the truth is the very opposite.
If you're not subject to the batshit insanity of the broader software market, and you crank the planning up a little closer to what "real" engineering does, software delivery gets extremely predictable. See: NASA's Space Shuttle software development processes.
(There are actual, not self-inflicted, problems with software development that "real" engineering doesn't see, though—for one thing, you can't on a whim completely restructure a building that's already been built, and for another, you generally don't have to design real-world engineering projects to defend against the case of intentional sabotage by someone who's just e.g. using the bathroom—it may happen, but it's rare and short of wacky Mission Impossible type plans, busting a water pipe in the bathroom isn't going to get you access to super-secret documents or anything like that)
But I get all this pushback when I do that, such that the path of least resistance is to give some bullshit estimate anyway. Or I get asked to make a "rough guesstimate", which inevitably turns itself into some sort of deadline anyway.
Garbage in, garbage out. Inaccurate estimates, unreasonable timelines, stressed devs and upset PMs.
I'm so over working on software teams.
For humans, 2x the original estimate.
Asking "how long do you want me to spend on this?" got better results, because I got a better idea of how important tasks were to the business and could usually tell if something was going to take longer than they wanted. (Or knew when we needed to discuss scoping it back, or just abandoning the feature.)
The carpenter's trick of "measure many times, cut once" can instead become "cut, then re-size if it's the wrong size", which can often be quicker.
Asking how long they want me to spend on it also lets me know how solidly it needs to be engineered. Is it something that needs doing right the first time, or do we just want something rough that we can refine later with feedback?
Mainly they are useful to build belief and keep a direction towards the goal.
Models of any kind in whatever domain are necessarily always something less than reality. That is both their value and weakness.
So estimates are models. Less than reality. Therefore we should not expect them to be useful beyond "plans are useless, but planning is indispensable" - I think that's Eisenhower.
The PM and team lead write a description of a task, and the whole team reads it together, thinks about it privately, and then votes on its complexity simultaneously using a unitless Fibonacci scale: 1,2,3,5,8,13,21... There's also a 0.5 used for the complexity of literally just fixing a typo.
Because nobody reveals their number until everyone is ready, there's little anchoring, adjustment, or conformity bias, all of which are terribly detrimental to estimation.
If the votes cluster tightly, the team settles on the convergent value. If there’s a large spread, the people at the extremes explain their thinking. That’s the real value of the exercise: the outliers surface hidden assumptions, unknowns, and risks. The junior dev might be seeing something the rest of the team missed. That's great. The team revisits the task with that new information and votes again. The cycle repeats until there’s genuine agreement.
This process works because it forces independent judgment, exposes the model-gap between team members, and prevents anchoring. It’s the only estimation approach I’ve seen that reliably produces numbers the team can stand behind.
It's important that the scores be unitless estimates of complexity, not time. How complex is this task? not How long will this task take?
One team had a rule that if a task had complexity 21, it should be broken down into smaller tasks, and that an 8 roughly meant the complexity of implementing a REST API endpoint.
A PM can use these complexity estimations + historical team performance to estimate time. The team is happy because they are not responsible for the PM's bad time estimation, and the PM is happy because the numbers are more accurate.
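That translation step can be sketched in a few lines; the backlog size and past velocities here are invented for illustration:

```python
import math

def forecast_sprints(backlog_points, past_velocities):
    """Translate story points to sprints via average historical velocity."""
    velocity = sum(past_velocities) / len(past_velocities)
    # Round up: a partially used sprint still occupies the calendar.
    return math.ceil(backlog_points / velocity)

# Invented data: 120 points remaining, last four sprints' velocities.
sprints = forecast_sprints(120, [21, 26, 24, 25])
print(f"forecast: ~{sprints} sprints")
# forecast: ~5 sprints
```

The team votes only on complexity; the calendar math lives entirely on the PM's side, which is what keeps the two responsibilities separate.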
A clear description with background appears in Mike Cohn’s original writeup on Planning Poker: https://www.mountaingoatsoftware.com/agile/planning-poker
* the arbitrary use of the Fibonacci sequence
* a non-clear conversion from complexity to time. Complexity and time aren’t always correlated. Some things can be easy but take a long time. Should that be a 1 or a 5?
Let’s just cut out that layer of terminology in this already difficult task and estimate in units of time directly.
As for “just estimate in time,” the problem is that teams are consistently bad at doing that directly. Mixing “how hard is this” with “how long will it take” collapses two separate variables: intrinsic complexity and local throughput. Story points deliberately avoid that conflation. The team’s velocity is what translates points into time at the sprint level, and that translation only stabilizes if the underlying unit is complexity rather than hours.
The whole point of the method is to strip away the illusion of precision. Time estimates look concrete, but they degrade immediately under uncertainty and project pressure. Relative complexity estimates survive discussion, converge reliably, and don’t invite the fallacy that a complex task with a high risk of surprises somehow has an exact hour count.
That’s why the technique exists. Estimating time directly sounds simpler, but in practice it produces worse forecasts because it hides uncertainty instead of exposing it.
Estimates don't work there at all - everything is new.
So, flip it. Use known values to prioritize work. That is: client demand and (potential) revenue. Then allocate known time/budget to the necessary project, see how far you get, iterate. Team can move faster. Looks chaotic.
At some (uncomfortable) point however, need to rotate into the "standard" process.
Words mean things. Estimate carries a certain weight. It's almost scientific sounding. Instead, we should use the word "guess".
It's exactly equivalent, but imagine the outcome if everyone in the chain, from the very serious people involved in thinking up the project, to funding the project, to prioritising and then delivering the project, all used the word "guess"
Now, when the project is millions of dollars over budget and many months/years late, no one is under any pretence that it was going to be anything else.
I tried this once. It turns out serious people don't like the idea of spending millions of dollars based on "guessing", or even letting developers "play" in order to better understand the guesses they are forced to make, even when it turns un-educated guesses into educated guesses.
Of course, none of this would improve the outcome, but at least it sets expectations appropriately.
These are usually companies that are led by and perform engineering work.
Software developers aren’t engineers.
Project managers have no authoritative training, certification or skills to manage software development projects.
When you discover more work hidden under that "simple" pile of code, you absolutely HAVE to update your estimate. Add more points, add more tickets, whatever. But then your various managers have the ammunition to decide what to do next - allocate more resources to the project, descope the project, push back the release date, etc.
Far too frequently, the estimate is set in stone at the start of the project and used as a deadline that is blown past, with everyone going into crisis mode at that point. The earlier the estimate is updated, the calmer and more comprehensive action everyone responsible can take.
Also doesn’t help when estimates become due dates.
Projects can and will fail or run late; but heck, a 6-month project cannot be discovered to be late after 5 months and 29 days; things must be discovered early, so that the most important issues can be addressed.
We've gone through so many cycles of how to use Jira well at my org where these frustrations are shared and we try a different approach, and we're finally starting to converge on the idea that this has historically been a little too lopsided, requiring too much tax on the developer doing the actual work.

We agreed on a new approach that has actually been pretty awesome: the product owners or managers trying to direct a body of work must be a little more in the trenches with us, with an overall understanding of where the different pieces are in a moving body of work. We don't expect them to understand the nitty gritty work in the trenches, but at the same time no more 30,000-foot-view product managers who just ask for status updates at EOD. _Everyone_, developers included, is responsible for keeping documentation up to date as we go. So we have central working bodies of information to find details without having to cruise thru 100+ Jira tickets.

The expectation is that they're engaged enough with development, whether in chat or in meetings, that if they were blindsided by an executive asking for an update, they could speak to it with some authority over at the water cooler without having to go check Jira. This has really helped weed out the lazy product owners/managers, has forced them to thoughtfully consider their meeting schedules, and has placed the exceptional ones in the pod of work being done, which has really added a lot of velocity and together-ness about the things we're pushing along.
This approach we're using now was born out of some hurt feelings from projects that didn't go so well, and we had to have some real retrospective convos where everyone aired out their beef. Those are good convos to have; I think a lot of teams would find that people aren't deceptively trying to screw you over. Being encouraged to level set, human-to-human, is genuinely one of the greatest parts of working where I work. I always walk away from those types of chats having learned valuable things: for the most part our product owners really do care. Not just about their career aspirations but also about _us_ nerdy and sometimes socially maladjusted developers. They look forward to working with us, and they want to make this as easy as possible for themselves but also for the developers.

In the past they spent a lot of time in planning phases trying to scaffold out a project in Jira and attach timelines to it so they could provide predictable timelines to their bosses... but also with the hope that plainly outlining the work would, 2-in-1, satisfy our needs and make development timelines a breeze. We've had to ask them to cede rigidity on that latter part, because even the software architects admit the work being done is often a moving target. And when that target moves, maybe you realize you need to add a pivotal software solution to the stack, and you can sometimes throw like 45 planned tickets into the dumpster. New ship dates need to be assessed. This was our reality check that we were all collectively shit at adapting to the dynamic nature of any given project. Now, our product owners have decided that the expectation of their own role is that they understand this dynamic and are prepared and willing to make the case for why the shipping timeline must change.
So there's actually a pain point solved here: don't break your back doing so much up-front work trying to guess/capture what the body of work might look like, only for it all to possibly get thrown away; involve architecture a bit more in the planning phases; but most importantly, engage throughout the project so we can share ownership of (and interest in) making sure where we are in the project is broadly understood by everyone involved.
We're currently in the middle of implementing a major platform, and it's just night-and-day better, and dare I say fun. We're still keeping Jira up to date, but the product owners and PMs are more or less managing it, since it's a tool they find useful. Removing the 24/7 "can you update this ticket please" has forced them to be a little more involved and have the right chats, but it also makes us developers happier to jump in and update things of our own volition, because we also want to help them have an easier time. If my PM pings me and says "hey, I'm looking at this ticket that's stuck in blocked, I just wanted to make sure we got an update from so-and-so about provisioning this credential so I can follow up if needed," I will likely automagically jump in and be like "still stuck, but let me update that ticket for you; there are a couple key details I want to make sure are out there for you before you reach out." There's an inherent "we're both interested in seeing this ticket through" here that doesn't strike a nerve with either party. Pretty much everyone involved, developers and non-developers alike, has a really solid read on where anything's at, and we're all just talking a lot more. And for developers I find it's really good, even if they're committed to one narrow body of work, to understand the larger pieces in motion. When they're in tune with the broader orchestration of a project's timeline, they tend to weigh in at unsuspecting moments that tie seemingly unrelated pieces together. They might be assigned to work on x, but in a group chat about y they notice y has a dependency on x, and they'll speak up and call out the need to test that the two work together. We've had a lot of great callouts materialize like this, and on a human-psyche level I think it snowballs and avalanches, encouraging developer participation in a way that is really meaningful for PMs.
It's interesting that Jira, and the expectation of predicting development time in an arena of uncertainty, previously stood in the way of forming the group dynamics we have now. Jira, despite just being a tool, can really amplify a lot of bad outcomes when it's used by those who aren't fit to be near development; it devolves into a two-dimensional behind-schedule tracker that detrimentally impacts how team members on the ground communicate with each other.
And since we're talking a lot more there's just like... way more memes. And memes are important in any development timeline. We prioritize laughing through the pain together.
We didn't get it right the first, second, third, fourth, or fifth time. I'd say as an org we are learning lessons that other orgs may have learned a decade ago, but it's just nice to come to these conclusions on our own. We hope to have more technical product guys on board ahead, because it's a dream setup that organizes and harnesses velocity in all the right places. It's so nice to have a technical product person step in and say "no" to some absurd executive request because they are well aware of what such an implementation would look like. They can actually be vanguards and stewards over development personnel in their own way, and it seems to go hand in hand with a lot of mutual respect for each other. I always get a kick out of nerding out over possibilities with our technical product dudes.
They wrangle a number out of you, which goes into a user story estimate, which feeds into a Gantt chart they use to make the pretty PowerPoints they present to upper management, saying the feature will make it into the Q4 release.
If you move this number around, the whole estimation will crumble. Not that it won't crumble in real life anyway, but you deprive them of two things: an illusion of control, and somebody to blame when things go south.
Does this actually happen to you? The whole point of agile is literally to change the plan as you learn more about your work. If you didn't want to change the plan, you'd spend a lot of time on up-front planning and do waterfall.
Like, a Gantt chart is more or less explicitly anti-agile. I'm aware of the 'no true Scotsman' thing, but we shouldn't buy into people using agile terms for what is really a BDUF (big design up front) plan.
By the time my work is done, that estimate will be perfect
Wait
Perhaps my true job is to create a perfect estimate? Is coding only a side effect?
By the time it reaches the customer, your rough guess with explicit uncertainty has become a hard commitment with legal implications. And when you miss it, the blame flows backward.
What's worked for me: always giving estimates in writing with explicit confidence levels, and insisting that any external date includes at least a week of buffer that I don't know about. That way when the inevitable scope creep or surprise dependency shows up, there's room to absorb it without the fire drill.
I was surprised to not see “story points” mentioned in the context of scrum. This is an estimation concept I have found baffling because it is a measure of “complexity”, not time, yet is used directly to estimate how much can be done. At least this is how it is done at my work.
Scrum lingo is silly.
There is also the adage that if you are late, then 100% of the time the customer will be unhappy, but if you are early, then 100% of the time the customer will be pleased.
So make people more pleased :-)
So always overestimate a bit (25%).
1 day
1 week
1 month
1 year
This communicates the level of uncertainty/complexity. 5 days is way too precise. But saying 1 week is more understandable if it becomes 2 weeks.
I don’t estimate in hours or use any number other than 1
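The ladder above can be sketched as a tiny rounding rule. This is my own illustration of the heuristic, not the commenter's code, and the working-day bucket sizes (5, 21, 250) are an assumption:

```python
# Sketch of the order-of-magnitude estimation heuristic: round an honest
# raw guess up to the next "1 unit" bucket (1 day, 1 week, 1 month,
# 1 year). Bucket sizes in working days are assumed, not from the thread.
BUCKETS = [("1 day", 1), ("1 week", 5), ("1 month", 21), ("1 year", 250)]

def bucket_estimate(working_days: float) -> str:
    """Return the smallest bucket that covers the raw guess."""
    for label, size in BUCKETS:
        if working_days <= size:
            return label
    return "more than 1 year"

print(bucket_estimate(3))   # "1 week": promising "3 days" is too precise
print(bucket_estimate(10))  # "1 month"
```

The point of the rounding is exactly the one made above: the coarse unit communicates uncertainty, so nobody is surprised when "1 week" turns into two.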
Amazing to read this… everything wrong with our industry summed up neatly in one sentence :)
By definition, an estimate can be wrong.
As I've pointed out before, the business of film completion bonds has this worked out. For about 3% to 5% of the cost of making a movie, you can buy an insurance policy that guarantees to the investors that they get a movie out or their money back.
What makes this work is that completion bond companies have the data to do good estimations. They have detailed spending data from previous movie productions. So they look at a script, see "car chase in city, 2 minutes screen time", and go to their database for the last thousand car chase scenes and the bell curve of how much they cost. Their estimates are imperfect, but their error is centered around zero. So completion bond companies make money on average.
The software industry buries their actual costs. That's why estimation doesn't work.
It is largely a "hits" business where 1% of the activities you do result in 99% of the revenues. The returns are non-linear, so there should be almost no focus on input estimation. If your feature only makes sense if it can be done in 3 months and doesn't make economic sense if it takes more than 6, delete the feature.
#NoEstimates
Yes, be agile. Yes, measure all the things.
But estimation sets everyone up for disappointment.
Instead you should be thinking in probability distributions. When someone asks for your P90 or P50 of project completion, you know they are a serious estimator, worth your time to give a good thoughtful answer. What is the date at which you would bet 90:10 that the project is finished? What about 99:1? And 1:99? Just that frameshift alone solves a lot of problems. The numbers actually have agreed-upon meaning, there is a straightforward way to see how bad an estimate really was, etc.
At the start of a project have people give estimates for a few different percentiles, and record them. I usually do it in bits, since there is some research that humans can't handle more than about 3 bits +/- for probabilistic reasoning. That would be 1:1, 2:1, 4:1, 8:1, and their reciprocals. Revisit the recorded estimates during the project retrospective.
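For reference, those odds ratios map to cumulative probabilities as follows (a quick arithmetic sketch of the conversion, not something from the comment):

```python
# Convert an odds ratio (for:against) to a probability: an 8:1 date is
# one you'd bet is hit ~89% of the time, i.e. roughly a P90 estimate.
def odds_to_probability(for_: int, against: int) -> float:
    return for_ / (for_ + against)

for odds in [(1, 1), (2, 1), (4, 1), (8, 1)]:
    print(odds, f"{odds_to_probability(*odds):.0%}")
# (1, 1) 50%  (2, 1) 67%  (4, 1) 80%  (8, 1) 89%
```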
You can make this as much of a game as you want. If you have play-money at your company or discretionary bonuses, it can turn into a market. But most of the benefit comes from playing against yourself, and getting out of the cognitive trap imposed by single number/date estimates.
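One common way to produce P50/P90 answers instead of single dates is a small Monte Carlo simulation. This is a generic sketch, assuming right-skewed (lognormal) task durations; the task parameters below are made-up illustrative numbers, not a calibration method from this thread:

```python
# Minimal Monte Carlo project estimate: model each task as a lognormal
# duration (delays skew right), simulate the whole project many times,
# then read off the P50 and P90 of the simulated totals.
import math
import random

def simulate_project(tasks, trials=10_000):
    """tasks: list of (mu, sigma) for lognormal task durations in days.
    Returns the sorted list of simulated project totals."""
    return sorted(
        sum(random.lognormvariate(mu, sigma) for mu, sigma in tasks)
        for _ in range(trials)
    )

# lognormvariate takes the underlying normal's mu/sigma, so a task whose
# median is 5 days uses mu = ln(5). Sigmas here are guesses.
tasks = [(math.log(5), 0.5), (math.log(3), 0.7), (math.log(8), 0.4)]
totals = simulate_project(tasks)
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"P50: {p50:.1f} days, P90: {p90:.1f} days")
```

The gap between P50 and P90 is itself useful information: a wide gap says "high uncertainty" far more honestly than any single date can.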
I have no problem if they just hear the "2 weeks" part. If they come complaining in 3 weeks I just say "we hit that 10%".
The other important thing is to update estimates. Update people as soon as you realise you hit the 10%. Or in a better case, in a week I might be able to say it's now "1% chance of taking more than a week".
So any estimate has to include uncertainty about _the scope of the work itself_ as well as the uncertainties involved in delivering the work.
The natural follow on question when you present a range as the answer to an estimate is: what would help you narrow this range? Sometimes it is "find out this thing about the state of the world" (how long will external team take to do their bit) but sometimes it is "provide better specs".
"You have six weeks to do X for $$$" or "I'll get it done in six weeks or you don't pay"
Where i work there is no penalty for being late or not hitting a deadline. Life goes on and work continues. I have seen when there are specific dates and metrics and suddenly people work in focused effort and sometimes work weekends or celebrate with finishing early.
At least in my company we've stopped calling them "estimates". They are deadlines, which is how everyone has always treated "estimates" anyway.
Unfortunately in the real world deadlines are necessary. The customer is not just mad that they didn't get the shiny new thing, especially in the case of B2B stuff, the customer is implementing plans and projects based on the availability of X feature on Y date. Back to the initial point, these deadlines often come down to how quickly the customer is going to be able to implement their end of the solution, if they aren't going to be ready to use the feature for six months there's no reason for us to bust our asses trying to get it out in a week.
The far more common pattern is being asked to provide such an estimate off hand and those are all about what you mentioned, giving the PM whatever number you think they will accept.