zlacker

[return to "Driving engineers to an arbitrary date is a value destroying mistake (2020)"]
1. stephc+g9 2021-08-06 09:18:07
>>vimes6+(OP)
There is a counterintuitive thing about software project estimation that took me a long time to discover.

The more rigorously you try to analyse the problem, cutting it into smaller and smaller parts, the more error you introduce, reducing the value of the estimate at each step.

Thus, the best practice is to give a very rough estimate based on the scale of the project and your past experiences.

If you don't have previous experience with similar projects, then you should not even try to estimate.

2. noname+dg 2021-08-06 10:29:30
>>stephc+g9
Theoretically, the exact opposite should be true. Breaking a big prediction into a whole lot of smaller predictions is the basic intuition behind Fermi estimation and the wisdom of crowds. As long as the errors are symmetrically distributed and independent of each other, they cancel out in aggregate.

The problem is estimation errors are not symmetrically distributed because engineers chronically underestimate how long something will take, and the problem is exacerbated by management pressure giving them an incentive to estimate even lower.
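
To make both halves of that concrete, here is a toy Monte Carlo; the error model (symmetric noise in one run, a chronic 30% low-ball in the other) is assumed purely for illustration, not taken from the article:

    import random
    import statistics

    def estimate_ratio(n_parts, bias=1.0, noise=0.3):
        """Sum sub-estimates for a project whose parts each truly take 1 unit.

        Each sub-estimate is bias * (1 + e), with e drawn symmetrically
        around zero. Returns estimated_total / true_total.
        """
        est = sum(bias * (1.0 + random.uniform(-noise, noise))
                  for _ in range(n_parts))
        return est / n_parts

    random.seed(0)
    trials = 2000
    for n in (1, 10, 100):
        # Symmetric, independent errors: the aggregate homes in on the truth.
        fair = [estimate_ratio(n) for _ in range(trials)]
        # A chronic 30% low-ball on every part: no amount of decomposition cancels it.
        low = [estimate_ratio(n, bias=0.7) for _ in range(trials)]
        print(f"n={n:3d}  fair: mean={statistics.mean(fair):.2f} "
              f"sd={statistics.pstdev(fair):.2f}   "
              f"low-balled: mean={statistics.mean(low):.2f}")

The spread of the unbiased estimate shrinks as the parts multiply, but the low-balled total stays about 30% short no matter how finely you slice it.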

3. jerf+i11 2021-08-06 15:10:58
>>noname+dg
They're also not symmetrically distributed because delays are much larger than surprise wins. Even a 50% win versus a 50% delay isn't symmetrical, because in practice the delay runs quite a bit larger, and the real numbers are worse still. A 5x delay on a particular element would be unsurprising, but estimating something and then having it surprisingly cut to 1/5th of the time is something I've only seen a handful of times in my career: "Oh! There's a library in the code that already does exactly this!"

The distribution of delays is pathological, too. It's not normal or Poisson or anything nicely amenable to analysis.
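
Purely as an illustration of that shape (the log-normal overrun factor and its parameters below are assumptions, not measurements), a toy simulation shows the asymmetry: big overruns are common, equally big wins are rare, and the mean sits well above the median:

    import random
    import statistics

    random.seed(0)

    # Assumed model (illustration only): actual_time = estimate * overrun,
    # with overrun log-normal: a median a bit above 1 and a fat right tail.
    mu, sigma = 0.3, 0.8   # median overrun = e**0.3, roughly 1.35x
    overruns = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

    late_5x = sum(r >= 5.0 for r in overruns) / len(overruns)
    early_5x = sum(r <= 0.2 for r in overruns) / len(overruns)
    print(f"median overrun factor: {statistics.median(overruns):.2f}x")
    print(f"mean overrun factor:   {statistics.mean(overruns):.2f}x  (the tail drags it up)")
    print(f"P(5x late or worse):            {late_5x:.3f}")
    print(f"P(done in 1/5 of the estimate): {early_5x:.3f}")
    print(f"worst sampled overrun:          {max(overruns):.0f}x")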
