The more rigorously you try to analyse the problem, cutting it into smaller and smaller parts, the more error you introduce, reducing the value of the estimate at each step.
Thus, the best practice is to give a very rough estimate based on the scale of the project and your past experiences.
If you don't have previous experience with similar projects, then you should not even try to estimate.
Jokes aside, I think the accumulation of errors, together with the fact that time estimation is hard, plus Parkinson's law, are the fatal flaws of Agile.
> Thus, the best practice is to give a very rough estimate based on the scale of the project and your past experiences.
This is my experience too. It is somehow easier to estimate one big scope than many small ones that sum up to the same scope. You also get a better feel for how far you have come when looking at the big scope than when digging down into the details.
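For what it's worth, here's a toy Monte Carlo illustrating the parent's point. All the numbers are made up for illustration: I assume each subtask estimate silently omits a small fixed chunk of work (glue code, meetings, integration), so cutting the project into more pieces adds more total bias, even though the per-piece noise averages out.

```python
import random

def simulate(n_parts, total_estimate=100.0, bias=0.5, noise=0.3, trials=10_000):
    """Split a project estimated at `total_estimate` days into n_parts.
    Each part's actual cost = estimate * lognormal noise + a fixed chunk
    of forgotten work (`bias` days). Returns the mean actual total."""
    per_part = total_estimate / n_parts
    totals = []
    for _ in range(trials):
        total = 0.0
        for _ in range(n_parts):
            total += per_part * random.lognormvariate(0.0, noise) + bias
        totals.append(total)
    return sum(totals) / trials

for n in (1, 5, 20, 100):
    print(f"{n:>3} parts -> mean actual ~ {simulate(n):6.1f} days (estimated 100.0)")
```

With these (invented) parameters, the one-chunk estimate overruns by a few percent, while the 100-part decomposition overruns by roughly half, purely from the per-part forgotten work piling up.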
The problem is that estimation errors are not symmetrically distributed: engineers chronically underestimate how long something will take, and management pressure exacerbates this by giving them an incentive to estimate even lower.
We're trained to cut big problems into small parts to solve them.
This is more closely related to human psychology than pure logic.
The distribution of delays is pathological, too. It's not normal or Poisson or anything nicely amenable to analysis.
Ever had a task with 4 seemingly “easy” parts, where 3 are indeed easy and it turns out part 4 requires a big rewrite because of some hacks put in 8 years ago that are now deeply ingrained assumptions in the code?
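That "one part blows up" pattern is exactly what makes the totals heavy-tailed. A minimal sketch (the blowup probability, 10x factor, and lognormal noise are my own assumptions, purely illustrative): most subtasks land near their estimate, but a small fraction hit a hidden rewrite. The resulting total has a fat right tail, so the mean and the 95th percentile sit far above the median.

```python
import random
import statistics

def task_duration(estimate=1.0, blowup_prob=0.05, blowup_factor=10.0):
    """Most tasks land near the estimate; a few hit a hidden rewrite."""
    if random.random() < blowup_prob:
        return estimate * blowup_factor * random.lognormvariate(0.0, 0.5)
    return estimate * random.lognormvariate(0.0, 0.2)

def project_total(n_tasks=20):
    return sum(task_duration() for _ in range(n_tasks))

totals = sorted(project_total() for _ in range(10_000))
print("estimate: 20.0")
print(f"median:   {statistics.median(totals):.1f}")
print(f"mean:     {statistics.fmean(totals):.1f}")
print(f"p95:      {totals[int(0.95 * len(totals))]:.1f}")
```

The skew means a "typical" run looks fine while the average run is well over estimate, which is why planning around the median (as point estimates implicitly do) keeps burning people.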
Often you are asked to estimate a 10k-part project!