This is a cop-out. Just because you can’t do it, doesn’t mean it’s impossible :)
There are many types of research and prototyping project that are not strongly estimable, even just to p50.
But plenty can be estimated more accurately. If you are building a feature that’s similar to something you built before, then it’s very possible to give accurate estimates to, say, p80 or p90 confidence.
You just need to recognize that there is always some possibility of uncovering a bug, dependency issue, or infra problem that delays you, and this uncertainty compounds over longer time horizons.
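To make the compounding point concrete, here is a minimal sketch (my own illustration, not from the article): a project split into 20 independent tasks whose durations are lognormally distributed around a 1-day median estimate. The task count, the distribution, and the spread are all assumptions; the point is just that summing per-task p50s understates even the project-level p50, and that a p80 or p90 commitment implies a noticeably larger buffer.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks = 20          # a weeks-long project split into 20 tasks (illustrative)
p50_per_task = 1.0    # each task's median estimate, in days
sigma = 0.5           # assumed per-task uncertainty (lognormal shape parameter)

# 10,000 simulated project outcomes: each task independently overruns
# or underruns around its median estimate.
durations = rng.lognormal(mean=np.log(p50_per_task), sigma=sigma,
                          size=(10_000, n_tasks))
totals = durations.sum(axis=1)

p50, p80, p90 = np.percentile(totals, [50, 80, 90])
print(f"naive sum of task p50s: {n_tasks * p50_per_task:.1f} days")
print(f"project p50: {p50:.1f}  p80: {p80:.1f}  p90: {p90:.1f} days")
```

With these assumed numbers the naive sum of p50s (20 days) comes in below the simulated project p50 (~22–23 days), and the p90 lands a few days beyond that; real projects with correlated delays (a shared infra problem, a blocking dependency) spread out even more.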
The author even gestures in this direction:
> sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service
So really what we should take from this is that the author is capable of estimating hours-long tasks reliably. theptip reports being able to reliably estimate weeks-long tasks. And theptip has worked with rare engineers who can somehow, magically, deliver an Eng-year of effort across multiple team members within 10% of budget.
So rather than claim it’s impossible, perhaps a better claim is that it’s a very hard skill, and pretty rare?
(IMO it also requires quite a lot of time investment, and that’s not always worth it; e.g. startups usually aren’t willing to implement the heavyweight process/rituals required to be accurate.)
Maybe, just maybe, it's because "the heavyweight process/rituals required to be accurate" aren't actually productive. Startups are small organizations, so it's easy for everyone to see who's pulling their weight and who isn't; the "heavyweight process/rituals" therefore add nothing and cost too much.
Mature organizations tend to "implement the heavyweight process/rituals required to be accurate" precisely because they are too large for everyone to know everyone, and so senior management loses touch with reality and starts feeling anxious about whether their R&D spend is yielding value. This is totally understandable, and we have to have empathy for executives, but there is tremendous danger in this approach. How many mature market leaders have had their lunch eaten by disruptive innovators (invariably startups)? And why? Maybe those "heavyweight" processes kill innovation! That urge to accurately measure what the org's devs are doing can be counterproductive.
All measures but one (so far) are gameable. The exception, so far, is KTLO (keep-the-lights-on) fraction, which one should couple with a management culture that allows subjective value judgements to travel up and down the chain. Management of knowledge work is essentially a social problem, not a scientific one.