But you're pretty spot on, as 'professionally acceptable' indeed means politically acceptable most of the time [0]. Being honest and admitting one's limits is often unacceptable.
[0]: https://www.strategy-business.com/article/Why-do-large-proje...
Multiply your estimate by 3.14159, then compare against the actuals until you find your own, more accurate estimating coefficient.
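A sketch of what that calibration loop looks like in practice, assuming you keep a log of past estimates against actuals (the numbers here are invented):

    # Hypothetical log of (estimated_days, actual_days) pairs for past tasks.
    history = [(3, 10), (5, 14), (2, 7), (8, 26)]

    # Your personal estimating coefficient: total actuals over total estimates.
    coefficient = sum(actual for _, actual in history) / sum(est for est, _ in history)

    print(coefficient)       # ~3.17 -- suspiciously close to pi
    print(5 * coefficient)   # a 5-day gut feeling becomes ~15.8 days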
>>28667174 (2013)
original: https://web.archive.org/web/20170603123809/http://www.tuicoo...
As for reading… https://thecodelesscode.com/case/215?topic=documentation
I wonder if it was a mistake to ever call it "engineering", because that leads people to think that software engineering is akin to mechanical or civil engineering, where you hire one expensive architect to do the design, and then hand off the grunt work to lower-paid programmers to bang out the code in a repetitive and predictable timeline with no more hard thinking needed. I think that Jack Reeves was right when he said, in 1992, that every line of code is architecture. The grunt work of building it afterward is the job of the compiler and linker. Therefore every time you write code, you are still working on the blueprint. "What is Software Design?"<https://www.bleading-edge.com/Publications/C++Journal/Cpjour...>
Martin Fowler cites this in his 2005 essay about agile development, "The New Methodology"<https://www.martinfowler.com/articles/newMethodology.html>. Jeff Atwood, also in 2005, explains why software is so different from engineering physical objects: the laws of physics constrain houses and bridges and aircraft. "Bridges, Software Engineering, and God"<https://blog.codinghorror.com/bridges-software-engineering-a...>. All this explains not only why estimates are so hard but also why two programs can do the same thing while one is a thousand lines of code and the other is a million.
I came into programming from a liberal arts background, specifically writing, not science or math. I see a lot of similarities between programming and writing. Both let you say the same thing an infinite number of ways. I think I benefitted more from Strunk and White's advice to "omit needless words" than I might have from a course in how to build city hall.
https://en.wikipedia.org/wiki/Program_evaluation_and_review_...
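For context, the core of PERT is the three-point estimate. A minimal sketch of the standard formulas (example numbers invented):

    def pert_estimate(optimistic, most_likely, pessimistic):
        # Classic PERT weighted mean and standard deviation.
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    # A task that's "2 days if all goes well, probably 4, maybe 12 if it blows up":
    print(pert_estimate(2, 4, 12))   # (5.0, ~1.67): plan on ~5 days, give or take ~1.7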
Hogwash. Has this person never run a business, or interacted with those who have? The business depends on estimates in order to quantitatively determine how much time, money, and resources to allocate to a project. Teams in the manufacturing and construction fields deliver estimates all the time. Why shouldn't IT people be held to the same standard?
If you can't estimate, it's generally because your process isn't comprehensive enough. Tim Bryce said it's very straightforward, once you account for all the variables, including your bill of materials (what goes into the product), and the skill level and effectiveness rating (measured as the ratio of direct work to total time on the job) of the personnel involved. (You are tracking these things, aren't you?)
https://www.modernanalyst.com/Resources/Articles/tabid/115/I...
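A rough sketch of how an effectiveness rating might feed into such an estimate, under my reading of the article (the function names and numbers are illustrative, not Bryce's):

    # Effectiveness rating: ratio of direct project work to total time on the job.
    def effectiveness(direct_hours, total_hours):
        return direct_hours / total_hours

    def elapsed_working_days(direct_work_hours, rating, hours_per_day=8):
        # Interruptions, meetings, and rework stretch 40 hours of direct work
        # across more than a five-day week.
        return direct_work_hours / (rating * hours_per_day)

    rating = effectiveness(28, 40)             # 0.7
    print(elapsed_working_days(40, rating))    # ~7.1 working days, not 5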
> The pro-estimation dogma says that these questions ought to be answered during the planning process, so that each individual piece of work being discussed is scoped small enough to be accurately estimated. I’m not impressed by this answer. It seems to me to be a throwback to the bad old days of software architecture, where one architect would map everything out in advance, so that individual programmers simply had to mechanically follow instructions.
If you're not dividing the work such that ~60% of the time is spent in analysis and design and only ~15% in programming, you've got your priorities backwards. In the "bad old days", systems got delivered on time and under budget, and they shipped in working order, rather than frustrating users with a series of broken or half-working systems. This is because PRIDE, the scientific approach to systems analysis and design, was the standard. It still is in places like Japan. Not so much in America, where a lot of software gets produced, it's true, but very little of it is any good.
Here are literally the top two Google results for "story points" and they both seem to align entirely with what I said:
https://www.atlassian.com/agile/project-management/estimatio...
https://www.mountaingoatsoftware.com/blog/what-are-story-poi...
I don't doubt that what you're describing as story points is something somebody told you. I'm just telling you that their definition was highly idiosyncratic and extremely non-standard. When discussing these things, using the standard definitions is helpful.
https://www.atlassian.com/agile/project-management/estimatio...
Points are not intrinsic or objective attributes, like the sky being blue. The scale is arbitrarily chosen by any given team, and is relative to past work. But a common reference point is that 1 point is the "smallest" feature worth tracking (sometimes 1/2), and 20 points is usually the largest individual feature a team can deliver in a sprint. So it's common for teams to be delivering somewhere between, e.g., 50 and 200 points per sprint. Teams very quickly develop a "feel" for points.
> And then another matter is that points do not correlate to who later takes that work.
Yes, this is by design. Points represent complexity, not time. An experienced senior dev might tend to deliver 30 points per sprint, while a junior dev might usually deliver 10. If a team swaps out some junior devs for senior devs, you would expect the team to deliver more points per sprint.
The goal isn't to avoid time estimation completely; that would be crazy. People estimate how many points get delivered per sprint, and sprints have fixed lengths of time. You can do the math; you're supposed to.
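The math in question, as a sketch (the velocity and backlog numbers are made up):

    import math

    backlog_points = 130    # estimated scope remaining
    velocity = 40           # observed points delivered per sprint
    sprint_weeks = 2        # fixed sprint length

    sprints_needed = math.ceil(backlog_points / velocity)   # 4 sprints
    print(f"~{sprints_needed * sprint_weeks} weeks out")    # ~8 weeks of calendar time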
The point is that points avoid a false sense of precision: >>46748310
The process is quite easy to implement. And it does yield extraordinary efficiency gains on a lot of teams; that's the whole reason it's so popular. But you do have to actually learn about it. Here:
https://www.atlassian.com/agile/project-management/estimatio...
^ This report from 2020 analyzed about 50,000 IT projects across a wide range of market segments and found that 50% exceeded their deadline. This suggests your conclusion holds more generally, not just in your specific context.
On a personal level, I hardly ever see a developer's estimate turn out to be right, at any scale. I'm wondering what the pro-estimation folks in this thread work on that lets them estimate accurately.
How do you know if your estimate is good? Would you rather bet on your estimate, or on hitting one of 8 numbers on a 10-number roulette wheel? The wheel pays off 80% of the time, so if you prefer one of the bets, adjust your estimates; if you're indifferent between them, your estimates accurately reflect your beliefs (i.e., you hold them with about 80% confidence).
(The roulette wheel is from Douglas Hubbard's book How to Measure Anything. Confidence-interval estimates are from LiquidPlanner, https://web.archive.org/web/20120508001704/http://www.liquid...)
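Hubbard's test is mechanical enough to track in code. A sketch of checking your hit rate against the wheel's 80% payout (the records here are invented):

    # Each record: the (low, high) range you estimated, and the actual outcome.
    records = [((3, 8), 6), ((2, 5), 9), ((10, 20), 14), ((1, 4), 3), ((5, 9), 12)]

    hits = sum(low <= actual <= high for (low, high), actual in records)
    hit_rate = hits / len(records)

    # Indifference to the 8-in-10 wheel implies ~80% confidence in your ranges.
    if hit_rate < 0.8:
        print(f"hit rate {hit_rate:.0%}: your ranges are overconfident, widen them")
    elif hit_rate > 0.8:
        print(f"hit rate {hit_rate:.0%}: your ranges are too conservative")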