zlacker

[parent] [thread] 16 comments
1. Magmal+(OP)[view] [source] 2026-01-24 23:45:36
> is in points not days

I hear this often, but I've never met someone for whom points didn't eventually turn into a measurement of time - even using the exact process you're describing.

I think any process that's this hard to implement should be considered bad by default, barring some extraordinary proof of efficacy.

replies(1): >>crazyg+o1
2. crazyg+o1[view] [source] 2026-01-24 23:55:18
>>Magmal+(OP)
> I've never met someone for whom points didn't eventually turn into a measurement of time

The goal isn't to avoid time estimation completely -- that would be crazy. People estimate how many points get delivered per sprint, and sprints have fixed lengths of time. You can do the math; you're supposed to.
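
To make the arithmetic concrete, here's a rough Python sketch -- the numbers are made up, and the real inputs would be your own team's history:

    # Rough sketch of the "do the math" step; all numbers are invented.
    recent_velocities = [21, 18, 24]    # points completed in each of the last few sprints
    sprint_length_days = 10             # a two-week sprint, in working days

    avg_velocity = sum(recent_velocities) / len(recent_velocities)

    def backlog_to_days(backlog_points):
        """Expected working days for a chunk of backlog, via average velocity."""
        sprints = backlog_points / avg_velocity
        return sprints * sprint_length_days

    print(round(backlog_to_days(55), 1))    # a 55-point epic -> about 26.2 working days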

The point is that points avoid a false sense of precision: >>46748310

The process is quite easy to implement. And it does wind up with extraordinary efficacy gains on a lot of teams, that's the whole reason why it's so popular. But you do have to actually learn about it. Here:

https://www.atlassian.com/agile/project-management/estimatio...

replies(2): >>mitthr+Z6 >>Magmal+OD
3. mitthr+Z6[view] [source] [discussion] 2026-01-25 00:46:19
>>crazyg+o1
If it works for you, then it's a good method, but in my opinion the most transparent way to avoid a false sense of precision in time estimation (as with all else) is to explicitly include error bars, rather than change the units of measure.
replies(1): >>crazyg+t7
4. crazyg+t7[view] [source] [discussion] 2026-01-25 00:49:19
>>mitthr+Z6
Error bars are complicated, and who's to say how large they should be? It winds up being a lot of pointless arguing over arbitrary precision.

The Fibonacci sequence of point values has wound up just being a lot simpler for most people, as it encapsulates both size and error, since error tends to grow proportionally with size.

I.e. nobody is arguing over whether it's 10h +/- 1h, versus 12h +/- 1h, versus 12h +/- 2h, versus 11h +/- 3h. It's all just 5 points, or else 8 points, or else 13 points. It avoids discussion over any more precision than is actually reliably meaningful.
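
If it helps to see the bucket idea in code, here's a toy sketch -- the relative-size numbers are invented purely for illustration, since real pointing compares stories against reference stories:

    # Toy illustration: coarse Fibonacci buckets swallow small differences.
    FIB_POINTS = [1, 2, 3, 5, 8, 13]

    def to_bucket(rough_size):
        """Round a rough relative-size guess up to the nearest Fibonacci bucket."""
        for p in FIB_POINTS:
            if rough_size <= p:
                return p
        return FIB_POINTS[-1]    # anything bigger shouldn't be a single story anyway

    # 6-ish and 7-ish guesses collapse into the same 8-point bucket, and the
    # gaps between buckets grow with size, much like the error does.
    print(to_bucket(5), to_bucket(6), to_bucket(7))    # 5 8 8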

replies(1): >>netgho+Sc
5. netgho+Sc[view] [source] [discussion] 2026-01-25 01:38:00
>>crazyg+t7
I worked on a product that was built around planning and estimation with ranged estimates (2-4h, 1-3d, etc.).

2-12d conveys a very different story than 6-8d. Are the ranges precise? Nope, but they're useful in conveying uncertainty, which is something that gets dropped in any system that collapses estimates to a single point.

That said, people tend to just collapse ranges, so I guess we all lose in the end.
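
For what it's worth, carrying the ranges through instead of collapsing them is barely any code -- a hypothetical sketch, not the actual product:

    # Hypothetical sketch: keep low/high estimates instead of collapsing them.
    from dataclasses import dataclass

    @dataclass
    class Range:
        low: float   # best case, in days
        high: float  # worst case, in days

    def total(ranges):
        """Best-case and worst-case totals for a set of tasks."""
        return Range(sum(r.low for r in ranges), sum(r.high for r in ranges))

    tasks = [Range(2, 4), Range(1, 3), Range(3, 5)]
    t = total(tasks)
    print(f"{t.low:g}-{t.high:g} days")    # 6-12 days, a different story than a flat "9 days"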

replies(1): >>crazyg+Zi
6. crazyg+Zi[view] [source] [discussion] 2026-01-25 02:36:58
>>netgho+Sc
> 2-12d conveys a very different story than 6-8d.

In agile, 6-8d is considered totally reasonable variance, while 2-12d simply isn't permitted. If that's the level of uncertainty -- i.e. people simply can't decide on points -- you break it up into a small investigation story for this sprint, then decide for the next sprint whether it's worth doing once you have a more accurate estimate. You would never just blindly decide to do it or not if you had no idea if it could be 2 or 12 days. That's a big benefit of the approach, to de-risk that kind of variance up front.

replies(2): >>mitthr+iy >>Lehere+Fg1
7. mitthr+iy[view] [source] [discussion] 2026-01-25 05:39:03
>>crazyg+Zi
If you measure how long a hundred "3-day tasks" actually take, in practice you'll find a range that is about 2-12. The variance doesn't end up getting de-risked. And it doesn't mean the 3-day estimate was a bad guess either. The error bars just tend to be about that big.
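
You can convince yourself with a toy simulation -- the lognormal shape and the spread value here are assumptions for illustration, not measured data:

    # Toy simulation: "3-day" tasks whose actual durations are lognormally
    # spread around the estimate. The sigma value is an assumption.
    import random

    random.seed(42)
    estimate_days = 3.0
    sigma = 0.55    # assumed multiplicative spread

    actuals = [estimate_days * random.lognormvariate(0.0, sigma) for _ in range(100)]
    print(round(min(actuals), 1), round(max(actuals), 1))    # typically somewhere around 1 and 10+ days
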
replies(1): >>crazyg+bH1
8. Magmal+OD[view] [source] [discussion] 2026-01-25 06:48:13
>>crazyg+o1
> The process is quite easy to implement

Having implemented it myself, I agree it is easy to implement. My argument is that it is overly difficult to maintain. My experience is that incentives to corrupt the point system are too high for organizations to resist.

Funnily enough - I work closely with a former director of engineering at Atlassian (the company whose guide you cite) and he is of the opinion that pointing had become "utterly dishonest and a complete waste of time". I respect that opinion.

If you have citations on pointing being effective I'd be very interested. I consider myself reasonably up to date on the SWE productivity literature and have yet to see any evidence to that effect.

replies(1): >>crazyg+FI1
9. Lehere+Fg1[view] [source] [discussion] 2026-01-25 13:15:27
>>crazyg+Zi
> you break it up into a small investigation story for this sprint, then decide for the next sprint whether it's worth doing

That's just too slow for business in my experience though. Rightly or wrongly, they want it now, not in a couple of sprints.

So what we do is we put both the investigation and the implementation in the same sprint, use the top of the range for the implementation, and re-evaluate things mid-sprint once the investigation is done. Of course this messes up predictability and agile people don't like it, but they don't have better ideas either on how to handle it.

Not sure if we're not agile enough or too agile for scrum.

replies(1): >>crazyg+OH1
10. crazyg+bH1[view] [source] [discussion] 2026-01-25 16:30:17
>>mitthr+iy
If a story-sized task takes 4x more effort than expected, something really went wrong. If it's blocked and it gets delayed then fine, but you can work on other stories in the meantime.

I'm not saying it never happens, but the whole reason for the planning poker process is to surface the things that might turn a 3 point story into a 13 point story, with everyone around the table trying to imagine what could go wrong.

You should not be getting 2-12 variance, unless it's a brand-new team working on a brand-new project that is learning how to do everything for the first time. I can't count how many sprint meetings I've been in. That level of variance is not normal for the sizes of stories that fit into sprints.

replies(2): >>netgho+ES1 >>mitthr+OE3
11. crazyg+OH1[view] [source] [discussion] 2026-01-25 16:34:44
>>Lehere+Fg1
That's definitely one way of doing it! And totally valid.

I think it often depends a lot on who the stakeholders are and what their priorities are. If the particular feature is urgent then of course what you describe is common. But when the priority is to maximize the number of features you're delivering, I've found that the client often prefers to do the bounded investigation and then work on another feature that is better understood within the same sprint, then revisit the investigation results at the next meeting.

But yes -- nothing prevents you from making mid-sprint reevaluations.

12. crazyg+FI1[view] [source] [discussion] 2026-01-25 16:40:01
>>Magmal+OD
I guess my experiences are quite the opposite. Maintaining the process couldn't be easier. I don't even know what it means to "corrupt" points...? Or for points to become "dishonest"? I'm genuinely baffled.

I'm not aware of any citations, just like I'm not aware of any citations for most common development practices. It seems to be justified more in a practical sense -- as a team or business, you try it out, and see if it improves productivity and planning. If so, you keep it. I've worked at several places that adopted it, to huge success, solving a number of problems. I've never once seen a place choose to stop it, or find something that worked better. If you have a citation that there is something that works better than points estimation, then please share!

It's just wisdom of the crowds, or two heads are better than one. Involving more people in making estimates, avoiding false precision, and surfacing disagreement -- how is that not going to result in higher-quality estimates?
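
The mechanics are simple enough to sketch -- this is a toy version of the blind-vote step, not any particular tool, and the spread rule is just one I made up:

    # Toy sketch of the planning-poker voting step: everyone votes blind, and a
    # wide spread means someone knows something the others don't -- so you talk.
    def poker_round(votes):
        """Return the agreed points, or None if the spread demands discussion."""
        if max(votes) > 2 * min(votes):      # toy rule, e.g. a 3 sitting next to an 8
            return None                      # surface the disagreement, then re-vote
        return max(votes)                    # otherwise lean toward the cautious end

    print(poker_round([5, 5, 8]))    # 8 -> close enough, take the higher bucket
    print(poker_round([2, 3, 13]))   # None -> somebody sees a risk; discuss and re-vote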

replies(1): >>Magmal+w53
13. netgho+ES1[view] [source] [discussion] 2026-01-25 17:42:34
>>crazyg+bH1
I think it really depends on how teams use their estimates. If you're locking in an estimate and have to stick with it for a week or a month, you're right, that's terrible.

If you don't strictly work on a Sprint schedule, then I think it's reasonable to have high variance estimates, then as soon as you learn more, you update the estimate.

I've seen lots of different teams do lots of different things. If they work for you and you're shipping with reliable results then that's excellent.

14. Magmal+w53[view] [source] [discussion] 2026-01-26 01:50:06
>>crazyg+FI1
By "dishonest" I'm saying they become measurements of time, which is what we were trying to avoid.

Stepping back - my experience is that points are solving a problem good organizations don't have.

The practice I see work well is that a senior person comes up with a high-level plan for a project with confidence intervals on timeline and quality and has it sanity checked by peers. Stakeholders understand the timeline and scope to be an evolving conversation that we iterate on week by week. Our rough estimates are enough to see when the project is truly off track and we can have a discussion about timelines and resourcing.

I just don't see what points do for me other than attempt to "measure velocity". In principle that's a metric that's useful for upper management, but the moment they treat it as a target, engineers juice their numbers.

replies(1): >>crazyg+VR5
15. mitthr+OE3[view] [source] [discussion] 2026-01-26 08:08:18
>>crazyg+bH1
Try systematically collecting some fine-grained data comparing your team's initial time estimates against the actual working time spent on each ticket. See what distribution you end up with.

Make sure you account for how often someone comes back from working on a 3-point story and says "actually, after getting started on this it turned out to be four 3-point tasks rather than one, so I'm creating new tickets." Or "my first crack at solving this didn't work out, so I'm going to try another approach."
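
A minimal sketch of what I mean by collecting that data -- the record format is invented, and split-off work gets rolled back into its original ticket so it isn't hidden:

    # Minimal sketch: actual-vs-estimate ratios, with split-off tickets attributed
    # back to the original so the extra work shows up in its ratio.
    tickets = [
        {"id": "A-1", "est_days": 3, "actual_days": 2.5, "split_from": None},
        {"id": "A-2", "est_days": 3, "actual_days": 4.0, "split_from": None},
        {"id": "A-3", "est_days": 3, "actual_days": 3.0, "split_from": "A-2"},  # discovered mid-work
    ]

    totals = {}
    for t in tickets:
        root = t["split_from"] or t["id"]
        entry = totals.setdefault(root, {"est": 0.0, "actual": 0.0})
        if t["split_from"] is None:
            entry["est"] += t["est_days"]    # only the original estimate counts...
        entry["actual"] += t["actual_days"]  # ...but all of the actual work does

    ratios = sorted(round(v["actual"] / v["est"], 2) for v in totals.values())
    print(ratios)    # [0.83, 2.33] -- the split ticket took over twice its estimate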

replies(1): >>crazyg+SP5
16. crazyg+SP5[view] [source] [discussion] 2026-01-26 20:56:39
>>mitthr+OE3
That's literally what retrospectives are for. You do them at the end of every sprint.

Granted, they're point estimates not time estimates, but it's the same idea -- what was our velocity this sprint, what were the tickets that seemed easier than expected, what were the ones that seemed harder, how can we learn from this to be more accurate going forwards, and/or how do we improve our processes?
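
The retro arithmetic is nothing exotic, either -- a toy sketch with invented numbers, not a prescribed tool:

    # Toy sketch of the end-of-sprint numbers a retro looks at.
    completed = [
        {"story": "S-1", "points": 5, "days_spent": 4},
        {"story": "S-2", "points": 8, "days_spent": 12},   # harder than expected
        {"story": "S-3", "points": 3, "days_spent": 2},
    ]

    velocity = sum(s["points"] for s in completed)
    days_per_point = sum(s["days_spent"] for s in completed) / velocity
    print("velocity:", velocity)    # 16 points this sprint

    # Flag the stories that ran well above or below the sprint's own pace
    # (the 30% threshold is arbitrary).
    for s in completed:
        ratio = s["days_spent"] / (s["points"] * days_per_point)
        if not 0.7 <= ratio <= 1.3:
            print(s["story"], "ratio", round(ratio, 2))    # S-2 at 1.33, S-3 at 0.59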

Your tone suggests you think you've found some flaw. You don't seem to realize this is explicitly part of sprints.

I'm describing my experiences with variances based on many, many, many sprints.

17. crazyg+VR5[view] [source] [discussion] 2026-01-26 21:06:42
>>Magmal+w53
> By "dishonest" I'm saying they become measurements of time, which is what we were trying to avoid.

On the one hand, they simply can't. They're a measurement of effort, and a junior dev will take more time to finish a story than a senior dev will. On the other hand, at the sprint velocity level, yes of course they're supposed to be equivalent to time, in the sense that they're what the team expects to be able to accomplish in the length of a sprint. That's not dishonest, that's the purpose.

> The practice I see work well is that a senior person comes up with a high-level plan for a project with confidence intervals on timeline and quality and has it sanity checked by peers... I just don't see what points do for me other than attempt to "measure velocity".

Right, so what happens with what you describe is that you're skipping the "wisdom of the crowds" part, estimation is done too quickly and not in enough depth, and you wind up significantly underestimating. Management keeps pushing the senior person to reduce their estimates because there's no process behind them, and planning suffers because you're trying to order the backlog based on wishful thinking rather than good information.

What points estimation does is provide a process that aims to increase accuracy, which can be used for better planning, in order to deliver the highest-priority features faster and not waste time on longer features that go off track without anyone noticing for weeks. Management can say, "can't they do it faster?", and you can explain, "we have a process for estimation and this is it." It's not any single employee's opinion, it's a process. This is huge.

> but the moment they treat it as a target engineers juice their numbers.

How? Management doesn't care about points delivered, they care about features delivered. There's nothing to "juice". Points are internal to a team, and used with stakeholders to measure the expected relative size of tasks, so tasks can be reprioritized. I've never seen sprint velocity turn into some kind of management target, it doesn't even make sense. I mean, I'm sure there's some dumb management out there that's tried it. But what you're describing isn't my experience even remotely.
