zlacker

[return to "Why Speed Matters"]
1. jandre+2s[view] [source] 2025-12-06 16:41:07
>>gsky+(OP)
On the other hand, "slow is smooth, smooth is fast". Which strategy is optimal depends on the nature of the product and the cost of iteration.

In software, optimizing for speed works best where architecture has minimal relevance to product outcomes. If I am writing a Python library, I typically iterate very quickly; swapping out bits of the implementation has low cost.

If I am writing a database kernel then designing any part of it poorly has a high chance of permanently crippling the implementation. Iterating is often tantamount to a major rewrite and extremely costly. You can only afford a very small number of rewrites before the iteration time stretches into years, so it is actually faster to spend much more time thinking through details that may seem unimportant.

2. RaftPe+FF[view] [source] 2025-12-06 18:32:42
>>jandre+2s
> In software, optimizing for speed works best in cases where architecture has minimal relevance for product outcomes.

The other consideration is the impact of low quality on the business.

Generally, I find that the cost of cleaning up issues in production systems (e.g. transactions all computed incorrectly and then propagated, still incorrect, to 9 downstream systems) far outweighs the time it takes to get things right up front.

Even if the issue doesn't involve fixing data all over the place and only requires a manual workaround, that can still be a huge problem: business people and systems people have to work out an alternate process that achieves the correct result and gets the systems back into the correct state.

The approach I've seen work is to reduce scope, never quality. You can still ship rapidly and learn what functions well for the business and what doesn't, but anything you commit to should work as expected in production.
