I think it's a bad trade-off; most languages out there are moving away from it
So for a large loop, code like
for i, value := range source { result[i] = value*2 + 1 }
would be roughly 2x faster than two separate loops like
for i, value := range source { intermediate[i] = value * 2 }
for i, value := range intermediate { result[i] = value + 1 }
For example, Rust iterators are lazily evaluated with early exits (when filtering data), so you get your first form but as optimized as possible. OTOH, Python's map/filter/etc. may very well return a full list each time, like your intermediate. [EDIT] Python 3's map/filter actually return lazy iterators, so it's sane.
I would say that any sane language that allows functional-style data manipulation will make it as fast as manual for-loops (that's why Rust bugs you with .iter()/.collect()).
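To make the laziness point concrete, here's a minimal Rust sketch (the source/result names are just placeholders echoing the snippet above, not anything from a real codebase): adapters do nothing until collect() drives the chain, so chained maps fuse into one pass, and a filter skips later stages per element without allocating an intermediate vector.

    fn main() {
        let source: Vec<i64> = (0..1_000_000).collect();

        // Nothing runs until collect() pulls items through the chain, so the
        // two map stages are fused into a single pass over `source` --
        // equivalent to the hand-written single loop, with no intermediate
        // vector allocated.
        let result: Vec<i64> = source.iter()
            .map(|v| v * 2)
            .map(|v| v + 1)
            .collect();

        // With a filter, elements that fail the predicate never reach the
        // map stage at all (the per-element "early exit").
        let odds: Vec<i64> = source.iter()
            .filter(|&&v| v % 2 == 1)
            .map(|v| v * 2 + 1)
            .collect();

        println!("{} {}", result.len(), odds.len());
    }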
I only run into these upsides once every few years, when preparing for leetcode interviews, where this kind of optimization is needed to get acceptable results.
In daily life, however, most of these chunks of data to transform fall into one of these categories:
- small size, where readability and maintainability matter much more than performance
- living in a db, and being filtered/reshaped by the query rather than code
- being chunked for atomic processing in a queue or similar (common when importing a big chunk of data)
- the operation itself is a standard algorithm that you just consume from a standard library that handles the loop internally
Much like trees and recursion, most of us don’t flex that muscle often. Your mileage may vary depending on the domain, of course.