From my brief forays into reading (mostly AArch64) assembly, it looks like C compilers can detect these kinds of patterns now and convert them to SIMD by themselves, with no work from the programmer. Even at -O2, converting an index-based loop into one based on start and end pointers is not unusual. Go doesn't seem to do this; the assembly output by the Go compiler looks much closer to the actual code than what you get from C.
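If anyone wants to check this themselves, a minimal sketch (function name is mine) that you can feed to gcc/clang with -S, or paste into the Compiler Explorer:

    // sum.c -- the kind of loop compilers now vectorize on their own.
    // clang emits SIMD for this at -O2; gcc does at -O3 (and at -O2
    // since gcc 12, or with -ftree-vectorize on older versions).
    #include <stddef.h>

    int sum_ints(const int *a, size_t n) {
        int total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];  // integer addition is associative, so the
                            // compiler may freely spread it over lanes
        return total;
    }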
Rust iterators would also be fun to benchmark here; they're supposed to be as fast as plain old loops, and iterator-based code generally lets the compiler omit bounds checks entirely.
You need two to three accumulators to saturate instruction-level parallelism with a parallel sum reduction. But the compiler won't introduce them on its own: it only does so when the operation is associative, i.e. (a+b)+c = a+(b+c), which is true for integers but not for floats, where reassociating changes the rounding and thus the result.
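For illustration, a sketch of what that hand-unrolled reduction looks like in C (I used four accumulators here since the loop unrolls cleanly by four):

    #include <stddef.h>

    // Hand-rolled parallel sum: four independent accumulators break
    // the serial dependency chain on a single 'total', so several FP
    // adds can be in flight at once. Note this computes a *different*
    // rounding than the strict left-to-right sum, which is exactly
    // why the compiler refuses to do it for you.
    float sum_floats_ilp(const float *a, size_t n) {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; i++)  // scalar tail for the leftover elements
            s0 += a[i];
        return (s0 + s1) + (s2 + s3);
    }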
There is an escape hatch: -ffast-math (or the narrower -fassociative-math), which tells the compiler it may reassociate floating-point operations.
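To make that concrete, a sketch of the naive single-accumulator loop and the flags involved (file name hypothetical):

    #include <stddef.h>

    // The naive single-accumulator float sum: every add depends on
    // the previous one, so without reassociation it stays a serial
    // chain.
    float sum_floats(const float *a, size_t n) {
        float total = 0.0f;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }
    // gcc -O3 sum.c             -> one long serial chain of adds
    // gcc -O3 -ffast-math sum.c -> vectorized, multiple accumulators
    // (-fassociative-math, implied by -ffast-math, is the narrower
    //  flag covering just the reassociation.)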
I have extensive benches on this here: https://github.com/mratsim/laser/blob/master/benchmarks%2Ffp...
It's quite telling that #pragma omp simd exists as a hint for the compiler to rewrite the loop.
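For completeness, a sketch of the pragma form; with gcc/clang the SIMD directives alone can be enabled via -fopenmp-simd, without the OpenMP threading runtime:

    #include <stddef.h>

    // The reduction clause asserts that the accumulation may be
    // split across lanes, even under strict FP semantics.
    float sum_floats_simd(const float *a, size_t n) {
        float total = 0.0f;
        #pragma omp simd reduction(+:total)
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }
    // Build with: gcc -O2 -fopenmp-simd sum.c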
Now I wonder what the state of polyhedral compilers is. It's been many years. And given the AI/LLM hype, they could really shine.