https://gcc.godbolt.org/z/fGaP6roe9
I see the same behavior on clang 17 as well.
I deal with a lot of floating point professionally, day to day, and I use fast math all the time: the tradeoff between higher performance and a relatively small loss of accuracy is acceptable. Maybe the biggest issue I run into is the lack of denormals with CUDA fast-math (they get flushed to zero), and it's pretty rare for me to care about numbers smaller than ~10^-38, the smallest normal float. Heck, I'd say I can tolerate 8 or 16 bits of mantissa most of the time, and fast-math floats are way more accurate than that. And we know a lot of neural network training these days tolerates less than 8 bits of mantissa.
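For context on the magnitudes involved, here's a minimal host-side C++ sketch (plain CPU code, not CUDA; assumes C++17 for FLT_TRUE_MIN) showing where the subnormal range sits:

    #include <cfloat>
    #include <cstdio>

    int main() {
        // Smallest normal float, ~1.17549e-38 -- the 10^-38 threshold above.
        printf("smallest normal float:    %g\n", FLT_MIN);
        // Smallest subnormal float, ~1.4013e-45.
        printf("smallest subnormal float: %g\n", FLT_TRUE_MIN);
        // Subnormal value; under flush-to-zero (e.g. CUDA's -ftz=true, or
        // gcc/clang -ffast-math on x86, which enables FTZ/DAZ at startup)
        // this simply becomes 0 instead.
        float tiny = FLT_MIN / 2.0f;
        printf("FLT_MIN / 2 = %g\n", tiny);
        return 0;
    }

So "losing denorms" only costs you the sliver between ~1.4e-45 and ~1.2e-38, which rounds to zero anyway in most workloads.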
But you're correct that it's usually fine in practice.
Hey, I grant that using fast-math carries a little risk of surprises; we don't need to enumerate the corner cases. I'm mostly pushing back a little because using floats at all carries almost as much risk. A lot of people seem to use floats without knowing how inaccurate they are, and a lot of people aren't doing precision analysis or handling the exceptional cases... and don't really need to.
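To ground that point, here's a minimal sketch of ordinary float inaccuracy, with no fast-math flags involved at all:

    #include <cstdio>

    int main() {
        // 0.1f is not exactly 0.1, and each add rounds to float's
        // 24-bit significand, so the error accumulates.
        float sum = 0.0f;
        for (int i = 0; i < 1000000; ++i) sum += 0.1f;
        printf("%f\n", sum);  // roughly 100958, not 100000
        return 0;
    }

The drift here comes purely from rounding under strict IEEE semantics; nobody doing this kind of accumulation is being saved by turning fast-math off.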