https://gcc.godbolt.org/z/fGaP6roe9
I see the same behavior on clang 17 as well.
I deal with a lot of floating point professionally day to day, and I use fast-math all the time, since the tradeoff of a relatively small loss of accuracy for higher performance is acceptable. Maybe the biggest issue I run into is the lack of denormals with CUDA fast-math, and it’s pretty rare for me to care about numbers smaller than 10^-38. Heck, I’d say I can tolerate 8 or 16 bits of mantissa most of the time, and fast-math floats are way more accurate than that. And we know a lot of neural network training these days can tolerate less than 8 bits of mantissa.
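For a sense of scale, here's a minimal C sketch of where that 10^-38 figure comes from: FLT_MIN is the smallest normal float, and anything below it is a denormal, which is exactly the range that flush-to-zero modes like CUDA's fast-math discard.

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* FLT_MIN is the smallest *normal* float, about 1.18e-38. */
        printf("FLT_MIN      = %g\n", (double)FLT_MIN);

        /* Anything below FLT_MIN is a denormal (subnormal); a
           flush-to-zero mode would turn this value into 0.0f. */
        float sub = FLT_MIN / 16.0f;
        printf("FLT_MIN / 16 = %g\n", (double)sub);
        return 0;
    }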
But you're correct that it's probably usually fine in practice.
Hey, I grant that using fast-math carries some risk of surprises; we don’t need to enumerate corner cases to agree on that. I’m mostly pushing back a little because using floats at all carries almost as much risk. A lot of people seem to use floats without knowing how inaccurate they can be, and a lot of people aren’t doing precision analysis or handling the exceptional cases… and don’t really need to.
Small nit, but floats aren't inaccurate; they have non-uniform precision. Some float operations can be inaccurate, but that depends on the particular sequence of operations taken...
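A quick C sketch of what non-uniform precision means in practice: the gap between adjacent representable floats grows with magnitude, so precision depends on where you are on the number line.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* The distance to the next representable float (one ulp)
           grows with magnitude: ~1.19e-7 near 1.0, but 8.0 near 1e8. */
        printf("gap at 1.0f = %g\n", nextafterf(1.0f, 2.0f) - 1.0f);
        printf("gap at 1e8f = %g\n", nextafterf(1e8f, 2e8f) - 1e8f);
        return 0;
    }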
One problem with -ffast-math is that a) it sounds appealing and b) people don't understand floats, so lots of people turn it on without understanding what it does, and that can introduce subtle problems in code they didn't write.
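A classic instance of such a subtle problem (a sketch; exact behavior varies by compiler and version): -ffast-math implies -ffinite-math-only, under which GCC and Clang are allowed to assume NaN never occurs, so a NaN guard can be silently compiled away.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        volatile double zero = 0.0;
        double x = zero / zero;  /* produces NaN at run time */

        /* Without -ffast-math this prints "caught NaN". With it,
           the compiler may fold isnan(x) to false and delete the
           branch, even though x really is NaN. */
        if (isnan(x))
            printf("caught NaN\n");
        else
            printf("NaN guard was optimized away\n");
        return 0;
    }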
Sometimes in computational code it makes sense, e.g. to get rid of denorms, but only a very small fraction of programmers understand this properly, or ever will.
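For the curious, an x86-specific sketch of what "getting rid of denorms" looks like on its own, without the rest of -ffast-math, using the SSE control-register intrinsics. (On GCC/Clang, linking with -ffast-math sets these same MXCSR bits at program startup via crtfastmath.o.)

    #include <stdio.h>
    #include <float.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

    int main(void) {
        volatile float sub = FLT_MIN / 16.0f;  /* a denormal, ~7e-40 */
        volatile float one = 1.0f;

        printf("before FTZ/DAZ: %g\n", (double)(sub * one));

        /* Flush denormal results to zero (FTZ) and treat denormal
           inputs as zero (DAZ). */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        printf("after FTZ/DAZ:  %g\n", (double)(sub * one));
        return 0;
    }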
I wish they had named it something scary-sounding.
"Some times" here being almost all the time. It is rare that your code will break without denormals if it doesn't already have precision problems with them.