zlacker

[return to "std::clamp generates less efficient assembly than std::min(max, std::max(min, v))"]
1. celega+im[view] [source] 2024-01-16 13:50:05
>>x1f604+(OP)
On gcc 13, the difference in assembly between the min(max()) version and std::clamp is eliminated when I add the -ffast-math flag. I suspect that the two implementations handle one of the arguments being NaN a bit differently.

https://gcc.godbolt.org/z/fGaP6roe9

I see the same behavior on clang 17 as well

https://gcc.godbolt.org/z/6jvnoxWhb

◧◩
2. gumby+1n[view] [source] 2024-01-16 13:54:31
>>celega+im
You (celegans25) probably know this, but here's a PSA: -ffast-math is really -finaccurate-math. The knowledgeable developer will know when to use it (almost never), while the naive user will get bugs.
◧◩◪
3. dahart+Vy[view] [source] 2024-01-16 15:05:15
>>gumby+1n
Why do you say almost never? Don’t let the name scare you; all floating point math is inaccurate. Fast math is only slightly less accurate, I think typically it’s a 1 or maybe 2 LSB difference. At least in CUDA it is, and I think many (most?) people & situations can tolerate 22 bits of mantissa compared to 23, and many (most?) people/situations aren’t paying attention to inf/nan/exception issues at all.

I deal with a lot of floating point professionally day to day, and I use fast math all the time, since the tradeoff for higher performance and the relatively small loss of accuracy are acceptable. Maybe the biggest issue I run into is lack of denorms with CUDA fast-math, and it’s pretty rare for me to care about numbers smaller than 10^-38. Heck, I’d say I can tolerate 8 or 16 bits of mantissa most of the time, and fast-math floats are way more accurate than that. And we know a lot of neural network training these days can tolerate less than 8 bits of mantissa.

◧◩◪◨
4. jcranm+RM[view] [source] 2024-01-16 16:04:36
>>dahart+Vy
Here are some of the problems with fast-math:

* It links in an object file that enables denormal flushing globally, so that it affects all libraries linked into your application, even if said library explicitly doesn't want fast-math. This is seriously one of the most user-hostile things a compiler can do.

* The results of your program will vary depending on the exact make of your compiler and other random attributes of your compile environment, which can wreak havoc if you have code that absolutely wants bit-identical results. This doesn't matter for everybody, but there are some domains where this can be a non-starter (e.g., multiplayer game code).

* Fast-math precludes you from using NaN or infinities, and often even from defensively testing for NaN or infinity. Sure, there are times when this is useful, but the option you'd generally want to suggest to an uninformed programmer would be a "floating-point code can't overflow" option rather than "infinity doesn't exist and it's UB if it does".

* Fast-math can cause hard range guarantees to fail. Maybe you've got code where you can prove that, even with rounding error, the result will still be >= 0. With fast-math, the code might be rearranged so that the result is instead, say, -1e-10. And if you pass that to a function with a hard domain error at 0 (like sqrt), you go from the result being 0 to the result being NaN. And see above about what happens when you get NaN.

Fast-math is a tradeoff, and if you're willing to accept the tradeoff it offers, it's a fine option to use. But most programmers don't even know what the tradeoffs are, and the failure mode can be absolutely catastrophic. It's definitely an option that is in the "you must be this knowledgeable to use" camp.

◧◩◪◨⬒
5. fl0ki+UE1[view] [source] 2024-01-16 19:56:09
>>jcranm+RM
> The results of your program will vary depending on the exact make of your compiler and other random attributes of your compile environment, which can wreak havoc if you have code that absolutely wants bit-identical results. This doesn't matter for everybody, but there are some domains where this can be a non-starter (e.g., multiplayer game code).

This already shouldn't be assumed, because even the same code, compiler, and flags can produce different floating point results on different CPU targets. With the world increasingly split over x86_64 and aarch64, with more to come, it would be unwise to assume they produce the same exact numbers.

Often this comes down to acceptable implementation-defined behavior, e.g. temporarily keeping an intermediate in an 80-bit x87 floating-point register even though the result is eventually coerced to 64 bits, or using an FMA instruction that loses less precision than separate multiply and add instructions.

Portable results should come from integers (even if used to simulate rationals and fixed point), not floats. I understand that's not easy with multiplayer games, but doing so with floats is simply impossible because of what is left as implementation-defined in our language standards.

◧◩◪◨⬒⬓
6. Ashame+BK1[view] [source] 2024-01-16 20:22:05
>>fl0ki+UE1
> Often this comes down to acceptable implementation defined behavior,

I believe this is "always" rather than often when it comes to the actual operations defined by the FP standard. gcc does play it fast and loose (-ffast-math is not enabled by default, but FMA contraction on the other hand is), which is technically non-conforming, though it can at least easily be configured into a standards-compliant mode.

I think the bigger problem comes from what is _not_ pinned down by the standard, e.g. transcendental functions. A program calling plain old sqrt(x) can find itself behaving differently _even between different steppings of the same core_, not to mention the well-known differences between AMD and Intel. This is all with the same binary.

◧◩◪◨⬒⬓⬔
7. mabste+yg2[view] [source] 2024-01-16 23:17:13
>>Ashame+BK1
I'm surprised by this, regarding sqrt. The standard has stipulated correct rounding for basic arithmetic, including sqrt, ever since IEEE 754-1985.

Unless of course we are talking about the 80 bit format.

If that's not the case, would be interested to know where they differ.

Unfortunately, for the transcendental functions the accuracy still hasn't been pinned down, especially since that's still an ongoing research problem.

There have been some great strides in figuring out the worst cases for binary floating point up to doubles, so hopefully an upcoming standard will stipulate 0.5 ULP for transcendentals. But decimal floating point still has a long way to go.
