zlacker

1. dahart+(OP) 2024-01-16 22:25:44
I only do numeric computation; I don’t work in machine learning. Sorry, your assumptions are incorrect, so maybe it’s best not to assume or attack. I didn’t exactly advise using fast math either. I asked for your reasoning and pointed out that most casual uses of float aren’t highly sensitive to precision.

It’s easy to get wrong sums and catastrophic cancellation without fast math, and it’s relatively rare for fast math to cause those problems when an underlying issue didn’t already exist.

I’ve been working in some code that does a couple of quadratic solves and has high-order intermediate terms, and I’ve repeatedly tried using Kahan’s algorithm to improve the precision of the discriminants, but it has never helped at all. On the other hand, I’ve used a few other tricks that improve the precision enough that the fast math version ends up more precise than the naive version without fast math. I get to have my cake and eat it too.

Fast math is a tradeoff. Of course it’s a good idea to know what it does and what the risks of using it are, but at least for CUDA, the accuracy of fast math is not a matter of opinion: whether it stays relatively close to slow math is reasonably well documented. You can see for yourself that most fast math ops are within single-digit ulps of rounding error. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index....
