This is a dumb question but... are GPUs really that much faster than CPUs specifically at the math functions tested on this page?
xlogy, trunc, tan/tanh, sub, square, sqrt, sin/sinc/silu/sinh, sign, sigmoid, sqrt/rsqrt, round, relu, reciprocal, rad2deg, pow, positive, neg, mul, logaddexp/logaddexp2, log/log1p/log10/log2, ldexp, hypot, frac, floor, expm1, exp2, exp, div, deg2rad, cos/cosh, copysign, ceil, atan/atan2, asinh/asin, add, acosh/acos, abs
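For what it's worth, those names look like PyTorch's elementwise ops, so it's easy to check on your own hardware. A minimal sketch, assuming PyTorch is installed and a CUDA GPU is available, with torch.sin standing in for any of the listed functions:

    import time
    import torch

    x_cpu = torch.rand(50_000_000)   # ~200 MB of float32
    x_gpu = x_cpu.to("cuda")

    # Time the op on the CPU.
    t0 = time.perf_counter()
    torch.sin(x_cpu)
    cpu_s = time.perf_counter() - t0

    # Warm up the GPU so we don't time CUDA context setup.
    torch.sin(x_gpu)
    torch.cuda.synchronize()

    # Time the op on the GPU; synchronize so we measure the kernel, not just the launch.
    t0 = time.perf_counter()
    torch.sin(x_gpu)
    torch.cuda.synchronize()
    gpu_s = time.perf_counter() - t0

    print(f"CPU: {cpu_s*1e3:.1f} ms  GPU: {gpu_s*1e3:.1f} ms  ({cpu_s/gpu_s:.0f}x)")

The exact ratio will depend heavily on the CPU, the GPU, and the tensor size (small tensors can be faster on the CPU once you count the host-to-device copy).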
Those are the types of math GPUs are good at? I thought they were better at a different kind of math, like matrices or something?
IMO almost any application that is bottlenecked by CPU performance can be recast to use GPUs effectively. But it's rarely done because GPUs aren't nearly as standardized as CPUs and the developer tools are much worse, so it's a lot of effort for a faster but much less portable outcome.
I think WebGPU will become that universal language everyone speaks, and I also think this will help break Nvidia's monopoly on GPU compute.