
1. consta+(OP) 2026-02-02 20:58:22
You are correct that implementations of numerical functions in hardware differ, but I do not think you correctly understand the implications of this.

>And very rarely, this kind of thing might happen naturally.

It is not a question of rarity, it is a question of the stability of the numerical problem. Luckily, most of the computation in an LLM is matrix multiplication, which is an extremely well-understood numerical problem and one whose conditioning can be checked.
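
(Rough illustration, not something from the original post; the matrix and sizes below are made up.) Checking conditioning is essentially a one-liner with numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1024, 1024))  # stand-in for a weight matrix
    x = rng.standard_normal(1024)

    # The condition number bounds how much relative error in the inputs
    # can be amplified in the product A @ x.
    kappa = np.linalg.cond(A)

    # Rule of thumb: the achievable relative accuracy is roughly
    # kappa times the machine epsilon of the dtype in use.
    eps = np.finfo(A.dtype).eps
    print(f"condition number: {kappa:.1f}")
    print(f"worst-case relative error ~ {kappa * eps:.1e}")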

If two different numerical implementations of a well-conditioned, computation-heavy problem differed significantly, that would indicate a disastrous fault in the design or condition of the hardware, and such a fault would be noticed by most other computations done on that hardware.
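
To put a number on "significantly": a toy sketch of my own (not OP's model) that compares two ways of computing the same well-conditioned product, a float32 pass against a float64 reference, shows the discrepancy staying down at the level of machine precision:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((1024, 1024)).astype(np.float32)
    x = rng.standard_normal(1024).astype(np.float32)

    y32 = A @ x                                        # single-precision product
    y64 = A.astype(np.float64) @ x.astype(np.float64)  # double-precision reference

    rel_err = np.linalg.norm(y32 - y64) / np.linalg.norm(y64)
    print(f"relative difference: {rel_err:.1e}")  # on the order of 1e-6 for float32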

If you weigh the likelihood of OP running into a hardware bug that causes significant numerical error on one specific computational model against the alternative explanation of a problem in the software stack, it is clear that the latter explanation is orders of magnitude more likely. Finding a single floating-point arithmetic hardware bug is exceedingly rare (although Intel had one), but having such bugs stack up so that one particular neural network malfunctions while everything else on the hardware runs perfectly fine is astronomically unlikely.

replies(1): >>ACCoun+x3
2. ACCoun+x3 2026-02-02 21:16:36
>>consta+(OP)
I have seen meaningful instability happen naturally on production NNs. Not to a truly catastrophic degree, but when you deal in 1024-bit vectors and the results vary by a couple of bits from one platform to another, you tend to notice it. And if I've seen it get this bad, surely someone has seen worse.
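
(Toy reproduction, not our actual pipeline.) You can see the effect without two machines by accumulating the same dot products in a different order; the per-element gap is typically a handful of ULPs:

    import numpy as np

    def max_ulp_diff(a: np.ndarray, b: np.ndarray) -> int:
        """Largest element-wise gap, in units in the last place (ULPs),
        between two float32 arrays (assumes matching signs, which holds
        for nearly-equal values)."""
        ia = np.ascontiguousarray(a, dtype=np.float32).view(np.int32).astype(np.int64)
        ib = np.ascontiguousarray(b, dtype=np.float32).view(np.int32).astype(np.int64)
        return int(np.max(np.abs(ia - ib)))

    # Stand-in for "the same embedding computed on two platforms":
    # the same dot products, accumulated in a different order.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((1024, 1024)).astype(np.float32)
    x = rng.standard_normal(1024).astype(np.float32)
    emb_a = W @ x
    emb_b = W[:, ::-1] @ x[::-1]   # mathematically identical, different rounding
    print("max ULP difference:", max_ulp_diff(emb_a, emb_b))  # usually a few bits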