zlacker

Thread: "My iPhone 16 Pro Max produces garbage output when running MLX LLMs"
1. rainco+8h 2026-02-01 23:08:02
>>rafael+(OP)
Low-level numerical optimizations are often not reproducible across implementations or devices. For example: https://www.intel.com/content/dam/develop/external/us/en/doc... (2013)
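
A quick Python sketch of the order-dependence (illustrative only; the data and seed are made up):

    import random

    # Floating-point addition is not associative, so accumulating the
    # same values in a different order gives a (slightly) different sum.
    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

    s_forward = 0.0
    for x in xs:
        s_forward += x

    s_reverse = 0.0
    for x in reversed(xs):
        s_reverse += x

    print(s_forward == s_reverse)      # usually False
    print(abs(s_forward - s_reverse))  # tiny, but nonzero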

But it's still surprising that the LLM doesn't work on the iPhone 16 at all. After all, LLMs are known for their tolerance to quantization.
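
To put a rough number on "tolerance to quantization", here's a hedged numpy sketch of a symmetric int8 weight round-trip (the size and distribution are assumptions, not anything MLX-specific):

    import numpy as np

    # Quantize a weight vector to int8 with a single symmetric scale,
    # then dequantize. The round-trip error is small relative to the
    # weights, which is why LLMs usually survive 8-bit (and often
    # 4-bit) weight quantization.
    rng = np.random.default_rng(0)
    w = rng.normal(size=4096).astype(np.float32)

    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    w_hat = q.astype(np.float32) * scale

    rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
    print(rel_err)  # roughly on the order of 1e-2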

2. bri3d+Dh 2026-02-01 23:11:50
>>rainco+8h
Yes, "floating point accumulation isn't associative" is a mantra everyone should have in their head, and when I first read this article, I was jumping at the bit to dismiss it out of hand for that reason.

But what got me about this is that:

* every other Apple device delivered the same results

* Apple's own LLM silently failed on this device

To me that behavior suggests an unexpected failure rather than a fundamental numerical issue; it seems Bad (TM) that Apple would ship devices where their own LLM doesn't work.
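
For what it's worth, one purely hypothetical mechanism that would produce per-device divergence like this: a kernel that keeps its accumulator in half precision. A numpy sketch (not claiming this is what MLX or the hardware actually does):

    import numpy as np

    # The same dot product, accumulated in float16 vs float32.
    # A half-precision accumulator drifts as the running sum grows,
    # which can be enough to corrupt an LLM's logits.
    rng = np.random.default_rng(1)
    a = rng.normal(size=10_000).astype(np.float16)
    b = rng.normal(size=10_000).astype(np.float16)

    acc16 = np.float16(0.0)
    for x, y in zip(a, b):
        acc16 = np.float16(acc16 + x * y)  # fp16 accumulator

    acc32 = float(np.dot(a.astype(np.float32), b.astype(np.float32)))

    print(float(acc16), acc32)  # the fp16 sum typically drifts visibly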

3. danpal+Mp 2026-02-02 00:17:54
>>bri3d+Dh
FYI, the saying is "champing at the bit"; it comes from horses being restrained.
4. odo124+xB 2026-02-02 02:05:15
>>danpal+Mp
chomping at the bit
5. danpal+fC 2026-02-02 02:13:24
>>odo124+xB
Actually, it was originally "champing" – to grind or gnash the teeth. The "chomping" (to bite) variant cropped up more recently as people misheard and misunderstood it, but it's now generally accepted as an alternative.
6. odo124+wb4 2026-02-03 01:58:11
>>danpal+fC
I see