zlacker

[return to "My iPhone 16 Pro Max produces garbage output when running MLX LLMs"]
1. rainco+8h 2026-02-01 23:08:02
>>rafael+(OP)
Low-level optimizations of numerical operations are often not reproducible. For example: https://www.intel.com/content/dam/develop/external/us/en/doc... (2013)

But it's still surprising that the LLM doesn't work on the iPhone 16 at all. After all, LLMs are known for their tolerance to quantization.
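
For a concrete sense of the reproducibility point, here is a minimal sketch in plain Python (no MLX; the values are made up for illustration): the same numbers accumulated in two different orders, the way a reduction scheduled differently on two chips might be, usually land on slightly different floats.

    import random

    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

    # Accumulate the same values in two different orders, the way a
    # parallel reduction scheduled differently on two chips might.
    forward = 0.0
    for x in xs:
        forward += x

    backward = 0.0
    for x in reversed(xs):
        backward += x

    print(forward == backward)       # typically False
    print(abs(forward - backward))   # tiny but nonzero rounding discrepancy

The per-operation error is tiny, but an LLM forward pass does billions of accumulations, so slightly different outputs across devices are expected; garbage output is a different beast.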

2. bri3d+Dh 2026-02-01 23:11:50
>>rainco+8h
Yes, "floating point accumulation doesn't commute" is a mantra everyone should have in their head, and when I first read this article, I was jumping at the bit to dismiss it out of hand for that reason.

But what got me about this is that:

* every other Apple device delivered the same results

* Apple's own LLM silently failed on this device

To me, that behavior suggests an unexpected failure rather than a fundamental numerical issue; it seems Bad (TM) that Apple would ship devices where their own LLM doesn't work.
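
The mantra in its smallest form (plain Python; these are just the classic values that make the rounding visible):

    # Floating-point addition is commutative but not associative:
    # grouping changes the rounding, so reduction order matters.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                 # 0.6000000000000001
    print(a + (b + c))                 # 0.6
    print((a + b) + c == a + (b + c))  # False

That's the usual reason cross-device bitwise mismatches get dismissed out of hand.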

3. danpal+Mp 2026-02-02 00:17:54
>>bri3d+Dh
FYI, the saying is "champing at the bit"; it comes from a restrained horse impatiently chewing (champing) on the bit in its mouth.
4. jasinj+834 2026-02-03 01:02:47
>>danpal+Mp
Huh. I never knew "champing" was the proper spelling. [0]

[0] https://www.npr.org/sections/memmos/2016/06/09/605796769/che...
