zlacker

[return to "My iPhone 16 Pro Max produces garbage output when running MLX LLMs"]
1. zcbenz+Ys1[view] [source] 2026-02-02 11:46:08
>>rafael+(OP)
It's a bug in MLX that was fixed a few days ago: https://github.com/ml-explore/mlx/pull/3083
◧◩
2. zozbot+3v1[view] [source] 2026-02-02 12:03:32
>>zcbenz+Ys1
So the underlying issue is that the iPhone 16 Pro SKU was misdetected as having Neural Accelerator (NAX) support, which caused silently wrong results. Not a problem with the actual hardware.
◧◩◪
3. llm_ne+Rz1[view] [source] 2026-02-02 12:41:54
>>zozbot+3v1
Apple's documentation is utter garbage, but this code almost seems like a separate issue (and notably the MLX library uses loads of undocumented Metal properties, which isn't cool). The change takes a check that used to allow the NAX kernel on the iPhone 17 or the upcoming 18 when running 26.2 or later, and tightens it to only the iPhone 17 Pro or the upcoming 18. I'm fairly sure the GPU arch on the A19 is 17. That's notable because the A19 Pro in the 17 Pro has a significantly changed GPU, including GPU tensor cores. So the only real change here is limiting the "17" generation to the Pro variants.
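To make the before/after concrete, here's a toy sketch of the gating as I read it. This is purely illustrative pseudologic: the real check lives in MLX's C++ and keys off Metal architecture strings, not these made-up function names and arguments.

```python
# Hypothetical sketch of the gating change described above. Everything here
# (names, arguments) is invented for illustration; only the before/after
# behavior matches my reading of the diff.

def nax_kernel_allowed_before(gpu_gen: int, is_pro: bool, os_26_2_plus: bool) -> bool:
    # Old behavior: any generation-17+ GPU on 26.2 or later took the NAX
    # path, so a misdetected device could silently run the wrong kernel.
    return gpu_gen >= 17 and os_26_2_plus

def nax_kernel_allowed_after(gpu_gen: int, is_pro: bool, os_26_2_plus: bool) -> bool:
    # New behavior: generation 17 must also be a Pro part (the A19 Pro is
    # the one with GPU tensor cores); generation 18+ is allowed as before.
    if not os_26_2_plus:
        return False
    return gpu_gen >= 18 or (gpu_gen == 17 and is_pro)
```

So a plain iPhone 17 (gen 17, non-Pro) would have taken the NAX path before the change and no longer does after it, while the 17 Pro and the upcoming 18 are unaffected.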
◧◩◪◨
4. zozbot+SA1[view] [source] 2026-02-02 12:47:06
>>llm_ne+Rz1
> The neural accelerator exists in iPhones going back many years.

What existed before is the Apple Neural Engine (ANE), which is very different from the newer Neural Accelerator support inside the GPU blocks. In fact MLX does not even support the ANE yet, since at least in previous versions it was hardware-limited to FP16 and INT8 MADDs, and not even that fast at those.

◧◩◪◨⬒
5. llm_ne+AH1[view] [source] 2026-02-02 13:29:17
>>zozbot+SA1
Sure, I directly and explicitly talked about Apple's version of tensor cores in the GPU. But the ANE is by every definition a neural accelerator. Yes, I'm aware of Apple's weird branding for their tensor cores.

"In fact MLX does not even support ANE yet"

I didn't say otherwise. The ANE is a fantastic unit for small, power-efficient models, like extracting text from images, doing depth modelling, etc. It's not made for LLMs, or the other sorts of experimental stuff MLX is intended for. Though note that the MLX authors' stated reason for not supporting the ANE is that it has a "closed-source" API (https://github.com/ml-explore/mlx/issues/18#issuecomment-184...), making it a poor fit for an open-source project that didn't want to just lean on CoreML. But anyway, the ANE is fantastically fast at what it does, while sipping juice.

In any case, the code change shown should have zero impact on running MLX on an iPhone 16 Pro. MLX tries hard to leverage platform-specific optimizations, so maybe another bifurcation is making the wrong choice.
