zlacker

[return to "My iPhone 16 Pro Max produces garbage output when running MLX LLMs"]
1. refulg+7m[view] [source] 2026-02-01 23:50:47
>>rafael+(OP)
.
2. bri3d+qn[view] [source] 2026-02-02 00:00:32
>>refulg+7m
Can you read the article a little more closely?

> - MiniMax can't fit on an iPhone.

They asked MiniMax, running on their computer, to generate an iPhone app; the app didn't work.

It didn't work using the Apple Intelligence API. So then:

* They asked Minimax to use MLX instead. It didn't work.

* They Googled and found a thread where Apple Intelligence also didn't work for other people, but only sometimes.

* They HAND WROTE the MLX code. It didn't work. They isolated the step where the results diverged.
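
That "isolated the step where the results diverged" part is just a stage-by-stage diff harness. A minimal sketch of the idea (the helper name, tolerances, and toy data are mine, not the author's; NumPy arrays stand in for real activations):

```python
import numpy as np

def first_divergence(ref_acts, test_acts, rtol=1e-3, atol=1e-5):
    """Compare two pipelines stage by stage; return the index of the
    first stage whose outputs disagree, or None if they all match."""
    for i, (ref, test) in enumerate(zip(ref_acts, test_acts)):
        if not np.allclose(ref, test, rtol=rtol, atol=atol):
            return i
    return None

# Toy example: two runs that agree until stage 2.
ref = [np.full(4, float(k)) for k in range(5)]
test = [a.copy() for a in ref]
test[2] += 1.0  # inject a divergence at stage 2

print(first_divergence(ref, test))  # → 2
```

Once you know the first bad stage, you can dump its inputs and re-run just that one op on both backends.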

> Better to dig in a bit more.

The author already did 100% of the digging and then some.

Look, I am usually an AI rage-enthusiast. But in this case the author did every single bit of homework I would expect and more, and still found a bug. They rewrote the test harness code without an LLM. I don't find the results surprising insofar as I wouldn't expect multiply-accumulate (MAC) operations to converge bit-for-bit across platforms, but the fact that Apple's own LLM doesn't work on their own hardware, with results an order of magnitude off, is a reasonable bug report, in my book.
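
On the "wouldn't expect MAC to converge" point: accumulation order and precision alone make floating-point sums differ slightly between backends, which is why nobody expects bit-exact agreement across platforms, and also why an order-of-magnitude gap points at a real bug rather than rounding. A quick sketch (random data and the tolerance are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000).astype(np.float32)  # stand-in for one MAC reduction

# Same mathematical sum, two accumulation strategies:
s32 = x.sum(dtype=np.float32)              # fp32 accumulator
s64 = np.float32(x.sum(dtype=np.float64))  # fp64 accumulator, rounded back

rel = abs(float(s32) - float(s64)) / abs(float(s64))
print(rel)  # rounding noise: parts per million, nowhere near 10x
```

Legitimate platform-to-platform MAC differences look like `rel` here; the article's divergence is orders of magnitude beyond that.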
