zlacker

[return to "My iPhone 16 Pro Max produces garbage output when running MLX LLMs"]
1. refulg+7m 2026-02-01 23:50:47
>>rafael+(OP)
.
◧◩
2. bri3d+qn 2026-02-02 00:00:32
>>refulg+7m
Can you read the article a little more closely?

> - MiniMax can't fit on an iPhone.

They asked MiniMax, on their computer, to write an iPhone app, and the app didn't work.

The app used the Apple Intelligence API and didn't work. So then:

* They asked MiniMax to use MLX instead. It didn't work.

* They Googled and found a thread where Apple Intelligence also failed for other people, but only intermittently.

* They HAND WROTE the MLX code. It didn't work. They isolated the step where the results diverged.
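For concreteness, isolating the divergent step amounts to something like the sketch below: run both implementations side by side and flag the first intermediate output that disagrees beyond a tolerance. All the names and numbers here are made-up illustrations, not from the article.

```swift
import Foundation

// Hypothetical divergence bisection: walk paired intermediate outputs
// from two implementations and report the first step whose results
// disagree beyond a tolerance.
func firstDivergingStep(
    _ steps: [(name: String, a: [Float], b: [Float])],
    tolerance: Float = 1e-3
) -> String? {
    for step in steps {
        // Largest element-wise difference between the two outputs.
        let maxDiff = zip(step.a, step.b).map { abs($0.0 - $0.1) }.max() ?? 0
        if maxDiff > tolerance { return step.name }
    }
    return nil
}

// Illustrative data: embedding and attention agree, the MLP step is way off.
let steps: [(name: String, a: [Float], b: [Float])] = [
    ("embedding", [0.10, 0.20], [0.10, 0.20]),
    ("attention", [0.50, 0.40], [0.50, 0.40]),
    ("mlp",       [1.00, 2.00], [12.0, 19.0]),
]
print(firstDivergingStep(steps) ?? "no divergence")  // prints "mlp"
```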

> Better to dig in a bit more.

The author already did 100% of the digging and then some.

Look, I am usually an AI rage-enthusiast. But in this case the author did every single bit of homework I would expect and more, and still found a bug. They rewrote the test harness code without an LLM. I don't find the results surprising, insofar as I wouldn't expect multiply-accumulate (MAC) results to converge bit-for-bit across platforms. But the fact that Apple's own LLM doesn't work on their own hardware, and that their own MLX framework is an order of magnitude off, is a reasonable bug report, in my book.
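On the MAC point, for anyone wondering why bit-for-bit convergence isn't expected: floating-point addition is not associative, so the same multiply-accumulate reduction performed in a different order (as different CPU/GPU/ANE kernels do) legitimately yields slightly different results. A minimal Swift sketch with arbitrary data, not from the article:

```swift
import Foundation

// Accumulation order changes the answer: the same values summed
// left-to-right, right-to-left, and pairwise usually differ in the
// low bits, because Float addition is not associative.
let xs: [Float] = (0..<10_000).map { _ in Float.random(in: -1...1) }

let forward  = xs.reduce(0, +)                      // left-to-right
let backward = xs.reversed().reduce(0, +)           // right-to-left
let pairwise = stride(from: 0, to: xs.count, by: 2)
    .map { xs[$0] + xs[$0 + 1] }                    // pairwise partial sums
    .reduce(0, +)

print(forward, backward, pairwise)  // typically differ slightly
print(forward == backward)          // frequently false
```

Differences like these live in the last few bits; an order-of-magnitude divergence is a different animal entirely, which is why this reads as a genuine bug.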

◧◩◪
3. refulg+In 2026-02-02 00:02:10
>>bri3d+qn
Emptied out my post, thanks for the insight!

Fascinating that the claim is Apple Intelligence doesn't work at all. Quite a scandal.

EDIT: If you wouldn't mind, could you edit out the "AI rage enthusiast" remark you edited in? I understand it was in good humor, since you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I edited the moment you explained it wasn't MiniMax! People will assume I said something naughty :) I'm not sure it was even possible to read rage into my comment.

◧◩◪◨
4. recurs+K13 2026-02-02 20:21:15
>>refulg+In
I'm an AI rage enthusiast too. Feel free to downvote me for free.