zlacker

[parent] [thread] 2 comments
1. cyanma+(OP)[view] [source] 2025-12-05 23:52:52
I am having trouble understanding the distinction you’re trying to make here. The computer has the same pixel information that humans do and can spend its time analyzing it in any way it wants. My four-year-old can count the legs of the dog (and then say “that’s silly!”), whereas LLMs have an existential crisis because five-legged dogs aren’t sufficiently represented in the training data. I guess you can call that perception if you want, but I’m comfortable saying that my kid is smarter than LLMs when it comes to this specific exercise.
replies(2): >>Feepin+Z6 >>qnleig+bT
2. Feepin+Z6[view] [source] 2025-12-06 00:52:07
>>cyanma+(OP)
Your kid, it should be noted, has a massively bigger brain than the LLM. I think the surprising thing here maybe isn't that the vision models don't work well in corner cases but that they work at all.

Also, my bet would be that video-capable models are better at this.

3. qnleig+bT[view] [source] 2025-12-06 11:11:13
>>cyanma+(OP)
LLMs can count other objects, so it's not like they're too dumb to count. A possible model for what's going on is that the circuitry responsible for low-level image recognition has priors baked in that cause it to report unreliable information to the parts responsible for higher-order reasoning.

So back to the analogy, it could be as if the LLMs experience the equivalent of a very intense optical illusion in these cases, and then completely fall apart trying to make sense of it.
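
To make that concrete, here's a toy Bayesian sketch (Python, all numbers made up for illustration) of how a strong "dogs have four legs" prior could swamp even fairly clear pixel evidence:

    # toy sketch -- every number here is invented for illustration
    prior_4, prior_5 = 0.999, 0.001   # prior belief: dog has 4 legs vs. 5
    like_5_given_4 = 0.05             # P(pixels look five-legged | 4 legs), e.g. occlusion/illusion
    like_5_given_5 = 0.90             # P(pixels look five-legged | 5 legs)

    posterior_5 = (prior_5 * like_5_given_5) / (
        prior_5 * like_5_given_5 + prior_4 * like_5_given_4)
    print(round(posterior_5, 3))      # ~0.018 -- the prior swamps the evidence

Even when the image evidence strongly favors five legs, the posterior barely moves, which from the outside would look like the model "refusing" to see the fifth leg.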
