zlacker

[parent] [thread] 52 comments
1. Rover2+(OP)[view] [source] 2025-12-05 20:56:14
I just tried to get Gemini to produce an image of a dog with 5 legs to test this out, and it really struggled with that. It either made a normal dog, or turned the tail into a weird appendage.

Then I asked both Gemini and Grok to count the legs, both kept saying 4.

Gemini just refused to consider it was actually wrong.

Grok seemed to have an existential crisis when I told it it was wrong, becoming convinced that I had given it an elaborate riddle. After thinking for an additional 2.5 minutes, it concluded: "Oh, I see now—upon closer inspection, this is that famous optical illusion photo of a "headless" dog. It's actually a three-legged dog (due to an amputation), with its head turned all the way back to lick its side, which creates the bizarre perspective making it look decapitated at first glance. So, you're right; the dog has 3 legs."

You're right, this is a good test, and it comes right when I was starting to feel LLMs are intelligent.

replies(15): >>dwring+94 >>AIorNo+o4 >>irthom+77 >>vunder+Tj >>qnleig+qr >>macNch+8s >>varisp+hx >>Secret+PE >>squigz+qG >>nearbu+721 >>isodev+u41 >>theoa+9a1 >>DANmod+XT1 >>tarsin+UY1 >>vision+uE2
2. dwring+94[view] [source] 2025-12-05 21:17:45
>>Rover2+(OP)
I had no trouble getting it to generate an image of a five-legged dog first try, but I really was surprised at how badly it failed in telling me the number of legs when I asked it in a new context, showing it that image. It wrote a long defense of its reasoning and when pressed, made up demonstrably false excuses of why it might be getting the wrong answer while still maintaining the wrong answer.
replies(1): >>Rover2+Q9
3. AIorNo+o4[view] [source] 2025-12-05 21:19:07
>>Rover2+(OP)
It's not that they aren't intelligent; it's that they have been RL'd like crazy not to do that.

It's rather like how we humans are RL'd like crazy to be grossed out if we view a picture of a handsome man and a beautiful woman kissing, after we are told they are brother and sister.

I.e. we all have trained biases that we are told to follow and trained on, and human art is about subverting those expectations.

replies(2): >>majorm+r9 >>HardCo+tK
4. irthom+77[view] [source] 2025-12-05 21:32:02
>>Rover2+(OP)
Isn't this proof that LLMs still don't really generalize beyond their training data?
replies(4): >>Rover2+K9 >>Camper+We >>Zambyt+4k >>adastr+dp
5. majorm+r9[view] [source] [discussion] 2025-12-05 21:42:11
>>AIorNo+o4
Why should I assume RL is the cause? The failure looks like the model doing fairly simple pattern matching ("this is a dog, dogs don't have 5 legs, anything else is irrelevant") rather than more sophisticated feature counting of a concrete instance of an entity, and that could just as well be a plain prediction failure: the training data doesn't contain 5-legged dogs and the model can't go outside its distribution.

RL has been used extensively in other areas - such as coding - to improve model behavior on out-of-distribution stuff, so I'm somewhat skeptical of handwaving away a critique of a model's sophistication by saying here it's RL's fault that it isn't doing well out-of-distribution.

If we don't start from a position of anthropomorphizing the model into a "reasoning" entity (and instead have our prior be "it is a black box that has been extensively trained to try to mimic logical reasoning") then the result seems to be "here is a case where it can't mimic reasoning well", which seems like a very realistic conclusion.

replies(2): >>mlinha+xa >>didgeo+Ql
6. Rover2+K9[view] [source] [discussion] 2025-12-05 21:43:51
>>irthom+77
Kind of feels that way
7. Rover2+Q9[view] [source] [discussion] 2025-12-05 21:44:22
>>dwring+94
Yeah it gave me the 5-legged dog on the 4th or 5th try.
8. mlinha+xa[view] [source] [discussion] 2025-12-05 21:48:29
>>majorm+r9
I have the same problem: people are trying so hard to come up with reasoning behind it when there's just nothing like that there. It was trained on data and it finds what it was trained to find; if you go outside the training data it gets lost, and we should expect it to get lost.
9. Camper+We[view] [source] [discussion] 2025-12-05 22:17:38
>>irthom+77
They do, but we call it "hallucination" when that happens.
10. vunder+Tj[view] [source] 2025-12-05 22:48:44
>>Rover2+(OP)
If you want to see something rather amusing - instead of using the LLM aspect of Gemini 3.0 Pro, feed a five-legged dog directly into Nano Banana Pro and give it an editing task that requires an intrinsic understanding of the unusual anatomy.

  Place sneakers on all of its legs.
It'll get this correct a surprising number of times (tested with BFL Flux2 Pro, and NB Pro).

https://imgur.com/a/wXQskhL

replies(2): >>Lampre+yG >>tenseg+qH
11. Zambyt+4k[view] [source] [discussion] 2025-12-05 22:49:46
>>irthom+77
I wonder how they would behave given a system prompt that asserts "dogs may have more or fewer than four legs".
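
One way to actually try that, as a minimal sketch: this assumes the google-genai Python SDK with an API key in the environment; the model name and image filename are placeholders, not anything tested in this thread.

  # Sketch: ask Gemini to count legs, with a system instruction that allows
  # unusual anatomy. Assumes `pip install google-genai pillow` and a
  # GEMINI_API_KEY in the environment; the model name is a placeholder.
  from google import genai
  from google.genai import types
  from PIL import Image

  client = genai.Client()
  img = Image.open("five_legged_dog.png")  # hypothetical test image

  response = client.models.generate_content(
      model="gemini-2.5-flash",  # placeholder
      contents=[img, "How many legs does this dog have? Count them carefully."],
      config=types.GenerateContentConfig(
          system_instruction="Dogs in these images may have more or fewer than four legs.",
      ),
  )
  print(response.text)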
replies(1): >>irthom+nu
12. didgeo+Ql[view] [source] [discussion] 2025-12-05 23:01:16
>>majorm+r9
I’m inclined to buy the RL story, since the image gen “deep dream” models of ~10 years ago would produce dogs with TRILLIONS of eyes: https://doorofperception.com/2015/10/google-deep-dream-incep...
replies(1): >>Lampre+lH
13. adastr+dp[view] [source] [discussion] 2025-12-05 23:23:19
>>irthom+77
LLMs are very good at generalizing beyond their training (or context) data. Normally when they do this we call it hallucination.

Only now we do A LOT of reinforcement learning afterwards to severely punish this behavior for subjective eternities, and then act surprised when the resulting models are hesitant to venture outside their training data.

replies(1): >>runarb+5F
14. qnleig+qr[view] [source] 2025-12-05 23:40:30
>>Rover2+(OP)
It's not obvious to me whether we should count these errors as failures of intelligence or failures of perception. There's at least a loose analogy to optical illusions, which can fool humans quite consistently. Now you might say that a human can usually figure out what's going on and correctly identify the illusion, but we have the luxury of moving our eyes around the image and taking it in over time, while the model's perception is limited to a fixed set of unchanging tokens. Maybe this is relevant.

(Note I'm not saying that you can't find examples of failures of intelligence. I'm just questioning whether this specific test is an example of one).

replies(1): >>cyanma+Vs
15. macNch+8s[view] [source] 2025-12-05 23:46:43
>>Rover2+(OP)
An interesting test in this vein that I read about in a comment on here is generating a 13-hour clock—I tried just about every prompting trick and clever strategy I could come up with across many image models, with no success. I think there's so much training data of 12-hour clocks that it just clobbers the instructions entirely. It'll make a regular clock that skips from 11 to 13, or a regular clock with a plaque saying "13 hour clock" underneath, but I haven't gotten an actual 13-hour clock yet.
replies(2): >>Restar+Xw >>raw_an+l11
16. cyanma+Vs[view] [source] [discussion] 2025-12-05 23:52:52
>>qnleig+qr
I am having trouble understanding the distinction you’re trying to make here. The computer has the same pixel information that humans do and can spend its time analyzing it in any way it wants. My four-year-old can count the legs of the dog (and then say “that’s silly!”), whereas LLMs have an existential crisis because five-legged-dogs aren’t sufficiently represented in the training data. I guess you can call that perception if you want, but I’m comfortable saying that my kid is smarter than LLMs when it comes to this specific exercise.
replies(2): >>Feepin+Uz >>qnleig+6m1
17. irthom+nu[view] [source] [discussion] 2025-12-06 00:04:39
>>Zambyt+4k
That may work but what actual use would it be? You would be plugging one of a million holes. A general solution is needed.
replies(1): >>Camper+5j2
18. Restar+Xw[view] [source] [discussion] 2025-12-06 00:25:43
>>macNch+8s
Right you are. It can do 26 hours just fine, but appears completely incapable when the layout would be too close to a normal clock.

https://gemini.google.com/share/b3b68deaa6e6

I thought giving it a setting would help, but just skip that first response to see what I mean.

replies(2): >>mkl+gH >>petter+ch1
19. varisp+hx[view] [source] 2025-12-06 00:28:11
>>Rover2+(OP)
Do a 7-legged dog. Game over.
replies(1): >>cridde+er1
20. Feepin+Uz[view] [source] [discussion] 2025-12-06 00:52:07
>>cyanma+Vs
Your kid, it should be noted, has a massively bigger brain than the LLM. I think the surprising thing here maybe isn't that the vision models don't work well in corner cases but that they work at all.

Also my bet would be that video capable models are better at this.

21. Secret+PE[view] [source] 2025-12-06 01:36:38
>>Rover2+(OP)
LLMs are getting a lot better at understanding our world by its standard rules. As they do, maybe they lose something in the way of interpreting non-standard rules, a.k.a. creativity.
22. runarb+5F[view] [source] [discussion] 2025-12-06 01:38:49
>>adastr+dp
Hallucinations are not generalization beyond the training data but interpolation gone wrong.

LLMs are in fact good at generalizing beyond their training set; if they didn't generalize at all we would call that over-fitting, and that is not good either. What we are talking about here is simply a bias, and I suspect biases like these are simply a limitation of the technology. Some of them we can get rid of, but—like almost all statistical modelling—some biases will always remain.

replies(1): >>adastr+IT
23. squigz+qG[view] [source] 2025-12-06 01:55:29
>>Rover2+(OP)
I feel a weird mix of extreme amusement and anger that there's a fleet of absurdly powerful, power-hungry servers sitting somewhere being used to process this problem for 2.5 minutes
replies(1): >>Rover2+eT1
24. Lampre+yG[view] [source] [discussion] 2025-12-06 01:56:53
>>vunder+Tj
Does this still work if you give it a pre-existing many-legged animal image, instead of first prompting it to add an extra leg and then prompting it to put the sneakers on all the legs?

I'm wondering if it may only expect the additional leg because you literally just told it to add said additional leg. It would just need to remember your previous instruction and its previous action, rather than to correctly identify the number of legs directly from the image.

I'll also note that photos of dogs with shoes on are definitely something it has been trained on, albeit presumably more often dog booties than human sneakers.

Can you make it place the sneakers incorrectly-on-purpose? "Place the sneakers on all the dog's knees?"

replies(1): >>vunder+1I
25. mkl+gH[view] [source] [discussion] 2025-12-06 02:03:47
>>Restar+Xw
That's a 24 hour clock that skips some numbers and puts other numbers out of order.
26. Lampre+lH[view] [source] [discussion] 2025-12-06 02:05:40
>>didgeo+Ql
That's apples to oranges; your link says they made it exaggerate features on purpose.

"The researchers feed a picture into the artificial neural network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition."

27. tenseg+qH[view] [source] [discussion] 2025-12-06 02:06:56
>>vunder+Tj
I imagine the real answer is that the edits are local, because that's how diffusion works; it's not like it's turning the input into "five-legged dog" and then generating a five-legged dog in shoes from scratch.
28. vunder+1I[view] [source] [discussion] 2025-12-06 02:13:07
>>Lampre+yG
My example was unclear. Each of those images on Imgur was generated using independent API calls which means there was no "rolling context/memory".

In other words:

1. Took a personal image of my dog Lily

2. Had NB Pro add a fifth leg using the Gemini API

3. Downloaded image

4. Sent image to BFL Flux2 Pro via the BFL API with the prompt "Place sneakers on all the legs of this animal".

5. Sent image to NB Pro via Gemini API with the prompt "Place sneakers on all the legs of this animal".

So not only was there zero "continual context", it was two entirely different models as well to cover my bases.

EDIT: Added images to the Imgur for the following prompts:

- Place red Dixie solo cups on the ends of every foot on the animal

- Draw a red circle around all the feet on the animal
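
For anyone who wants to reproduce the stateless edit step, here is a minimal sketch, assuming the google-genai Python SDK; the model name and filenames are placeholders, and the Flux2 Pro call in step 4 would go through BFL's own REST API in the same one-shot way (not shown).

  # Step 5 as a single, context-free API call: fresh client, no prior
  # conversation state, just the already-edited image plus the prompt.
  from io import BytesIO
  from google import genai
  from google.genai import types
  from PIL import Image

  client = genai.Client()
  dog = Image.open("lily_five_legs.png")  # output of step 3 (placeholder filename)

  response = client.models.generate_content(
      model="gemini-3-pro-image",  # placeholder for whatever alias NB Pro has
      contents=[dog, "Place sneakers on all the legs of this animal."],
      config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
  )

  for part in response.candidates[0].content.parts:
      if part.inline_data is not None:  # generated image comes back inline
          Image.open(BytesIO(part.inline_data.data)).save("lily_sneakers.png")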

29. HardCo+tK[view] [source] [discussion] 2025-12-06 02:39:14
>>AIorNo+o4
"There are four lights"

And the AI has been RLed for tens of thousands of years, not just a few days.

30. adastr+IT[view] [source] [discussion] 2025-12-06 04:17:13
>>runarb+5F
What, may I ask, is the difference between "generalization" and "interpolation"? As far as I can tell, the two are exactly the same thing.

In which case the only way I can read your point is that hallucinations are specifically incorrect generalizations. In which case, sure if that's how you want to define it. I don't think it's a very useful definition though, nor one that is universally agreed upon.

I would say a hallucination is any inference that goes beyond the compressed training data represented in the model weights + context. Sometimes these inferences are correct, and yes we don't usually call that hallucination. But from a technical perspective they are the same -- the only difference is the external validity of the inference, which may or may not be knowable.

Biases in the training data are a very important, but unrelated issue.

replies(1): >>runarb+701
31. runarb+701[view] [source] [discussion] 2025-12-06 05:43:57
>>adastr+IT
Interpolation and generalization are two completely different constructs. Interpolation is when you have two data points and make a best guess where a hypothetical third point should fit between them. Generalization is when you have a distribution which describes a particular sample, and you apply it with some transformation (e.g. a margin of error, a confidence interval, p-value, etc.) to a population the sample is representative of.

Interpolation is a much narrower construct than generalization. LLMs are fundamentally much closer to curve fitting (where interpolation is king) than they are to hypothesis testing (where samples are used to describe populations), though they certainly do something akin to the latter too.

The bias I am talking about is not a bias in the training data, but bias in the curve fitting, probably because of mal-adjusted weights, parameters, etc. And since there are billions of them, I am very skeptical they can all be adjusted correctly.
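
A toy, generic illustration of the two constructs as described above (nothing LLM-specific, just plain NumPy with made-up numbers):

  # Toy contrast between the two constructs.
  import numpy as np

  # Interpolation: two known points, best guess for a point between them.
  x_known, y_known = np.array([0.0, 10.0]), np.array([3.0, 7.0])
  y_mid = np.interp(5.0, x_known, y_known)   # -> 5.0, halfway between 3 and 7

  # Generalization: a sample statistic carried to the population with a
  # margin of error (normal-approximation 95% confidence interval).
  sample = np.random.default_rng(0).normal(loc=50.0, scale=10.0, size=200)
  mean = sample.mean()
  half_width = 1.96 * sample.std(ddof=1) / np.sqrt(sample.size)

  print(f"interpolated value: {y_mid}")
  print(f"population mean estimate: {mean:.1f} +/- {half_width:.1f}")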

replies(1): >>adastr+m41
32. raw_an+l11[view] [source] [discussion] 2025-12-06 06:10:04
>>macNch+8s
It was ugly. But I got ChatGPT to cheat and do it

https://chatgpt.com/share/6933c848-a254-8010-adb5-8f736bdc70...

This is the SVG it created.

https://imgur.com/a/LLpw8YK
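
For reference, the "cheat" can also be done deterministically; here is a bare-bones sketch of a 13-hour clock face as SVG (the filename and styling are arbitrary choices, not anything from the linked chat):

  # Minimal 13-hour clock face: 13 numerals spaced 360/13 degrees apart,
  # with "13" landing at the top where "12" would normally sit.
  import math

  cx, cy, r = 100, 100, 80
  parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">',
           f'<circle cx="{cx}" cy="{cy}" r="{r + 10}" fill="none" stroke="black"/>']
  for h in range(1, 14):
      angle = math.radians(h * 360 / 13 - 90)   # hour 13 sits at 12 o'clock
      x, y = cx + r * math.cos(angle), cy + r * math.sin(angle)
      parts.append(f'<text x="{x:.1f}" y="{y:.1f}" text-anchor="middle" '
                   f'dominant-baseline="middle" font-size="12">{h}</text>')
  parts.append('</svg>')

  with open("thirteen_hour_clock.svg", "w") as f:
      f.write("\n".join(parts))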

33. nearbu+721[view] [source] 2025-12-06 06:20:33
>>Rover2+(OP)
My guess is the part of its neural network that parses the image into a higher level internal representation really is seeing the dog as having four legs, and intelligence and reasoning in the rest of the network isn't going to undo that. It's like asking people whether "the dress" is blue/black or white/gold: people will just insist on what they see, even if what they're seeing is wrong.
34. adastr+m41[view] [source] [discussion] 2025-12-06 06:56:09
>>runarb+701
I assumed you were speaking by analogy, as LLMs do not work by interpolation, or anything resembling that. Diffusion models, maybe you can make that argument. But GPT-derived inference is fundamentally different. It works via model building and next token prediction, which is not interpolative.

As for bias, I don’t see the distinction you are making. Biases in the training data produce biases in the weights. That’s where the biases come from: over-fitting (or sometimes, correct fitting) of the training data. You don’t end up with biases at random.

replies(2): >>runarb+261 >>IsTom+ff1
35. isodev+u41[view] [source] 2025-12-06 06:59:29
>>Rover2+(OP)
> starting to feel LLMs are intelligent

LLMs are fancy “lorem ipsum based on a keyword” text generators. They can never become intelligent … or learn how to count or do math without the help of tools.

They can probably generate a story about a 5-legged dog, though.

36. runarb+261[view] [source] [discussion] 2025-12-06 07:25:06
>>adastr+m41
What I meant was that what LLMs are doing is very similar to curve fitting, so I think it is not wrong to call it interpolation (curve fitting is a type of interpolation, but not all interpolation is curve fitting).

As for bias, sampling bias is only one of many types of bias. I mean, the UNIX program yes(1) has a bias towards outputting the string "y" despite not sampling any data. You can very easily and deliberately program a bias into anything you like. I am writing a kanji learning program using SRS and I deliberately bias new cards towards the end of the review queue, to help users with long review queues empty them quicker. There is no data which causes that bias; it is just programmed in there.

I don't know enough about diffusion models to know how biases can arise there, but with unsupervised learning (even though sampling bias is indeed very common) you can get a bias because you are using wrong, maladjusted, or too many parameters, etc. Even the way your data interacts during training can cause a bias; heck, one of your parameters might by random chance hit an unfortunate local maximum, yielding a maladjusted weight, which may cause bias in your output.
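
A trivial illustration of a bias that comes from the program rather than from any data, as a made-up sketch along the lines of the review-queue example (all names here are hypothetical):

  # A bias with no data behind it: new cards are deliberately sorted after
  # due reviews, so long review queues get emptied first.
  from dataclasses import dataclass

  @dataclass
  class Card:
      front: str
      is_new: bool

  def build_queue(cards):
      # The key function *is* the bias: nothing in the data demands this order.
      return sorted(cards, key=lambda c: c.is_new)

  queue = build_queue([Card("日", True), Card("月", False), Card("火", True)])
  print([c.front for c in queue])  # ['月', '日', '火'] - due reviews first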

replies(1): >>adastr+Qi1
37. theoa+9a1[view] [source] 2025-12-06 08:32:42
>>Rover2+(OP)
Draw a millipede as a dog:

Gemini responds:

Conceptualizing the "Millipup"

https://gemini.google.com/share/b6b8c11bd32f

Draw the five legs of a dog as if the body is a pentagon

https://gemini.google.com/share/d74d9f5b4fa4

And animal legs are quite standardized

https://en.wikipedia.org/wiki/List_of_animals_by_number_of_l...

It's all about the prompt. Example:

Can you imagine a dog with five legs?

https://gemini.google.com/share/2dab67661d0e

And generally, the issue sits between the computer and the chair.

;-)

replies(2): >>Rover2+VS1 >>vunder+eZ1
38. IsTom+ff1[view] [source] [discussion] 2025-12-06 09:42:45
>>adastr+m41
> It works via model building and next token prediction, which is not interpolative.

I'm not particularly well-versed in LLMs, but isn't there a step in there somewhere (latent space?) where you effectively interpolate in some high-dimensional space?

replies(1): >>adastr+ji1
39. petter+ch1[view] [source] [discussion] 2025-12-06 10:06:09
>>Restar+Xw
"just fine" is not really an accurate description of that 26-hour clock
40. adastr+ji1[view] [source] [discussion] 2025-12-06 10:19:37
>>IsTom+ff1
Not interpolation, no. It is more like the N-gram autocomplete your phone used to use for typing and autocorrect suggestions. Attention is not N-gram, but you can kinda think of it as a sparsely compressed N-gram where N = 256k or whatever the context window size is. It's not technically accurate, but it will get your intuition closer than thinking of it as interpolation.

The LLM uses attention and some other tricks (attention, it turns out, is not all you need) to build a probabilistic model of what the next token will be, which is then sampled. This is much more powerful than interpolation.
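
The final sampling step described here is tiny in isolation; a toy sketch with made-up logits and a four-word vocabulary, not an actual model:

  # Toy version of the last inference step: the network has produced a
  # score (logit) per vocabulary token; softmax turns the scores into a
  # probability distribution and the next token is sampled from it.
  import numpy as np

  vocab = ["four", "five", "three", "legs"]
  logits = np.array([4.0, 1.0, 0.5, 2.0])        # made-up scores

  probs = np.exp(logits - logits.max())
  probs /= probs.sum()                            # softmax

  rng = np.random.default_rng()
  next_token = rng.choice(vocab, p=probs)         # stochastic, not interpolated
  print(dict(zip(vocab, probs.round(3))), "->", next_token)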

41. adastr+Qi1[view] [source] [discussion] 2025-12-06 10:28:52
>>runarb+261
Training is kinda like curve fitting, but inference is not. The inference algorithm is random sampling from a next-token probability distribution.

It’s a subtle distinction, but I think an important one in this case, because if it was interpolation then genuine creativity would not be possible. But the attention mechanism results in model building in latent space, which then affects the next token distribution.

replies(1): >>runarb+jY1
42. qnleig+6m1[view] [source] [discussion] 2025-12-06 11:11:13
>>cyanma+Vs
LLMs can count other objects, so it's not like they're too dumb to count. A possible model for what's going on is that the circuitry responsible for low-level image recognition has priors baked in that cause it to report unreliable information to the parts responsible for higher-order reasoning.

So back to the analogy, it could be as if the LLMs experience the equivalent of a very intense optical illusion in these cases, and then completely fall apart trying to make sense of it.

43. cridde+er1[view] [source] [discussion] 2025-12-06 12:13:11
>>varisp+hx
Is that a dog though?
44. Rover2+VS1[view] [source] [discussion] 2025-12-06 16:14:52
>>theoa+9a1
haha fair point, you can get the expected results with the right prompt, but I think it still reveals a general lack of true reasoning ability (or something)
replies(1): >>ithkui+Ft2
45. Rover2+eT1[view] [source] [discussion] 2025-12-06 16:17:52
>>squigz+qG
what a world we live in
46. DANmod+XT1[view] [source] 2025-12-06 16:23:39
>>Rover2+(OP)
What is "a dog"?

What is "a dog" to Gemini?

47. runarb+jY1[view] [source] [discussion] 2025-12-06 16:58:29
>>adastr+Qi1
I've seen both opinions on this in the philosophy of statistics. Some would say that machine learning inference is something other than curve fitting, but others (and I subscribe to this) believe it is all curve fitting. I actually don't think it matters much which camp is right, but I do like it when philosophers ponder these things.

My reason for subscribing to the latter camp is that when you have a distribution and you fit things according to that distribution (even when the fitting is stochastic, and even when the distribution lives in billions of dimensions), you are doing curve fitting.

I think the one extreme would be a random walk, which is obviously not curve fitting, but if you draw from any distribution other than the uniform distribution, say the normal distribution, you are fitting that distribution (actually, I take that back: the original random walk is fitting the uniform distribution).

Note I am talking about inference, not training. Training can be done using all sorts of algorithms; some include priors (distributions) and would be curve fitting, but only compute the posteriors (also distributions). I think the popular stochastic gradient descent does something like this, so it would be curve fitting, but the older evolutionary algorithms just random-walk it and are not fitting any curve (except the uniform distribution). What matters to me is that training arrives at a distribution, which is described by a weight matrix, and what inference is doing is fitting to that distribution (i.e. the curve).

replies(1): >>adastr+rv2
48. tarsin+UY1[view] [source] 2025-12-06 17:03:06
>>Rover2+(OP)
I have only a high-level understanding of LLMs, but to me it doesn't seem surprising: they are trying to come up with a textual output (your prompt plus their continuation) that scores high with, i.e. is consistent with, their training set. There is no thinking, just scoring consistency. And a dog with 5 legs is so rare or nonexistent in their training set and the resulting weights that it scores so badly they can't produce an output that accepts it. But how the illusion breaks down in this case is quite funny indeed.
49. vunder+eZ1[view] [source] [discussion] 2025-12-06 17:06:08
>>theoa+9a1
This is basically the "Rhinos are just fat unicorns" approach. Totally fine if you want to go that route but a bit goofy. You can get SOTA models to generate a 5-legged dog simply by being more specific about the placement of the fifth leg.

https://imgur.com/a/jNj98Pc

Asymmetry is as hard for AI models as it is for evolution to "prompt for" but they're getting better at it.

50. Camper+5j2[view] [source] [discussion] 2025-12-06 19:45:08
>>irthom+nu
Not necessarily. The problem may be as simple as the fact that LLMs do not see "dog legs" as objects independent of the dogs they're attached to.

The systems already absorb much more complex hierarchical relationships during training, just not that particular hierarchy. The notion that everything is made up of smaller components is among the most primitive in human philosophy, and is certainly generalizable by LLMs. It just may not be sufficiently motivated by the current pretraining and RL regimens.

51. ithkui+Ft2[view] [source] [discussion] 2025-12-06 21:23:47
>>Rover2+VS1
Or it just shows that it tries to overcorrect the prompt, which is generally a good idea in most cases, where the prompter is not intentionally asking for a weird thing.

This happens all the time with humans. Imagine you're at a call center and get all sorts of weird descriptions of problems with a product: every human is expected not to assume the caller is an expert, and will actually try to interpolate what they might mean from the weird wording they use.

52. adastr+rv2[view] [source] [discussion] 2025-12-06 21:41:28
>>runarb+jY1
I get the argument that pulling from a distribution is a form of curve fitting. But unless I am misunderstanding, the claim is that it is a curve fitting / interpolation between the training data. The probability distribution generated in inference is not based on the training data though. It is a transform of the context through the trained weights, which is not the same thing. It is the application of a function to context. That function is (initially) constrained to reproduce the training data when presented with a portion of that data as context. But that does not mean that all outputs are mere interpolations between training datapoints.

Except in the most technical sense that any function constrained to meet certain input output values is an interpolation. But that is not the smooth interpolation that seems to be implied here.

53. vision+uE2[view] [source] 2025-12-06 23:01:34
>>Rover2+(OP)
I tried this using a Gemini visual agent built with Orion from vlm.run. It was able to produce two different images with a five-legged dog. You need to make it check and iterate on its own output to improve and correct.

https://chat.vlm.run/c/62394973-a869-4a54-a7f5-5f3bb717df5f

Here is the thought-process summary (you can see the full thinking at the link above):

"I have attempted to generate a dog with 5 legs multiple times, verifying each result. Current image generation models have a strong bias towards standard anatomy (4 legs for dogs), making it difficult to consistently produce a specific number of extra limbs despite explicit prompts."
