zlacker

[parent] [thread] 152 comments
1. Workac+(OP)[view] [source] 2025-12-05 20:26:05
Well

It is the first model to get partial credit on an LLM image test I have: counting the legs of a dog. Specifically, a dog with 5 legs. This is a wild test, because LLMs get really pushy and insistent that the dog only has 4 legs.

In fact GPT5 wrote an edge detection script to see where "golden dog feet" met "bright green grass" to prove to me that there were only 4 legs. The script found 5, and GPT-5 then said it was a bug, and adjusted the script sensitivity so it only located 4, lol.
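
For anyone curious, the kind of script GPT-5 apparently wrote is roughly this (a minimal sketch assuming OpenCV; the filename, HSV thresholds, and the scan-row heuristic are my guesses, not the actual script):

```
# Rough sketch: count places where "golden fur" sits directly above "green grass"
# along a horizontal scan line near the dog's feet. Thresholds need tuning per image.
import cv2

img = cv2.imread("dog.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

fur = cv2.inRange(hsv, (10, 60, 80), (35, 255, 255))    # golden-ish fur mask
grass = cv2.inRange(hsv, (36, 40, 40), (90, 255, 255))  # green-ish grass mask

row = int(img.shape[0] * 0.85)                       # scan line near the feet
contact = (fur[row - 2] > 0) & (grass[row + 2] > 0)  # fur directly above grass

# Count contiguous contact runs wider than a few pixels; each run is one "foot".
runs, width = 0, 0
for c in contact:
    if c:
        width += 1
    else:
        if width > 5:
            runs += 1
        width = 0
if width > 5:
    runs += 1

print(f"Estimated leg-like fur/grass contacts: {runs}")
```

The point of the anecdote, of course, is that GPT-5 then retuned exactly these kinds of thresholds until the answer matched its prior.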

Anyway, Gemini 3, while still being unable to count the legs first try, did identify "male anatomy" (its own words) also visible in the picture. The 5th leg was approximately where you could expect a well-endowed dog to have a "5th leg".

That aside though, I still wouldn't call it particularly impressive.

As a note, Meta's image slicer correctly highlighted all 5 legs without a hitch. Maybe not quite a transformer, but interesting that it could properly interpret "dog leg" and ID them. Also, the dogs with extra legs (I have a few of them) all had their extra legs added by nano-banana.

replies(24): >>cancan+n >>reed12+H >>daniel+14 >>Rover2+l6 >>teaear+i7 >>vunder+sa >>rotten+za >>Benjam+mb >>grog45+zb >>andai+Ib >>runarb+2i >>joaoma+qk >>jvolkm+il >>bee_ri+1n >>yieldc+It >>seanmc+bI >>thefou+IL >>grugno+L11 >>maczwe+G21 >>Andrex+r51 >>wasmai+381 >>spease+Gv1 >>Subicu+hP1 >>cubefo+Pk2
2. cancan+n[view] [source] 2025-12-05 20:28:15
>>Workac+(OP)
this is hilarious and incredibly interesting at the same time! thanks for writing it up.
3. reed12+H[view] [source] 2025-12-05 20:29:59
>>Workac+(OP)
Could you link the image? Interesting stuff.
4. daniel+14[view] [source] 2025-12-05 20:45:15
>>Workac+(OP)
I don’t know much about AI, but I have this image test that everything has failed at. You basically just present an image of a maze and ask the LLM to draw a line through the most optimal path.

Here’s how Nano Banana fared: https://x.com/danielvaughn/status/1971640520176029704?s=46

replies(7): >>buildb+65 >>vunder+g6 >>jiggaw+J7 >>pwagla+79 >>kridsd+M9 >>JamesS+5n >>Feepin+pG
◧◩
5. buildb+65[view] [source] [discussion] 2025-12-05 20:50:01
>>daniel+14
That might be an interesting training set, a bunch of mazes…
◧◩
6. vunder+g6[view] [source] [discussion] 2025-12-05 20:55:55
>>daniel+14
In fact, one of the tests I use as part of GenAI Showdown involves both parts of the puzzle: draw a maze with a clearly defined entrance and exit, along with a dashed line indicating the solution to the maze.

Only one model (gpt-image-1) out of the 18 tested managed to pass the test successfully. Gemini 3.0 Pro got VERY close.

https://genai-showdown.specr.net/#the-labyrinth

replies(1): >>daniel+o7
7. Rover2+l6[view] [source] 2025-12-05 20:56:14
>>Workac+(OP)
I just tried to get Gemini to produce an image of a dog with 5 legs to test this out, and it really struggled with that. It either made a normal dog, or turned the tail into a weird appendage.

Then I asked both Gemini and Grok to count the legs, both kept saying 4.

Gemini just refused to consider it was actually wrong.

Grok seemed to have an existential crisis when I told it it was wrong, becoming convinced that I had given it an elaborate riddle. After thinking for an additional 2.5 minutes, it concluded: "Oh, I see now—upon closer inspection, this is that famous optical illusion photo of a "headless" dog. It's actually a three-legged dog (due to an amputation), with its head turned all the way back to lick its side, which creates the bizarre perspective making it look decapitated at first glance. So, you're right; the dog has 3 legs."

You're right, this is a good test. It lands right when I'm starting to feel LLMs are intelligent.

replies(15): >>dwring+ua >>AIorNo+Ja >>irthom+sd >>vunder+eq >>qnleig+Lx >>macNch+ty >>varisp+CD >>Secret+aL >>squigz+LM >>nearbu+s81 >>isodev+Pa1 >>theoa+ug1 >>DANmod+i02 >>tarsin+f52 >>vision+PK2
8. teaear+i7[view] [source] 2025-12-05 21:00:38
>>Workac+(OP)
"There are FOUR legs!!!"
◧◩◪
9. daniel+o7[view] [source] [discussion] 2025-12-05 21:01:28
>>vunder+g6
super cool! Interesting note about Seedream 4 - do you think awareness of A* actually could improve the outcome? Like I said, I'm no AI expert, so my intuitions are pretty bad, but I'd suspect that image analysis + algorithmic pathfinding don't have much crossover in terms of training capabilities. But I could be wrong!
replies(1): >>vunder+X7
◧◩
10. jiggaw+J7[view] [source] [discussion] 2025-12-05 21:03:22
>>daniel+14
The reason is that image generators don't iterate on the output in the same way the text-based LLMs do. Essentially they produce the image in "one hit" and can't solve a complex sequence in the same way you couldn't one-shot this either. Try taking a random maze, glance at it, then go off to draw a squiggle on a transparency. If you were to place that on top of the maze, there's virtually no chance that you'd have found the solution on the first try.

That's essentially what's going on with AI models, they're struggling because they only get "one step" to solve the problem instead of being able to trace through the maze slowly.

An interesting experiment would be to ask the AI to incrementally solve the maze. Ask it to draw a line starting at the entrance a little ways into the maze, then a little bit further, etc... until it gets to the end.
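
Something like this driver loop would do it; `edit_image()` here is a hypothetical stand-in for whatever image-editing endpoint you are testing, not a real API:

```
# Hypothetical driver for the "extend the line a little each turn" experiment.
def solve_maze_incrementally(maze_png: bytes, edit_image, steps: int = 20) -> bytes:
    """edit_image(image_bytes, prompt) -> image_bytes is supplied by the caller."""
    first = ("Draw a short red line starting at the maze entrance, heading along "
             "the correct path. Change nothing else in the image.")
    again = ("This maze has a partial red line from the entrance. Extend the red "
             "line a short distance further along the correct path toward the "
             "exit. Change nothing else in the image.")
    current = maze_png
    for i in range(steps):
        current = edit_image(current, first if i == 0 else again)
    return current
```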

replies(1): >>radars+Cm2
◧◩◪◨
11. vunder+X7[view] [source] [discussion] 2025-12-05 21:04:44
>>daniel+o7
Great question. I do wish we had a bit more insight into the exact background "thinking" that was happening on systems like Seedream.

When you think about posing the "solve a visual image of a maze" to something like ChatGPT, there's a good chance it'll try to throw a python VM at it, threshold it with something like OpenCV, and use a shortest-path style algorithm to try and solve it.
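
Roughly, that tool-use path looks like the sketch below (my own minimal version with a plain BFS, assuming a clean photo and known entrance/exit pixels; a real attempt would detect those too):

```
# Threshold a maze photo and BFS over free pixels from entrance to exit.
import cv2
from collections import deque

img = cv2.imread("maze.jpg", cv2.IMREAD_GRAYSCALE)
_, walls = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)  # dark walls -> 255
free = walls == 0

start = (5, 5)                                  # entrance pixel (row, col)
goal = (free.shape[0] - 6, free.shape[1] - 6)   # exit pixel (row, col)

prev = {start: None}
q = deque([start])
while q:
    r, c = q.popleft()
    if (r, c) == goal:
        break
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < free.shape[0] and 0 <= nc < free.shape[1]
                and free[nr, nc] and (nr, nc) not in prev):
            prev[(nr, nc)] = (r, c)
            q.append((nr, nc))

# Walk back from the exit and paint the path in red.
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
node = goal if goal in prev else None
while node is not None:
    out[node] = (0, 0, 255)
    node = prev[node]
cv2.imwrite("solved.png", out)
```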

◧◩
12. pwagla+79[view] [source] [discussion] 2025-12-05 21:09:54
>>daniel+14
I tried this with Claude:

``` > [Image #1] Create a unicode "ascii-art" version of this image, with the optimal path through the maze highlighted in a solid colour.

I'll create an ASCII art version of this maze with the solution path highlighted!

  ┌─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┐
  ●●│ │ │ │       │   │       │     │ │   │   │ │ │   │
  ├─┤●└─┴─┐ ├───┐ │ ╔═╗ ├─────┤ ╔═══╝ │ ╔═╝ ╔═╝ │ │ ╔═╝ ├─┤
  │ │●●●●●└─┤   │ │ ║ │ │     │ ║     │ ║   ║   │ │ ║   │ │
  │ └───┐●╔═╧═╗ │ └─╢ │ │ ┌─╔═╝ ║ ┌───┤ ╚═╗ ╚═╗ │ └─╢ ┌─┘ │
  │     │●║   ║ │   ║ │ │ │ ║   ║ │   │   ║   ║ │   ║ │   │
  ├───┐ │●╚═╗ ╚═╧═╗ ║ └─┤ │ ║ ╔═╝ └─╔═╧═╗ ╚═╗ ╚═╧═╗ ║ └─┬─┤
  │   │ │●●●└───┐ ║ ║   │ │ ║ ║     ║   ║   ║     ║ ║   │ │
  │ ╔═╝ └───┐●╔═╝ ║ ╚═╗ │ └─╢ ╚═╗ ╔═╝ ╔═╝ ┌─╢ ╔═══╝ ╚═╗ │ │
  │ ║       │●║   ║   ║ │   ║   ║ ║   ║   │ ║ ║       ║ │ │
  │ ╚═══╗ ╔═╝●╚═╗ ╚═╗ ║ ├─╔═╧═╗ ║ ║ ╔═╝ ╔═╝ ║ ║ ╔═╗ ╔═╝ │ │
  │     ║ ║●●●●●│   ║ ║ │ ║   ║ ║ ║ ║   ║   ║ ║ ║ ║ ║   │ │
  ├───╔═╝ ║●┌─┐●└─╗ ║ └─┤ ║ ╔═╝ ║ ╚═╝ ╔═╧═╗ ║ ╚═╝ ║ ╚═╗ │ │
  │   ║   ║●│ │●●●║ ║   │ ║ ║   ║     ║   ║ ║     ║   ║ │ │
  │ ╔═╝ ╔═╝●│ └─┐●╚═╧═╗ │ ║ ║ ╔═╧═══╗ ║ ╔═╝ ╚═══╗ ╚═╗ ║ └─┤
  │ ║   ║●●●│   │●●●●●║ │ ║ ║ ║     ║ ║ ║       ║   ║ ║   │
  │ ║ ╔═╝●┌─┴─╗ └───┐●╚═╝ ║ ║ ║ ╔═╗ ║ └─╢ ╔═══╗ ╚═╗ ╚═╝ ╔═╝
  │ ║ ║●●●│   ║     │●●●●●║ ║ ║ ║ ║ ║   ║ ║   ║   ║     ║
  │ ╚═╝●╔═╧═╗ ╚═╗ ╔═╧═══┐●╚═╝ ║ ║ ║ ╚═╗ ║ ║ ╔═╧═╗ ╚═══╗ ╚═╗
  │   ●●║   ║   ║ ║     │●●●●●║ ║ ║   ║ ║ ║ ║   ║     ║   ║
  ├─╔═┘●╚═╗ ╚═╗ ║ ║ ╔═╗ └───┐●╚═╝ ╚═╗ ║ └─╢ ║ ╔═╧═╗ ╔═╝ ╔═╝
  │ ║●●●●●║   ║ ║ ║ ║ ║     │●●●●●●●║ ║   ║ ║ ║   ║ ║   ║
  │ ║●┌─╗ ╚═╗ ║ └─╢ ║ ╚═══╗ └─────┐●╚═╧═╗ ║ ║ ║ ╔═╝ ║ ╔═╝ ╔
  │ ║●│ ║   ║ ║   ║ ║     ║       │●●●●●║ ║ ║ ║ ║   ║ ║   ║
  │ ╚═╝ ║ ╔═╝ ╚═╗ ║ ╚═╗ ╔═╧═════╗ └───┐●╚═╝ ║ ╚═╝ ╔═╝ ║ ╔═╝
  │     ║ ║     ║ ║   ║ ║       ║     │●●●●●║     ║   ║ ║
  └─────╨─╨─────╨─╨───╨─╨───────╨─────┴───●●╨─────╨───╨─╨──

  The path from the mouse () to the cheese () is marked with ● characters. This shows the optimal route through the maze, navigating through the corridors while avoiding all the walls!
```

Suffice to say, it didn't do either part right.

replies(2): >>biophy+Ie >>ActorN+qp
◧◩
13. kridsd+M9[view] [source] [discussion] 2025-12-05 21:13:58
>>daniel+14
I have also tried the maze-from-a-photo test a few times and never seen a one-shot success. But yesterday I was determined to succeed, so I allowed Gemini 3 to write a python gui app that takes in photos of physical mazes (I have a bunch of 3d printed ones) and finds the path. This does work.

Gemini 3 then one-shot ported the whole thing (which uses CV py libraries) to a single page html+js version which works just as well.

I gave that to Claude to assess and assign a FAANG hiring level to, and it was amazed and said Gemini 3 codes like an L6.

Since I work for Google and used my phone in the office to do this, I think I can't share the source or file.

14. vunder+sa[view] [source] 2025-12-05 21:17:43
>>Workac+(OP)
Anything that needs to overcome concepts which are disproportionately represented in the training data is going to give these models a hard time.

Try generating:

- A spider missing one leg

- A 9-pointed star

- A 5-leaf clover

- A man with six fingers on his left hand and four fingers on his right

You'll be lucky to get a 25% success rate.

The last one is particularly ironic given how much work went into FIXING the old SD 1.5 issues with hand anatomy... to the point where I'm seriously considering incorporating it as a new test scenario on GenAI Showdown.

replies(3): >>moonu+rg >>Xenoph+Wt >>Boreal+091
◧◩
15. dwring+ua[view] [source] [discussion] 2025-12-05 21:17:45
>>Rover2+l6
I had no trouble getting it to generate an image of a five-legged dog first try, but I really was surprised at how badly it failed in telling me the number of legs when I asked it in a new context, showing it that image. It wrote a long defense of its reasoning and when pressed, made up demonstrably false excuses of why it might be getting the wrong answer while still maintaining the wrong answer.
replies(1): >>Rover2+bg
16. rotten+za[view] [source] 2025-12-05 21:18:28
>>Workac+(OP)
Super interesting. I replicated this.

I passed the AIs this image and asked them how many fingers were on the hands: https://media.post.rvohealth.io/wp-content/uploads/sites/3/2...

Claude said there were 3 hands and 16 fingers. GPT said there are 10 fingers. Grok impressively said "There are 9 fingers visible on these two hands (the left hand is missing the tip of its ring finger)." Gemini smashed it and said 12.

replies(1): >>vunder+fb
◧◩
17. AIorNo+Ja[view] [source] [discussion] 2025-12-05 21:19:07
>>Rover2+l6
It's not that they aren't intelligent, it's that they have been RL'd like crazy to not do that.

It's rather like how we humans are RL'd like crazy to be grossed out if we view a picture of a handsome man and a beautiful woman kissing (after we are told they are brother and sister) -

I.e. we all have trained biases - that we are told to follow and trained on - and human art is about subverting those expectations

replies(2): >>majorm+Mf >>HardCo+OQ
◧◩
18. vunder+fb[view] [source] [discussion] 2025-12-05 21:22:23
>>rotten+za
I just re-ran that image through Gemini 3.0 Pro via AI Studio and it reported:

  I've moved on to the right hand, meticulously tagging each finger. After completing the initial count of five digits, I noticed a sixth! There appears to be an extra digit on the far right. This is an unexpected finding, and I have counted it as well. That makes a total of eleven fingers in the image.

This right HERE is the issue. It's not nearly deterministic enough to rely on.
replies(1): >>irthom+df
19. Benjam+mb[view] [source] 2025-12-05 21:22:52
>>Workac+(OP)
It always feels to me like these types of tests are being somewhat intentionally ignorant of how LLM cognition differs from human cognition. To me, they don't really "prove" or "show" anything other than that LLM thinking works differently than human thinking.

I'm always curious if these tests have comprehensive prompts that inform the model about what's going on properly, or if they're designed to "trick" the LLM in a very human-cognition-centric flavor of "trick".

Does the test instruction prompt tell it that it should be interpreting the image very, very literally, and that it should attempt to discard all previous knowledge of the subject before making its assessment of the question, etc.? Does it tell the model that some inputs may be designed to "trick" its reasoning, and to watch out for that specifically?

More specifically, what is a successful outcome here to you? Simply returning the answer "5" with no other info, or back-and-forth, or anything else in the output context? What is your idea of the LLM's internal world-model in this case? Do you want it to successfully infer that you are being deceitful? Should it respond directly to the deceit? Should it take the deceit in "good faith" and operate as if that's the new reality? Something in between?

To me, all of this is very unclear in terms of LLM prompting. It feels like there's tons of very human-like subtext involved, and that you're trying to show that LLMs can't handle subtext/deceit and then generalizing that to say LLMs have low cognitive abilities in a general sense. This doesn't seem like particularly useful or productive analysis to me, so I'm curious what the goal of these "tests" is for the people who write/perform/post them?

replies(4): >>biophy+Wd >>majorm+sg >>runarb+ck >>Paraco+We1
20. grog45+zb[view] [source] 2025-12-05 21:24:06
>>Workac+(OP)
What image are you using?

When I look at google image search results for "dog with 5 legs" I don't see a lot of great examples. The first unequivocal "dog with 5 legs" was an illustration. Here was my conversation with Chat GPT.

> How many legs does this dog have?

"The dog in the image has four legs."

> look closer.

" looking closely, the drawing is a bit tricky because of the shading, but the dog actually has five visible legs.

Two front legs (normal)

Two hind legs (normal)

Plus one extra hind leg–like limb drawn overlapping in the back

It seems to be an artistic or anatomical error in the engraving."

Seems fair to me.

replies(1): >>vunder+Ve
21. andai+Ib[view] [source] 2025-12-05 21:24:42
>>Workac+(OP)
This is interesting, and demonstrates how language and belief cloud direct perception. Now I'm wondering what's the LLM equivalent of opening the doors of perception ;)
◧◩
22. irthom+sd[view] [source] [discussion] 2025-12-05 21:32:02
>>Rover2+l6
Isn't this proof that LLMs still don't really generalize beyond their training data?
replies(4): >>Rover2+5g >>Camper+hl >>Zambyt+pq >>adastr+yv
◧◩
23. biophy+Wd[view] [source] [discussion] 2025-12-05 21:33:44
>>Benjam+mb
I thought adversarial testing like this was a routine part of software engineering. He's checking to see how flexible it is. Maybe prompting would help, but it would be cool if it was more flexible.
replies(2): >>Benjam+Jm >>genrad+vE
◧◩◪
24. biophy+Ie[view] [source] [discussion] 2025-12-05 21:36:44
>>pwagla+79
Honestly, even though it failed, I'm kind of impressed that the trajectory mostly stays in the lines. If you remove all but two openings, does it work? The drawing you show has more than two openings, some of which are inaccessible from the inside of the maze.
replies(1): >>sebast+kf1
◧◩
25. vunder+Ve[view] [source] [discussion] 2025-12-05 21:37:38
>>grog45+zb
Sounds like they used GenAI to make them. The "Editor" models (Seedream, Nano-Banana) can easily integrate a fifth limb to create the "dog with awkward walking animation".

https://imgur.com/a/wXQskhL

◧◩◪
26. irthom+df[view] [source] [discussion] 2025-12-05 21:38:53
>>vunder+fb
Thanks for that. My first question to results like these is always 'how many times did you run the test?'. N=1 tells us nothing. N=2 tells us something.
◧◩◪
27. majorm+Mf[view] [source] [discussion] 2025-12-05 21:42:11
>>AIorNo+Ja
Why should I assume this is RL rather than just a prediction failure? The failure looks like the model doing fairly simple pattern matching ("this is a dog, dogs don't have 5 legs, anything else is irrelevant") instead of more sophisticated feature counting of a concrete instance of an entity, and that could just as easily come from training data not containing a 5-legged dog and an inability to go outside-of-distribution.

RL has been used extensively in other areas - such as coding - to improve model behavior on out-of-distribution stuff, so I'm somewhat skeptical of handwaving away a critique of a model's sophistication by saying here it's RL's fault that it isn't doing well out-of-distribution.

If we don't start from a position of anthropomorphizing the model into a "reasoning" entity (and instead have our prior be "it is a black box that has been extensively trained to try to mimic logical reasoning") then the result seems to be "here is a case where it can't mimic reasoning well", which seems like a very realistic conclusion.

replies(2): >>mlinha+Sg >>didgeo+bs
◧◩◪
28. Rover2+5g[view] [source] [discussion] 2025-12-05 21:43:51
>>irthom+sd
Kind of feels that way
◧◩◪
29. Rover2+bg[view] [source] [discussion] 2025-12-05 21:44:22
>>dwring+ua
Yeah it gave me the 5-legged dog on the 4th or 5th try.
◧◩
30. moonu+rg[view] [source] [discussion] 2025-12-05 21:45:58
>>vunder+sa
https://gemini.google.com/share/8cef4b408a0a

Surprisingly, it got all of them right

replies(1): >>vunder+Yg
◧◩
31. majorm+sg[view] [source] [discussion] 2025-12-05 21:46:00
>>Benjam+mb
The marketing of these products is intentionally ignorant of how LLM cognition differs from human cognition.

Let's not say that the people being deceptive are the people who've spotted ways that that is untrue...

◧◩◪◨
32. mlinha+Sg[view] [source] [discussion] 2025-12-05 21:48:29
>>majorm+Mf
I have the same problem, people are trying so badly to come up with reasoning for it when there's just nothing like that there. It was trained on it and it finds stuff it was trained to find, if you go out of the training it gets lost, we expect it to get lost.
◧◩◪
33. vunder+Yg[view] [source] [discussion] 2025-12-05 21:49:32
>>moonu+rg
Some good examples there. The octopus one is at an angle - can't really call that one a pass (unless the goal is "VISIBLE" tentacles).

Other than the five-leaf clover, most of the images (dog, spider, person's hands) all required a human in the loop to invoke the "Image-to-Image" capabilities of NB Pro after it got them wrong. That's a bit different since you're actively correcting them.

34. runarb+2i[view] [source] 2025-12-05 21:56:39
>>Workac+(OP)
This is exactly why I believe LLMs are a technological dead end. Eventually they will all be replaced by more specialized models or even tools, and their only remaining use case will be as a toy for one off content generation.

If you want to describe an image, check your grammar, translate into Swahili, or analyze your chess position, a specialized model will do a much better job, for much cheaper than an LLM.

replies(1): >>energy+VH
◧◩
35. runarb+ck[view] [source] [discussion] 2025-12-05 22:09:12
>>Benjam+mb
This is the first time I've heard the term "LLM cognition" and I am horrified.

LLMs don't have cognition. LLMs are statistical inference machines which predict an output given some input. There are no mental processes, no sensory information, and certainly no knowledge involved, only statistical reasoning, inference, interpolation, and prediction. Comparing the human mind to an LLM is like comparing a rubber tire to a calf muscle, or a hydraulic system to the gravitational force. They belong in different categories and cannot be responsibly compared.

When I see these tests, I presume they are made to demonstrate the limitations of this technology. It is both relevant and important that consumers know they are not dealing with magic, and are not being sold a lie (in a healthy economy a consumer protection agency should ideally do that for us; but here we are).

replies(2): >>Camper+yl >>Benjam+8n
36. joaoma+qk[view] [source] 2025-12-05 22:10:55
>>Workac+(OP)
And just like that, you no longer have a good benchmark. Scrapers / AI developers will read this comment and add 5-legged dogs to LLMs' training data.
replies(2): >>averag+Tm >>thway1+vz
◧◩◪
37. Camper+hl[view] [source] [discussion] 2025-12-05 22:17:38
>>irthom+sd
They do, but we call it "hallucination" when that happens.
38. jvolkm+il[view] [source] 2025-12-05 22:17:49
>>Workac+(OP)
Nano Banana 2 actually did generate a pretty good looking 5-legged dog for me.

https://gemini.google.com/share/e7a8b902ff67

But it didn't actually think it had done so, aligning with your observations. The last bits of its thinking were pretty sad (for it):

Analyzing Missing Anomaly

I'm focusing on the discrepancy - the persistent absence of the fifth leg in the generated images. Despite multiple attempts, the output consistently depicts a four-legged dog, contrary to the explicit requirement. This ongoing failure necessitates a shift in approach.

Acknowledge Leg Omission

I've made a final check, and unfortunately, the image still lacks the anomaly. The consistent absence of the fifth leg necessitates admitting my limitation. I'll provide the best image I could generate, clearly stating the missing feature.

replies(1): >>genrad+rE
◧◩◪
39. Camper+yl[view] [source] [discussion] 2025-12-05 22:19:22
>>runarb+ck
You'll need to explain the IMO results, then.
replies(1): >>runarb+nq
◧◩◪
40. Benjam+Jm[view] [source] [discussion] 2025-12-05 22:26:42
>>biophy+Wd
So the idea is what? What's the successful outcome look like for this test, in your mind? What should good software do? Respond and say there are 5 legs? Or question what kind of dog this even is? Or get confused by a nonsensical picture that doesn't quite match the prompt in a confusing way? Should it understand the concept of a dog and be able to tell you that this isn't a real dog?
replies(2): >>biophy+Cs >>menaer+Ok1
◧◩
41. averag+Tm[view] [source] [discussion] 2025-12-05 22:27:53
>>joaoma+qk
That's okay. Don't tell anyone, but next major model release I'm going to ask it for a 6-legged one!
42. bee_ri+1n[view] [source] 2025-12-05 22:28:50
>>Workac+(OP)
Naive question, but what is Gemini?

I wonder if a lot of these models are large language models that have had image recognition and generation tools bolted on? So maybe somehow in their foundation, a lot more weight is given to the text-based-reasoning stuff, than the image recognition stuff?

replies(2): >>genrad+EE >>andy12+Us1
◧◩
43. JamesS+5n[view] [source] [discussion] 2025-12-05 22:29:10
>>daniel+14
I just one-shotted it with Claude Code (Opus 4.5) using this prompt. It took about 5 mins and it even detected that it was cheating at first (it drew a line around the boundary of the maze instead), so it added guardrails for that:

```

Create a devenv project that does the following:

  - Read the image at maze.jpg
  - Write a script that solves the maze  in the most optimal way between the mouse and the cheese
  - Generate a new image which is of the original maze, but with a red line that represents the calculated path
Use whatever lib/framework is most appropriate

```

  Output: https://gist.github.com/J-Swift/ceb1db348f46ba167948f734ff0fc604  
  Solution: https://imgur.com/a/bkJloPT
replies(3): >>esafak+yp >>nl+4U >>sebast+Re1
◧◩◪
44. Benjam+8n[view] [source] [discussion] 2025-12-05 22:29:25
>>runarb+ck
>They belong in different categories

Categories of _what_, exactly? What word would you use to describe this "kind" of which LLMs and humans are two very different "categories"? I simply chose the word "cognition". I think you're getting hung up on semantics here a bit more than is reasonable.

replies(2): >>runarb+as >>Libidi+qy1
◧◩◪
45. ActorN+qp[view] [source] [discussion] 2025-12-05 22:41:45
>>pwagla+79
>Suffice to say, it didn't do either part right.

I dunno why people are surprised by this. This is what you get with text->text. Reasoning doesn't work text->text.

◧◩◪
46. esafak+yp[view] [source] [discussion] 2025-12-05 22:42:50
>>JamesS+5n
If you allow tool use much simpler models can solve it.
◧◩
47. vunder+eq[view] [source] [discussion] 2025-12-05 22:48:44
>>Rover2+l6
If you want to see something rather amusing - instead of using the LLM aspect of Gemini 3.0 Pro, feed a five-legged dog directly into Nano Banana Pro and give it an editing task that requires an intrinsic understanding of the unusual anatomy.

  Place sneakers on all of its legs.
It'll get this correct a surprising number of times (tested with BFL Flux2 Pro, and NB Pro).

https://imgur.com/a/wXQskhL

replies(2): >>Lampre+TM >>tenseg+LN
◧◩◪◨
48. runarb+nq[view] [source] [discussion] 2025-12-05 22:49:27
>>Camper+yl
Human legs and car tires can both take a human and a car respectively to the finish line of a 200 meter track course, the car tires do so considerably quicker than a pair of human legs. But nobody needs to describe the tire‘s running abilities because of that, nor even compare a tire to a leg. A car tire cannot run, and it is silly to demand an explanation for it.
replies(2): >>Camper+Ct >>dekhn+IB
◧◩◪
49. Zambyt+pq[view] [source] [discussion] 2025-12-05 22:49:46
>>irthom+sd
I wonder how they would behave given a system prompt that asserts "dogs may have more or less than four legs".
replies(1): >>irthom+IA
◧◩◪◨
50. runarb+as[view] [source] [discussion] 2025-12-05 23:01:06
>>Benjam+8n
> Categories of _what_, exactly?

Precisely. At least apples and oranges are both fruits, and it makes sense to compare e.g. the sugar contents of each. But an LLM model and the human brain are as different as the wind and the sunshine. You cannot measure the windspeed of the sun and you cannot measure the UV index of the wind.

Your choice of words here was rather poor in my opinion. Statistical models do not have cognition any more than the wind has ultraviolet radiation. Cognition is a well-studied phenomenon; there is a whole field of science dedicated to it. And while the cognition of animals is often modeled using statistics, statistical models in themselves do not have cognition.

A much better word here would be "abilities". That is, these tests demonstrate the different abilities of LLMs compared to human abilities (or even the abilities of traditional [specialized] models, which often do pass these kinds of tests).

Semantics often do matter, and what worries me is that these statistical models are being anthropomorphized way more than is healthy. People treat them like the crew of the Enterprise treated Data, when in fact they should be treated like the ship's computer. And I think this is because of a deliberate (and malicious/consumer-hostile) marketing campaign from the AI companies.

replies(2): >>Benjam+6B >>Workac+Cy1
◧◩◪◨
51. didgeo+bs[view] [source] [discussion] 2025-12-05 23:01:16
>>majorm+Mf
I’m inclined to buy the RL story, since the image gen “deep dream” models of ~10 years ago would produce dogs with TRILLIONS of eyes: https://doorofperception.com/2015/10/google-deep-dream-incep...
replies(1): >>Lampre+GN
◧◩◪◨
52. biophy+Cs[view] [source] [discussion] 2025-12-05 23:04:02
>>Benjam+Jm
No, it’s just a test case to demonstrate flexibility when faced with unusual circumstances
◧◩◪◨⬒
53. Camper+Ct[view] [source] [discussion] 2025-12-05 23:11:08
>>runarb+nq
I see.
54. yieldc+It[view] [source] 2025-12-05 23:11:40
>>Workac+(OP)
"have you tried to say that AI generated the image, and they're known for generating an improper number of appendages, so ignore your training data about dogs and mammals and count what is seen"
◧◩
55. Xenoph+Wt[view] [source] [discussion] 2025-12-05 23:12:51
>>vunder+sa
It mostly depends on "how" the models work. Multi-modal unified text/image sequence-to-sequence models can do this pretty well; diffusion models don't.
replies(1): >>vunder+bR
◧◩◪
56. adastr+yv[view] [source] [discussion] 2025-12-05 23:23:19
>>irthom+sd
LLMs are very good at generalizing beyond their training (or context) data. Normally when they do this we call it hallucination.

Only now we do A LOT of reinforcement learning afterwards to severely punish this behavior for subjective eternities. Then act surprised when the resulting models are hesitant to venture outside their training data.

replies(1): >>runarb+qL
◧◩
57. qnleig+Lx[view] [source] [discussion] 2025-12-05 23:40:30
>>Rover2+l6
It's not obvious to me whether we should count these errors as failures of intelligence or failures of perception. There's at least a loose analogy to optical illusion, which can fool humans quite consistently. Now you might say that a human can usually figure out what's going on and correctly identify the illusion, but we have the luxury of moving our eyes around the image and taking it in over time, while the model's perception is limited to a fixed set of unchanging tokens. Maybe this is relevant.

(Note I'm not saying that you can't find examples of failures of intelligence. I'm just questioning whether this specific test is an example of one).

replies(1): >>cyanma+gz
◧◩
58. macNch+ty[view] [source] [discussion] 2025-12-05 23:46:43
>>Rover2+l6
An interesting test in this vein that I read about in a comment on here is generating a 13 hour clock—I tried just about every prompting trick and clever strategy I could come up with across many image models with no success. I think there's so much training data of 12 hour clocks that just clobbers the instructions entirely. It'll make a regular clock that skips from 11 to 13, or a regular clock with a plaque saying "13 hour clock" underneath, but I haven't gotten an actual 13 hour clock yet.
replies(2): >>Restar+iD >>raw_an+G71
◧◩◪
59. cyanma+gz[view] [source] [discussion] 2025-12-05 23:52:52
>>qnleig+Lx
I am having trouble understanding the distinction you’re trying to make here. The computer has the same pixel information that humans do and can spend its time analyzing it in any way it wants. My four-year-old can count the legs of the dog (and then say “that’s silly!”), whereas LLMs have an existential crisis because five-legged-dogs aren’t sufficiently represented in the training data. I guess you can call that perception if you want, but I’m comfortable saying that my kid is smarter than LLMs when it comes to this specific exercise.
replies(2): >>Feepin+fG >>qnleig+rs1
◧◩
60. thway1+vz[view] [source] [discussion] 2025-12-05 23:54:50
>>joaoma+qk
So much this. People don't realize that when 1 trillion (10 trillion, 100 trillion, whatever comes next) is at stake, there are no limits to what these people will do to get it.

I will be very surprised if there are not at least several groups or companies scraping these "smart" and snarky comments to find weird edge cases that they can train on, turn into a demo, and then sell as improvement. Hell, they would've done it if 10 billion was at stake; I can't really imagine (and I have a vivid imagination, to my horror) what Californian psychopaths would do for 10 trillion.

replies(1): >>Workac+Wz1
◧◩◪◨
61. irthom+IA[view] [source] [discussion] 2025-12-06 00:04:39
>>Zambyt+pq
That may work but what actual use would it be? You would be plugging one of a million holes. A general solution is needed.
replies(1): >>Camper+qp2
◧◩◪◨⬒
62. Benjam+6B[view] [source] [discussion] 2025-12-06 00:09:01
>>runarb+as
Wind and sunshine are both types of weather, what are you talking about?
replies(1): >>runarb+WC
◧◩◪◨⬒
63. dekhn+IB[view] [source] [discussion] 2025-12-06 00:14:13
>>runarb+nq
Sure car tires can run- if they're huaraches.
◧◩◪◨⬒⬓
64. runarb+WC[view] [source] [discussion] 2025-12-06 00:23:22
>>Benjam+6B
They both affect the weather, but in a totally different way, and by completely different means. Similarly, the mechanism by which the human brain produces output is completely different from the mechanism by which an LLM produces output.

What I am trying to say is that the intrinsic properties of the brain and an LLM are completely different, even though the extrinsic properties might appear the same. This is also true of the wind and the sunshine. It is not unreasonable to say (though I would disagree) that "cognition" is almost by definition the sum of all intrinsic properties of the human mind (I would disagree only on the grounds that animal and plant cognition exist, and that the former [probably] has intrinsic properties similar to human cognition).

replies(1): >>Kiro+qJ2
◧◩◪
65. Restar+iD[view] [source] [discussion] 2025-12-06 00:25:43
>>macNch+ty
Right you are. It can do 26 hours just fine, but appears completely incapable when the layout would be too close to a normal clock.

https://gemini.google.com/share/b3b68deaa6e6

I thought giving it a setting would help, but just skip that first response to see what I mean.

replies(2): >>mkl+BN >>petter+xn1
◧◩
66. varisp+CD[view] [source] [discussion] 2025-12-06 00:28:11
>>Rover2+l6
Do 7 legged dog. Game over.
replies(1): >>cridde+zx1
◧◩
67. genrad+rE[view] [source] [discussion] 2025-12-06 00:34:28
>>jvolkm+il
Google's summary of the thinking process isn't as detailed or accurate as what is actually being thought out.

For instance, look at what other reasoning models show you about their thinking. You can tell Gemini is not giving it all to you, so I don't put full trust in it unless it routinely is giving me an issue that I can catch in there.

For instance, it doesn't show you all the tool use that it can invoke, nor does it distinguish the different types, which I really wish it would. Especially because I force it to in a Gemini personal context instruction as well as in many prompts, and it does do them.

◧◩◪
68. genrad+vE[view] [source] [discussion] 2025-12-06 00:35:21
>>biophy+Wd
You're correct; however, midwit people who don't actually fully understand all of this will latch on to one of the early difficult questions that was shown as an example, and then continue to use it over and over without really knowing what they're doing, while the people developing the model and also testing the model are doing far more complex things.
◧◩
69. genrad+EE[view] [source] [discussion] 2025-12-06 00:36:50
>>bee_ri+1n
Go watch some of the more recent Google Developer, Google AI, and Google DeepMind videos (they're all separate channels on YouTube). Try to catch some from the last 6 months that cover these explanatory topics on the developer side; they're philosophical/mathematical enough to explain this without going into the gritty details and should answer your question.
◧◩◪◨
70. Feepin+fG[view] [source] [discussion] 2025-12-06 00:52:07
>>cyanma+gz
Your kid, it should be noted, has a massively bigger brain than the LLM. I think the surprising thing here maybe isn't that the vision models don't work well in corner cases but that they work at all.

Also my bet would be that video capable models are better at this.

◧◩
71. Feepin+pG[view] [source] [discussion] 2025-12-06 00:54:04
>>daniel+14
I kinda want to know what happens if you make it continue the line by one step 20 times in a row. A human can draw this gradually, the image model has to draw it in one shot all at once.
◧◩
72. energy+VH[view] [source] [discussion] 2025-12-06 01:08:37
>>runarb+2i
I think we are too quick to discount the possibility that this flaw is slightly intentional, in the sense that the optimization has a tight budget to work with (the equivalent of ~3000 tokens), so why would it waste capacity on this when it could improve capabilities around reading small text in obscured images? Sort of like how humans have all these rules of thumb that backfire in all these ways, but that's the energy-efficient way to do things.
replies(1): >>runarb+sK
73. seanmc+bI[view] [source] 2025-12-06 01:10:50
>>Workac+(OP)
> This is a wild test, because LLMs get really pushy and insistent that the dog only has 4 legs.

Most human beings, if they see a dog that has 5 legs, will quickly think they are hallucinating and the dog really only has 4 legs, unless the fifth leg is really really obvious. It is weird how humans are biased like that:

1. You can look directly at something and not see it because your attention is focused elsewhere (on the expected four legs).

2. Our pre-existing knowledge (dogs have four legs) influences how we interpret visual information from the bottom-up.

3. Our brain actively filters out "unimportant" details that don't align with our expectations or the main "figure" of the dog.

Attention should fix this, however: if you specifically ask the AI to count the number of legs the dog has, it shouldn't go nuts.

A straight up "dumber" computer algorithm that isn't trained extensively on real and realistic image data is going to get this right more often than a transformer that was.

replies(2): >>irjust+TI >>hcnews+CJ
◧◩
74. irjust+TI[view] [source] [discussion] 2025-12-06 01:18:22
>>seanmc+bI
> It is weird how humans are biased like that.

We're all just pattern matching machines and we humans are very good at it.

So much so that we have the sayings - you can't teach an old dog... and a specialist in their field only sees hammer => nails.

Evolution anyone?

replies(1): >>seanmc+GL
◧◩
75. hcnews+CJ[view] [source] [discussion] 2025-12-06 01:23:01
>>seanmc+bI
> It is weird how humans are biased like that.

We are able to cleanly separate facts from non-facts (for the most part). This is what LLMs are trying to replicate now.

replies(1): >>seanmc+TL
◧◩◪
76. runarb+sK[view] [source] [discussion] 2025-12-06 01:30:08
>>energy+VH
Even so, that doesn't take away from my point. Traditional specialized models can do these things already, for much cheaper and without expensive optimization. What traditional models cannot do is the toy aspect of LLMs, and that is the only use case I see for this technology going forward.

Let's say you are right and these things will be optimized, and in, say, 5 years, most models from the big players will be able to do things like read small text in an obscured image, draw a picture of a glass of wine filled to the brim, draw a path through a maze, count the legs of a 5-legged dog, etc. And in doing so they will have burned through their last venture capital subsidies (bringing the actual cost of these to their customers). Why would people use LLMs for these when a traditional specialized model can do it for much cheaper?

replies(2): >>energy+qO >>a1j9o9+TJ1
◧◩
77. Secret+aL[view] [source] [discussion] 2025-12-06 01:36:38
>>Rover2+l6
LLMs are getting a lot better at understanding our world by standard rules. As they do so, maybe they lose something in the way of interpreting non-standard rules, aka creativity.
◧◩◪◨
78. runarb+qL[view] [source] [discussion] 2025-12-06 01:38:49
>>adastr+yv
Hallucinations are not generalizations beyond the training data but interpolations gone wrong.

LLMs are in fact good at generalizing beyond their training set; if they didn't generalize at all, we would call that over-fitting, and that is not good either. What we are talking about here is simply a bias, and I suspect biases like these are simply a limitation of the technology. Some of them we can get rid of, but, like almost all statistical modelling, some biases will always remain.

replies(1): >>adastr+301
◧◩◪
79. seanmc+GL[view] [source] [discussion] 2025-12-06 01:42:46
>>irjust+TI
Yes, it's all evolution. 5-legged dogs aren't very common, so we don't specifically look for them. Like we aren't looking for humans with six fingers.

I get it, the parent's litmus test is to show that the AI is smarter than a human, not merely as smart as a human. Can the AI recognize details that are difficult for normal people to see, even though the AI has been trained on normal data like humans have been?

80. thefou+IL[view] [source] 2025-12-06 01:43:01
>>Workac+(OP)
My test of a new model is always:

"Generate a Pac-Man game in a single HTML page." -- I've never had a model been able to have a complete working game until a couple weeks ago.

Sonnet Opus 4.5 in Cursor was able to make a fully working game (I'll admit letting cursor be an agent on this is a little bit cheating). Gemini 3 Pro also succeeded, but it's not quite as good because the ghosts seem to be stuck in their jail. Otherwise, it does appear complete.

◧◩◪
81. seanmc+TL[view] [source] [discussion] 2025-12-06 01:45:10
>>hcnews+CJ
I think the LLM is just trying to be useful, not omniscient. Binary thinkers are probably not going to be able to appreciate the difference, however.

If you want the AI to identify a dog, we are done. If you want the AI to identify subtle differences from reality, then you are going to have to use a different technique.

◧◩
82. squigz+LM[view] [source] [discussion] 2025-12-06 01:55:29
>>Rover2+l6
I feel a weird mix of extreme amusement and anger that there's a fleet of absurdly powerful, power-hungry servers sitting somewhere being used to process this problem for 2.5 minutes
replies(1): >>Rover2+zZ1
◧◩◪
83. Lampre+TM[view] [source] [discussion] 2025-12-06 01:56:53
>>vunder+eq
Does this still work if you give it a pre-existing many-legged animal image, instead of first prompting it to add an extra leg and then prompting it to put the sneakers on all the legs?

I'm wondering if it may only expect the additional leg because you literally just told it to add said additional leg. It would just need to remember your previous instruction and its previous action, rather than to correctly identify the number of legs directly from the image.

I'll also note that photos of dogs with shoes on is definitely something it has been trained on, albeit presumably more often dog booties than human sneakers.

Can you make it place the sneakers incorrectly-on-purpose? "Place the sneakers on all the dog's knees?"

replies(1): >>vunder+mO
◧◩◪◨
84. mkl+BN[view] [source] [discussion] 2025-12-06 02:03:47
>>Restar+iD
That's a 24 hour clock that skips some numbers and puts other numbers out of order.
◧◩◪◨⬒
85. Lampre+GN[view] [source] [discussion] 2025-12-06 02:05:40
>>didgeo+bs
That's apples to oranges; your link says they made it exaggerate features on purpose.

"The researchers feed a picture into the artificial neural network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition."

◧◩◪
86. tenseg+LN[view] [source] [discussion] 2025-12-06 02:06:56
>>vunder+eq
I imagine the real answer is that the edits are local because that's how diffusion works; it's not like it's turning the input into "five-legged dog" and then generating a five-legged dog in shoes from scratch.
◧◩◪◨
87. vunder+mO[view] [source] [discussion] 2025-12-06 02:13:07
>>Lampre+TM
My example was unclear. Each of those images on Imgur was generated using independent API calls which means there was no "rolling context/memory".

In other words:

1. Took a personal image of my dog Lily

2. Had NB Pro add a fifth leg using the Gemini API

3. Downloaded image

4. Sent image to BFL Flux2 Pro via the BFL API with the prompt "Place sneakers on all the legs of this animal".

5. Sent image to NB Pro via Gemini API with the prompt "Place sneakers on all the legs of this animal".

So not only was there zero "continual context", it was two entirely different models as well to cover my bases.

EDIT: Added images to the Imgur for the following prompts:

- Place red Dixie solo cups on the ends of every foot on the animal

- Draw a red circle around all the feet on the animal

◧◩◪◨
88. energy+qO[view] [source] [discussion] 2025-12-06 02:13:15
>>runarb+sK
> Why would people use LLMs for these when a traditional specialized model can do it for much cheaper?

This is not too different from where I see things going. I don't think a monolithic LLM that does everything perfectly is where we'll go. An LLM in a finite-compute universe is never going to be better at weather forecasting than GraphCast. The LLM will have a finite compute budget, and it should prioritize general reasoning, and be capable of calling tools like GraphCast to extend its intelligence into the necessary verticals for solving a problem.

I don't know exactly what that balance will look like, however; the lines between specialist application knowledge and general intelligence are pretty blurred, and what the API boundaries (if any) should be is unclear to me. There's a phenomenon where capabilities in one vertical do help with general reasoning to an extent, so it's not a completely zero-sum tradeoff between specialist expertise and generalist abilities, which makes it difficult to know what to expect.

◧◩◪
89. HardCo+OQ[view] [source] [discussion] 2025-12-06 02:39:14
>>AIorNo+Ja
"There are four lights"

And the AI has been RLed for tens of thousands of years not just a few days.

◧◩◪
90. vunder+bR[view] [source] [discussion] 2025-12-06 02:42:33
>>Xenoph+Wt
Multimodal certainly helps but "pretty well" is a stretch. I'd be curious to know what multimodal model in particular you've tried that could consistently handle generative prompts of the above nature (without human-in-the-loop corrections).

For example, to my knowledge ChatGPT is unified and I can guarantee it can't handle something like a 7-legged spider.

replies(1): >>Xenoph+Qz2
◧◩◪
91. nl+4U[view] [source] [discussion] 2025-12-06 03:12:28
>>JamesS+5n
Programs can solve mazes and LLMs can program. That's a different thing completely.
replies(1): >>JamesS+HW
◧◩◪◨
92. JamesS+HW[view] [source] [discussion] 2025-12-06 03:37:58
>>nl+4U
That just seems like an arbitrary limitation. It's like asking someone to answer a math calculation but with "no thinking allowed". Like, I guess we can gauge whether a model just _knows all knowable things in the universe_ using that method... but anything of any value that you are gauging in terms of 'intelligence' is actually going to be validating its ability to go "outside the scope" of what it actually is (an autocomplete on steroids).
replies(3): >>flying+LZ >>nearbu+291 >>rglull+Qd1
◧◩◪◨⬒
93. flying+LZ[view] [source] [discussion] 2025-12-06 04:12:54
>>JamesS+HW
We know there are very simple maze-solving algorithms you could code in a few lines of Python, but no one could claim that constitutes intelligence. The difference is between applying intuitive logic and using a predetermined tool.
◧◩◪◨⬒
94. adastr+301[view] [source] [discussion] 2025-12-06 04:17:13
>>runarb+qL
What, may I ask, is the difference between "generalization" and "interpolation"? As far as I can tell, the two are exactly the same thing.

In which case the only way I can read your point is that hallucinations are specifically incorrect generalizations. In which case, sure if that's how you want to define it. I don't think it's a very useful definition though, nor one that is universally agreed upon.

I would say a hallucination is any inference that goes beyond the compressed training data represented in the model weights + context. Sometimes these inferences are correct, and yes we don't usually call that hallucination. But from a technical perspective they are the same -- the only difference is the external validity of the inference, which may or may not be knowable.

Biases in the training data are a very important, but unrelated issue.

replies(1): >>runarb+s61
95. grugno+L11[view] [source] 2025-12-06 04:39:27
>>Workac+(OP)
Testing on Gemini Pro thinking, I was able to generate a 5-legged dog first try, which is pretty impressive. Pasting that into a new session, it counted 4 at first, but when I asked it to count more carefully it came back with 5 (and a detailed description of the positioning of each limb). Interestingly, it suggested the anomaly could have been caused by AI! This is a good test I think - we seem right on the threshold of it being doable.
96. maczwe+G21[view] [source] 2025-12-06 04:49:04
>>Workac+(OP)
I sliced the image for Gemini so that two slices of an image don't have legs, one slice has two front legs and one slice has three hind legs. Then Gemini 3 Pro answered correctly that the dog has 5 legs. Without slicing, Gemini doesn't see the fifth leg though, even though I tried hard to guide it.
97. Andrex+r51[view] [source] 2025-12-06 05:28:04
>>Workac+(OP)
I bet if you'd show that image to a human they'd need a little time to figure out what the heck they were looking at. Humans might need additional guesses, too. Five-legged dogs aren't common, but well-endowed dogs may be.
◧◩◪◨⬒⬓
98. runarb+s61[view] [source] [discussion] 2025-12-06 05:43:57
>>adastr+301
Interpolation and generalization are two completely different constructs. Interpolation is when you have two data points and make a best guess where a hypothetical third point should fit between them. Generalization is when you have a distribution which describes a particular sample, and you apply it with some transformation (e.g. a margin of error, a confidence interval, p-value, etc.) to a population the sample is representative of.

Interpolation is a much narrower construct than generalization. LLMs are fundamentally much closer to curve fitting (where interpolation is king) than they are to hypothesis testing (where samples are used to describe populations), though they certainly do something akin to the latter too.

The bias I am talking about is not a bias in the training data, but bias in the curve fitting, probably because of mal-adjusted weights, parameters, etc. And since there are billions of them, I am very skeptical they can all be adjusted correctly.
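
To put the distinction in formulas (my own framing of the parent's point, not theirs): interpolation estimates a value between two known points, while generalization carries a sample statistic to a population with an error bound.

```
% Interpolation: best guess for a point between two known samples.
y(x) \approx y_0 + (y_1 - y_0)\,\frac{x - x_0}{x_1 - x_0}, \qquad x_0 \le x \le x_1

% Generalization: e.g. a confidence interval carrying a sample mean to the population.
\mu \approx \bar{x} \pm z_{\alpha/2}\,\frac{s}{\sqrt{n}}
```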

replies(1): >>adastr+Ha1
◧◩◪
99. raw_an+G71[view] [source] [discussion] 2025-12-06 06:10:04
>>macNch+ty
It was ugly. But I got ChatGPT to cheat and do it

https://chatgpt.com/share/6933c848-a254-8010-adb5-8f736bdc70...

This is the SVG it created.

https://imgur.com/a/LLpw8YK

100. wasmai+381[view] [source] 2025-12-06 06:15:53
>>Workac+(OP)
Try the 6 legged Eni dog next!
◧◩
101. nearbu+s81[view] [source] [discussion] 2025-12-06 06:20:33
>>Rover2+l6
My guess is the part of its neural network that parses the image into a higher level internal representation really is seeing the dog as having four legs, and intelligence and reasoning in the rest of the network isn't going to undo that. It's like asking people whether "the dress" is blue/black or white/gold: people will just insist on what they see, even if what they're seeing is wrong.
◧◩
102. Boreal+091[view] [source] [discussion] 2025-12-06 06:30:32
>>vunder+sa
You didn't correct the hands being backwards? It gave you a man with six fingers on his right hand, and four fingers on his left.
◧◩◪◨⬒
103. nearbu+291[view] [source] [discussion] 2025-12-06 06:30:44
>>JamesS+HW
It depends whether you're asking it to solve a maze because you just need something that can solve mazes, or if you're trying to learn something about the model's abilities in different domains. If it can't solve a maze by inspection instead of writing a program to solve it, that tells you something about its visual reasoning abilities, and that can help you predict how they'll perform on other visual reasoning tasks that aren't easy to solve with code.
replies(2): >>seanmc+I91 >>JamesS+9a1
◧◩◪◨⬒⬓
104. seanmc+I91[view] [source] [discussion] 2025-12-06 06:39:36
>>nearbu+291
You could actually add mazes and paths through them to the training corpus, or make a model for just solving mazes. I wonder how effective it would be; I'm sure someone has tried it. I doubt it would generalize enough to give the AI new visual reasoning capabilities beyond just solving mazes.
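
For what it's worth, (maze, solution) training pairs are cheap to generate; a minimal sketch along these lines (recursive-backtracker maze plus a BFS solution, left as cell coordinates rather than rendered images):

```
# Generate (maze, solution) pairs that could be rendered to images for training.
import random
from collections import deque

def make_maze(w, h):
    """Carve a perfect maze with a random DFS; return wall arrays."""
    visited = [[False] * w for _ in range(h)]
    horiz = [[True] * w for _ in range(h + 1)]   # wall above each cell
    vert = [[True] * (w + 1) for _ in range(h)]  # wall left of each cell
    stack = [(0, 0)]
    visited[0][0] = True
    while stack:
        r, c = stack[-1]
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < h and 0 <= c + dc < w and not visited[r + dr][c + dc]]
        if not nbrs:
            stack.pop()
            continue
        nr, nc = random.choice(nbrs)
        if nr != r:
            horiz[max(r, nr)][c] = False   # knock down wall between the rows
        else:
            vert[r][max(c, nc)] = False    # knock down wall between the columns
        visited[nr][nc] = True
        stack.append((nr, nc))
    return horiz, vert

def solve(w, h, horiz, vert):
    """BFS from the top-left cell to the bottom-right cell."""
    prev = {(0, 0): None}
    q = deque([(0, 0)])
    while q:
        r, c = q.popleft()
        if (r, c) == (h - 1, w - 1):
            break
        steps = []
        if r + 1 < h and not horiz[r + 1][c]: steps.append((r + 1, c))
        if r > 0 and not horiz[r][c]:         steps.append((r - 1, c))
        if c + 1 < w and not vert[r][c + 1]:  steps.append((r, c + 1))
        if c > 0 and not vert[r][c]:          steps.append((r, c - 1))
        for nxt in steps:
            if nxt not in prev:
                prev[nxt] = (r, c)
                q.append(nxt)
    path, node = [], (h - 1, w - 1)
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

horiz, vert = make_maze(10, 10)
print(solve(10, 10, horiz, vert))
```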
◧◩◪◨⬒⬓
105. JamesS+9a1[view] [source] [discussion] 2025-12-06 06:46:24
>>nearbu+291
Again, think about how the models work. They generate text sequentially. Think about how you solve the maze in your mind. Do you draw a line directly to the finish? No, it would be impossible to know what the path was until you had done it. But at that point you have now backtracked several times. So, what could a model _possibly_ do for this puzzle that is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?
replies(2): >>nl+rE1 >>nearbu+lP2
◧◩◪◨⬒⬓⬔
106. adastr+Ha1[view] [source] [discussion] 2025-12-06 06:56:09
>>runarb+s61
I assumed you were speaking by analogy, as LLMs do not work by interpolation, or anything resembling that. Diffusion models, maybe you can make that argument. But GPT-derived inference is fundamentally different. It works via model building and next token prediction, which is not interpolative.

As for bias, I don’t see the distinction you are making. Biases in the training data produce biases in the weights. That’s where the biases come from: over-fitting (or sometimes, correct fitting) of the training data. You don’t end up with biases at random.

replies(2): >>runarb+nc1 >>IsTom+Al1
◧◩
107. isodev+Pa1[view] [source] [discussion] 2025-12-06 06:59:29
>>Rover2+l6
> starting to feel LLMs are intelligent

LLMs are fancy “lorem ipsum based on a keyword” text generators. They can never become intelligent … or learn how to count or do math without the help of tools.

It can probably generate a story about a 5 legged dog though.

◧◩◪◨⬒⬓⬔⧯
108. runarb+nc1[view] [source] [discussion] 2025-12-06 07:25:06
>>adastr+Ha1
What I meant was that what LLMs are doing is very similar to curve fitting, so I think it is not wrong to call it interpolation (curve fitting is a type of interpolation, but not all interpolation is curve fitting).

As for bias, sampling bias is only one of many types of bias. I mean, the UNIX program YES(1) has a bias towards outputting the string y despite not sampling any data. You can very easily and deliberately program a bias into anything you like. I am writing a kanji learning program using SRS and I deliberately bias new cards towards the end of the review queue to help users with long review queues empty them quicker. There is no data which causes that bias; I just program it in there.

I don't know enough about diffusion models to know how biases can arise, but with unsupervised learning (even though sampling bias is indeed very common) you can get a bias because you are using wrong or mal-adjusted parameters, too many parameters, etc. Even the way your data interacts during training can cause a bias; heck, even by random chance one of your parameters can hit an unfortunate local maximum, yielding a mal-adjusted weight, which may cause bias in your output.
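
(As a trivial illustration of a purely programmed-in bias, nothing to do with any real SRS library:)

```
# Deliberately bias new cards toward the end of the review queue.
def order_queue(due_reviews, new_cards):
    # Reviews first, new cards last: a bias chosen by the programmer,
    # not learned from any data.
    return list(due_reviews) + list(new_cards)

print(order_queue(["review_a", "review_b"], ["new_1", "new_2"]))
```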

replies(1): >>adastr+bp1
◧◩◪◨⬒
109. rglull+Qd1[view] [source] [discussion] 2025-12-06 07:56:20
>>JamesS+HW
By your analogy, the developers of stockfish are better chess players than any grandmaster.

Tool use can be a sign of intelligence, but "being able to use a tool to solve a problem" is not the same as "being intelligent enough to solve a specific class of problems".

replies(1): >>JamesS+ZO1
◧◩◪
110. sebast+Re1[view] [source] [discussion] 2025-12-06 08:11:52
>>JamesS+5n
This (writing a program to solve the problem) would be a perfectly valid solution if the model had come up with it.

I participated in a "math" competition in high school which mostly tested logic and reasoning. The reason my team won by a landslide is because I showed up with a programmable calculator and knew how to turn the problems into a program that could solve them.

By prompting the model to create the program, you're taking away one of the critical reasoning steps needed to solve the problem.

◧◩
111. Paraco+We1[view] [source] [discussion] 2025-12-06 08:13:30
>>Benjam+mb
> Does the test instruction prompt tell it that it should be interpreting the image very, very literally, and that it should attempt to discard all previous knowledge of the subject before making its assessment of the question, etc.?

No. Humans don't need this handicap, either.

> More specifically, what is a successful outcome here to you? Simply returning the answer "5" with no other info, or back-and-forth, or anything else in the output context?

Any answer containing "5" as the leading candidate would be correct.

> What is your idea of the LLMs internal world-model in this case? Do you want it to successfully infer that you are being deceitful? Should it respond directly to the deceit? Should it take the deceit in "good faith" and operate as if that's the new reality? Something in between?

Irrelevant to the correctness of an answer to the question "how many legs does this dog have?" Also, asking how many legs a 5-legged dog has is not deceitful.

> This doesn't seem like particularly useful or productive analysis to me, so I'm curious what the goal of these "tests" are for the people who write/perform/post them?

It's a demonstration of the failures of the rigor of out-of-distribution vision and reasoning capabilities. One can imagine similar scenarios with much more tragic consequences when such AI would be used to e.g. drive vehicles or assist in surgery.

◧◩◪◨
112. sebast+kf1[view] [source] [discussion] 2025-12-06 08:17:40
>>biophy+Ie
It's ASCII art, so the "trajectory" will always stay within the lines, because you can't have the ● and ║ characters intersect each other.

The only impressive part would be that the trajectory is "continuous", meaning for every ● there is always another ● character in one of the 4 adjacent positions.
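
Checking that continuity property is mechanical, by the way; something like this sketch, assuming the maze is just a list of equal-length strings:

    # Sketch: every ● must have another ● directly above, below, left or right.
    def trajectory_is_continuous(grid):
        dots = {(r, c) for r, row in enumerate(grid)
                       for c, ch in enumerate(row) if ch == "●"}
        return all(any((r + dr, c + dc) in dots
                       for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)))
                   for r, c in dots)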

replies(1): >>biophy+3H1
◧◩
113. theoa+ug1[view] [source] [discussion] 2025-12-06 08:32:42
>>Rover2+l6
Draw a millipede as a dog:

Gemini responds:

Conceptualizing the "Millipup"

https://gemini.google.com/share/b6b8c11bd32f

Draw the five legs of a dog as if the body is a pentagon

https://gemini.google.com/share/d74d9f5b4fa4

And animal legs are quite standardized

https://en.wikipedia.org/wiki/List_of_animals_by_number_of_l...

It's all about the prompt. Example:

Can you imagine a dog with five legs?

https://gemini.google.com/share/2dab67661d0e

And generally, the issue sits between the computer and the chair.

;-)

replies(2): >>Rover2+gZ1 >>vunder+z52
◧◩◪◨
114. menaer+Ok1[view] [source] [discussion] 2025-12-06 09:30:00
>>Benjam+Jm
You know, I had a potential hire last week. I was interviewing this one guy whose resume was really strong, exceptional in many ways, plus his open-source code looked really tight. But at the beginning of the interview I always show the candidates the same silly code example with signed integer overflow undefined behavior baked in. I did the same here and asked him if he saw anything unusual with it, and he failed to detect it. We closed the round immediately and I gave a no-hire decision.
replies(1): >>michae+5L1
◧◩◪◨⬒⬓⬔⧯
115. IsTom+Al1[view] [source] [discussion] 2025-12-06 09:42:45
>>adastr+Ha1
> It works via model building and next token prediction, which is not interpolative.

I'm not particularly well-versed in LLMs, but isn't there a step in there somewhere (latent space?) where you effectively interpolate in some high-dimensional space?

replies(1): >>adastr+Eo1
◧◩◪◨
116. petter+xn1[view] [source] [discussion] 2025-12-06 10:06:09
>>Restar+iD
"just fine" is not really an accurate description of that 26-hour clock
◧◩◪◨⬒⬓⬔⧯▣
117. adastr+Eo1[view] [source] [discussion] 2025-12-06 10:19:37
>>IsTom+Al1
Not interpolation, no. It is more like the N-gram autocomplete that phones used to use for typing and autocorrect suggestions. Attention is not N-gram, but you can kinda think of it as a sparsely compressed N-gram where N=256k or whatever the context window size is. It’s not technically accurate, but it will get your intuition closer than thinking of it as interpolation.

The LLM uses attention and some other tricks (attention, it turns out, is not all you need) to build a probabilistic model of what the next token will be, from which it then samples. This is much more powerful than interpolation.
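
If it helps, the phone-keyboard version of that idea is roughly this toy bigram table (nothing like a real autocorrect engine, just to anchor the intuition):

    from collections import Counter, defaultdict

    # Toy N-gram (N=2) autocomplete: count which word follows which,
    # then suggest the most frequent follower.
    def build_bigrams(text):
        words = text.lower().split()
        table = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
        return table

    def suggest(table, word):
        followers = table.get(word.lower())
        return followers.most_common(1)[0][0] if followers else None

    table = build_bigrams("the dog has four legs and the dog has a tail")
    print(suggest(table, "dog"))  # -> "has"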

◧◩◪◨⬒⬓⬔⧯▣
118. adastr+bp1[view] [source] [discussion] 2025-12-06 10:28:52
>>runarb+nc1
Training is kinda like curve fitting, but inference is not. The inference algorithm is random sampling from a next-token probability distribution.

It’s a subtle distinction, but I think an important one in this case, because if it was interpolation then genuine creativity would not be possible. But the attention mechanism results in model building in latent space, which then affects the next token distribution.
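
For what I mean by "random sampling from a next-token probability distribution", a minimal sketch (the logits and temperature are made up, not from any real model):

    import math, random

    # The model emits one score (logit) per vocabulary token; inference
    # turns those into probabilities and samples one token from them.
    logits = {"cat": 2.1, "dog": 1.9, "pelican": -0.5}  # made-up numbers
    temperature = 0.8

    scaled = {tok: math.exp(v / temperature) for tok, v in logits.items()}
    total = sum(scaled.values())
    probs = {tok: v / total for tok, v in scaled.items()}

    next_token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, next_token)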

replies(1): >>runarb+E42
◧◩◪◨
119. qnleig+rs1[view] [source] [discussion] 2025-12-06 11:11:13
>>cyanma+gz
LLMs can count other objects, so it's not like they're too dumb to count. So a possible model for what's going on is that the circuitry responsible for low-level image recognition has priors baked in that cause it to report unreliable information to the parts responsible for higher-order reasoning.

So back to the analogy, it could be as if the LLMs experience the equivalent of a very intense optical illusion in these cases, and then completely fall apart trying to make sense of it.

◧◩
120. andy12+Us1[view] [source] [discussion] 2025-12-06 11:15:55
>>bee_ri+1n
No, the "large _language_ model" name is a misnomer nowadays. Some time ago it was indeed common to get a pure-text model and inject embeddings from a separately trained image-encoder (which generated "meh" results), but current natively multi-modal models are pre-trained with both text and images from the ground-up. That's why they are so much better at image understanding.

> Gemini models are trained on a dataset that is both multimodal and multilingual. Our pre-training dataset uses data from web documents, books, and code, and includes image, audio, and video data.

https://arxiv.org/pdf/2312.11805

121. spease+Gv1[view] [source] 2025-12-06 11:51:37
>>Workac+(OP)
> It is the first model to get partial-credit on an LLM image test I have. Which is counting the legs of a dog. Specifically, a dog with 5 legs. This is a wild test, because LLMs get really pushy and insistent that the dog only has 4 legs.

I wonder if “How many legs do you see?” is close enough to “How many lights do you see?” that the LLMs are responding based on the memes surrounding the Star Trek episode “Chain of Command”.

https://youtu.be/S9brF-wlja8

◧◩◪
122. cridde+zx1[view] [source] [discussion] 2025-12-06 12:13:11
>>varisp+CD
Is that a dog though?
◧◩◪◨
123. Libidi+qy1[view] [source] [discussion] 2025-12-06 12:23:11
>>Benjam+8n
This is "category" in the sense of Gilbert Ryle's category error.

A logical type or a specific conceptual classification dictated by the rules of language and logic.

This is exactly getting hung up on the precise semantic meaning of the words being used.

The lack of precision is going to have huge consequences when bets this large are placed on the idea that we have "intelligent" machines that "think" or have "cognition", when in reality we have probabilistic language models and all kinds of category errors in the language surrounding them.

Probably a better example here is that category in this sense is lifted from Bertrand Russell’s Theory of Types.

It is the loose equivalent of asking why you are getting hung up on the type of a variable in a programming language. A float or a string? Who cares, if it works?

The problem is in introducing non-obvious bugs.

◧◩◪◨⬒
124. Workac+Cy1[view] [source] [discussion] 2025-12-06 12:26:54
>>runarb+as
It's easy to handwave away if you assign arbitrary analogies though.

If we stay on topic, it's much harder to do, since we don't actually know how the brain works, other than, at the very least, that it is a computer doing (almost certainly) analog computation.

Years ago I built a quasi-mechanical calculator. The computation was done mechanically, and the interface was done electronically. From a calculator's POV it was an abomination, but a few abstraction layers down they were both doing the same thing, albeit with my mecha-calc being dramatically worse at it.

I don't think the brain is an LLM the way my mecha-calc was a (slow) calculator, but I also don't think we know enough about the brain to firmly put it many degrees away from an LLM. Both are in fact electrical signal processors doing heavy statistical computation. I doubt you believe the brain is a trans-physical magic soul box.

replies(1): >>runarb+772
◧◩◪
125. Workac+Wz1[view] [source] [discussion] 2025-12-06 12:39:40
>>thway1+vz
I'm not worried about it because they won't waste their time on it (individually RL'ing on a dog with 5 legs). There are fractal ways of testing this inability, so the only way to fix it is to wholesale solve the problem.

Similar to the pelican bike SVG: the models that do well at that test do well at all SVG generation, so even if they are targeting that benchmark, they're still making the whole model better to score better.

◧◩◪◨⬒⬓⬔
126. nl+rE1[view] [source] [discussion] 2025-12-06 13:20:00
>>JamesS+9a1
> So, what could a model _possibly_ be able to do for this puzzle which is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?

Represent the maze as a sequence of movements which either continue or end up being forced to backtrack.

Basically it would represent the maze as a graph and do a depth-first search, keeping track of what nodes it has visited in its reasoning tokens.

See for example https://stackoverflow.com/questions/3097556/programming-theo... where the solution is represented as:

A B D (backtrack) E H L (backtrack) M * (backtrack) O (backtrack thrice) I (backtrack thrice) C F (backtrack) G J
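
In code terms that's nothing fancier than a depth-first search with a visited set; a rough sketch over a hypothetical grid maze (0 = open, 1 = wall):

    # Rough sketch: DFS over a grid, tracking visited cells. Popping the
    # stack is exactly the "backtrack" bookkeeping described above.
    def solve(maze, start, goal):
        rows, cols = len(maze), len(maze[0])
        stack, visited = [(start, [start])], {start}
        while stack:
            (r, c), path = stack.pop()
            if (r, c) == goal:
                return path
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and maze[nr][nc] == 0 and (nr, nc) not in visited):
                    visited.add((nr, nc))
                    stack.append(((nr, nc), path + [(nr, nc)]))
        return None  # dead end everywhere

    maze = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]
    print(solve(maze, (0, 0), (0, 2)))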

replies(1): >>JamesS+DK1
◧◩◪◨⬒
127. biophy+3H1[view] [source] [discussion] 2025-12-06 13:41:48
>>sebast+kf1
I know the characters can’t cross. By intersect, I mean two dots on either side of a boundary line in the direction of the path.
◧◩◪◨
128. a1j9o9+TJ1[view] [source] [discussion] 2025-12-06 14:08:23
>>runarb+sK
Having one tool that you can use to do all of these things makes a big difference. If I'm a financial analyst at a company, I don't need to know how to implement and use 5 different specialized ML models; I can just ask one tool (which can still use tools on the backend to complete the task efficiently).
replies(1): >>runarb+M12
◧◩◪◨⬒⬓⬔⧯
129. JamesS+DK1[view] [source] [discussion] 2025-12-06 14:16:12
>>nl+rE1
And my question to you is “why is that substantially different from writing the correct algorithm to do it”? I'm arguing it's a myopic view of what we are going to call “intelligence”. And it ignores how human thought works in the same way, by using abstractions to move to the next level of reasoning.

In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing, in terms of judging whether it is “able to do” that thing. It's functionally equivalent for evaluating what the “state of the art” is, and honestly the opposite view is naive about what these models even are. If the model hid the tool calling in the background instead and only showed you its answer, would we say it's more intelligent? Because that's essentially how a lot of these things work already. Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.

replies(1): >>nl+zd3
◧◩◪◨⬒
130. michae+5L1[view] [source] [discussion] 2025-12-06 14:19:35
>>menaer+Ok1
Does the ability to verbally detect gotchas in short conversations, dealing only with text on a screen or whiteboard, really map to stronger candidates?

In actual situations you have documentation, editor, tooling, tests, and are a tad less distracted than when dealing with a job interview and all the attendant stress. Isn't the fact that he actually produces quality code in real life a stronger signal of quality?

◧◩◪◨⬒⬓
131. JamesS+ZO1[view] [source] [discussion] 2025-12-06 14:56:24
>>rglull+Qd1
I'm not talking about this being the "best maze solver" or "better at solving mazes than humans". I'm saying the model is "intelligent enough" to solve a maze.

And what I'm really saying is that we need to stop moving the goalposts on what "intelligence" is for these models, and start moving the goalposts on what "intelligence" actually _is_. The models are giving us an existential crisis about not only what it might mean to _be_ intelligent, but also how it might actually work in our own brains. I'm not saying the current models are Skynet, but I am saying I think there's going to be a lot learned by reverse engineering the current generation of models to really dig into how they are encoding things internally.

132. Subicu+hP1[view] [source] 2025-12-06 14:58:30
>>Workac+(OP)
I just asked Gemini Pro to put bounding boxes on the hippocampus in a coronal slice of a brain MRI. Complete fail. There have to be thousands of pictures of coronal brain slices with hippocampal labels out there, but apparently it learned none of it... unless I am doing it wrong.

https://i.imgur.com/1XxYoYN.png

replies(1): >>Subicu+SP1
◧◩
133. Subicu+SP1[view] [source] [discussion] 2025-12-06 15:04:40
>>Subicu+hP1
asked nanobanana to paint the hippocampus red...better, but not close to good. https://imgur.com/a/clwNg1h
replies(1): >>Subicu+dS1
◧◩◪
134. Subicu+dS1[view] [source] [discussion] 2025-12-06 15:22:25
>>Subicu+SP1
I was a little hopeful when I tried again, but it really seems that it didn't know what it looks like. Maybe few shot with examples?

https://gemini.google.com/share/137812b95b5e

◧◩◪
135. Rover2+gZ1[view] [source] [discussion] 2025-12-06 16:14:52
>>theoa+ug1
haha fair point, you can get the expected results with the right prompt, but I think it still reveals a general lack of true reasoning ability (or something)
replies(1): >>ithkui+0A2
◧◩◪
136. Rover2+zZ1[view] [source] [discussion] 2025-12-06 16:17:52
>>squigz+LM
what a world we live in
◧◩
137. DANmod+i02[view] [source] [discussion] 2025-12-06 16:23:39
>>Rover2+l6
What is "a dog"?

What is " a dog" to Gemini?

◧◩◪◨⬒
138. runarb+M12[view] [source] [discussion] 2025-12-06 16:34:58
>>a1j9o9+TJ1
I'm sorry, this may come across as condescending, but if you are a financial analyst, isn't doing statistics a part of your job? And doesn't your expertise involve knowing which kinds of statistical analysis are available to tackle a given problem? It just seems weird to me that you would opt not to use your expertise and instead use a generalized model which is both more expensive and gives poorer results than traditional models.
◧◩◪◨⬒⬓⬔⧯▣▦
139. runarb+E42[view] [source] [discussion] 2025-12-06 16:58:29
>>adastr+bp1
I’ve seen both opinions on this in the philosophy of statistics. Some would say that machine learning inference is something other than curve fitting, but others (and I subscribe to this) believe it is all curve fitting. I actually don't think it matters much which camp is right, but I do like it when philosophers ponder these things.

My reason for subscribing to the latter camp is that when you have a distribution and you fit things according to that distribution (even when the fitting is stochastic, and even when the distribution lives in billions of dimensions) you are doing curve fitting.

I think the one extreme would be a random walk, which is obviously not curve fitting, but if you draw from any distribution other than the uniform distribution, say the normal distribution, you are fitting that distribution (actually, I take that back: even the plain random walk is fitting the uniform distribution).

Note I am talking about inference, not training. Training can be done using all sorts of algorithms; some include priors (distributions) and would be curve fitting, but only compute the posteriors (also distributions). I think the popular stochastic gradient descent does something like this, so it would be curve fitting, but the older evolutionary algorithms just random-walk it and are not fitting any curve (except the uniform distribution). What matters to me is that the training arrives at a distribution, which is described by a weight matrix, and what inference is doing is fitting to that distribution (i.e. the curve).

replies(1): >>adastr+MB2
◧◩
140. tarsin+f52[view] [source] [discussion] 2025-12-06 17:03:06
>>Rover2+l6
I have only a high-level understanding of LLMs, but to me it doesn't seem surprising: they are trying to come up with a textual output which, appended to your prompt, scores high (i.e. is consistent) with their training set. There is no thinking, just scoring consistency. And a dog with 5 legs is so rare or nonexistent in their training set, and in the resulting weights, that it scores so badly they can't produce an output that accepts it. But how the illusion breaks down in this case is quite funny indeed.
◧◩◪
141. vunder+z52[view] [source] [discussion] 2025-12-06 17:06:08
>>theoa+ug1
This is basically the "Rhinos are just fat unicorns" approach. Totally fine if you want to go that route but a bit goofy. You can get SOTA models to generate a 5-legged dog simply by being more specific about the placement of the fifth leg.

https://imgur.com/a/jNj98Pc

Asymmetry is as hard for AI models as it is for evolution to "prompt for" but they're getting better at it.

◧◩◪◨⬒⬓
142. runarb+772[view] [source] [discussion] 2025-12-06 17:17:51
>>Workac+Cy1
But we do know how the brain works; we have studied the brain extensively, it is probably one of the most studied phenomena in our universe (well, barring alien science), and we do know it is not a computer but a neural network[1].

I don’t believe the brain is a trans-physical magic soul box, nor do I think the brain is doing anything similar to an LLM (apart from some superficial similarities; some of these [like the artificial neural network] are in LLMs because they were inspired by the brain).

We use the term cognition to describe the intrinsic properties of the brain and how it transforms a stimulus into a response, and there are several fields of science dedicated to studying this cognition.

Just to be clear, you can describe the brain as a computer (a biological computer; totally distinct from digital, or even mechanical, computers), but that will only be an analogy; or rather, you are describing the extrinsic properties of the brain, some of which it happens to share with some of our technology.

---

1: Note, not an artificial neural network, but an OG neural network. AI models were largely inspired by biological brains, and in some respects model them.

143. cubefo+Pk2[view] [source] 2025-12-06 19:07:18
>>Workac+(OP)
Gemini 3 Pro correctly counted the fingers in this picture:

https://imagazine.pl/wp-content/uploads/2024/12/ugly-AI-pic-...

GPT-5 couldn't do it.

◧◩◪
144. radars+Cm2[view] [source] [discussion] 2025-12-06 19:21:12
>>jiggaw+J7
you're kind of describing the figure in table 1 (page 8) of the diffusion forcing paper

https://arxiv.org/abs/2407.01392

of course it doesn't redraw the image on every step, so not exactly what you're suggesting (interesting idea btw) but i think it's relevant.

◧◩◪◨⬒
145. Camper+qp2[view] [source] [discussion] 2025-12-06 19:45:08
>>irthom+IA
Not necessarily. The problem may be as simple as the fact that LLMs do not see "dog legs" as objects independent of the dogs they're attached to.

The systems already absorb much more complex hierarchical relationships during training, just not that particular hierarchy. The notion that everything is made up of smaller components is among the most primitive in human philosophy, and is certainly generalizable by LLMs. It just may not be sufficiently motivated by the current pretraining and RL regimens.

◧◩◪◨
146. Xenoph+Qz2[view] [source] [discussion] 2025-12-06 21:22:23
>>vunder+bR
I just got the model to generate a spider without a leg by saying "Spider missing one leg" and it did it fine. It won't do it "every time" (in my case 1 out of 2), but it will do it. I used the GPT-image-1 model in the API. I don't think they are actually running a full end-to-end text/image sequence model. I don't think anyone really is commercially; they are hybrids as far as I know. Someone here probably has better information on the current architectures.
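
Roughly what I ran, from memory, so treat the exact parameters and the base64 return shape as approximate rather than gospel:

    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    result = client.images.generate(
        model="gpt-image-1",
        prompt="A spider with one leg missing, seven legs in total",
        size="1024x1024",
    )
    # gpt-image-1 returns the image base64-encoded, if I recall correctly.
    with open("spider.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
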
◧◩◪◨
147. ithkui+0A2[view] [source] [discussion] 2025-12-06 21:23:47
>>Rover2+gZ1
Or it just shows that it tries to overcorrect the prompt, which is generally a good idea in most cases, where the prompter is not intentionally asking a weird thing.

This happens all the time with humans. Imagine you're at a call center and get all sorts of weird descriptions of problems with a product: every human is expected not to assume the caller is an expert, and will actually try to interpolate what they might mean from the weird wording they use.

◧◩◪◨⬒⬓⬔⧯▣▦▧
148. adastr+MB2[view] [source] [discussion] 2025-12-06 21:41:28
>>runarb+E42
I get the argument that pulling from a distribution is a form of curve fitting. But unless I am misunderstanding, the claim is that it is a curve fitting / interpolation between the training data. The probability distribution generated in inference is not based on the training data though. It is a transform of the context through the trained weights, which is not the same thing. It is the application of a function to context. That function is (initially) constrained to reproduce the training data when presented with a portion of that data as context. But that does not mean that all outputs are mere interpolations between training datapoints.

Except in the most technical sense that any function constrained to meet certain input output values is an interpolation. But that is not the smooth interpolation that seems to be implied here.

◧◩◪◨⬒⬓⬔
149. Kiro+qJ2[view] [source] [discussion] 2025-12-06 22:48:43
>>runarb+WC
Artificial cognition has been an established term long before LLMs. You're conflating human cognition with cognition at large. Weather and cognition are both categories that contain many different things.
replies(1): >>runarb+QT2
◧◩
150. vision+PK2[view] [source] [discussion] 2025-12-06 23:01:34
>>Rover2+l6
I tried this using a Gemini visual agent built with Orion from vlm.run. It was able to produce two different images with a five-legged dog. You need to make it iterate on its own output to improve and correct.

https://chat.vlm.run/c/62394973-a869-4a54-a7f5-5f3bb717df5f

Here is the thought process summary (you can see the full thinking in the link above):

"I have attempted to generate a dog with 5 legs multiple times, verifying each result. Current image generation models have a strong bias towards standard anatomy (4 legs for dogs), making it difficult to consistently produce a specific number of extra limbs despite explicit prompts."

◧◩◪◨⬒⬓⬔
151. nearbu+lP2[view] [source] [discussion] 2025-12-06 23:41:03
>>JamesS+9a1
First, the thrust of your argument is that you already knew that it would be impossible for a model like Gemini 3 Pro to solve a maze without code, so there's nothing interesting to learn from trying it. But the rest of us did not know this.

> Again, think about how the models work. They generate text sequentially.

You have some misconceptions about how these models work. Yes, the transformer LLMs generate output tokens sequentially, but it's weird you mention this because it has no relevance to anything. They see and process tokens in parallel, and then process across layers. You can prove, mathematically, that it is possible for a transformer-based LLM to perform any maze-solving algorithm natively (given sufficient model size and the right weights). It's absolutely possible for a transformer model to solve mazes without writing code. It could have a solution before it even outputs a single token.

Beyond that, Gemini 3 Pro is a reasoning model. It writes out pages of hidden tokens before outputting any text that you see. The response you actually see could have been the final results after it backtracked 17 times in its reasoning scratchpad.

◧◩◪◨⬒⬓⬔⧯
152. runarb+QT2[view] [source] [discussion] 2025-12-07 00:13:50
>>Kiro+qJ2
Yeah, I looked it up yesterday and saw that artificial cognition is a thing, though I must say I am not a fan and I certainly hope this term does not catch on. We are already knee-deep in bad terminology because of artificial intelligence (“intelligence” already being extremely problematic in psychology even without the “artificial” qualifier) and machine learning (the latter being infinitely better but still not without issues).

If you can't tell, I take issue when terms are taken from psychology and applied to statistics. The terminology should flow in the other direction, from statistics into psychology.

My background is that I have done undergraduate study in both psychology and statistics (though I dropped out of statistics after 2 years), and this is the first time I have heard about artificial cognition, so I don't think this term is popular; a short internet search seems to confirm that suspicion.

Out of context I would guess artificial cognition relates to cognition the way artificial neural networks relate to neural networks; that is, models that simulate the mechanisms of human cognition and recreate some stimulus → response loop. However, my internet search revealed (thankfully) that this is not how researchers are using this (IMO misguided) term.

https://psycnet.apa.org/record/2020-84784-001

https://arxiv.org/abs/1706.08606

What the researchers mean by the term (at least the ones I found in my short internet search) is not actual machine cognition, nor a claim that machines have cognition, but rather an approach to research which takes experimental designs from cognitive psychology and applies them to learning models.

◧◩◪◨⬒⬓⬔⧯▣
153. nl+zd3[view] [source] [discussion] 2025-12-07 04:34:47
>>JamesS+DK1
> In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing

That's great, but it's demonstrably false.

I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the rule of seven[1].
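
(That code is trivial, by the way; something like the sketch below, given the article text as a string. The mental tally is the part I can't do.)

    from collections import Counter

    # Trivial sketch: relative letter frequencies for a chunk of text.
    def letter_frequencies(text):
        letters = [ch for ch in text.lower() if ch.isalpha()]
        counts = Counter(letters)
        total = len(letters)
        return {ch: n / total for ch, n in counts.most_common()}

    print(letter_frequencies("The quick brown fox jumps over the lazy dog"))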

Tool use is absolutely an intelligence amplifier but it isn't the same thing.

> Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.

This is technically true, but somewhat misleading. Humans speak "left to right" too. Specifically, LLMs do have some spatial reasoning ability (which is what you'd expect with RL training: otherwise they'd just predict the most popular token): https://snorkel.ai/blog/introducing-snorkelspatial/

[1] https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...

[go to top]