It is the first model to get partial credit on an LLM image test I have: counting the legs of a dog. Specifically, a dog with 5 legs. This is a wild test, because LLMs get really pushy and insistent that the dog only has 4 legs.
In fact, GPT-5 wrote an edge detection script to see where "golden dog feet" met "bright green grass" to prove to me that there were only 4 legs. The script found 5, so GPT-5 declared it a bug and adjusted the script's sensitivity until it only located 4, lol.
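Something in this spirit, I'd guess (a reconstruction of the idea, not GPT-5's actual script; the filename, HSV bounds, and scanline height are all made up):

```
# Threshold "golden" fur against green grass, then count golden runs
# along a scanline near the ground. The bounds below are guesses.
import cv2
import numpy as np

img = cv2.imread("dog.jpg")                 # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# The "sensitivity" lives in these bounds: the knob that got turned
lower, upper = np.array([15, 60, 80]), np.array([35, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Count contiguous golden runs where feet should meet grass
row = mask[int(img.shape[0] * 0.85)]        # hypothetical scanline height
legs = sum(1 for i, v in enumerate(row) if v and (i == 0 or not row[i - 1]))
print(f"leg-like runs on the scanline: {legs}")
```

The punchline being that a counter like this has no opinion about how many legs a dog should have; only the thresholds do.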
Anyway, Gemini 3, while still unable to count the legs on the first try, did identify "male anatomy" (its own words) also visible in the picture. The 5th leg was approximately where you'd expect a well-endowed dog to have a "5th leg".
That aside though, I still wouldn't call it particularly impressive.
As a note, Meta's image slicer correctly highlighted all 5 legs without a hitch. Maybe not quite a transformer, but interesting that it could properly interpret "dog leg" and ID them. Also, the dogs with extra legs (I have a few of them) all had their extra legs added by Nano Banana.
Here’s how Nano Banana fared: https://x.com/danielvaughn/status/1971640520176029704?s=46
```
Create a devenv project that does the following:
- Read the image at maze.jpg
- Write a script that solves the maze in the most optimal way between the mouse and the cheese
- Generate a new image which is of the original maze, but with a red line that represents the calculated path
Use whatever lib/framework is most appropriate
```
Output: https://gist.github.com/J-Swift/ceb1db348f46ba167948f734ff0fc604
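For what it's worth, the maze part reduces to a textbook breadth-first search over the pixel grid. Here is a minimal sketch of that approach (not the gist's actual code; the use of PIL, the wall threshold, and the mouse/cheese coordinates are all assumptions):

```
from collections import deque
from PIL import Image

img = Image.open("maze.jpg").convert("RGB")
w, h = img.size
px = img.load()

def is_open(x, y):
    r, g, b = px[x, y]
    return r + g + b > 300              # guessed threshold: dark pixels are walls

start, goal = (5, 5), (w - 6, h - 6)    # placeholder mouse/cheese positions

# BFS: the first time we reach the goal, the recorded path is shortest
prev = {start: None}
q = deque([start])
while q:
    x, y = q.popleft()
    if (x, y) == goal:
        break
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in prev and is_open(nx, ny):
            prev[(nx, ny)] = (x, y)
            q.append((nx, ny))

# Walk back from the cheese, painting the path red
node = goal if goal in prev else None
while node is not None:
    px[node] = (255, 0, 0)
    node = prev[node]
img.save("maze_solved.png")
```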
Solution: https://imgur.com/a/bkJloPT

Tool use can be a sign of intelligence, but "being able to use a tool to solve a problem" is not the same as "being intelligent enough to solve a specific class of problems".
And what I'm really saying is that we need to stop moving the goalposts on what "intelligence" is for these models, and start moving the goalposts on what "intelligence" actually _is_. The models are giving us an existential crisis about not only what it might mean to _be_ intelligent, but also how it might actually work in our own brains. I'm not saying the current models are Skynet, but I do think there's a lot to be learned by reverse engineering the current generation of models to really dig into how they encode things internally.