Here’s how Nano Banana fared: https://x.com/danielvaughn/status/1971640520176029704?s=46
Only one model (gpt-image-1) out of the 18 tested managed to pass the test successfully. Gemini 3.0 Pro got VERY close.
That's essentially what's going on with these AI models: they struggle because they only get "one step" to solve the problem instead of being able to trace through the maze slowly.
An interesting experiment would be to ask the AI to solve the maze incrementally. Ask it to draw a line starting at the entrance a little way into the maze, then a little bit further, and so on, until it reaches the end.
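A rough sketch of that experiment, where `edit_image` is a stand-in for whatever image-editing model you're testing (hypothetical, not a real API):
```
# Sketch of the "solve the maze incrementally" experiment.
# `edit_image(prompt, image_bytes)` is a hypothetical stand-in for the
# image model under test; it returns a new image with the line extended.

def solve_incrementally(maze_png: bytes, edit_image, steps: int = 25) -> bytes:
    current = maze_png
    for _ in range(steps):
        current = edit_image(
            "Extend the red solution line a few cells further along the "
            "correct path toward the exit. Change nothing else in the image.",
            current,
        )
    return current
```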
When you pose the "solve a visual image of a maze" problem to something like ChatGPT, there's a good chance it'll throw a Python VM at it: threshold the image with something like OpenCV, then run a shortest-path-style algorithm over the result.
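Roughly the pipeline it tends to reach for would look like this (a minimal sketch: the start/goal pixel coordinates are placeholders you'd normally find by detecting the mouse and cheese markers, and it assumes corridors threshold to white):
```
import cv2
from collections import deque

def solve_maze(path, start, goal):
    # Threshold the maze so walls are 0 and corridors are non-zero
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, free = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Breadth-first search over free pixels (shortest path in steps)
    h, w = free.shape
    prev = {start: None}
    q = deque([start])
    while q:
        y, x = q.popleft()
        if (y, x) == goal:
            break
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and free[ny, nx] and (ny, nx) not in prev:
                prev[(ny, nx)] = (y, x)
                q.append((ny, nx))

    # Walk back from the goal and draw the path in red (assumes goal was reached)
    node = goal
    while node is not None:
        img[node] = (0, 0, 255)  # BGR red
        node = prev[node]
    cv2.imwrite("solved.png", img)
```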
```
> [Image #1] Create a unicode "ascii-art" version of this image, with the optimal path through the maze highlighted in a solid colour.
I'll create an ASCII art version of this maze with the solution path highlighted!
┌─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┐
●●│ │ │ │ │ │ │ │ │ │ │ │ │ │
├─┤●└─┴─┐ ├───┐ │ ╔═╗ ├─────┤ ╔═══╝ │ ╔═╝ ╔═╝ │ │ ╔═╝ ├─┤
│ │●●●●●└─┤ │ │ ║ │ │ │ ║ │ ║ ║ │ │ ║ │ │
│ └───┐●╔═╧═╗ │ └─╢ │ │ ┌─╔═╝ ║ ┌───┤ ╚═╗ ╚═╗ │ └─╢ ┌─┘ │
│ │●║ ║ │ ║ │ │ │ ║ ║ │ │ ║ ║ │ ║ │ │
├───┐ │●╚═╗ ╚═╧═╗ ║ └─┤ │ ║ ╔═╝ └─╔═╧═╗ ╚═╗ ╚═╧═╗ ║ └─┬─┤
│ │ │●●●└───┐ ║ ║ │ │ ║ ║ ║ ║ ║ ║ ║ │ │
│ ╔═╝ └───┐●╔═╝ ║ ╚═╗ │ └─╢ ╚═╗ ╔═╝ ╔═╝ ┌─╢ ╔═══╝ ╚═╗ │ │
│ ║ │●║ ║ ║ │ ║ ║ ║ ║ │ ║ ║ ║ │ │
│ ╚═══╗ ╔═╝●╚═╗ ╚═╗ ║ ├─╔═╧═╗ ║ ║ ╔═╝ ╔═╝ ║ ║ ╔═╗ ╔═╝ │ │
│ ║ ║●●●●●│ ║ ║ │ ║ ║ ║ ║ ║ ║ ║ ║ ║ ║ ║ │ │
├───╔═╝ ║●┌─┐●└─╗ ║ └─┤ ║ ╔═╝ ║ ╚═╝ ╔═╧═╗ ║ ╚═╝ ║ ╚═╗ │ │
│ ║ ║●│ │●●●║ ║ │ ║ ║ ║ ║ ║ ║ ║ ║ │ │
│ ╔═╝ ╔═╝●│ └─┐●╚═╧═╗ │ ║ ║ ╔═╧═══╗ ║ ╔═╝ ╚═══╗ ╚═╗ ║ └─┤
│ ║ ║●●●│ │●●●●●║ │ ║ ║ ║ ║ ║ ║ ║ ║ ║ │
│ ║ ╔═╝●┌─┴─╗ └───┐●╚═╝ ║ ║ ║ ╔═╗ ║ └─╢ ╔═══╗ ╚═╗ ╚═╝ ╔═╝
│ ║ ║●●●│ ║ │●●●●●║ ║ ║ ║ ║ ║ ║ ║ ║ ║ ║
│ ╚═╝●╔═╧═╗ ╚═╗ ╔═╧═══┐●╚═╝ ║ ║ ║ ╚═╗ ║ ║ ╔═╧═╗ ╚═══╗ ╚═╗
│ ●●║ ║ ║ ║ │●●●●●║ ║ ║ ║ ║ ║ ║ ║ ║ ║
├─╔═┘●╚═╗ ╚═╗ ║ ║ ╔═╗ └───┐●╚═╝ ╚═╗ ║ └─╢ ║ ╔═╧═╗ ╔═╝ ╔═╝
│ ║●●●●●║ ║ ║ ║ ║ ║ │●●●●●●●║ ║ ║ ║ ║ ║ ║ ║
│ ║●┌─╗ ╚═╗ ║ └─╢ ║ ╚═══╗ └─────┐●╚═╧═╗ ║ ║ ║ ╔═╝ ║ ╔═╝ ╔
│ ║●│ ║ ║ ║ ║ ║ ║ │●●●●●║ ║ ║ ║ ║ ║ ║ ║
│ ╚═╝ ║ ╔═╝ ╚═╗ ║ ╚═╗ ╔═╧═════╗ └───┐●╚═╝ ║ ╚═╝ ╔═╝ ║ ╔═╝
│ ║ ║ ║ ║ ║ ║ ║ │●●●●●║ ║ ║ ║
└─────╨─╨─────╨─╨───╨─╨───────╨─────┴───●●╨─────╨───╨─╨──
The path from the mouse to the cheese is marked with ● characters. This shows the optimal route through the maze, navigating through the corridors while avoiding all the walls!
```
Suffice it to say, it didn't do either part right.
Gemini 3 then one-shot ported the whole thing (which uses Python CV libraries) to a single-page HTML+JS version that works just as well.
I gave that to Claude to assess and assign a FAANG hiring level to, and it was amazed and said Gemini 3 codes like an L6.
Since I work for Google and used my phone in the office to do this, I think I can't share the source or file.
```
Create a devenv project that does the following:
- Read the image at maze.jpg
- Write a script that solves the maze in the most optimal way between the mouse and the cheese
- Generate a new image which is of the original maze, but with a red line that represents the calculated path
Use whatever lib/framework is most appropriate
```
Output: https://gist.github.com/J-Swift/ceb1db348f46ba167948f734ff0fc604
Solution: https://imgur.com/a/bkJloPT
I dunno why people are surprised by this. This is what you get with text->text. Reasoning doesn't work text->text.
Tool use can be a sign of intelligence, but "being able to use a tool to solve a problem" is not the same as "being intelligent enough to solve a specific class of problems".
I participated in a "math" competition in high school which mostly tested logic and reasoning. The reason my team won by a landslide is that I showed up with a programmable calculator and knew how to turn the problems into programs that could solve them.
By prompting the model to create the program, you're taking away one of the critical reasoning steps needed to solve the problem.
The only impressive part would be that the trajectory is "continuous", meaning for every ● there is always another ● character in one of the 4 adjacent positions.
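That property is easy to check mechanically. A quick sketch that verifies every ● in the ASCII output has at least one ● in a 4-adjacent cell:
```
def path_is_continuous(grid: list[str]) -> bool:
    """Check that every ● has at least one ● in one of the 4 adjacent cells."""
    cells = {(r, c) for r, row in enumerate(grid)
             for c, ch in enumerate(row) if ch == "●"}
    for r, c in cells:
        neighbours = {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}
        if not (neighbours & cells):
            return False
    return len(cells) > 0

# Usage: feed the model's ASCII output split into lines
# print(path_is_continuous(ascii_output.splitlines()))
```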
Represent the maze as a sequence of movements that either continue forward or are forced to backtrack.
Basically, it would represent the maze as a graph and do a depth-first search, keeping track of which nodes it has visited in its reasoning tokens (see the sketch after the example below).
See for example https://stackoverflow.com/questions/3097556/programming-theo... where the solution is represented as:
A B D (backtrack) E H L (backtrack) M * (backtrack) O (backtrack thrice) I (backtrack thrice) C F (backtrack) G J
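A sketch of producing that kind of trace in code: a depth-first search that narrates each visit and prints "(backtrack)" whenever it retreats, stopping once the goal is found. The graph literal is made up for illustration, with "*" as the goal node, mirroring the linked answer:
```
def dfs_with_trace(graph: dict, start: str, goal: str) -> list[str]:
    """Depth-first search that records visits and explicit backtracks,
    the way a reasoning model might narrate the search step by step."""
    trace, visited = [], set()

    def visit(node: str) -> bool:
        visited.add(node)
        trace.append(node)
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in visited:
                if visit(nxt):
                    return True
                trace.append("(backtrack)")
        return False

    visit(start)
    return trace

# Illustrative maze graph; edges are corridors between junctions.
maze = {"A": ["B", "C"], "B": ["D", "E"], "D": [], "E": ["H"],
        "H": ["L", "M"], "L": [], "M": ["*"], "C": ["F", "G"]}
print(" ".join(dfs_with_trace(maze, "A", "*")))
# -> A B D (backtrack) E H L (backtrack) M *
```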
In my opinion, being able to write the code to do the thing is effectively the same as doing the thing, at least for judging whether it's "able to do" that thing. It's functionally equivalent for evaluating what the "state of the art" is, and honestly the distinction is naive about what these models even are. If the model hid the tool calling in the background and only showed you its answer, would we say it's more intelligent? Because that's essentially how a lot of these things already work. Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.
And what I'm really saying is that we need to stop moving the goalposts on what "intelligence" is for these models, and start moving the goalposts on what "intelligence" actually _is_. The models are giving us an existential crisis not only about what it might mean to _be_ intelligent, but also about how it might actually work in our own brains. I'm not saying the current models are Skynet, but I do think there's going to be a lot learned by reverse engineering the current generation of models to really dig into how they encode things internally.
https://arxiv.org/abs/2407.01392
Of course it doesn't redraw the image on every step, so it's not exactly what you're suggesting (interesting idea, btw), but I think it's relevant.
> Again, think about how the models work. They generate text sequentially.
You have some misconceptions about how these models work. Yes, transformer LLMs generate output tokens sequentially, but it's weird you mention this because it has no real bearing here. They see and process input tokens in parallel, and then process across layers. You can prove, mathematically, that it is possible for a transformer-based LLM to perform a maze-solving algorithm natively (given sufficient model size and the right weights). It's absolutely possible for a transformer model to solve mazes without writing code. It could have a solution before it even outputs a single token.
Beyond that, Gemini 3 Pro is a reasoning model. It writes out pages of hidden tokens before outputting any text that you see. The response you actually see could have been the final results after it backtracked 17 times in its reasoning scratchpad.
That's great, but it's demonstrably false.
I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the rule of seven[1].
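For concreteness, the kind of thing I mean (a rough sketch that pulls the article's plain text via the standard MediaWiki extracts API; the article title is just an example):
```
import json
import string
import urllib.parse
import urllib.request
from collections import Counter

def letter_frequencies(title: str) -> dict[str, float]:
    """Fetch a Wikipedia article's plain text and return each letter's
    share of all letters -- trivial in code, hopeless to do by eye."""
    url = ("https://en.wikipedia.org/w/api.php?action=query&prop=extracts"
           "&explaintext=1&format=json&titles=" + urllib.parse.quote(title))
    with urllib.request.urlopen(url) as resp:
        pages = json.load(resp)["query"]["pages"]
    text = next(iter(pages.values())).get("extract", "").lower()

    counts = Counter(ch for ch in text if ch in string.ascii_lowercase)
    total = sum(counts.values()) or 1
    return {letter: counts[letter] / total for letter in string.ascii_lowercase}

print(letter_frequencies("Maze"))
```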
Tool use is absolutely an intelligence amplifier but it isn't the same thing.
> Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.
This is technically true, but somewhat misleading. Humans speak "left to right" too. Specifically, LLMs do have some spatial reasoning ability (which is what you'd expect with RL training: otherwise they'd just predict the most popular token): https://snorkel.ai/blog/introducing-snorkelspatial/
[1] https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...