1. nl+(OP)[view] [source] 2025-12-06 13:20:00
> So, what could a model _possibly_ be able to do for this puzzle which is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?

Represent the maze as a sequence of moves, each of which either continues forward or is forced to backtrack.

Basically, it would represent the maze as a graph and do a depth-first search, keeping track of which nodes it has visited in its reasoning tokens.

See for example https://stackoverflow.com/questions/3097556/programming-theo... where the solution is represented as:

A B D (backtrack) E H L (backtrack) M * (backtrack) O (backtrack thrice) I (backtrack thrice) C F (backtrack) G J
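
Roughly this kind of thing, sketched in Python (toy graph, not the maze from the linked answer; dfs_trace is just a name I made up):

    # A minimal sketch of the idea: DFS over an adjacency list,
    # appending each node as it is entered and "(backtrack)" each
    # time the search retreats along an edge.
    def dfs_trace(graph, start):
        visited = set()
        trace = []

        def visit(node):
            visited.add(node)
            trace.append(node)
            for nxt in graph[node]:
                if nxt not in visited:
                    visit(nxt)
                    trace.append("(backtrack)")

        visit(start)
        return trace

    maze = {  # hypothetical adjacency list, for illustration only
        "A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "F"],
        "D": ["B"], "E": ["B"], "F": ["C"],
    }
    print(" ".join(dfs_trace(maze, "A")))
    # A B D (backtrack) E (backtrack) (backtrack) C F (backtrack) (backtrack)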

replies(1): >>JamesS+c6
2. JamesS+c6[view] [source] 2025-12-06 14:16:12
>>nl+(OP)
And my question to you is “why is that substantially different from writing the correct algorithm to do it”? I’m arguing it’s a myopic view of what we are going to call “intelligence”. And it ignores that human thought works the same way, using abstractions to move to the next level of reasoning.

In my opinion, being able to write the code to do the thing is effectively the same thing as doing the thing, in terms of judging whether it’s “able to do” that thing. It’s functionally equivalent for evaluating what the “state of the art” is, and honestly it’s naive about what these models even are. If the model hid the tool calling in the background and only showed you its answer, would we say it’s more intelligent? Because that’s essentially how a lot of these things already work. Because again, the actual “model” is just a text autocomplete engine, and it generates from left to right.

replies(1): >>nl+8z1
3. nl+8z1[view] [source] [discussion] 2025-12-07 04:34:47
>>JamesS+c6
> In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing

That's great, but it's demonstrably false.

I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the magical number seven[1].
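
Something like this sketch, for example (the MediaWiki extracts endpoint is real, but treat the details as illustrative; letter_frequencies is my own name, and error handling is omitted):

    # A minimal sketch, assuming Wikipedia's action API with the
    # TextExtracts prop. Error handling omitted; a real client
    # should also send a descriptive User-Agent.
    import json
    import urllib.parse
    import urllib.request
    from collections import Counter

    def letter_frequencies(title):
        params = urllib.parse.urlencode({
            "action": "query", "prop": "extracts", "explaintext": 1,
            "format": "json", "titles": title,
        })
        url = "https://en.wikipedia.org/w/api.php?" + params
        with urllib.request.urlopen(url) as resp:
            pages = json.load(resp)["query"]["pages"]
        text = next(iter(pages.values()))["extract"].lower()
        letters = [c for c in text if c.isalpha()]
        total = len(letters)
        # relative frequency of each letter in the article text
        return {c: n / total for c, n in Counter(letters).items()}

    print(letter_frequencies("Alan Turing"))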

Tool use is absolutely an intelligence amplifier but it isn't the same thing.

> Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.

This is technically true, but somewhat misleading. Humans speak "left to right" too. More to the point, LLMs do have some spatial reasoning ability (which is what you'd expect from RL training: otherwise they'd just predict the most popular token): https://snorkel.ai/blog/introducing-snorkelspatial/
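
"Left to right" just means a loop like this hypothetical sketch, where logits_fn stands in for a real model's forward pass:

    # A hypothetical sketch of autoregressive decoding: each step
    # re-scores candidate next tokens conditioned on everything
    # generated so far.
    def generate(logits_fn, prompt_tokens, max_new_tokens, eos_token):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = logits_fn(tokens)  # model forward pass (stand-in)
            next_token = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
            tokens.append(next_token)   # the new token becomes part of the context
            if next_token == eos_token:
                break
        return tokens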

[1] https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...
