The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]
1. thomas+kb1 2025-06-07 07:42:55
>>amrrs+(OP)
All the environments they test (Tower of Hanoi, Checker Jumping, River Crossing, Blocks World) could easily be solved perfectly by any of the LLMs if the authors had allowed them to write code.
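
For example, Tower of Hanoi is a textbook recursion. A minimal Python sketch (my own illustration, not anything from the paper):

    def hanoi(n, src, dst, aux, moves):
        # Append the optimal move sequence for n disks onto `moves`.
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst, moves)  # park the n-1 smaller disks on the spare peg
        moves.append((src, dst))            # move the largest disk directly
        hanoi(n - 1, aux, dst, src, moves)  # restack the smaller disks on top

    moves = []
    hanoi(10, "A", "C", "B", moves)
    print(len(moves))  # 1023, i.e. 2**10 - 1, every move legal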

I don't really see how this is different from "LLMs can't multiply 20-digit numbers" -- which, btw, most humans can't do either. I tried it once (using pen and paper) and consistently made errors somewhere.
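
And the code route handles that one too, since Python integers are arbitrary precision (the operands below are just made-up examples):

    a = 12345678901234567890
    b = 98765432109876543210
    print(a * b)  # 1219326311370217952237463801111263526900, exact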

2. mjburg+mN1 2025-06-07 15:56:04
>>thomas+kb1
The goal isn't to assess the LLMs' capability at solving any of those problems. The point isn't how good they are at Blocks World puzzles.

The point is to construct non-circular ways of quantifying model performance at reasoning. That an LLM may have seen prior exemplars of any given problem is exactly what makes it hard to establish that its performance reflects reasoning rather than synthesis from its training history.

3. thomas+iz2 2025-06-07 23:41:34
>>mjburg+mN1
How are these problems more interesting than simple arithmetic or algorithmic problems?