zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. piskov+Pm3 2025-06-08 12:16:31
>>amrrs+(OP)
All "reasoning" models hit a complexity wall where they completely collapse to 0% accuracy.

No matter how much computing power you give them, they can't solve harder problems.

This research suggests we're not as close to AGI as the hype implies.

Current "reasoning" breakthroughs may be hitting fundamental walls that can't be solved by just adding more data or compute.

Apple's researchers used controllable puzzle environments specifically because:

• They avoid data contamination
• They require pure logical reasoning
• They can scale complexity precisely (see the sketch after this list)
• They reveal where models actually break
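
To make "scale complexity precisely" concrete (a sketch of my own, not from the paper): in Tower of Hanoi the difficulty is set by a single parameter, the disk count, and the optimal solution length is exactly 2^n - 1 moves, so one controllable dial scales the problem exponentially:

    # Optimal Tower of Hanoi solution length is 2**n - 1 moves,
    # so a single parameter (disk count) scales difficulty precisely.
    for n in range(1, 11):
        print(f"{n} disks -> {2**n - 1} optimal moves")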

Models could handle 100+ moves in Tower of Hanoi puzzles but failed after just 4 moves in River Crossing puzzles.

This suggests they memorized Tower of Hanoi solutions during training but can't actually reason.
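
For what it's worth (my own sketch, not from the paper): the optimal Hanoi move sequence falls out of a fixed three-line recursion, so producing 100+ correct moves (7 disks is already 127 moves) is consistent with reproducing a well-known pattern rather than reasoning through it:

    def hanoi(n, src="A", dst="C", aux="B"):
        """Yield the optimal move sequence for n disks (2**n - 1 moves)."""
        if n == 0:
            return
        yield from hanoi(n - 1, src, aux, dst)  # park n-1 disks on the spare peg
        yield (src, dst)                        # move the largest disk
        yield from hanoi(n - 1, aux, dst, src)  # restack the n-1 disks on top

    moves = list(hanoi(7))
    print(len(moves))   # 127 -- the "100+ moves" regime is only 7 disks
    print(moves[:3])    # [('A', 'C'), ('A', 'B'), ('C', 'B')]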

https://x.com/RubenHssd/status/1931389580105925115

2. jqpabc+1w3 2025-06-08 14:03:35
>>piskov+Pm3
> No matter how much computing power you give them, they can't solve harder problems.

Why would anyone ever expect otherwise?

These models are inherently handicapped, and always will be, when it comes to real-world experience. They have no real grasp of things people understand intuitively, like time or money or truth ... or even death.

The only *reality* they have to work from is a flawed statistical model built from their training data.
