zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. stephc+vz1 2025-06-07 13:42:25
>>amrrs+(OP)
Human language is far from perfect as a cognitive tool, but it still serves us well because it is not foundational. We use it both for communication and for some reasoning/planning, as a high-level layer.

I strongly believe that human language is too weak (vague, inconsistent, not expressive enough, etc.) to replace interaction with the world as a basis for building strong cognition.

We're easily fooled by the results of LLMs/LRMs because we typically use language fluency and knowledge retrieval as a proxy benchmark for intelligence among our peers.

2. anton-+II1 2025-06-07 15:10:34
>>stephc+vz1
Sounds like we need AI legalese, since that's how we navigate the vagueness of language in the real world.

Of course, I imagine they've tried similar things, and it would almost defeat the point if you had to prompt that way.
