zlacker

1. alsetm+(OP) 2026-02-05 00:25:50
This person doesn't understand how LLMs work.
replies(2): >>Dennis+d4 >>pradee+Ob
2. Dennis+d4 2026-02-05 00:56:32
>>alsetm+(OP)
Care to be more specific?
replies(1): >>alsetm+yA
3. pradee+Ob 2026-02-05 02:01:06
>>alsetm+(OP)
Not sure how you could read this essay and come to that conclusion. It definitely aligns with my own understanding, and his conclusions seem pretty reasonable (though the AI 2027/Situational Awareness part might be arguable).
replies(1): >>alsetm+tA
4. alsetm+tA 2026-02-05 06:00:49
>>pradee+Ob
Absolutely:

> In order to predict where thinking and reasoning capabilities are going, it's important to understand the trail of thought that went into today's thinking LLMs.

No. You don't understand at all. They don't think. They don't reason. They are statistical word generators. They are very impressive at doing things like writing code, but they don't work the way that is being inferred here.
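
To make the "statistical word generator" point concrete, here is a toy next-token sampling loop (using a small Hugging Face GPT-2 checkpoint purely as an illustration; the prompt, length, and temperature are made-up placeholders, not anything from the essay): the model only ever outputs a probability distribution over the next token, and the text comes from sampling that distribution over and over.

    # Toy next-token sampler: illustration only, not taken from the essay.
    # Assumes `torch` and `transformers` are installed; "gpt2" and the
    # prompt/temperature values are arbitrary placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("In order to predict", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]            # scores over the whole vocabulary
            probs = torch.softmax(logits / 0.8, dim=-1)  # turn scores into a distribution
            next_id = torch.multinomial(probs, 1)        # draw one token at random
            ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

    print(tok.decode(ids[0]))

Greedy decoding, top-k, nucleus sampling, etc. are all just different ways of picking from that same per-token distribution.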

5. alsetm+yA 2026-02-05 06:01:10
>>Dennis+d4
See reply in related thread.