zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. actini+1H[view] [source] 2025-06-06 23:59:01
>>amrrs+(OP)
Man, remember when everyone was like 'AGI is just around the corner!'? Funny how well the Gartner hype cycle captures these sorts of things.
2. tonyha+7K[view] [source] 2025-06-07 00:32:58
>>actini+1H
I think we're just at around 80% of the progress.

The easy part is done, but the hard part is so hard that it takes years to make progress.

3. george+cY[view] [source] 2025-06-07 03:52:30
>>tonyha+7K
> the easy part is done but the hard part is so hard it takes years to progress

There is also no guarantee of continued progress to a breakthrough.

We have been through several "AI Winters" before, where a promising new technology was discovered, people in the field were convinced that the breakthrough was just around the corner, and it never came.

LLMs aren't quite the same situation, since they do have undeniable utility to a wide variety of people even without AGI springing out of them. But the blind optimism that progress will surely continue at a rapid pace until the assumed breakthrough is realized feels a lot like the hype cycle that preceded past AI "Winters".

4. Swizec+b31[view] [source] 2025-06-07 05:23:32
>>george+cY
> We have been through several "AI Winters" before

Yeah, remember when we spent 15 years (~2000 to ~2015) calling it “machine learning” because AI was a bad word?

We use so much AI in production every day, but nobody notices because as soon as a technology becomes useful, we stop calling it AI. Then it’s suddenly “just face recognition” or “just product recommendations” or “just [plane] autopilot” or “just adaptive cruise control”, etc.

You know a technology isn’t practical yet because it’s still being called AI.

5. blks+ca1[view] [source] 2025-06-07 07:21:25
>>Swizec+b31
I don’t think there’s any “AI” in aircraft autopilots.
6. within+5t1[view] [source] 2025-06-07 12:30:34
>>blks+ca1
AI encompasses a wide range of algorithms and techniques, not just LLMs or neural nets. It is also worth pointing out that the definition of AI has changed drastically over the last few years and narrowed pretty significantly. If you’re using the definition from the '80s and '90s, most of what we call "automation" today would have been considered AI.
7. fc417f+qH2[view] [source] 2025-06-08 01:34:39
>>within+5t1
Ah yes, the mythical strawman definition of AI that you can never seem to pin down, that was never rigorous, and that never enjoyed wide expert acceptance. It's on par with "well, many people used to say, or at least so I've been told, that ...".
8. Swizec+Fp4[view] [source] 2025-06-09 00:56:04
>>fc417f+qH2
That’s the point: AI is a marketing term and always has been. The underlying tech changes with every hype wave.

One of the first humanoid robots was an 18th-century clockwork mechanism inside a porcelain doll that autonomously wrote out “Cogito Ergo Sum” in cursive with a pen. It was considered thought-provoking at the time because it implied that someday machines could think.

BBC video posted to reddit 10 years ago: https://www.reddit.com/r/history/s/d6xTeqfKCv

9. fc417f+ov4[view] [source] 2025-06-09 02:05:11
>>Swizec+Fp4
It certainly sees use as an ever-shifting marketing term. That does not exclude it from being a useful technical term. Indeed, if misuse of a term by marketers were sufficient to rob a word of meaning, I doubt we'd have any means of communication left.

> It was considered thought provoking at the time because it implied that some day machines could think.

What constitutes "thinking"? That's approximately the same question as what qualifies as AGI. LLMs and RL seem to be the first time humanity has achieved anything that begins to resemble it, but clearly both of those come up short ... at least so far.

Meanwhile, I'm quite certain that a glorified PID loop (i.e. an autopilot) does not qualify as machine learning (or AI, if you'd prefer). If someone wants to claim that it does, then he's going to need to explain how his definition excludes mechanical clockwork.
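
For concreteness, here's roughly what such a controller amounts to (a minimal sketch; the altitude-hold framing, gain values, and variable names are made up purely for illustration):

    # Minimal fixed-gain PID controller: the same arithmetic on every tick,
    # with hand-tuned constants. Nothing here is learned from data.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            # Output is a fixed linear combination of the error terms.
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical altitude-hold step: gains are tuned by an engineer, not fit to data.
    altitude_hold = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.1)
    pitch_command = altitude_hold.update(setpoint=10000.0, measurement=9950.0)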

10. within+wO4[view] [source] 2025-06-09 07:06:02
>>fc417f+ov4
What do you think an executing LLM is? It’s basically a glorified PID loop. It isn’t learning anything new. It isn’t thinking about your conversation while you go take a poo.

And I think the point is that the definition doesn’t exclude purely mechanical devices, since that’s exactly what a computer is.
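
To put that in code form, here's a rough sketch of an inference loop (the `model` and `tokenizer` interfaces are hypothetical, and real systems add sampling, KV caching, batching, etc.) — the forward pass is a fixed computation and no weights are ever updated:

    # Sketch of autoregressive inference: the parameters never change;
    # the loop just feeds the model's own output back in as input.
    def generate(model, tokenizer, prompt, max_new_tokens=64):
        tokens = tokenizer.encode(prompt)
        for _ in range(max_new_tokens):
            logits = model(tokens)  # forward pass only; no gradient step
            last = logits[-1]       # scores for the next token
            next_token = max(range(len(last)), key=lambda t: last[t])  # greedy pick
            tokens.append(next_token)
            if next_token == tokenizer.eos_token_id:
                break
        return tokenizer.decode(tokens)  # nothing persisted, nothing "learned"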
