zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. actini+1H 2025-06-06 23:59:01
>>amrrs+(OP)
Man, remember when everyone was like 'AGI is just around the corner!'? Funny how well the Gartner hype cycle captures these sorts of things.
2. tonyha+7K 2025-06-07 00:32:58
>>actini+1H
I think we're just at around 80% of the way there.

The easy part is done, but the hard part is so hard that it takes years to make progress.

3. george+cY 2025-06-07 03:52:30
>>tonyha+7K
> The easy part is done, but the hard part is so hard that it takes years to make progress.

There is also no guarantee of continued progress to a breakthrough.

We have been through several "AI Winters" before where promising new technology was discovered and people in the field were convinced that the breakthrough was just around the corner and it never came.

LLMs aren't quite the same situation, since they do have some undeniable utility to a wide variety of people even without AGI springing out of them. But the blind optimism that progress will surely continue at a rapid pace until the assumed breakthrough is realized feels a lot like the hype cycle that preceded past AI "Winters".

4. Swizec+b31 2025-06-07 05:23:32
>>george+cY
> We have been through several "AI Winters" before

Yeah, remember when we spent 15 years (~2000 to ~2015) calling it “machine learning” because AI was a bad word?

We use so much AI in production every day, but nobody notices, because as soon as a technology becomes useful we stop calling it AI. Then it’s suddenly “just face recognition” or “just product recommendations” or “just [plane] autopilot” or “just adaptive cruise control”, etc.

You know a technology isn’t practical yet when it’s still being called AI.

5. blks+ca1 2025-06-07 07:21:25
>>Swizec+b31
I don’t think there’s any “AI” in aircraft autopilots.
6. within+5t1 2025-06-07 12:30:34
>>blks+ca1
AI encompasses a wide range of algorithms and techniques, not just LLMs or neural nets. It's also worth pointing out that the definition of AI has changed drastically over the last few years and narrowed significantly. If you go by the definition from the '80s and '90s, most of what we call "automation" today would have been considered AI.
7. Jensso+su1 2025-06-07 12:47:12
>>within+5t1
Autopilots were a thing before computers were a thing; you can implement one using mechanics and control theory (rough sketch below). So no, traditional autopilots are not AI under any reasonable definition. Otherwise every single machine we build would be considered AI, since almost all machines have some form of control system in them. Is your microwave clock an AI, for example?

So I'd argue any algorithm that comes from control theory is not AI; those are just basic old dumb machines. You can't make planes without control theory, because humans can't keep a plane steady without it, and the Wright brothers adding it to their plane is why they succeeded in making a flying machine.

So if autopilots are AI, then the Wright brothers developed an AI to control their plane. I don't think anyone sees that as AI, not even at the time of the first flight.
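
To make "just control theory" concrete, here is a minimal PID wing-leveler sketch in Python. The class, the gains, and the 50 Hz loop rate are all made up for illustration; this is the shape of the technique, not code from any real autopilot.

    # Minimal PID "wing leveler": drive the measured roll angle back to zero.
    # Gains and loop rate are illustrative, not taken from a real autopilot.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            # Standard PID law: proportional + integral + derivative terms.
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)  # 50 Hz control loop

    def control_step(measured_roll_deg):
        error = 0.0 - measured_roll_deg    # setpoint: wings level
        return pid.update(error)           # aileron deflection command

No learning, no model of the world, just feedback on an error signal, which is the point.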

8. trc001+HE1 2025-06-07 14:37:51
>>Jensso+su1
Uh, the Bellman equation was first used for control theory and is the foundation of modern reinforcement learning... so wouldn't that imply LLMs "come from" control theory?
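
For reference, the equation in question, written in the standard discounted-MDP form (textbook notation, nothing specific to this thread):

    V(s) = \max_{a} \Big[ r(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V(s') \Big]

Bellman introduced this recursion in the 1950s for dynamic programming and optimal control; value iteration and Q-learning in modern RL solve the same fixed-point equation.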