zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. actini+1H[view] [source] 2025-06-06 23:59:01
>>amrrs+(OP)
Man, remember when everyone was like 'AGI just around the corner!' Funny how well the Gartner hype cycle captures these sorts of things
◧◩
2. tonyha+7K[view] [source] 2025-06-07 00:32:58
>>actini+1H
I think we're just at around 80% of progress

the easy part is done, but the hard part is so hard it takes years to make progress

◧◩◪
3. george+cY[view] [source] 2025-06-07 03:52:30
>>tonyha+7K
> the easy part is done but the hard part is so hard it takes years to progress

There is also no guarantee of continued progress to a breakthrough.

We have been through several "AI Winters" before where promising new technology was discovered and people in the field were convinced that the breakthrough was just around the corner and it never came.

LLMs aren't quite the same situation, as they do have some undeniable utility to a wide variety of people even without AGI springing out of them. But the blind optimism that progress will surely continue at a rapid pace until the assumed breakthrough is realized feels pretty similar to the hype cycles that preceded past AI "Winters".

◧◩◪◨
4. Swizec+b31[view] [source] 2025-06-07 05:23:32
>>george+cY
> We have been through several "AI Winters" before

Yeah, remember when we spent 15 years (~2000 to ~2015) calling it “machine learning” because AI was a bad word?

We use so much AI in production every day but nobody notices because as soon as a technology becomes useful, we stop calling it AI. Then it’s suddenly “just face recognition” or “just product recommendations” or “just [plane] autopilot” or “just adaptive cruise control” etc

You know a technology isn’t practical yet because it’s still being called AI.

◧◩◪◨⬒
5. blks+ca1[view] [source] 2025-06-07 07:21:25
>>Swizec+b31
I don’t think there’s any “AI” in aircraft autopilots.
◧◩◪◨⬒⬓
6. within+5t1[view] [source] 2025-06-07 12:30:34
>>blks+ca1
AI encompasses a wide range of algorithms and techniques, not just LLMs or neural nets. It's also worth pointing out that the definition of AI has changed drastically over the last few years and narrowed pretty significantly. If you're viewing the definition from the '80s–'90s, most of what we call "automation" today would have been considered AI.
◧◩◪◨⬒⬓⬔
7. fc417f+qH2[view] [source] 2025-06-08 01:34:39
>>within+5t1
Ah yes, the mythical strawman definition of AI that you can never seem to pin down, that was never rigorous, and that never enjoyed wide expert acceptance. It's on par with "well, many people used to say, or at least so I've been told, that ...".
◧◩◪◨⬒⬓⬔⧯
8. Swizec+Fp4[view] [source] 2025-06-09 00:56:04
>>fc417f+qH2
That’s the point: AI is a marketing term and always has been. The underlying tech changes with every hype wave.

One of the first humanoid robots was an 18th-century clockwork mechanism inside a porcelain doll that autonomously wrote out "Cogito Ergo Sum" in cursive with a pen. It was considered thought-provoking at the time because it implied that some day machines could think.

BBC video posted to reddit 10 years ago: https://www.reddit.com/r/history/s/d6xTeqfKCv

◧◩◪◨⬒⬓⬔⧯▣
9. danari+Eu5[view] [source] 2025-06-09 14:06:45
>>Swizec+Fp4
AI is multiple things.

AI is a marketing term for various kinds of machine learning applications.

AI is an academic field within computer science.

AI is the computer-controlled enemies you face in (especially, but not solely, offline) games.

This has been the case for decades now—especially the latter two.

Trying to claim that AI either "has always been" one particular thing, or "has now become" one particular thing, is always going to run into trouble because of this multiplicity. The one thing that AI "has always been" is multiple things.

[go to top]