zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. actini+1H[view] [source] 2025-06-06 23:59:01
>>amrrs+(OP)
Man, remember when everyone was like 'AGI is just around the corner!'? Funny how well the Gartner hype cycle captures these sorts of things.
2. bayind+0I[view] [source] 2025-06-07 00:10:02
>>actini+1H
They're similar to self-driving vehicles. Both are around the corner, but neither can negotiate the turn.
3. nmca+v91[view] [source] 2025-06-07 07:08:39
>>bayind+0I
I saw your comment and counted — in May I took a Waymo thirty times.
4. bayind+qj1[view] [source] 2025-06-07 09:59:38
>>nmca+v91
Waymo is a popular argument in self-driving car debates, and it does perform well.

However, Waymo is the Deep Blue of self-driving cars: it does very well in a closed space. Because of geofencing, it has effectively exhausted its search space, so it works well as a consequence of the lack of surprises.

AI works well when the search space is limited, but general AI in any category has to handle a vastly larger search space, and that's where these systems fall flat.

At the end of the day, AI is informed search: it takes inputs and generates an output deemed suitable by its trainers.
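
To make "informed search" concrete, here's a toy greedy best-first search in Python. The graph and heuristic values are invented for illustration; the point is that the designer-supplied heuristic is what steers the search toward an output.

    import heapq

    # Toy road graph and heuristic ("estimated distance to goal").
    # Both are made up for illustration.
    graph = {
        "A": ["B", "C"],
        "B": ["D"],
        "C": ["D"],
        "D": [],
    }
    heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}

    def best_first(start, goal):
        # Frontier ordered by heuristic: the "informed" part.
        frontier = [(heuristic[start], start, [start])]
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt in graph[node]:
                heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
        return None  # goal unreachable

    print(best_first("A", "D"))  # -> ['A', 'C', 'D']

Keep the graph small (geofence it) and the search looks brilliant; blow it up to "any road on Earth" and the same machinery struggles.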

5. anonzz+W83[view] [source] 2025-06-08 08:45:11
>>bayind+qj1
Yeah, AI has been good in limited-search-space domains for a long time. So good that many things that were called AI in the past aren't called AI anymore, but 'just' an 'algorithm'.
6. bayind+pe3[view] [source] 2025-06-08 10:20:29
>>anonzz+W83
Everything is “just” an algorithm. An LLM is a weighted graph with some randomization, tuned on tons of data, with input and output encoders on top of it.

That’s all.
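
As a toy sketch of that framing (nothing like a production LLM, but the same moving parts): the char/id maps stand in for the input and output encoders, the count matrix is the "weighted graph" tuned on data, and temperature sampling supplies the randomization. Every value here is invented for illustration.

    import math, random

    text = "hello hello help"                   # the "tons of data", toy-sized
    chars = sorted(set(text))
    enc = {c: i for i, c in enumerate(chars)}   # input encoder: char -> id
    dec = {i: c for c, i in enc.items()}        # output decoder: id -> char

    # The "weighted graph": bigram counts with add-one smoothing,
    # "tuned" simply by counting the data.
    n = len(chars)
    W = [[1.0] * n for _ in range(n)]
    for a, b in zip(text, text[1:]):
        W[enc[a]][enc[b]] += 1.0

    def sample_next(cur, temperature=0.8):
        # Softmax over the outgoing edge weights, then sample:
        # the "some randomization" part.
        logits = [math.log(w) / temperature for w in W[enc[cur]]]
        m = max(logits)
        probs = [math.exp(l - m) for l in logits]
        r = random.random() * sum(probs)
        for i, p in enumerate(probs):
            r -= p
            if r <= 0:
                return dec[i]
        return dec[n - 1]

    out = "h"
    for _ in range(10):
        out += sample_next(out[-1])
    print(out)   # e.g. "hello hello" -- varies run to run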
