zlacker

[return to "Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly Obsolete"]
1. GMorom+8h8[view] [source] 2025-04-05 16:22:50
>>alphad+(OP)
I remember reading Douglas Hofstadter's Fluid Concepts and Creative Analogies [https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...]

He wrote about Copycat, a program for solving analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he came to "inventing" an LLM? The insight he was missing was that instead of hard-coding the patterns, he should have just trained on a vast set of them. (See the toy sketch below for what "hard-coded" means here.)
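
To make "hard-coded" concrete, here's a toy sketch in the Copycat spirit (my own illustration, not Hofstadter's actual architecture): the relations live in the program text rather than in learned weights.

    # Two hard-coded relations; Copycat had a whole network of such concepts.
    RULES = {
        "reverse": lambda s: s[::-1],
        "letters_to_digits": lambda s: "".join(str(ord(c) - ord("a") + 1) for c in s),
    }

    def solve_analogy(a: str, b: str, c: str) -> str:
        """Answer 'a is to b as c is to ???' using the rule that explains a -> b."""
        for rule in RULES.values():
            if rule(a) == b:
                return rule(c)  # apply the same hard-coded relation to c
        raise ValueError("no hard-coded rule explains the pair")

    print(solve_analogy("abc", "123", "cba"))  # -> "321"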

Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good at.

I think he's right. Intelligence isn't about logic. In the early days of AI, people thought a chess-playing computer would necessarily be intelligent, but that turned out to be a dead end. Logic is not the hard part. The hard part is pattern-matching.

In fact, pattern-matching is all there is: that's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively (see the sketch below).
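
The binary-tree case shows how much work the match itself does: once the problem is recognized as tree-shaped, the recursion is almost mechanical. A toy sketch (my own illustration; Node and depth are invented names):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        left: "Optional[Node]" = None
        right: "Optional[Node]" = None

    def depth(node: "Optional[Node]") -> int:
        # The matched pattern dictates the code: empty tree -> 0,
        # otherwise 1 + the deeper of the two subtrees.
        if node is None:
            return 0
        return 1 + max(depth(node.left), depth(node.right))

    print(depth(Node(Node(), Node(right=Node()))))  # -> 3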

I honestly can't come up with a situation that calls for intelligence that can't be solved by pattern-matching.

In my opinion, LeCun is moving the goalposts. He's saying that LLMs make mistakes, and therefore they aren't intelligent and aren't useful. That's obviously wrong: humans make mistakes and are usually considered both intelligent and useful.

I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long division), there won't be mistakes, but you don't need intelligence either: you just follow the algorithm (see the sketch below). But if you need intelligence (because no algorithm exists), then there will always be mistakes.
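
To make the long-division point concrete, here's a minimal sketch of the schoolbook procedure (my own illustration): every step is forced by the previous one, so nothing is left to judgment.

    def long_division(dividend: int, divisor: int) -> tuple[int, int]:
        """Schoolbook long division: return (quotient, remainder)."""
        quotient, remainder = 0, 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)  # bring down the next digit
            q = remainder // divisor                 # how many times divisor fits
            remainder -= q * divisor
            quotient = quotient * 10 + q
        return quotient, remainder

    print(long_division(1234, 7))  # -> (176, 2), and 176 * 7 + 2 == 1234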

2. guhida+Ej8[view] [source] 2025-04-05 16:43:23
>>GMorom+8h8
I wouldn't call pattern matching intelligence; I would call it something closer to "trainability" or "educability", but not intelligence. You can train a person to do a task without them understanding why it has to be done that way, but when confronted with a new, never-before-seen situation, they have to understand the physical laws of the universe to find a solution.

Ask ChatGPT something that no one on the internet has tackled before and it will struggle to come up with a solution.

3. andoan+rm8[view] [source] 2025-04-05 17:05:40
>>guhida+Ej8
What precludes pattern matching from understanding the physical laws? You see a ball hit a wall, and it bounces back. Congratulations, you learned the abstract pattern:

x ->  |
    x |
x <-  |
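
In one dimension that abstract pattern is literally a sign flip, which you can state as code. A toy sketch (my own illustration; the wall position and velocity are made up):

    WALL = 10.0  # invented wall position for the illustration

    def step(x: float, v: float, dt: float = 1.0) -> tuple[float, float]:
        x += v * dt
        if x >= WALL:          # the learned pattern: contact with the wall...
            x = 2 * WALL - x   # reflect the overshoot back inside
            v = -v             # ...flips the velocity
        return x, v

    x, v = 0.0, 3.0
    for _ in range(5):
        x, v = step(x, v)
        print(f"x={x:4.1f}  v={v:+.1f}")  # the ball approaches, then bounces back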
