zlacker

[return to "Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly Obsolete"]
1. GMorom+8h8 2025-04-05 16:22:50
>>alphad+(OP)
I remember reading Douglas Hofstadter's Fluid Concepts and Creative Analogies [https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...]

He wrote about Copycat, a program for understanding analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he was to "inventing" an LLM? The insight he needed was that instead of hard-coding patterns, he should have just trained on a vast set of patterns.
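(To make the "hard-coded relationships" point concrete, here is a toy sketch in Python. This is not Copycat's actual architecture, which used a spreading-activation "slipnet" and stochastic codelets; it just illustrates the symbolic style the comment describes: the letter-to-digit mapping and the rule are supplied by hand rather than learned from data.)

```python
# Toy symbolic analogy solver: "abc is to 123 as cba is to ???"
# The relationships are hard-coded, not learned -- the point the
# parent comment makes about pre-LLM symbolic approaches.

LETTER_TO_DIGIT = {"a": "1", "b": "2", "c": "3"}  # hand-built relationship table

def translate(s: str) -> str:
    """Map each letter to its hard-coded digit counterpart."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in s)

def solve_analogy(source: str, target: str, query: str) -> str:
    """If the example pair is explained by the 'translate' rule,
    apply that same rule to the query string."""
    if translate(source) == target:
        return translate(query)
    raise ValueError("no hard-coded rule explains the example pair")

print(solve_analogy("abc", "123", "cba"))  # -> 321
```

The brittleness is the moral: any analogy outside the hand-built table fails, which is exactly the gap that training on vast pattern data closes.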

Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good for.

I think he's right. Intelligence isn't about logic. In the early days of AI, people thought that a chess-playing computer would necessarily be intelligent, but that was clearly a dead-end. Logic is not the hard part. The hard part is pattern-matching.

In fact, pattern-matching is all there is: That's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.

I honestly can't come up with a situation that calls for intelligence that can't be solved by pattern-matching.

In my opinion, LeCun is moving the goal-posts. He's saying LLMs make mistakes and therefore they aren't intelligent and aren't useful. Obviously that's wrong: humans make mistakes and are usually considered both intelligent and useful.

I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long-division) then there won't be mistakes, but you don't need intelligence (you just follow the algorithm). But if you need intelligence (because no algorithm exists) then there will always be mistakes.

2. andoan+Dk8 2025-04-05 16:50:18
>>GMorom+8h8
I've been thinking about something similar for a long time now. I think the abstraction of patterns is the core requirement of intelligence.

But what's critical, and what I think is missing, is a knowledge representation of events in space-time. We need something more fundamental than text or pixels: something that captures space and transformations in space itself.
