He wrote about Copycat, a program for understanding analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he was to "inventing" an LLM? The insight he needed was that instead of hard-coding patterns, he should have just trained on a vast set of patterns.
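To make the contrast concrete, here is a toy sketch (not Copycat itself, and deliberately simplistic) of what a hard-coded symbolic relationship looks like: the only "knowledge" is a hand-written letter-to-digit mapping, whereas an LLM would have to induce that kind of relationship from a vast set of examples.

```python
# Toy sketch (not Copycat itself): what "hard-coding a network of
# relationships" between characters can look like. The only knowledge
# here is the hand-written letter->digit mapping below.

LETTER_TO_DIGIT = {c: str(i + 1) for i, c in enumerate("abcdefghi")}

def translate(s: str) -> str:
    """Apply the hard-coded letter->digit relationship to a string."""
    return "".join(LETTER_TO_DIGIT[c] for c in s)

def solve_analogy(a: str, b: str, c: str) -> str:
    """abc : 123 :: cba : ??? -- works only because the mapping was
    written in by hand; nothing here was learned from data."""
    assert translate(a) == b, "the hard-coded rule must explain a -> b"
    return translate(c)

print(solve_analogy("abc", "123", "cba"))  # -> "321"
```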
Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good at.
I think he's right. Intelligence isn't about logic. In the early days of AI, people thought that a chess-playing computer would necessarily be intelligent, but that turned out to be a dead end. Logic is not the hard part. The hard part is pattern-matching.
In fact, pattern-matching is all there is: That's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.
I honestly can't come up with a situation that calls for intelligence that can't be solved by pattern-matching.
In my opinion, LeCun is moving the goalposts. He's saying LLMs make mistakes and therefore they aren't intelligent and aren't useful. Obviously that's wrong: humans make mistakes and are usually considered both intelligent and useful.
I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long division) then there won't be mistakes, but you don't need intelligence (you just follow the algorithm). But if you need intelligence (because no algorithm exists), then there will always be mistakes.
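To illustrate the long-division point, here is a minimal sketch (in Python, purely for illustration) of division as a mechanical procedure: every step is dictated by the algorithm, so no judgment is involved and there is nothing to get wrong as long as the steps are followed.

```python
# Minimal sketch: long division as a purely mechanical procedure.
# Every step is determined by the algorithm, so there is nothing to
# "get wrong" as long as the steps are followed.

def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Digit-by-digit long division; returns (quotient, remainder)."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        q = remainder // divisor                  # how many times the divisor fits
        remainder -= q * divisor
        quotient = quotient * 10 + q
    return quotient, remainder

print(long_division(7319, 6))  # -> (1219, 5)
```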
Ask ChatGPT to answer something that no one on the internet has done before and it will struggle to come up with a solution.
But what's critical, and what I think is missing, is a knowledge representation of events in space-time. We need something more fundamental than text or pixels: something that captures space and transformations in space itself.
This is not correct. It does not explain creativity at all; creativity cannot be based solely on pattern matching. I'm not saying no AI is creative, but this logic does not explain creativity.
Your unsolved problems would likely involve the extremes of the maps you currently think in terms of. Maps become less useful as you approach the undefined extreme conditions within them (a famous example is us humans ourselves, which is why so many unsolved challenges, to varying degrees of obviousness, concern our psyche and physiology: world peace, cancer, and so on), and I assume useful pattern matching becomes similarly less effective. Data to pattern-match against is collected and classified according to a preexisting model; if the model is wrong (which it is), the data may lead to spurious matches with wrong or nonsensical answers. Furthermore, if the answer has to be expressed in terms of a new system, another fallible map hitherto unfamiliar to the human mind, then pattern-matching based on preexisting products of that very mind is unlikely to produce one.
If yes, it seems to me that LLMs should be much better at that than humans, and I believe frontier models like o3 might already be; we are just starting to use them for these tasks. Give it a couple more years before drawing any conclusions.
What is intelligence?
Is it reacting to the environment? No, a thermostat can do that.
Is it being logical? No, the simplest program can do that.
Is it creating something never seen before? No, a random number generator can do that.
We can even combine all of the above into a program and it still wouldn't be intelligent or creative. So what's the missing piece? The missing piece is pattern-matching.
Pattern-matching is taking a concrete input (a series of numbers or a video stream) and extracting abstract concepts and relationships. We can even nest patterns: we can match a pattern of concepts, each of which is composed of sub-patterns, and so on.
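As a toy illustration of that nesting (the symbols and the "peak" pattern are made up for the example): first match low-level sub-patterns in a concrete number sequence (each step is up, down, or same), then match a higher-level pattern over those abstractions.

```python
# Toy sketch of nesting: a concrete input (numbers) is first matched
# against low-level sub-patterns (rising / falling steps), and the
# resulting abstract symbols are then matched against a higher-level
# pattern ("a peak": some rises followed by some falls).

import re

def abstract_steps(xs: list[float]) -> str:
    """Map each concrete step to an abstract symbol: U(p), D(own), S(ame)."""
    return "".join(
        "U" if b > a else "D" if b < a else "S"
        for a, b in zip(xs, xs[1:])
    )

def is_peak(xs: list[float]) -> bool:
    """Match a pattern of patterns: one or more rises, then one or more falls."""
    return re.fullmatch(r"U+D+", abstract_steps(xs)) is not None

print(is_peak([1, 3, 7, 4, 2]))  # True: rises then falls
print(is_peak([1, 2, 3, 4, 5]))  # False: only rises
```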
Creativity is just pattern matching the output of a pseudo-random generator against a critique pattern (is this output good?). When an artist creates something, they are constantly pattern matching against their own internal critic and the existing art out there. They are trying to find something that matches the beauty/impact of the art they've seen, while matching their own aesthetic, and not reproducing an existing pattern. It's pattern-matching all the way down!
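A minimal sketch of that loop, with a deliberately trivial critic: generate pseudo-random candidates, reject reproductions of existing work, and keep whatever the critic scores highest. The "aesthetic" here (reward alternation) is an arbitrary stand-in, not a claim about real critics.

```python
# Hedged sketch of "creativity as generate-and-test": pseudo-random
# proposals are pattern-matched against a critic (is this output good?)
# and against existing work (is it new?). The critic is deliberately
# trivial; the point is only the shape of the loop.

import random

EXISTING_WORKS = {"ABAB", "AABB"}          # stand-in for "art already out there"

def critic(candidate: str) -> int:
    """Toy aesthetic: reward alternation between adjacent characters."""
    return sum(a != b for a, b in zip(candidate, candidate[1:]))

def create(n_tries: int = 1000) -> str:
    best, best_score = "", float("-inf")
    for _ in range(n_tries):
        candidate = "".join(random.choice("AB") for _ in range(4))
        if candidate in EXISTING_WORKS:     # reject mere reproduction
            continue
        score = critic(candidate)           # match against the internal critic
        if score > best_score:
            best, best_score = candidate, score
    return best

print(create())  # e.g. "BABA": novel, and scores well under the toy critic
```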
Science is just a special form of creativity. You are trying to create a model that reproduces experimental outcomes. How do you do that? You absorb the existing models and experiments (which involves pattern-matching to compress into abstract concepts), and then you generate new models that fit the data.
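The same loop, constrained by data instead of taste: propose candidate models and keep whichever best reproduces the experimental outcomes. The measurements and candidate models below are made up purely for illustration.

```python
# Minimal sketch of "science as creativity constrained by data":
# propose candidate models, keep whichever best reproduces the
# (made-up) experimental outcomes.

data = [(1, 1.0), (2, 4.1), (3, 8.9), (4, 16.2)]   # hypothetical measurements

candidate_models = {
    "linear":    lambda x: 4 * x - 3,
    "quadratic": lambda x: x ** 2,
    "cubic":     lambda x: x ** 3 / 4,
}

def fit_error(model) -> float:
    """How badly the model's predictions miss the observed outcomes."""
    return sum((model(x) - y) ** 2 for x, y in data)

best = min(candidate_models, key=lambda name: fit_error(candidate_models[name]))
print(best)  # -> "quadratic"
```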
Pattern-matching unlocks AI, which is why LLMs have been so successful. Obviously, you still need logic, inference, etc., but that's the easy part. Pattern-matching was the last missing piece!