When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? Ten different people would probably give you ten different answers, and few of them would mention that the way you love your apple is quite distinct from the way you love your spouse. And yet, even though they failed to mention it, they wouldn't misunderstand you when you mentioned loving an apple!
And it's not just vocabulary: the success of RNNs shows that grammar is also mostly patterns. Complicated, hard-to-describe patterns, for sure, but an RNN learns not to say "the ball run" in just the same way you learn to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
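To make that concrete, here's a toy sketch of the idea (PyTorch assumed; the corpus, model sizes, and training loop are all made up for illustration): train a tiny RNN language model on a handful of sentences and it should end up scoring "the ball runs" above "the ball run" without ever being handed a grammar rule.

    # Toy sketch: a language model picks up agreement purely from exposure.
    # Nothing below encodes a grammar rule; the corpus and sizes are illustrative.
    import torch
    import torch.nn as nn

    corpus = "the ball runs . the dog runs . the ball rolls . the dogs run ."
    words = sorted(set(corpus.split()))
    idx = {w: i for i, w in enumerate(words)}
    data = torch.tensor([idx[w] for w in corpus.split()])

    emb = nn.Embedding(len(words), 16)
    rnn = nn.RNN(16, 32, batch_first=True)
    out = nn.Linear(32, len(words))
    opt = torch.optim.Adam(
        [*emb.parameters(), *rnn.parameters(), *out.parameters()], lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Train to predict each next word from the words before it.
    for _ in range(300):
        h, _ = rnn(emb(data[:-1]).unsqueeze(0))
        loss = loss_fn(out(h.squeeze(0)), data[1:])
        opt.zero_grad(); loss.backward(); opt.step()

    def score(sentence):
        # Log-probability the model assigns to a word sequence.
        toks = torch.tensor([idx[w] for w in sentence.split()])
        h, _ = rnn(emb(toks[:-1]).unsqueeze(0))
        logp = torch.log_softmax(out(h.squeeze(0)), dim=-1)
        return logp[range(len(toks) - 1), toks[1:]].sum().item()

    print(score("the ball runs"), score("the ball run"))
    # Expect the first to be higher: the model was never told about
    # subject-verb agreement, it just saw enough examples.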
If you hadn't heard of AlphaGo, you probably wouldn't agree that Go was "just" pattern matching. There are tactics, strategy(!), surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?
What does your expensive database consultant do? Do they really do anything more than look at some charts and match them against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...
The shapes of the resulting word strings do indeed form patterns. However, matching a pattern is different from being able to knowledgeably generate those patterns so they make sense in the context of a human conversation. It has been said that mathematics is so successful because it is contentless. This is a problem for areas that cannot be treated this way.
Go can be described in a contentless (mathematical) way, so success there is not surprising (though maybe to some it was).
It is on those things that cannot be described in this manner that 'AGI' (Edit: 'AGI' based on current DL) will consistently fall down. You can see it in the datasets... try to imagine creating a dataset for a machine to 'feel angry'. What are you going to do... show it pictures of pissed-off people? This may seem like a silly argument at first, but try to think of other things that might be characteristic of 'GI' for which it would be difficult to envision creating a training set.
Pattern recognition works when there is a pattern (repetitive structure). But in the case of outliers, there is no repetitive structure and hence no pattern. For example, what is the pattern when a kid first learns 1+1=2? Or that 'B' must come after 'A'? It is taught as a rule (or axiom, or abstraction) on which higher-level patterns can be built. So I believe that while pattern recognition is useful for intelligence, it is not all there is to intelligence.
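(For what it's worth, the 1+1=2 case really is rule-like rather than statistical: in a formal system it follows from the definitions alone. A one-line sketch in Lean 4, assuming nothing beyond the core library:

    -- 1 + 1 = 2 holds by unfolding the definition of + on Nat;
    -- no examples, no data, no pattern to extract.
    example : 1 + 1 = 2 := rfl

No training set of sums is involved; the statement reduces to true by definition.)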
I have found quantum ideas and observations too unnerving to accept a finite, discretized universe.
Edit: this is in response to Go, or StarCraft, or anything that is boxed off -- these AIs will eventually outperform humans on a grand scale, but the existence of 'constants', or being in a sandbox at all, immediately precludes the results from speaking to AI's generalizability.
Your arguments seem to also apply to humans, and clearly humans have figured out how to be intelligent in this universe.
Or maybe you're saying that brains are taking advantage of something at the quantum level? Classical computers are unable to efficiently simulate quantum effects, so AGI is too difficult to be feasible?
I admit that's possible, but it's a strong claim and I don't see why it's more likely than the idea that brains are very well structured neural networks which we're slowly making better and better approximations of.
Given this track record, I have learned to be suspicious of the part of my brain that reflexively says, "no, I'm doing something more than pattern matching."
It sure feels like there's something more. It feels like what I do when I program or think about solutions to climate change is more than pattern matching. But I don't understand how you can be so sure that it isn't.