zlacker

[parent] [thread] 6 comments
1. brian_+(OP)[view] [source] 2019-03-11 21:08:10
What makes you so sure that what you're doing isn't pattern recognition?

When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? 10 different people would probably give you 10 different answers, and few of them would mention that the way you love your apple is pretty distinct from the way you love your spouse. And yet, even though they failed to mention it, they wouldn't misunderstand you when you did mention loving some apple!

And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated and hard-to-describe patterns, for sure, but the RNN learns that it can't say "the ball run" in just the same way you learned to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
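
To make concrete what that kind of learning is, here's a minimal sketch of a character-level model trained only to predict the next character (assuming PyTorch; the toy corpus, model size, and training loop are invented for illustration, not taken from any particular paper):

    # Toy character-level language model. The only objective is "predict the
    # next character"; no grammar rule is ever stated explicitly.
    import torch
    import torch.nn as nn

    corpus = "the ball runs. the balls run. the dog runs. the dogs run. "
    chars = sorted(set(corpus))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in corpus])

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x):
            h, _ = self.rnn(self.embed(x))
            return self.head(h)

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # Train on the sequence shifted by one: given chars [0..n-1], predict [1..n].
    for step in range(200):
        x = data[:-1].unsqueeze(0)
        y = data[1:].unsqueeze(0)
        logits = model(x)
        loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    def nll(text):
        # Average next-character loss the model assigns to a string.
        ids = torch.tensor([stoi[c] for c in text])
        with torch.no_grad():
            logits = model(ids[:-1].unsqueeze(0))
            return loss_fn(logits.reshape(-1, len(chars)), ids[1:].reshape(-1)).item()

    # After training on even this tiny corpus, the ungrammatical string will
    # usually get a higher (worse) loss than the grammatical one, without any
    # agreement rule ever being written down.
    print(nll("the ball runs. "), nll("the ball run. "))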

If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There's tactics, strategy(!); surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?

What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...

replies(3): >>codeki+d5 >>sriniv+Z7 >>charli+q8
2. codeki+d5[view] [source] 2019-03-11 21:48:06
>>brian_+(OP)
> And it's not just vocabulary, the successes of RNNs show that grammar is also mostly patterns.

The shapes of the resulting word strings do indeed form patterns. However, matching a pattern is, in fact, different from being able to knowledgeably generate those patterns so they make sense in the context of a human conversation. It has been said that mathematics is so successful because it is contentless. This is a problem for areas that cannot be treated this way.

Go can be described in a contentless (mathematical) way, so its success is not surprising (maybe to some it was).

It is on those things that cannot be described in this manner that 'AGI' (Edit: 'AGI' based on current DL) will consistently fall down. You can see it in the datasets... try to imagine creating a dataset for the machine to 'feel angry'. What are you going to do... show it pictures of pissed-off people? This may seem like a silly argument at first, but try to think of other things that might be characteristic of 'GI' that it would be difficult to envision creating a training set for.

3. sriniv+Z7[view] [source] 2019-03-11 22:07:08
>>brian_+(OP)
You have pointed to examples where the tasks are pattern recognition. I certainly agree that many tasks humans perform are pattern recognition. But my point is that not ALL tasks are: intelligence involves pattern recognition, but not all of intelligence is pattern recognition.

Pattern recognition works when there is a pattern (repetitive structure). But in the case of outliers, there is no repetitive structure and hence there is no pattern. For example, what is the pattern when a kid first learns 1+1=2, or why 'B' must come after 'A'? It is taught as a rule (or axiom or abstraction) on top of which higher-level patterns can be built. So, I believe that while pattern recognition is useful for intelligence, it is not all there is to intelligence.

replies(2): >>verma7+6N >>brian_+z83
4. charli+q8[view] [source] 2019-03-11 22:09:43
>>brian_+(OP)
Anyone who argues AGI is possible intrinsically believes the universe is finite and discretized.

I have found quantum ideas and observations too unnerving to accept a finite and discretized universe.

Edit: this is in response to Go, or StarCraft, or anything that is boxed off -- these AIs will eventually outperform humans on a grand scale, but the existence of 'constants' or being in a sandbox immediately precludes the results from speaking to AI's generalizability.

replies(1): >>brian_+NI2
5. verma7+6N[view] [source] [discussion] 2019-03-12 06:34:37
>>sriniv+Z7
Aren't axioms just training data that you feed to the model?
6. brian_+NI2[view] [source] [discussion] 2019-03-12 21:37:21
>>charli+q8
I'm not sure what you're saying here.

Your arguments seem to also apply to humans, and clearly humans have figured out how to be intelligent in this universe.

Or maybe you're saying that brains are taking advantage of something at the quantum level? Computers are unable to efficiently simulate quantum effects, so AGI is too difficult to be feasible?

I admit that's possible, but it's a strong claim and I don't see why it's more likely than the idea that brains are very well structured neural networks which we're slowly making better and better approximations of.

7. brian_+z83[view] [source] [discussion] 2019-03-13 01:32:51
>>sriniv+Z7
What I'm trying to point out is that if you had asked someone whether any of those examples were "pattern matching" prior to the discovery that neural networks were so good at them, very reasonable and knowledgeable people would have said no. They would have said that generating sentences which make sense is more than any system _which simply predicted the next character in a sequence of characters_ could do.

Given this track record, I have learned to be suspicious of that part of my brain which reflexively says "no, I'm doing something more than pattern matching".

It sure feels like there's something more. It feels like what I do when I program or think about solutions to climate change is more than pattern matching. But I don't understand how you can be so sure that it isn't.
