zlacker

[return to "OpenAI LP"]
1. jpdus+ew[view] [source] 2019-03-11 19:18:50
>>gdb+(OP)
Wow. Screw non-profit, we want to get rich.

Sorry guys, but before this you were probably able to attract talent that is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally fine to be an AI startup. You just shouldn't pretend to be a non-profit then...

2. ilyasu+EE[view] [source] 2019-03-11 20:23:49
>>jpdus+ew
We have to raise a lot of money to get a lot of compute, so we've created the best structure possible that will allow us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.
3. not_ai+tH[view] [source] 2019-03-11 20:43:25
>>ilyasu+EE
What makes you think AGI is even possible? Most current 'AI' is pattern recognition/pattern generation. I'm skeptical of the claims that AGI is even possible, but I am confident that pattern recognition will be tremendously useful.
4. brian_+0L[view] [source] 2019-03-11 21:08:10
>>not_ai+tH
What makes you so sure that what you're doing isn't pattern recognition?

When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? Ten different people would probably give you ten different answers, and few of them would mention that the way you love an apple is pretty distinct from the way you love your spouse. Yet even though they failed to mention it, they wouldn't misunderstand you if you did mention loving some apple!

And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated and hard-to-describe patterns, for sure, but the RNN learns that it can't say "the ball run" in just the same way you learned to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
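(To make "learning from examples" concrete: here's a toy sketch of next-character prediction. It's just a bigram counter over a made-up three-sentence corpus, not a real RNN, but it's the same principle — the model never gets a grammar rule, only statistics of what tends to follow what.)

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = "the ball runs. the dog runs. the ball rolls."

# Count which character follows each character in the training text.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(ch):
    """Return the most frequent character seen after `ch` in the corpus."""
    return follows[ch].most_common(1)[0][0]

print(predict_next("t"))  # 'h' -- every 't' in this corpus is followed by 'h'
```

No rule "t is followed by h" was ever written down; the preference falls out of the example counts, which is the (vastly simplified) sense in which an RNN "learns" that "the ball runs" sounds right.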

If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There are tactics, strategy(!); surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?

What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...

5. sriniv+ZS[view] [source] 2019-03-11 22:07:08
>>brian_+0L
You have pointed to examples where the tasks are pattern recognition. I certainly agree that many tasks humans perform are pattern recognition. But my point is that not ALL tasks are: intelligence involves pattern recognition, but not all of intelligence is pattern recognition.

Pattern recognition works when there is a pattern (repetitive structure). But with outliers there is no repetitive structure, and hence no pattern. For example, what is the pattern when a kid first learns 1+1=2? Or why must 'B' come after 'A'? It is taught as a rule (or axiom, or abstraction) on top of which higher-level patterns can be built. So I believe that while pattern recognition is useful for intelligence, it is not all there is to intelligence.

6. brian_+zT3[view] [source] 2019-03-13 01:32:51
>>sriniv+ZS
What I'm trying to point out is that if you had asked someone whether any of those examples were "pattern matching" prior to the discovery that neural networks were so good at them, very reasonable and knowledgeable people would have said no. They would have said that generating sentences which make sense is more than any system _which simply predicted the next character in a sequence of characters_ could do.

Given this track record, I have learned to be suspicious of the part of my brain which reflexively says "no, I'm doing something more than pattern matching."

It sure feels like there's something more. It feels like what I do when I program or think about solutions to climate change is more than pattern matching. But I don't understand how you can be so sure that it isn't.
