zlacker

[return to "OpenAI LP"]
1. jpdus+ew[view] [source] 2019-03-11 19:18:50
>>gdb+(OP)
Wow. Screw non-profit, we want to get rich.

Sorry guys, but before this you were probably able to get talent that is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally ok to be an AI startup. You just shouldn't pretend to be a non-profit then...

◧◩
2. ilyasu+EE[view] [source] 2019-03-11 20:23:49
>>jpdus+ew
We have to raise a lot of money to get a lot of compute, so we've created the best structure possible that will allow us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.
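
For concreteness, here is the arithmetic of the cap as a minimal sketch. The dollar figures are made up and the helper function is hypothetical; the assumption (per the announced structure) is that returns above the cap flow back to the nonprofit:

    def capped_return(investment, gross_proceeds, cap_multiple=100):
        # Investors keep at most cap_multiple times what they put in;
        # anything beyond the cap flows back to the nonprofit.
        cap = investment * cap_multiple
        to_investor = min(gross_proceeds, cap)
        to_nonprofit = max(gross_proceeds - cap, 0.0)
        return to_investor, to_nonprofit

    # Hypothetical numbers: $10M in, $5B of proceeds out.
    print(capped_return(10e6, 5e9))  # (1000000000.0, 4000000000.0)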
◧◩◪
3. not_ai+tH[view] [source] 2019-03-11 20:43:25
>>ilyasu+EE
What makes you think AGI is even possible? Most current 'AI' is pattern recognition/pattern generation. I'm skeptical that AGI is even possible, but I am confident that pattern recognition will be tremendously useful.
◧◩◪◨
4. brian_+0L[view] [source] 2019-03-11 21:08:10
>>not_ai+tH
What makes you so sure that what you're doing isn't pattern recognition?

When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? 10 different people would probably give you 10 different answers, and few of them would mention that the way you love your apple is pretty distinct from the way you love your spouse. And yet, even though they failed to mention it, they wouldn't misunderstand you when you did mention loving some apple!

And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated and hard-to-describe patterns, for sure, but the RNN learns it can't say "the ball run" in just the same way you learned to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
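
To make "sounds right from enough examples" concrete, here's a toy sketch. It's just a bigram counter rather than an RNN, and the corpus is made up, but it shows grammaticality falling out of nothing except counted examples:

    from collections import Counter

    # Tiny made-up corpus; a real model would see millions of sentences.
    corpus = [
        "the ball runs", "the dog runs", "the cat runs",
        "the balls run", "the dogs run", "the cats run",
    ]

    bigrams = Counter()
    for sentence in corpus:
        words = sentence.split()
        bigrams.update(zip(words, words[1:]))

    def seen_bigrams(sentence):
        # How many adjacent word pairs have ever been observed?
        words = sentence.split()
        return sum((a, b) in bigrams for a, b in zip(words, words[1:]))

    print(seen_bigrams("the ball runs"))  # 2 -- every pair is familiar
    print(seen_bigrams("the ball run"))   # 1 -- ("ball", "run") sounds wrong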

If you hadn't heard of AlphaGo, you probably wouldn't agree that Go was "just" pattern matching. There's tactics, strategy(!); surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?

What does your expensive database consultant do? Do they really do anything more than look at some charts and match them against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...

◧◩◪◨⬒
5. charli+qT[view] [source] 2019-03-11 22:09:43
>>brian_+0L
Anyone who argues AGI is possible implicitly believes the universe is finite and discretized.

I have found quantum ideas and observations too unnerving to accept a finite and discretized universe.

Edit: this is in response to Go, or StarCraft, or anything that is boxed off -- these AIs will eventually outperform humans on a grand scale, but the existence of 'constants' or being in a sandbox immediately precludes the results from speaking to AI's generalizability.

◧◩◪◨⬒⬓
6. brian_+Nt3[view] [source] 2019-03-12 21:37:21
>>charli+qT
I'm not sure what you're saying here.

Your arguments seem to also apply to humans, and clearly humans have figured out how to be intelligent in this universe.

Or maybe you're saying that brains are taking advantage of something at the quantum level? Computers are unable to efficiently simulate quantum effects, so AGI is too difficult to be feasible?
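
(For scale: a brute-force classical simulation of n qubits has to store 2^n complex amplitudes, so the memory cost blows up exponentially. A quick back-of-the-envelope:)

    # Full state vector of n qubits: 2**n amplitudes at 16 bytes
    # each (complex128), i.e. 16 * 2**n bytes in total.
    for n in (10, 30, 50):
        print(f"{n} qubits: {16 * 2**n / 1e9:.3g} GB")
    # 10 qubits fit anywhere; 30 need ~17 GB; 50 need ~18 million GB.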

I admit that's possible, but it's a strong claim, and I don't see why it's more likely than the idea that brains are very well-structured neural networks which we're slowly making better and better approximations of.
