zlacker

[return to "OpenAI LP"]
1. jpdus+ew 2019-03-11 19:18:50
>>gdb+(OP)
Wow. Screw non-profit, we want to get rich.

Sorry guys, but until now you were probably able to attract talent that is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally fine to be an AI startup. You just shouldn't pretend to be a non-profit then...

2. ilyasu+EE 2019-03-11 20:23:49
>>jpdus+ew
We have to raise a lot of money to get a lot of compute, so we've created the best structure we could that allows us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.
3. not_ai+tH 2019-03-11 20:43:25
>>ilyasu+EE
What makes you think AGI is even possible? Most current 'AI' is pattern recognition/pattern generation. I'm skeptical of the claim that AGI is even possible, but I am confident that pattern recognition will be tremendously useful.
4. tomp+bP 2019-03-11 21:38:59
>>not_ai+tH
Unless you assume some magic/soul/etc., the human brain is proof that there exists a physically realizable algorithm that learns to be a General Intelligence, and that it can run on physically realizable hardware.
5. sriniv+IU 2019-03-11 22:20:11
>>tomp+bP
Yes, I assume a magic/soul/etc., and I believe that the human brain is not stand-alone in creating intelligence. Check out this exciting video for a discussion of how 'thinking' can happen outside the brain. https://neurips.cc/Conferences/2018/Schedule?showEvent=12487