Sam Altman, Greg Brockman and others to join Microsoft

1. 9dev+w9 2023-11-20 08:37:33
>>JimDab+(OP)
I don’t quite buy your cyberpunk utopia where the Megacorp finally rids us of those pesky ethical qualms (or “shackles”, as you phrased it). Microsoft can now proceed without the guidance of a council that actually has humanity’s interests in mind, not only those of Microsoft shareholders. I don’t know whether all that caution will turn out to have been necessary, but I guess we’re just gleefully heading into whatever lies ahead without any concern whatsoever, and will learn the hard way.

It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, by driving those they attempted to slow down into the arms of people with more money and fewer morals. Well.

2. Legend+Pa 2023-11-20 08:42:31
>>9dev+w9
OpenAI's idea of humanity's best interests was like a Catholic mom's. Fewer morals are okay by me.
3. rdtsc+Cc 2023-11-20 08:51:18
>>Legend+Pa
> OpenAI's idea of humanity's best interests was like a Catholic mom's

How do you mean? I don’t see what OpenAI has in common with Catholicism or motherhood.

4. ric2b+8f 2023-11-20 09:04:22
>>rdtsc+Cc
They basically defined AI safety as "AI shouldn't say bad words or tell people how to do drugs" instead of actually making sure that a sufficiently intelligent AI doesn't go rogue against humanity's interests.
5. SpicyL+9g 2023-11-20 09:11:57
>>ric2b+8f
I'm not sure where you're getting that definition from. They have a team working on exactly the problem you're describing. (https://openai.com/blog/introducing-superalignment)
6. timeon+Qj 2023-11-20 09:33:12
>>SpicyL+9g
> getting that definition from

That was not about OpenAI's actual definition but about the definition implied by user Legend2440 here >>38344867
