zlacker

[parent] [thread] 4 comments
1. reustl+(OP)[view] [source] 2023-11-22 06:33:27
I'm probably reading too much into it, but interesting that he specifically called out maximizing safety.
replies(3): >>xigenc+C1 >>dragon+e3 >>jq-r+75
2. xigenc+C1[view] [source] 2023-11-22 06:43:19
>>reustl+(OP)
Sam does believe in safety. He also knows that there is a first-mover advantage when it comes to setting societal expectations and that you can’t build safe AI by not building AI.
3. dragon+e3[view] [source] 2023-11-22 06:53:30
>>reustl+(OP)
"Safety" has been the pretext for Altman's lobbying for regulatory barriers against new entrants in the field, protecting incumbents. OpenAI's nonprofit charter provides perfect PR cover for what amounts to industry lobbying to shield a narrow set of early leaders and obstruct any other competition. Altman was the man executing that mission, which is why an OpenAI led by Sam was a valuable asset for Microsoft to preserve.
4. jq-r+75[view] [source] 2023-11-22 07:05:00
>>reustl+(OP)
That’s just the buzzword of the week, devoid of any real meaning. If he had written this years ago, it would’ve been “leveraging synergies”.
replies(1): >>astran+Pd
5. astran+Pd[view] [source] [discussion] 2023-11-22 08:10:24
>>jq-r+75
Shear is a genuine member of the AI safety rationalism cult, to the point he's an Aella reply guy and probably goes to her orgies.

(It's a Berkeley cult so of course it's got those.)
