zlacker

1. reduce+(OP) 2024-05-17 23:53:04
> I just refuse to believe that people whose opinions and decisions actually matter are influenced by such fears

Well, at least I'm glad you admit it's due to stubbornness and an unwillingness to change your beliefs when confronted with evidence.

Sam Altman ("Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity"), Ilya Sutskever, Geoffrey Hinton, Yoshua Bengio, Jan Leike, Paul Christiano (creator of RLHF), Dario Amodei (Anthropic), Demis Hassabis (Google DeepMind) all believe AGI poses an existential risk to humanity.

replies(1): >>123yaw+A4
2. 123yaw+A4 2024-05-18 00:40:56
>>reduce+(OP)
is it not ironic to list Sam Altman ("Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity") among those paragons of virtue, half a year after the internal struggle between the safetyists and the pragmatists at openai was revealed? is it not naive to assume the other virtuous men you list are not a minority, given that the vast majority of openai sided with Sam, revealing themselves as pragmatists who valued their lavish salaries over mitigating the supposed x-risks? and is it not especially ironic to present Sam as a paragon of virtue here, in the context of a safety cultist leaving openai because he realized what you do not: it's all bullshit, smoke and mirrors meant to misdirect the press and convince politicians to smother the smaller competitors (the ones not backed by billions of VC money and unable to lobby for concessions) with regoolations.

>But over the past few years, safety culture and processes have taken a backseat to shiny products.

you know what else happened over the past few years? openai started to make money. so while sama was making soundbites for headlines about the existential threat of AI, internally all the useful idiots had already been told to shut the fuck up.
