zlacker

[return to "Introducing Superalignment"]
1. andrew+qs[view] [source] 2023-07-05 18:43:15
>>tim_sw+(OP)
Why is Sam Altman pursuing superintelligence if he also says AI could destroy humanity?
2. eutrop+vw[view] [source] 2023-07-05 18:59:42
>>andrew+qs
...something something good guy with AGI is the only way to stop bad guy with AGI.

Less glibly: anyone with a horse in this race wants theirs to win. Dropping out doesn't make others stop trying, and arguably the only scalable way to prevent others from making and using unaligned AGI is to develop an aligned AGI first.

Also, having AGI tech would be incredibly, stupidly profitable. So if other people are going to try to build it anyway, why should you in particular stop? In a one-shot prisoner's dilemma, "defect" is the dominant move no matter what the other player does; cooperation only becomes rational with repeated play or enforceable commitments.
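
To make the "defect dominates" point concrete, here's a minimal sketch with the usual textbook payoff numbers (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 when one side defects unilaterally); the exact values are just illustrative assumptions, not anything from the post:

    # One-shot prisoner's dilemma with conventional illustrative payoffs.
    # Each entry maps (my_move, their_move) -> (my_payoff, their_payoff).
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    # For each move the other player might make, find my highest-paying response.
    for their_move in ("cooperate", "defect"):
        best = max(("cooperate", "defect"),
                   key=lambda my_move: payoffs[(my_move, their_move)][0])
        print(f"if they {their_move}, my best response is to {best}")

Either way the other player moves, defecting pays more, which is exactly why unilateral restraint is such a hard sell in the single-shot framing.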
