zlacker

[return to "Introducing Superalignment"]
1. andrew+qs[view] [source] 2023-07-05 18:43:15
>>tim_sw+(OP)
Why is Sam Altman pursuing superintelligence if he also says AI could destroy humanity?
2. loandb+yW[view] [source] 2023-07-05 20:58:52
>>andrew+qs
He answered that question in interviews many times.

1. AGI has a huge upside: if it's properly aligned, it will bring about a de facto utopia.
2. OpenAI stopping development won't make others stop developing it. It's better if OpenAI creates AGI first, because its founders set up the organization with the goal of benefiting all humanity.
