zlacker

[return to "Introducing Superalignment"]
1. andrew+qs[view] [source] 2023-07-05 18:43:15
>>tim_sw+(OP)
Why is Sam Altman pursuing superintelligence if he also says AI could destroy humanity?
2. loandb+yW[view] [source] 2023-07-05 20:58:52
>>andrew+qs
He's answered that question in interviews many times.

1. AGI has a huge upside. If it's properly aligned, it will bring about a de facto utopia.

2. OpenAI stopping development won't make others stop developing it. It's better if OpenAI creates AGI first, because its founders set up the organization with the goal of benefiting all humanity.

3. dougb5+4o1[view] [source] 2023-07-05 23:30:14
>>loandb+yW
I wish he'd go into more detail on point 1 than he has so far. It's never been clear to me how AI gets us to the utopia he envisions. Looking around, the biggest problems seem to be people problems at their core, not technology problems. Maybe technology can help us sort out some of the people problems, but it seems to be creating new people problems faster than it solves old ones.
4. loandb+LX1[view] [source] 2023-07-06 03:48:12
>>dougb5+4o1
AGI will be good at producing goods and services. It will end economic scarcity.
5. antifa+OP2[view] [source] 2023-07-06 11:34:08
>>loandb+LX1
But can it end corporation-enforced artificial scarcity?