zlacker

[return to "Introducing Superalignment"]
1. andrew+qs[view] [source] 2023-07-05 18:43:15
>>tim_sw+(OP)
Why is Sam Altman pursuing superintelligence if he also says AI could destroy humanity?
◧◩
2. loandb+yW[view] [source] 2023-07-05 20:58:52
>>andrew+qs
He has answered that question in interviews many times.

1. AGI has a huge upside. If it's properly aligned, it will bring about a de facto utopia.

2. OpenAI stopping development won't make others stop. It's better if OpenAI creates AGI first, because its founders set up the organization with the goal of benefiting all humanity.

◧◩◪
3. dougb5+4o1[view] [source] 2023-07-05 23:30:14
>>loandb+yW
I wish he'd go into more detail on point 1 than he has so far. It's never been clear to me how AI gets us to the utopia he envisions. Looking around, the biggest problems seem to be people problems at the core, not technology problems. Maybe technology can help us sort out some of the people problems, but it seems to be causing new people problems faster than it is solving them.
◧◩◪◨
4. loandb+LX1[view] [source] 2023-07-06 03:48:12
>>dougb5+4o1
AGI will be good at producing goods and services. It will end economic scarcity.
◧◩◪◨⬒
5. dougb5+xd2[view] [source] 2023-07-06 06:03:44
>>loandb+LX1
To buy into that bold prediction I would expect to see evidence that current AI-based technologies are reducing economic scarcity already, and that they're moving us toward Sam's utopia in obvious, measurable ways. Maybe they are -- I just haven't seen the evidence, while I have seen plenty of evidence of harms. (Don't get me wrong, the capabilities of LLMs are mind-boggling, and they clearly make all kinds of knowledge work more efficient. But there's nothing about AI, or any technological efficiency, that guarantees that its fruits are distributed in a way that relieves scarcity rather than exacerbates it.)