zlacker

[parent] [thread] 8 comments
1. andrew+(OP)[view] [source] 2023-07-05 18:43:15
Why is Sam Altman pursuing superintelligence if he also says AI could destroy humanity?
replies(4): >>eutrop+54 >>mfitto+3c >>Al0neS+re >>loandb+8u
2. eutrop+54[view] [source] 2023-07-05 18:59:42
>>andrew+(OP)
...something something good guy with AGI is the only way to stop bad guy with AGI.

Less glibly: anyone with a horse in this race wants theirs to win. Dropping out doesn't make others stop trying, and arguably the only scalable way to prevent others from making and using unaligned AGI is to develop an aligned AGI first.

Also, having AGI tech would be incredibly, stupidly profitable. And so if other people are going to try to make it anyway: why should you in particular stop? In a one-shot prisoner's dilemma, "defect" is the dominant strategy; cooperation only becomes stable with repeated play and enforceable commitments, neither of which this race has.
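A toy check of that dominance claim (the payoff numbers are the conventional illustrative ones, not anything from the thread):

```python
# One-shot prisoner's dilemma, row player's payoffs, with the standard
# ordering T (temptation) > R (reward) > P (punishment) > S (sucker).
PAYOFF = {
    ("defect", "cooperate"): 5,    # T: I defect, they cooperate
    ("cooperate", "cooperate"): 3, # R: mutual cooperation
    ("defect", "defect"): 1,       # P: mutual defection
    ("cooperate", "defect"): 0,    # S: I cooperate, they defect
}

def best_response(opponent_move):
    """Return my payoff-maximizing move against a fixed opponent move."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFF[(m, opponent_move)])

# Defect is dominant: it's the best response whatever the opponent does.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```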

3. mfitto+3c[view] [source] 2023-07-05 19:33:00
>>andrew+(OP)
Guessing, but he likely figures someone else will pursue it anyway, frets about it, and thinks "at least I can do something about it if I'm in charge."
4. Al0neS+re[view] [source] 2023-07-05 19:44:30
>>andrew+(OP)
Sam Altman, much like the LW crowd, is an evangelical preacher. He uses anxiety as a front when, in reality, he's just telling us about his hopes and dreams.
5. loandb+8u[view] [source] 2023-07-05 20:58:52
>>andrew+(OP)
He answered that question in interviews many times.

1. AGI has a huge upside: if it's properly aligned, it will bring about a de facto utopia.

2. OpenAI stopping development won't make others stop. It's better if OpenAI creates AGI first, because its founders set up the organization with the goal of benefiting all humanity.

replies(1): >>dougb5+EV
6. dougb5+EV[view] [source] [discussion] 2023-07-05 23:30:14
>>loandb+8u
I wish he'd go into more detail on point 1 than he has so far. It's never been clear to me how AI gets us to the utopia he envisions. Looking around, the biggest problems seem to be people problems at the core, not technology problems. Maybe technology can help us sort out some of the people problems, but it seems to be causing new people problems faster than it is solving them.
replies(1): >>loandb+lv1
7. loandb+lv1[view] [source] [discussion] 2023-07-06 03:48:12
>>dougb5+EV
AGI will be good at producing goods and services. It will end economic scarcity.
replies(2): >>dougb5+7L1 >>antifa+on2
8. dougb5+7L1[view] [source] [discussion] 2023-07-06 06:03:44
>>loandb+lv1
To buy into that bold prediction I would expect to see evidence that current AI-based technologies are reducing economic scarcity already, and that they're moving us toward Sam's utopia in obvious, measurable ways. Maybe they are -- I just haven't seen the evidence, while I have seen plenty of evidence of harms. (Don't get me wrong, the capabilities of LLMs are mind-boggling, and they clearly make all kinds of knowledge work more efficient. But there's nothing about AI, or any technological efficiency, that guarantees that its fruits are distributed in a way that relieves scarcity rather than exacerbates it.)
9. antifa+on2[view] [source] [discussion] 2023-07-06 11:34:08
>>loandb+lv1
But can it end corporation-enforced artificial scarcity?