zlacker

[parent] [thread] 17 comments
1. atlasu+(OP)[view] [source] 2023-07-05 18:15:06
This is an interesting comment, because lately it feels like it's very cool to be an alarmist! Lots of positive press for people warning about the dangers of AI, Altman and others being taken very seriously, VC and other funders obviously leaning into the space in part because of the related hype.

And in other fields, being alarmist has paid off too, with little accountability for bad predictions -- how many times have we heard that there will be huge climate disasters ending humanity, the extinction of bees, mass starvation, etc. (not to diminish the dangers of climate change, which is obviously very real)? I think alarmism is generally rewarded, at least in the media.

replies(3): >>goneho+J3 >>thepti+67 >>distor+Wp1
2. goneho+J3[view] [source] 2023-07-05 18:29:42
>>atlasu+(OP)
Some types of alarm, yeah, if it falls within the window of things it's high-status to be alarmed about.

Most of the AI concern that's high-status to believe has been the bias, misinformation, and safety stuff. Until very recently, talk about x-risk was dismissed and mocked without anyone really engaging with the underlying arguments. That may be changing now, but on net I still mostly see people mocked and dismissed for it.

The set of people alarmed by AGI x-risk is also pretty different from the set alarmed about a lot of these other issues that aren't really x-risks (though they still might have bad outcomes). At the least, EY, Bostrom, and Toby Ord are not also worried about all these other things to nearly the same extent; the extinction risk of unaligned AGI is different in severity.

3. thepti+67[view] [source] 2023-07-05 18:40:41
>>atlasu+(OP)
It's important to pay attention to the content of the alarm, though. Altman went in front of Congress and a Senator said “when you say things could go badly, I assume you are talking about jobs”. Many people are alarmed about disinformation, job destruction, bias, etc.

Actually holding an x-risk belief is still a fringe position; most people still laugh it off.

That said, the Overton Window is moving. The Time piece from Yudkowsky was something of a milestone (even if it was widely ridiculed).

replies(2): >>miohta+QJ >>trasht+lK
◧◩
4. miohta+QJ[view] [source] [discussion] 2023-07-05 21:36:14
>>thepti+67
Altman also has a very selfish motivation, because once there is AI regulation, only Google, OpenAI (Microsoft), and maybe Meta will be allowed to build “compliant” AI. It’s called regulatory capture.

* The EU passed its AI regulation recently, and it has already been bashed here on Hacker News

replies(1): >>matt_h+ZN
◧◩
5. trasht+lK[view] [source] [discussion] 2023-07-05 21:38:36
>>thepti+67
> Actually holding an x-risk belief is still a fringe position

Believing it is an x-risk is not fringe. It's pretty mainstream now to hold that there is a _risk_ of an existential-level event. The fringe is more like Yudkowsky or Leahy insisting that there is a near certainty of such an event if we continue down the current path.

With Hinton, Bengio, Sutskever, Hassabis, and Altman all agreeing that there exists a non-trivial existential risk (even if their opinions vary with respect to its magnitude), this position seems more like the mainstream.

replies(1): >>thepti+hC4
◧◩◪
6. matt_h+ZN[view] [source] [discussion] 2023-07-05 21:59:17
>>miohta+QJ
Sam doesn't have much financial upside from OpenAI (reportedly, he doesn't have any equity).

And he wrote about the risk in 2015, months before OpenAI was founded: https://blog.samaltman.com/machine-intelligence-part-1 and https://blog.samaltman.com/machine-intelligence-part-2

Fine if you disagree with his arguments, but why assume you know what his motivation is?

replies(1): >>kortil+pm1
◧◩◪◨
7. kortil+pm1[view] [source] [discussion] 2023-07-06 01:43:57
>>matt_h+ZN
I find it highly unlikely that he has less upside than the employees who also don’t have equity, but do have profit participation units.
8. distor+Wp1[view] [source] 2023-07-06 02:08:31
>>atlasu+(OP)
The extinction of bees, mass starvation, and the ozone hole (bonus) are all examples of alarmist takes that were course-corrected. It’s a weird position to argue that they were overblown when the reason they are not a problem now is that they were addressed.
replies(2): >>climat+ts1 >>reveli+7I2
◧◩
9. climat+ts1[view] [source] [discussion] 2023-07-06 02:28:32
>>distor+Wp1
What does it mean to address the risk of superintelligence? There is no way to stop technological progress, and AI development is just part of the same process. Moreover, the alarmism doesn't make much sense because we already have misaligned agents at odds with human values: they're called profit-seeking corporations. Yet I never hear the alarmists talk about putting a stop to for-profit business ventures.

Do you know anyone who considers the pursuit of profits and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems very happy with the status quo, and AI development is just more of the same: corporations seeking ways to exploit and profit from digital resources, with OpenAI being a perfect example.

replies(1): >>flagra+JE1
◧◩◪
10. flagra+JE1[view] [source] [discussion] 2023-07-06 03:57:14
>>climat+ts1
> There is no way to stop technological progress

What makes you say this is impossible? We could simply not go down this road; there are only so many people knowledgeable enough, and with access to the right hardware, to make progress towards AI. They could all agree, or be compelled, to stop.

We seem to have successfully halted research into cloning, though that wasn't a given and could have fallen into the same trap of having to develop it before one's enemy does.

replies(1): >>climat+sH1
◧◩◪◨
11. climat+sH1[view] [source] [discussion] 2023-07-06 04:16:20
>>flagra+JE1
There are no enemies. The biosphere is a singular organism and right now people are doing their best to basically destroy all of it. The only way to prevent further damage is to reduce the human population, but that's another non-starter. So as long as the human population is increasing, the people in charge will be compelled to keep pushing for more technological "innovation", because technology is the best way to control 8B+ people[1].

Very few people are actually alarmed about the right issues (in no particular order): population size, industrial pollution, the military-industrial complex, for-profit multinational corporations, digital surveillance, factory farming, global warming, etc. This is why the alarmism from the AI crowd seems disingenuous: AI progress is simply an extension of for-profit corporatism and exploitation applied to digital resources, and properly addressing the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.

1: https://www.theguardian.com/world/2015/jul/24/france-big-bro...

replies(1): >>weregi+FN1
◧◩◪◨⬒
12. weregi+FN1[view] [source] [discussion] 2023-07-06 05:20:50
>>climat+sH1
> The biosphere is a singular organism and right now people are doing their best to basically destroy all of it.

People are part of the biosphere. If other species can't adapt to Homo sapiens, well, that's life for you. It's not fair or pretty.

replies(1): >>climat+wP1
◧◩◪◨⬒⬓
13. climat+wP1[view] [source] [discussion] 2023-07-06 05:33:50
>>weregi+FN1
Every cancer eventually kills its host, so either people figure out how to be less cancerous or we die out drowning in the byproducts of our metabolic processes, just like yeast drown in alcohol.

The AI doomers can continue worrying about technological progress if they want, but the actual problems are unrelated to how much money and effort OpenAI is spending on alignment, because its corporate structure requires that it continue advancing AI capabilities in order to exploit the digital commons as efficiently as possible.

replies(1): >>goneho+Wv2
◧◩◪◨⬒⬓⬔
14. goneho+Wv2[view] [source] [discussion] 2023-07-06 11:39:15
>>climat+wP1
Ignoring the provocative framing of humanity as a “cancer”: Earth has had at least five historical extinction-level events from environmental changes, and life on Earth has adapted and changed during that time (and likely will continue to, at least until the sun burns out).

We have an interest in not destroying our own environment, because doing so will make our own lives more difficult and can have bad outcomes, but it’s not likely an extinction-level risk for humans, and even less so for all other life. Solutions like “degrowth” aren’t real solutions and cause lots of other problems.

It’s “cool” for the more extreme environmental political faction to have a cynical anti-human view of life (despite being human) because some people misinterpret this as wisdom, but I don’t.

The unaligned-AGI x-risk is a different level of threat and could really lead to killing everything in pursuit of some dumb goal.

replies(1): >>climat+PM2
◧◩
15. reveli+7I2[view] [source] [discussion] 2023-07-06 13:00:52
>>distor+Wp1
Bee extinction wasn't addressed; it was just revealed to not be true. An article with lots of data here:

https://www.acsh.org/news/2018/04/17/bee-apocalypse-was-neve...

Mass starvation wasn't "addressed" exactly, because the predictions were for mass starvation in the West, which never happened. Also, the people who predicted this weren't the ones who created the Green Revolution.

The ozone hole is, I think, the most valid example in the list, but who knows; maybe that was just BS too. A lot of scientific claims turn out to be so these days, even ones that were accepted for quite a while.

◧◩◪◨⬒⬓⬔⧯
16. climat+PM2[view] [source] [discussion] 2023-07-06 13:24:27
>>goneho+Wv2
Seeking profit and constant population growth are already extremely dumb goals on their own. You can continue worrying about AGI if you want, but nothing I've said is either cynical or anti-human; it is simply a description of the global techno-industrial economic system and its total blindness to all the negative externalities of cancerous growth. Continued progress and development of AI capabilities does not change the dynamics of the machine that is destroying the biosphere, and it never will, because it is an extension of profit-seeking, exploitative corporate practices carried over to the digital sphere. Addressing the root causes of misalignment would require getting rid of profit motives and accounting for all the metabolic byproducts of human economic activity and consumption. Unless the AI alarmists have a solution to those things, they're just creating another distraction and diverting attention away from the actual problems[1].

1: https://www.nationalgeographic.com/environment/article/plast...

◧◩◪
17. thepti+hC4[view] [source] [discussion] 2023-07-06 20:27:28
>>trasht+lK
I think it’s not fringe amongst experts and those in the field. It’s absolutely still fringe among the general public, and I think it’s outside the Overton Window (i.e., politicians aren’t talking about it).
replies(1): >>trasht+Dd5
◧◩◪◨
18. trasht+Dd5[view] [source] [discussion] 2023-07-06 23:24:52
>>thepti+hC4
The Overton Window applies to the general public, and maybe particularly to the press.

And this is all over the press and other media now, both old and new, left-leaning and right-leaning. I would say it's pretty well within the Overton Window.

Politicians in the US are a bit behind. They probably just need to run the topic through some polls and voter study groups to decide which opinions are most popular with their voter bases.
