And in other fields, being alarmist has paid off with little penalty for bad predictions -- how many times have we heard that climate disasters will end humanity, that bees will go extinct, that mass starvation is imminent, etc. (not to diminish the dangers of climate change, which are obviously very real)? I think alarmism is generally rewarded, at least in the media.
Most of the AI concern that's high-status to believe has been the bias, misinformation, and safety stuff. Until very recently, talk about x-risk was dismissed and mocked without any real engagement with the underlying arguments. That may be changing now, but on net I still mostly see people mocked and dismissed for it.
The set of people alarmed by AGI x-risk is also pretty different from the set alarmed about a lot of these other issues that aren't really x-risks (though they might still have bad outcomes). At least EY, Bostrom, and Toby Ord are not worried about all these other things to nearly the same extent; the extinction risk of unaligned AGI is different in severity.
Actually holding an x-risk belief is still a fringe position; most people still laugh it off.
That said, the Overton Window is moving. The Time piece from Yudkowsky was something of a milestone (even if it was widely ridiculed).
* The EU passed its AI regulation recently, and it has already been bashed here on HackerNews.
Believing it is an x-risk is not fringe. It's pretty mainstream now to hold that there is a _risk_ of an existential-level event. The fringe is more like Yudkowsky or Leahy insisting that such an event is a near certainty if we continue down the current path.
With Hinton, Bengio, Sutskever, Hassabis, and Altman all agreeing that there is a non-trivial existential risk (even if their opinions vary on its magnitude), it seems more like this represents the mainstream.
And he wrote about the risk in 2015, months before OpenAI was founded: https://blog.samaltman.com/machine-intelligence-part-1 https://blog.samaltman.com/machine-intelligence-part-2
Fine if you disagree with his arguments, but why assume you know what his motivation is?
Do you know anyone who considers the pursuit of profit and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems very happy with the status quo, and AI development is just more of the same: corporations seeking ways to exploit and profit from digital resources. OpenAI is a perfect example of this.
What makes you say this is impossible? We could simply not go down this road; there are only so many people knowledgeable enough, and with access to the right hardware, to make progress toward AI. They could all agree, or be compelled, to stop.
We seem to have successfully halted research into cloning, though that wasn't a given and could have fallen into the same trap of having to develop it before one's enemy does.
Very few people are actually alarmed about the right issues (in no particular order): population size, industrial pollution, the military-industrial complex, for-profit multinational corporations, digital surveillance, factory farming, global warming, etc. This is why the alarmism from the AI crowd seems disingenuous: AI progress is simply an extension of for-profit corporatism and exploitation applied to digital resources, and properly addressing the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.
People are part of the biosphere. If other species can't adapt to Homo sapiens, well, that's life for you. It's not fair or pretty.
The AI doomers can keep worrying about technological progress if they want; the actual problems are unrelated to how much money and effort OpenAI spends on alignment, because its corporate structure requires it to keep advancing AI capabilities in order to exploit the digital commons as efficiently as possible.
We have an interest in not destroying our own environment, because doing so would make our own lives more difficult and could have bad outcomes, but it’s not likely an extinction-level risk for humans, and even less so for all other life. Solutions like “degrowth” aren’t real solutions and cause lots of other problems.
It’s “cool” for the more extreme environmentalist political faction to hold a cynical, anti-human view of life (despite being human themselves) because some people misinterpret this as wisdom, but I don’t.
Unaligned AGI x-risk is a different level of threat: it could really lead to killing everything in pursuit of some dumb goal.
https://www.acsh.org/news/2018/04/17/bee-apocalypse-was-neve...
Mass starvation wasn't "addressed" exactly, because the predictions were for mass starvation in the West, which never happened. Also, the people who predicted it weren't the ones who created the Green Revolution.
The ozone hole is, I think, the most valid example on the list, but who knows; maybe that was just BS too. A lot of scientific claims turn out that way these days, even ones that were accepted for quite a while.
And this is all over the press and other media now, both old and new, left-leaning and right-leaning. I would say it's pretty well within the Overton Window.
Politicians in the US are a bit behind. They probably just need to run the topic through some polls and voter focus groups to decide which opinions are most popular with their bases.