Do you know anyone who considers the pursuit of profit and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems quite happy with the status quo, and AI development is just more of the same: corporations seeking new ways to exploit and profit from digital resources. OpenAI is a perfect example of this.
What makes you say this is impossible? We could simply not go down this road; there are only so many people knowledgeable enough, and with access to the right hardware, to make progress toward AI. They could all agree, or be compelled, to stop.
We seem to have successfully halted research into cloning, though that wasn't a given and could have fallen into the same trap of having to develop it before one's enemy does.
Very few people are actually alarmed about the right issues (in no particular order): population size, industrial pollution, the military-industrial complex, for-profit multinational corporations, digital surveillance, factory farming, global warming, etc. This is why the alarmism from the AI crowd seems disingenuous: AI progress is simply an extension of for-profit corporatism and exploitation applied to digital resources, and properly addressing the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.
1: https://www.theguardian.com/world/2015/jul/24/france-big-bro...
People are part of the biosphere. If other species can't adapt to Homo sapiens, well, that's life for you. It's not fair or pretty.
The AI doomers can keep worrying about technological progress if they want, but the actual problems have nothing to do with how much money and effort OpenAI spends on alignment: its corporate structure requires it to keep advancing AI capabilities in order to exploit the digital commons as efficiently as possible.
We have an interest in not destroying our own environment, because doing so would make our own lives more difficult and could have bad outcomes, but it's not likely an extinction-level risk for humans, and even less so for all other life. Solutions like "degrowth" aren't real solutions and cause lots of other problems.
It’s “cool” for the more extreme environmentalist political faction to hold a cynical, anti-human view of life (despite being human themselves), because some people misinterpret that cynicism as wisdom, but I don’t.
Unaligned AGI x-risk is a different level of threat: it could really lead to killing everything in pursuit of some dumb goal.
https://www.acsh.org/news/2018/04/17/bee-apocalypse-was-neve...
Mass starvation wasn't exactly "addressed", because the predictions were for mass starvation in the West, which never happened. Also, the people who predicted it weren't the ones who created the Green Revolution.
The ozone hole is, I think, the most valid example in the list, but who knows; maybe that was just BS too. A lot of scientific claims turn out that way these days, even ones that were accepted for quite a while.
1: https://www.nationalgeographic.com/environment/article/plast...