zlacker

[parent] [thread] 6 comments
1. climat+(OP)[view] [source] 2023-07-06 02:28:32
What does it mean to address the risk of superintelligence? There is no way to stop technological progress, and AI development is just part of the same process. Moreover, the alarmism doesn't make much sense because we already have misaligned agents at odds with human values: profit-seeking corporations. Yet I never hear the alarmists talk about putting a stop to for-profit business ventures.

Do you know anyone who considers the pursuit of profit and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems very happy with the status quo, and AI development is just more of the same: corporations seeking ways to exploit and profit from digital resources. OpenAI is a perfect example of this.

replies(1): >>flagra+gc
2. flagra+gc[view] [source] 2023-07-06 03:57:14
>>climat+(OP)
> There is no way to stop technological progress

What makes you say this is impossible? We could simply not go down this road: there are only so many people knowledgeable enough, and with access to the right hardware, to make progress toward AI. They could all agree, or be compelled, to stop.

We seem to have successfully halted research into cloning, though that wasn't a given; it could have fallen into the same trap of having to develop it before one's enemies do.

replies(1): >>climat+Ze
3. climat+Ze[view] [source] [discussion] 2023-07-06 04:16:20
>>flagra+gc
There are no enemies. The biosphere is a singular organism, and right now people are doing their best to destroy all of it. The only way to prevent further damage is to reduce the human population, but that's another non-starter. As long as the human population keeps increasing, the people in charge will be compelled to keep pushing for more technological "innovation", because technology is the best way to control 8B+ people[1].

Very few people are alarmed about the right issues (in no particular order): population size, industrial pollution, the military-industrial complex, for-profit multinational corporations, digital surveillance, factory farming, global warming, etc. This is why the alarmism from the AI crowd seems disingenuous: AI progress is simply for-profit corporatism and exploitation extended to digital resources, and properly addressing the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.

1: https://www.theguardian.com/world/2015/jul/24/france-big-bro...

replies(1): >>weregi+cl
4. weregi+cl[view] [source] [discussion] 2023-07-06 05:20:50
>>climat+Ze
> The biosphere is a singular organism and right now people are doing their best to basically destroy all of it.

People are part of the biosphere. If other species can't adapt to Homo sapiens, well, that's life for you. It's not fair or pretty.

replies(1): >>climat+3n
5. climat+3n[view] [source] [discussion] 2023-07-06 05:33:50
>>weregi+cl
Every cancer eventually kills its host, so either people figure out how to be less cancerous or we die out drowning in the byproducts of our own metabolism, just as yeast drown in the alcohol they produce.

The AI doomers can keep worrying about technological progress if they want; the actual problems are unrelated to how much money and effort OpenAI spends on alignment, because its corporate structure requires it to keep advancing AI capabilities in order to exploit the digital commons as efficiently as possible.

replies(1): >>goneho+t31
6. goneho+t31[view] [source] [discussion] 2023-07-06 11:39:15
>>climat+3n
Ignoring the provocative framing of humanity as a "cancer": Earth has had at least five extinction-level events driven by environmental change, and life has adapted and changed through all of them (and will likely continue to, at least until the sun burns out).

We have an interest in not destroying our own environment, because doing so makes our lives harder and can have bad outcomes, but it's not likely an extinction-level risk for humans, and even less so for all other life. "Solutions" like degrowth aren't real solutions and cause plenty of other problems.

It's "cool" for the more extreme environmentalist faction to hold a cynical, anti-human view of life (despite being human) because some people mistake that for wisdom, but I don't.

Unaligned AGI x-risk is a different level of threat: it could genuinely kill everything in pursuit of some dumb goal.

replies(1): >>climat+mk1
7. climat+mk1[view] [source] [discussion] 2023-07-06 13:24:27
>>goneho+t31
Seeking profit and constant population growth are already extremely dumb goals on their own. You can keep worrying about AGI if you want, but nothing I've said is either cynical or anti-human; it's simply a description of the global techno-industrial economic system and its total blindness to the negative externalities of cancerous growth. Continued development of AI capabilities does not change the dynamics of the machine that is destroying the biosphere, and it never will, because it is an extension of profit-seeking, exploitative corporate practices carried over to the digital sphere. Addressing the root causes of misalignment would require getting rid of profit motives and accounting for all the metabolic byproducts of human economic activity and consumption. Unless the AI alarmists have a solution for those things, they're just creating another distraction and diverting attention from the actual problems[1].

1: https://www.nationalgeographic.com/environment/article/plast...
