zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9[view] [source] 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use purposefully exuberant language to make the risks seem more significant, so that as an offshoot it implies the technology being worked on is very advanced. I'm trying to understand why it rubs me the wrong way here in particular when, frankly, it's just about the norm everywhere else (see Tesla with FSD, etc.).
2. goneho+gf[view] [source] 2023-07-05 17:58:33
>>Chicag+m9
The extinction risk from unaligned superintelligent AGI is real; it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high-status to take seriously. People often have an initial knee-jerk negative reaction to it (for not-crazy reasons; lots of stuff is overhyped), but that doesn't make it wrong.

It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/

It's worth looking at the underlying arguments earnestly; you can approach them with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype or new LLM capability.

Others have also changed their mind when they looked, for example:

- https://twitter.com/repligate/status/1676507258954416128?s=2...

- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...

For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...

3. atlasu+zk[view] [source] 2023-07-05 18:15:06
>>goneho+gf
This is an interesting comment, because lately it feels like it's very cool to be an alarmist! Lots of positive press for people warning about the dangers of AI, Altman and others being taken very seriously, VCs and other funders obviously leaning into the space in part because of the related hype.

And in other fields, being alarmist has paid off too, with little recourse for bad predictions. How many times have we heard that there will be huge climate disasters ending humanity, the extinction of bees, mass starvation, etc. (not to diminish the dangers of climate change, which is obviously very real)? I think alarmism is generally rewarded, at least in the media.

4. distor+vK1[view] [source] 2023-07-06 02:08:31
>>atlasu+zk
The extinction of bees, mass starvation, and the ozone hole (bonus) are all examples of alarmist takes that were course corrected. It’s sort of a weird spot to argue that they were overblown when the reason they are not a problem now is because they were addressed.
5. climat+2N1[view] [source] 2023-07-06 02:28:32
>>distor+vK1
What does it mean to address the risk of superintelligence? There is no way to stop technological progress, and AI development is just part of the same process. Moreover, the alarmism doesn't make much sense because we already have misaligned agents at odds with human values. Those agents are called profit-seeking corporations, but I never hear the alarmists talk about putting a stop to for-profit business ventures.

Do you know anyone who considers the pursuit of profit and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems very happy with the status quo, and AI development is just more of the same: corporations seeking ways to exploit and profit from digital resources. OpenAI is a perfect example of this.

6. flagra+iZ1[view] [source] 2023-07-06 03:57:14
>>climat+2N1
> There is no way to stop technological progress

What makes you say this is impossible? We could simply not go down this road; there are only so many people knowledgeable enough, and with access to the right hardware, to make progress toward AI. They could all agree, or be compelled, to stop.

We seem to have successfully halted research into cloning, though that wasn't a given and could have fallen into the same trap of having to develop it before one's enemy does.

7. climat+122[view] [source] 2023-07-06 04:16:20
>>flagra+iZ1
There are no enemies. The biosphere is a singular organism, and right now people are doing their best to destroy basically all of it. The only way to prevent further damage is to reduce the human population, but that's another non-starter. So as long as the human population keeps increasing, it will compel the people in charge to keep pushing for more technological "innovation", because technology is the best way to control 8B+ people[1].

Very few people are actually alarmed about the right issues (in no particular order): population size, industrial pollution, the military-industrial complex, for-profit multinational corporations, digital surveillance, factory farming, global warming, etc. This is why the alarmism from the AI crowd seems disingenuous: AI progress is simply an extension of for-profit corporatism and exploitation applied to digital resources, and properly addressing the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.

1: https://www.theguardian.com/world/2015/jul/24/france-big-bro...

8. weregi+e82[view] [source] 2023-07-06 05:20:50
>>climat+122
> The biosphere is a singular organism and right now people are doing their best to basically destroy all of it.

People are part of the biosphere. If other species can't adapt to Homo sapiens, well, that's life for you. It's not fair or pretty.

9. climat+5a2[view] [source] 2023-07-06 05:33:50
>>weregi+e82
Every cancer eventually kills its host, so either people figure out how to be less cancerous or we die out drowning in the byproducts of our own metabolic processes, just like yeast drown in alcohol.

The AI doomers can continue worrying about technological progress if they want; the actual problems are unrelated to how much money and effort OpenAI spends on alignment, because its corporate structure requires that it keep advancing AI capabilities in order to exploit the digital commons as efficiently as possible.

10. goneho+vQ2[view] [source] 2023-07-06 11:39:15
>>climat+5a2
Ignoring the provocative framing of humanity as a "cancer": earth has had at least five extinction-level events driven by environmental changes, and life on earth has adapted and changed through them (and likely will continue to, at least until the sun burns out).

We have an interest in not destroying our own environment, because doing so will make our own lives more difficult and can have bad outcomes, but it's not likely an extinction-level risk for humans, and even less so for all other life. Solutions like "degrowth" aren't real solutions and cause lots of other problems.

It’s “cool” for the more extreme environmental political faction to have a cynical anti-human view of life (despite being human) because some people misinterpret this as wisdom, but I don’t.

The unaligned-AGI x-risk is a different level of threat and could really lead to killing everything in pursuit of some dumb goal.

11. climat+o73[view] [source] 2023-07-06 13:24:27
>>goneho+vQ2
Seeking profit and constant population growth are already extremely dumb goals on their own. You can continue worrying about AGI if you want, but nothing I've said is either cynical or anti-human; it is simply a description of the global techno-industrial economic system and its total blindness to all the negative externalities of cancerous growth.

Continued progress and development of AI capabilities does not change the dynamics of the machine that is destroying the biosphere, and it never will, because it is an extension of profit-seeking, exploitative corporate practices carried over to the digital sphere. Addressing the root causes of misalignment would require getting rid of profit motives and accounting for all the metabolic byproducts of human economic activity and consumption. Unless the AI alarmists have a solution to those things, they're just creating another distraction and diverting attention away from the actual problems[1].

1: https://www.nationalgeographic.com/environment/article/plast...
