zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9[view] [source] 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use purposefully exuberant language to make the risks seem even more significant, so that as an offshoot it implies the technology being worked on is that advanced. I'm trying to understand why it rubs me the wrong way particularly here, when, frankly, it's just about the norm everywhere else? (see Tesla with FSD, etc.)
◧◩
2. goneho+gf[view] [source] 2023-07-05 17:58:33
>>Chicag+m9
The extinction risk from unaligned superintelligent AGI is real; it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high-status to take seriously. People often have an initial knee-jerk negative reaction to it (for not-crazy reasons: lots of stuff is overhyped), but that doesn't make it wrong.

It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/

It's worth looking at the underlying arguments earnestly; you can approach them with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype / new LLM capabilities.

Others have also changed their mind when they looked, for example:

- https://twitter.com/repligate/status/1676507258954416128?s=2...

- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...

For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...

◧◩◪
3. atlasu+zk[view] [source] 2023-07-05 18:15:06
>>goneho+gf
This is an interesting comment, because lately it feels like it's very cool to be an alarmist! Lots of positive press for people warning about the dangers of AI, Altman and others being taken very seriously, and VCs and other funders obviously leaning into the space in part because of the related hype.

And in other fields, being alarmist has paid off too, with little accountability for bad predictions -- how many times have we heard that there will be huge climate disasters ending humanity, the extinction of bees, mass starvation, etc.? (Not to diminish the dangers of climate change, which are obviously very real.) I think alarmism is generally rewarded, at least in the media.

◧◩◪◨
4. thepti+Fr[view] [source] 2023-07-05 18:40:41
>>atlasu+zk
It's important to pay attention to the content of the alarm, though. Altman went in front of Congress, and a Senator said, “when you say things could go badly, I assume you are talking about jobs”. Many people are alarmed about disinformation, job destruction, bias, etc.

Actually holding an x-risk belief is still a fringe position; most people still laugh it off.

That said, the Overton Window is moving. The Time piece from Yudkowsky was something of a milestone (even if it was widely ridiculed).

◧◩◪◨⬒
5. miohta+p41[view] [source] 2023-07-05 21:36:14
>>thepti+Fr
Altman also has a very selfish motivation: once there is AI regulation, only Google, OpenAI (Microsoft), and maybe Meta will be allowed to build “compliant” AI. It's called regulatory capture.

* The EU passed its AI regulation recently, and it has already been bashed here on HackerNews.
