zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use purposefully exuberant language to make the risks seem even more significant, so that, as an offshoot, it implies the technology being worked on is that advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, it is just about the norm everywhere else? (See Tesla with FSD, etc.)
2. majorm+df 2023-07-05 17:58:22
>>Chicag+m9
There's a weird implicit set of assumptions in this post.

They're taking for granted that they'll create AI systems much smarter than humans.

They're taking for granted that, by default, they wouldn't be able to control these systems.

They're saying the solution will be creating a new, separate team.

That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one you'd have to bake in through and through, not spin off to the side to a separate team.

There are also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" or "we're taking the risks so seriously that we're gonna do it anyway."

3. thepti+Ux 2023-07-05 19:05:49
>>majorm+df
If I buy fire insurance, am I “taking for granted” that my house is going to burn?

This take seems to lack nuance.

If there is a 10% chance of extinction conditional on AGI (many would say way higher), then even if most outcomes are happy, it is absolutely worth investing in mitigation.

Obviously they are bullish on AGI in general, that is the founding hypothesis of their company. The entire venture is a bet that AGI is achievable soon.

They also obviously think the upside is huge. It’s possible to have a coherent world model in which you choose to do a risky thing that has a huge upside. (Though there are good arguments for slowing down until you are confident you are not going to destroy the world. Altman’s take is that AGI is coming anyway, so better to get a slow takeoff started sooner than to have a fast takeoff later.)
