[return to "Introducing Superalignment"]
1. Chicag+m9 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use purposefully exuberant language to make the risks seem even more significant, so that, as a side effect, it implies the technology being worked on is that advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, it is just about the norm everywhere else (see Tesla with FSD, etc.).
2. majorm+df 2023-07-05 17:58:22
>>Chicag+m9
There's a weird implicit set of assumptions in this post.

They're taking for granted that they'll create AI systems much smarter than humans.

They're taking for granted that, by default, they wouldn't be able to control these systems.

They're saying the solution will be creating a new, separate team.

That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one that you might have to bake in through and through. Not spin off to the side with a separate team.

There are also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" Or "we're taking the risks so seriously that we're gonna do it anyway."

3. jq-r+ai 2023-07-05 18:07:23
>>majorm+df
Good explanation. It sounds like they wanted to make an organizational change (like every company does), in this case creating a new team.

But they also wanted some positive PR out of it, hence the announcement. As a bonus, they got to blow their own trumpet and brag that they are creating some sort of superweapon (which is false). So a lot of hot air there.
