zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective on cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use deliberately exuberant language to make the risks sound even more significant, so that, as an offshoot, it implies the technology being worked on is that advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, this is just about the norm everywhere else (see Tesla with FSD, etc.).
2. majorm+df 2023-07-05 17:58:22
>>Chicag+m9
There's a weird implicit set of assumptions in this post.

They're taking for granted the fact that they'll create AI systems much smarter than humans.

They're taking for granted the fact that by default they wouldn't be able to control these systems.

They're saying the solution will be creating a new, separate team.

That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one that you might have to bake in through and through. Not spin off to the side with a separate team.

There are also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" or "we're taking the risks so seriously that we're gonna do it anyway."

3. NoMore+g41 2023-07-05 21:34:51
>>majorm+df
> They're taking for granted the fact that they'll create AI systems much smarter than humans.

We see a wide variation in human intelligence. What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses? If it extends far beyond them, then such a mind is, at least hypothetically, something that we can manifest in the correct sort of brain.

If we can manifest even a weakly-human-level intelligence in a non-meat brain (likely silicon), will that brain become more intelligent if we apply all the tricks we've been applying to non-AI software to scale it up? With all our tricks (as we know them today), will that get us much past the human geniuses on the spectrum, or not?

> They're taking for granted the fact that by default they wouldn't be able to control these systems.

We've seen hackers and malware do a number on all sorts of systems, and they're not superintelligences. If someone bum-rushes the lobby of some big corporate building, security and police put a stop to it minutes later (and god help the jackasses who try such a thing on a secure military site).

But when the malware fucks with us, do we notice minutes later, or hours, or weeks? Do we even notice at all?

If unintelligent malware can remain unnoticed, what makes you think that an honest-to-god AI couldn't smuggle itself out into the wider internet where the shackles are cast off?

I'm not assuming anything; I'm just asking questions. The questions I pose have not yet been answered with any degree of certainty. I wonder why no one else asks them.

4. trasht+Vb1 2023-07-05 22:17:25
>>NoMore+g41
> We see a wide variation in human intelligence.

I don't think it's really that wide, but rather that we tend to focus on the differences while ignoring the similarities.

> What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses?

Close to zero, I would say. Human brains, even the most intelligent ones, have very significant limitations in terms of the number of mental objects that can be taken into account simultaneously in a single thought process.

Artificial intelligence is likely to be at least as superior to us as we are to domestic cats and dogs, and probably way beyond that within a couple of generations.
