zlacker

[parent] [thread] 11 comments
1. majorm+(OP)[view] [source] 2023-07-05 17:58:22
There's a weird implicit set of assumptions in this post.

They're taking for granted the fact that they'll create AI systems much smarter than humans.

They're taking for granted the fact that by default they wouldn't be able to control these systems.

They're saying the solution will be creating a new, separate team.

That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one that you might have to bake in through and through. Not spin off to the side with a separate team.

There's also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" Or "we're taking the risks so seriously that we're gonna do it anyway."

replies(5): >>jq-r+X2 >>niam+ca >>thepti+Hi >>arisAl+UG >>NoMore+3P
2. jq-r+X2[view] [source] 2023-07-05 18:07:23
>>majorm+(OP)
Good explanation. It sounds like they wanted to do some organizational change (like every company does), and in this case create a new team.

But they also wanted to get some positive PR for it, hence the announcement. As a bonus, they also wanted to blow their own trumpet and brag that they are creating some sort of superweapon (which is false). So a lot of hot air there.

3. niam+ca[view] [source] 2023-07-05 18:33:16
>>majorm+(OP)
>They're taking for granted the fact that they'll create AI systems much smarter than humans.

They're taking for granted that superintelligence is achievable within the next decade (regardless of who achieves it).

>They're taking for granted the fact that by default they wouldn't be able to control these systems.

That's reasonable though. You wouldn't need guardrails on anything if manufacturers built everything to spec without error, and users used everything 100% perfectly.

But you can't make those presumptions in the real world. You can't just say "make a good hacksaw and people won't cut their arm off". And you can't presume the people tasked with making a mechanically desirable and marketable hacksaw are also proficient in creating a safe one.

>They're saying the solution will be creating a new, separate team.

The team isn't the solution. The solution may be born of that team.

>There's also some minor vibes of [...] "we're taking the risks so seriously that we're gonna do it anyway."

The alternative is to throw the baby out with the bathwater.

The goal here is to keep the useful bits of AGI and protect against the dangerous bits.

replies(1): >>majorm+dg
4. majorm+dg[view] [source] [discussion] 2023-07-05 18:55:12
>>niam+ca
> They're taking for granted that superintelligence is achievable within the next decade (regardless of who achieves it).

If it's achieved by someone else why should we assume that the other person or group will give a damn about anything done by this team?

What influence would this team have on other organizations, especially if you put your dystopia-flavored speculation hat on and imagine a more rogue group...

This team is only relevant to OpenAI and OpenAI-affiliated work and in that case, yes, it's weird to write some marketing press release copy that treats one hard thing as a fait accompli while hyping up how hard this other particular slice of the problem is.

replies(1): >>famous+4j
5. thepti+Hi[view] [source] 2023-07-05 19:05:49
>>majorm+(OP)
If I buy fire insurance, am I “taking for granted” that my house is going to burn?

This take seems to lack nuance.

If there is a 10% chance of extinction conditional on AGI (many would say way higher), and most outcomes are happy, then it is absolutely worth investing in mitigation.

Obviously they are bullish on AGI in general, that is the founding hypothesis of their company. The entire venture is a bet that AGI is achievable soon.

Also obviously they think the upside is huge too. It’s possible to have a coherent world model in which you choose to do a risky thing that has huge upside. (Though, there are good arguments for slowing down until you are confident you are not going to destroy the world. Altman’s take is that AGI is coming anyway, better to get a slow takeoff started sooner rather than having a fast takeoff later.)

6. famous+4j[view] [source] [discussion] 2023-07-05 19:07:41
>>majorm+dg
>If it's achieved by someone else why should we assume that the other person or group will give a damn about anything done by this team?

You can't assume that. But that doesn't mean some 3rd party wouldn't be interested in utilizing that research anyway.

7. arisAl+UG[view] [source] 2023-07-05 20:56:37
>>majorm+(OP)
Your argument is mostly about how you don't like them, with no real substance. What is it exactly that doesn't convince you? A company that made a huge leap saying they will probably make another, and getting ready to safeguard it? Many people really do not like Sam and then build their arguments around that, IMO.
8. NoMore+3P[view] [source] 2023-07-05 21:34:51
>>majorm+(OP)
> They're taking for granted the fact that they'll create AI systems much smarter than humans.

We see a wide variation in human intelligence. What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses? If it extends far beyond them, then such a mind is, at least hypothetically, something that we can manifest in the correct sort of brain.

If we can manifest even a weakly-human-level intelligence in a non-meat brain (likely silicon), will that brain become more intelligent if we apply all the tricks we've been applying to non-AI software to scale it up? With all our tricks (as we know them today), will that get us much past the human geniuses on the spectrum, or not?

> They're taking for granted the fact that by default they wouldn't be able to control these systems.

We've seen hackers and malware do all sorts of damage. And they're not superintelligences. If someone bum rushes the lobby of some big corporate building, security and police are putting a stop to it minutes later (and god help the jackasses who try such a thing on a secure military site).

But when the malware fucks with us, do we notice minutes later, or hours, or weeks? Do we even notice at all?

If unintelligent malware can remain unnoticed, what makes you think that an honest-to-god AI couldn't smuggle itself out into the wider internet where the shackles are cast off?

I'm not assuming anything. I'm just asking questions. The questions I pose are, as of yet, not answered with any degree of certainty. I wonder why no one else asks them.

replies(2): >>trasht+IW >>dnr+F71
9. trasht+IW[view] [source] [discussion] 2023-07-05 22:17:25
>>NoMore+3P
> We see a wide variation in human intelligence.

I don't think it's really that wide, but rather that we tend to focus on the difference while ignoring the similarities.

> What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses?

Close to zero, I would say. Human brains, even the most intelligent ones, have very significant limitations in terms of number of mental objects that can be taken into account simultaneously in a single thought process.

Artificial intelligence is likely to be at least as superior to us as we are to domestic cats and dogs, probably way beyond that within a couple of generations.

replies(1): >>ben_w+Yh4
10. dnr+F71[view] [source] [discussion] 2023-07-05 23:22:56
>>NoMore+3P
The idea that the capabilities of LLMs might not exceed humans by that much isn't that crazy: the ground truth they're trained on is still human-written text. Of course there are techniques that try to go past that, but it's not clear yet how well they will work.
replies(1): >>NoMore+iG1
11. NoMore+iG1[view] [source] [discussion] 2023-07-06 03:31:42
>>dnr+F71
> The idea that the capabilities of LLMs might not exceed humans by that much isn't that crazy: the ground truth they're trained on is still human-written text.

This is a non sequitur.

Even if the premise were meaningful (they're trained on human-written text), humans themselves aren't "trained on human-written texts", so the two things aren't comparable. And if they aren't comparable, I'm not sure why being trained on "human-written texts" is a limiting factor. If anything, being trained on those instead of on what human babies are trained on might make them more intelligent, not less; humans would end up the lesser intelligence because they are trained less perfectly on "human-written texts".

Besides which, no one with any sense is expecting that even the most advanced LLM possible becomes an AGI by itself, but only when coupled with some other mechanism that is either at this point uninvented or invented-but-currently-overlooked. In such a scenario, the LLM's most likely utility is in communicating with humans (to manipulate, if we're talking about a malevolent one).

12. ben_w+Yh4[view] [source] [discussion] 2023-07-06 18:42:51
>>trasht+IW
> I don't think it's really that wide, but rather that we tend to focus on the difference while ignoring the similarities.

When my mum came down with Alzheimer's, she forgot how the abstract concept of left worked.

I'd heard of the problem (inability to perceive a side) existing in rare cases before she got ill, but it's such a bizarre thing that I had assumed it had to be misreporting before I finally saw it. She would eat food on the right side of her plate leaving the food on the left untouched, insist the plate was empty, but rotating the plate 180 degrees let her perceive the food again. She liked to draw and paint, so I asked her to draw me, and she gave me only one eye (on her right). I did the standard clock-drawing test, and all the numbers were on the right, with the left side being empty (almost: she got the 7 there, but the 8 was above the 6 and the 9 was between the 4 and 5).

When she got worse and started completely failing the clock drawing test, she also demonstrated in multiple ways that she wasn't able to count past five.
