zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. ganzuu+z41[view] [source] 2024-05-15 14:26:26
>>Jimmc4+(OP)
I bet superalignment is indistinguishable from religion (the spiritual, not manipulative kind), so proponents get pulled into the well-established cult leader pipeline. It's a quagmire to navigate, so we can't have open and enlightening discussions about what is actually going on.
◧◩
2. marric+7b1[view] [source] 2024-05-15 14:58:11
>>ganzuu+z41
It's also about making sure AI is aligned with "our" intent, where "our" means a board made up of large corporations.

If AI did run away and do its own thing (which seems super unlikely), it's probably a crapshoot whether what it does is worse than the environmental apocalypse we live in, where the rich keep getting richer and the poor poorer.

◧◩◪
3. ben_w+uh1[view] [source] 2024-05-15 15:25:34
>>marric+7b1
It can only be "super unlikely" for an AI to "run away and do its own thing" once we actually know how to align it.

Which we don't.

So we're not aligning it with corporate boards yet, though not for lack of trying.

(While LLMs are not directly agents, they are easy enough to turn into agents, and there are plenty of people willing to do that and to disregard any concerns about the wisdom of it.)

So yes, the crapshoot is exactly what everyone in AI alignment is trying to prevent.

(There's also, confusingly, "AI safety", which includes alignment but also covers things like misuse, social responsibility, and so on.)

◧◩◪◨
4. root_a+Ql1[view] [source] 2024-05-15 15:46:02
>>ben_w+uh1
"Run away" AI is total science fiction - i.e, not anything happening in the foreseeable future. That's simply not how these systems work. Any looming AI threat will be entirely the result of deliberate human actions.
◧◩◪◨⬒
5. ben_w+1v1[view] [source] 2024-05-15 16:25:21
>>root_a+Ql1
We've already had robots "run away" into a water feature in one case and into a pedestrian pushing a bike in another; the phrase doesn't only mean getting paperclipped.

And for non-robotic AI, there are also flash crashes on the stock market and that incident with Amazon book-pricing bots caught in a reactive feedback loop that drove up the price of a book they didn't have.

◧◩◪◨⬒⬓
6. root_a+BD1[view] [source] 2024-05-15 17:04:49
>>ben_w+1v1
> the phrase doesn't only mean getting paperclipped.

This is what most people mean when they say "run away", i.e. the machine behaves in a surreptitious way to do things it was never designed to do, not a catastrophic failure that causes harm because the AI did not perform reliably.

◧◩◪◨⬒⬓⬔
7. ben_w+fN1[view] [source] 2024-05-15 17:51:39
>>root_a+BD1
Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead whether it's Therac-25, the Thule Air Force Base early-warning radar, or an actual paperclipper.

◧◩◪◨⬒⬓⬔⧯
8. root_a+TN1[view] [source] 2024-05-15 17:54:41
>>ben_w+fN1
No. Surreptitious means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.