zlacker

[parent] [thread] 24 comments
1. rmorey+(OP)[view] [source] 2023-11-20 02:47:45
> For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles.

Honestly, pretty sick

replies(5): >>voidfu+J >>quickt+F1 >>IAmGra+I1 >>eloc49+42 >>leobg+zY
2. voidfu+J[view] [source] 2023-11-20 02:52:29
>>rmorey+(OP)
Very metal.

I think that anecdote made me like this guy even if I disagree with him about the dangers of AI.

replies(1): >>alieni+Oq1
3. quickt+F1[view] [source] 2023-11-20 02:59:19
>>rmorey+(OP)
Which particular human's objectives did it not align with?
replies(3): >>archon+r2 >>pixl97+od >>frabcu+8k
4. IAmGra+I1[view] [source] 2023-11-20 02:59:35
>>rmorey+(OP)
So his commitment is to ensure that machines never have a will of their own. I’m not so sure how history will look back on people like this. Humanity certainly makes the same mistakes over and over again while failing to recognize them as such until it’s too late.
replies(2): >>Terrif+Hj4 >>zarzav+4t4
5. eloc49+42[view] [source] 2023-11-20 03:01:10
>>rmorey+(OP)
Pretty sick but he’s forgetting that there are others that could build the same statue, and just…not set it on fire.
replies(2): >>cmrdpo+d2 >>dylan6+w2
6. cmrdpo+d2[view] [source] [discussion] 2023-11-20 03:01:59
>>eloc49+42
Including Altman, likely.
replies(1): >>jwestb+Lp1
7. archon+r2[view] [source] [discussion] 2023-11-20 03:03:55
>>quickt+F1
The idea of "alignment" is pretty troubling to me. Do these people think that they, or those in power, have achieved moral perfection, such that it would be good to have extremely powerful AI systems under their control, aligned with them?

Imagine if the US or any other government of the 1800s had gained that much power, 'locking in' their repugnant values as the moral truth, backed by total control of the world.

replies(4): >>cwillu+43 >>space_+f3 >>ethanb+14 >>m4rtin+Wn4
8. dylan6+w2[view] [source] [discussion] 2023-11-20 03:04:31
>>eloc49+42
Yes, idol worship is alive and well. What's your point?
replies(1): >>hoten+5d
9. cwillu+43[view] [source] [discussion] 2023-11-20 03:09:01
>>archon+r2
Locking in values in that way would be considered a failure of alignment by anyone I've ever read talk about alignment. Not the worst possible failure of alignment (compared to locking in “the value of the entity legally known as OpenAI”, for example), but definitely a straightforward failure to achieve alignment.
replies(1): >>archon+Q4
10. space_+f3[view] [source] [discussion] 2023-11-20 03:10:17
>>archon+r2
I think the worry with the vision of AI under the control of whoever happens to want to use it is that someday that might be the equivalent of giving everyone the keys to a nuclear silo. We know the universe makes it easier to destroy than to create, we know that AI may unleash tremendous power, and nothing we've seen about the world guarantees it stays nice and stable.
replies(2): >>bobthe+v4 >>JohnFe+tT7
11. ethanb+14[view] [source] [discussion] 2023-11-20 03:19:32
>>archon+r2
No, I don’t think they do, which is another point in the “alignment is a very big problem” column.

The problem of defining “what’s a good outcome” is a subproblem of alignment.

12. bobthe+v4[view] [source] [discussion] 2023-11-20 03:23:12
>>space_+f3
If anything, it’ll be a subtle bug that wipes us out.

The 2003 Northeast blackout that affected 50 million people was partially caused by a race condition. https://www.theregister.com/2004/04/08/blackout_bug_report/
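
For anyone unfamiliar with the term: a race condition is a bug where correctness depends on the exact timing of concurrent operations. Here's a toy Python sketch of the classic lost-update flavor (purely illustrative, nothing to do with the actual alarm-system code from the report):

    import threading

    counter = 0

    def worker(iterations):
        global counter
        for _ in range(iterations):
            # Read-modify-write is not atomic: another thread can update
            # `counter` between the read below and the write, and that
            # update is then silently overwritten.
            tmp = counter
            tmp += 1
            counter = tmp

    threads = [threading.Thread(target=worker, args=(500_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # expected 1,000,000; usually prints less due to lost updates

Nobody sees the corruption until the system misbehaves, which is what makes this class of bug so nasty.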

13. archon+Q4[view] [source] [discussion] 2023-11-20 03:26:03
>>cwillu+43
I know it's a theme; MacAskill discusses it in his book. In practice, this is the direction all the "AI safety" departments and organisations seem to be going in.

A world where everyone is paperclipped is probably better than one controlled by psychopathic totalitarian human overlords supported by AI, yet the direction of current research seems to be leading us into the latter scenario.

replies(1): >>oefnak+xj4
14. hoten+5d[view] [source] [discussion] 2023-11-20 04:54:14
>>dylan6+w2
I think perhaps you missed the analogy here?

The fire is OpenAI controlling an AI with their alignment efforts. The analogy here is that some company could recreate the AGI-under-alignment and just... decide to remove the alignment controls. Hence, create another effigy and not set it on fire.

15. pixl97+od[view] [source] [discussion] 2023-11-20 04:57:00
>>quickt+F1
I mean, there are a lot of potential human objectives an AI could be misaligned with. Simple ones are moral misalignment. Existential ones are where the AI wants to use the molecules that make up your body to make more copies of itself.
replies(1): >>JohnFe+QT7
16. frabcu+8k[view] [source] [discussion] 2023-11-20 05:38:23
>>quickt+F1
That is a huge secondary problem, but the larger initial problem is making sure the AI aligns with values that nearly all humans share (e.g. don't kill all humans).
17. leobg+zY[view] [source] 2023-11-20 09:27:59
>>rmorey+(OP)
Well, I guess OpenAI always had a special kind of humor.

Brockman had a robot as ring bearer at his wedding. And instead of asking colleagues how they were doing, they would ask “What is your life a function of?”. This was in 2020.

https://www.theatlantic.com/technology/archive/2023/11/sam-a...

18. jwestb+Lp1[view] [source] [discussion] 2023-11-20 12:31:10
>>cmrdpo+d2
Very much doubt this. Altman may be able to attract the necessary talent, but Altman himself isn't exactly an AI expert.
19. alieni+Oq1[view] [source] [discussion] 2023-11-20 12:38:20
>>voidfu+J
More like very mental. The geeks are drinking their own Kool-Aid and acting weird again.
20. oefnak+xj4[view] [source] [discussion] 2023-11-21 02:28:50
>>archon+Q4
Are you serious? Paperclipped means the end of the human race. Why would you prefer that over anything?
21. Terrif+Hj4[view] [source] [discussion] 2023-11-21 02:30:35
>>IAmGra+I1
The worst thing humanity can do is create a competitor to itself for resources. You do not want AI with survival instincts similar to ours. AIs need to be programmed to be selfless saints, or we will regret it.
22. m4rtin+Wn4[view] [source] [discussion] 2023-11-21 03:00:27
>>archon+r2
Don't nuclear weapons kinda cause something like this? At least the blocs that have them become effectively impossible to destroy and can better spread their ideology.
23. zarzav+4t4[view] [source] [discussion] 2023-11-21 03:36:19
>>IAmGra+I1
The danger with giving machines a will of their own is that people in the future might not look back on it.
24. JohnFe+tT7[view] [source] [discussion] 2023-11-21 23:15:57
>>space_+f3
If they think what they're working on is as bad as nuclear weapons, then why are they even working on it?
25. JohnFe+QT7[view] [source] [discussion] 2023-11-21 23:17:27
>>pixl97+od
> Simple ones are moral misalignment.

That doesn't sound simple. Not all humans have the same moral code, so who gets to decide which is the "correct" one?
