zlacker

[parent] [thread] 12 comments
1. quickt+(OP)[view] [source] 2023-11-20 02:59:19
Which particular human's objectives did it not align with?
replies(3): >>archon+M >>pixl97+Jb >>frabcu+ti
2. archon+M[view] [source] 2023-11-20 03:03:55
>>quickt+(OP)
The idea of "alignment" is pretty troubling to me. Do these people think that they, or those in power, have achieved moral perfection, such that it would be good to have extremely powerful AI systems under their control, aligned with them?

Imagine if the US or any other government of the 1800s had gained so much power, 'locking in' their repugnant values as the moral truth, backed by total control of the world.

replies(4): >>cwillu+p1 >>space_+A1 >>ethanb+m2 >>m4rtin+hm4
◧◩
3. cwillu+p1[view] [source] [discussion] 2023-11-20 03:09:01
>>archon+M
Locking in values in that way would be considered a failure of alignment by anyone I've ever read talk about alignment. Not the worst possible failure of alignment (compared to locking in “the value of the entity legally known as OpenAI”, for example), but definitely a straightforward failure to achieve alignment.
replies(1): >>archon+b3
◧◩
4. space_+A1[view] [source] [discussion] 2023-11-20 03:10:17
>>archon+M
I think the worry with the vision of AI under the control of whoever happens to want to use it is that someday that might be the equivalent of giving everyone the keys to a nuclear silo. We know the universe makes it easier to destroy than to create, we know that AI may unleash tremendous power, and nothing we've seen about the world guarantees it will stay nice and stable.
replies(2): >>bobthe+Q2 >>JohnFe+OR7
◧◩
5. ethanb+m2[view] [source] [discussion] 2023-11-20 03:19:32
>>archon+M
No, I don’t think they do, which is another point in the “alignment is a very big problem” column.

The problem of defining "what's a good outcome" is a sub-problem of alignment.

◧◩◪
6. bobthe+Q2[view] [source] [discussion] 2023-11-20 03:23:12
>>space_+A1
If anything it’ll be a subtle bug that wipes us out.

The 2003 Northeast blackout that affected 50 million people was partially caused by a race condition. https://www.theregister.com/2004/04/08/blackout_bug_report/
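For context, a race condition is when the result depends on the timing of concurrent operations. A minimal sketch in Python (purely illustrative, not the actual blackout code): two threads update a shared counter without a lock, and some updates can be lost.

    import threading

    counter = 0

    def increment(n):
        # read-modify-write on a shared variable is not atomic,
        # so concurrent increments can interleave and get lost
        global counter
        for _ in range(n):
            counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # expected 200000, but may print less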

◧◩◪
7. archon+b3[view] [source] [discussion] 2023-11-20 03:26:03
>>cwillu+p1
I know it's a theme; MacAskill discusses it in his book. In practice, this is the direction all the "AI safety" departments and organisations seem to be heading.

A world where everyone is paperclipped is probably better than one controlled by psychopathic totalitarian human overlords supported by AI, yet the direction of current research seems to be leading us into the latter scenario.

replies(1): >>oefnak+Sh4
8. pixl97+Jb[view] [source] 2023-11-20 04:57:00
>>quickt+(OP)
I mean, there are a lot of potential human objectives an AI could be misaligned with. Simple ones are moral misalignment. Existential ones are ones where the AI wants to use the molecules that make up your body to make more copies of the AI.
replies(1): >>JohnFe+bS7
9. frabcu+ti[view] [source] 2023-11-20 05:38:23
>>quickt+(OP)
That is a secondary and huge problem, but the larger initial problem is making sure the AI aligns with values that nearly all humans share (e.g. don't kill all humans).
◧◩◪◨
10. oefnak+Sh4[view] [source] [discussion] 2023-11-21 02:28:50
>>archon+b3
Are you serious? Paperclipped means the end of the human race. Why would you prefer that over anything?
◧◩
11. m4rtin+hm4[view] [source] [discussion] 2023-11-21 03:00:27
>>archon+M
Don't nuclear weapons kinda cause something like this? At least the blocs that have them become effectively impossible to destroy and can better spread their ideology.
◧◩◪
12. JohnFe+OR7[view] [source] [discussion] 2023-11-21 23:15:57
>>space_+A1
If they think what they're working on is as bad as nuclear weapons, then why are they even working on it?
◧◩
13. JohnFe+bS7[view] [source] [discussion] 2023-11-21 23:17:27
>>pixl97+Jb
> Simple ones are moral misalignment.

That doesn't sound simple. Not all humans have the same moral code, so who gets to decide which is the "correct" one?
