zlacker

[return to "Inside The Chaos at OpenAI"]
1. rmorey+l3 2023-11-20 02:47:45
>>maxuti+(OP)
> For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles.

Honestly, pretty sick

2. quickt+05 2023-11-20 02:59:19
>>rmorey+l3
Which particular human's objectives did it not align with?
3. pixl97+Jg 2023-11-20 04:57:00
>>quickt+05
I mean, there are a lot of ways an AI could be misaligned with human objectives. Simple ones are moral misalignment. Existential ones are where the AI wants to use the molecules that make up your body to make more copies of itself.
4. JohnFe+bX7 2023-11-21 23:17:27
>>pixl97+Jg
> Simple ones are moral misalignment.

That doesn't sound simple. Not all humans have the same moral code, so who gets to decide which is the "correct" one?
