zlacker

[parent] [thread] 1 comments
1. pixl97+(OP)[view] [source] 2023-11-20 04:57:00
I mean, there are a lot of potential objectives an AI could be misaligned with in relation to humans. Simple ones are moral misalignment. Existential ones are where the AI wants to use the molecules that make up your body to make more copies of itself.
replies(1): >>JohnFe+sG7
2. JohnFe+sG7[view] [source] 2023-11-21 23:17:27
>>pixl97+(OP)
> Simple ones are moral misalignment.

That doesn't sound simple. Not all humans have the same moral code, so who gets to decide which is the "correct" one?