zlacker

1. ablyve+(OP) 2023-05-22 17:56:08
Security is impossible, and "AI alignment" is at bottom a security problem. Therefore, "AI alignment" is impossible.

I'm not sure AGI is even possible, per Bostrom's Self-Sampling Assumption.

replies(1): >>jw1224+P1
2. jw1224+P1 2023-05-22 18:06:28
>>ablyve+(OP)
> I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.

Can you explain more? I’ve found a definition of the SSA, but I'm unsure how it applies to AGI…

replies(1): >>ablyve+Iw
3. ablyve+Iw 2023-05-22 20:50:07
>>jw1224+P1
The Self-Sampling Assumption essentially says that you should reason as if you are a random sample from the set of all observers in your reference class.

It's the anthropic principle applied to the distribution/shape of minds across space and time.

Since the mind you find yourself sampling is human-shaped, human-shaped minds are probably the typical kind of observer. If AGI were possible (and common), most observers would be machine minds rather than human ones. So AGI is unlikely.
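For concreteness, here's a toy sketch of that update (my own illustration, not Bostrom's; the hypotheses, numbers, and the ssa_update helper are made up). Under SSA you treat "I find myself as a human-shaped mind" as a random draw from all observers and update over hypotheses about what the observer population looks like:

    # Toy SSA-style Bayesian update (illustrative only).
    # H1: the observer population is overwhelmingly human-shaped minds.
    # H2: most observers are machine minds (AGI is possible and common).
    # Observation: the mind I sample (my own) is human-shaped.

    def ssa_update(prior_h1, frac_human_h1, frac_human_h2):
        # P(I sample a human-shaped mind | H) is just the fraction of
        # observers that are human-shaped under that hypothesis.
        joint_h1 = prior_h1 * frac_human_h1
        joint_h2 = (1.0 - prior_h1) * frac_human_h2
        return joint_h1 / (joint_h1 + joint_h2)

    # Equal priors; 99% human-shaped observers under H1, 1% under H2.
    print(ssa_update(0.5, 0.99, 0.01))   # ~0.99, i.e. the draw favors H1

With those made-up numbers the single observation pushes almost all the credence onto H1, which is the shape of the argument above. Whether that's the right reference class and the right priors is, of course, the whole debate.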
