[return to "Governance of Superintelligence"]
1. ablyve+e3 2023-05-22 17:56:08
>>davidb+(OP)
Security is impossible, and "AI alignment" is ultimately a security problem. Therefore, "AI alignment" is impossible.

I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.

2. jw1224+35 2023-05-22 18:06:28
>>ablyve+e3
> I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.

Can you explain more? I’ve found a definition of the SSA, but I’m unsure how it applies to AGI…

3. ablyve+Wz 2023-05-22 20:50:07
>>jw1224+35
The self-sampling assumption essentially says that you should reason as if you are a random sample drawn from the set of all observers (past, present, and future) in your reference class.

It's the anthropic principle applied to the distribution/shape of intelligence.

Since the one self-sample you have is a human-shaped mind, human-shaped minds are most likely the dominant kind of mind. So AGI is unlikely.
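To make the update concrete, here's a toy numerical sketch (my own illustration with made-up assumptions, not anything from Bostrom): put a uniform prior on the unknown fraction f of observers that are human-shaped, let SSA say that you, as a random sample, are human-shaped with probability f, and condition on the one human-shaped observation you actually have. The posterior mean of f ends up above the prior mean, which is the direction the argument above is pointing.

    import numpy as np

    # Candidate values for f, the fraction of all observers that are human-shaped.
    f = np.linspace(0.01, 0.99, 99)

    # Uniform prior over f (a pure assumption, chosen only for illustration).
    prior = np.ones_like(f) / len(f)

    # SSA treats you as a random draw from the observer pool,
    # so P(your sample is human-shaped | f) = f.
    likelihood = f

    # Bayesian update on the single observation "my mind is human-shaped".
    posterior = prior * likelihood
    posterior /= posterior.sum()

    print("prior mean of f:    ", round(float(np.sum(f * prior)), 3))      # 0.5
    print("posterior mean of f:", round(float(np.sum(f * posterior)), 3))  # ~0.663

The grid and the uniform prior are arbitrary, and a single sample only nudges the posterior; the point is just the direction of the SSA-style update.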
