zlacker

[return to "Governance of Superintelligence"]
1. ablyve+e3 2023-05-22 17:56:08
>>davidb+(OP)
Security is impossible, and "AI alignment" is at bottom a security problem. Therefore, "AI alignment" is impossible.

I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.
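
(For anyone who hasn't run into it: the Self-Sampling Assumption says, roughly, that all other things equal you should reason as if you were randomly selected from the set of all observers in your reference class. A minimal sketch of the usual formalisation, assuming a hypothesis h under which the reference class contains N_h observers:

    % Self-Sampling Assumption (SSA): conditional on hypothesis h,
    % assign uniform credence to being any one of the N_h observers
    % in the reference class that would exist if h were true.
    \[
      P(\text{I am observer } o_i \mid h) \;=\; \frac{1}{N_h},
      \qquad i = 1, \dots, N_h .
    \]

The contentious part is the choice of reference class, not the arithmetic.)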

2. jw1224+35 2023-05-22 18:06:28
>>ablyve+e3
> I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.

Can you explain more? I’ve found a definition for the SSA, but I’m unsure how it applies to AGI…
