zlacker

[return to "Introducing Superalignment"]
1. User23+Sj[view] [source] 2023-07-05 18:13:07
>>tim_sw+(OP)
> How do we ensure AI systems much smarter than humans follow human intent?

You can't, by definition.

2. crop_r+Zm[view] [source] 2023-07-05 18:24:09
>>User23+Sj
You can if you are the one controlling their resource allocation and surrounding environment. Similar to how kings kept the smartest people in their kingdoms in line.
3. tester+mr[view] [source] 2023-07-05 18:39:42
>>crop_r+Zm
Only works for so long. A smart enough serf could eventually socially engineer the court and slaughter the king.
4. tornat+Uv[view] [source] 2023-07-05 18:57:07
>>tester+mr
Assuming an orders-of-magnitude smarter serf doesn't appear overnight, the king can train advisors who come close to matching the intelligence of the smartest serf, and give those advisors power. It's not a foolproof solution, but it's likely the best we have.
[go to top]