zlacker

[return to "Introducing Superalignment"]
1. User23+Sj[view] [source] 2023-07-05 18:13:07
>>tim_sw+(OP)
> How do we ensure AI systems much smarter than humans follow human intent?

You can't, by definition.

2. crop_r+Zm[view] [source] 2023-07-05 18:24:09
>>User23+Sj
You can if you are the one controlling their resource allocation and surrounding environment, similar to how kings kept the smartest people in their kingdoms in line.
3. tester+mr[view] [source] 2023-07-05 18:39:42
>>crop_r+Zm
Only works for so long. A smart enough serf could easily find a way to socially engineer and slaughter the king.
4. usaar3+ZH[view] [source] 2023-07-05 19:50:21
>>tester+mr
I'm not convinced. Omniscience isn't the same as intelligence.

There are diminishing returns to intelligence, and inherent unknowns in every move the serf can make. The serf still has to evade detection, which may be effectively impossible given that he doesn't know how detection works.
