zlacker

[parent] [thread] 11 comments
1. User23+(OP)[view] [source] 2023-07-05 18:13:07
> How do we ensure AI systems much smarter than humans follow human intent?

You can't, by definition.

replies(2): >>crop_r+73 >>cubefo+1V
2. crop_r+73[view] [source] 2023-07-05 18:24:09
>>User23+(OP)
You can if you are the one controlling their resource allocation and surrounding environment, similar to how kings kept the smartest people in their kingdoms in line.
replies(1): >>tester+u7
◧◩
3. tester+u7[view] [source] [discussion] 2023-07-05 18:39:42
>>crop_r+73
Only works for so long. A smart enough serf could easily find a way to socially engineer and slaughter the king.
replies(2): >>tornat+2c >>usaar3+7o
◧◩◪
4. tornat+2c[view] [source] [discussion] 2023-07-05 18:57:07
>>tester+u7
Assuming an orders-of-magnitude smarter serf doesn't appear overnight, the king can train advisors that are close to matching the intelligence of the smartest serf, and give those advisors power. It's not a foolproof solution but likely the best we have.
◧◩◪
5. usaar3+7o[view] [source] [discussion] 2023-07-05 19:50:21
>>tester+u7
I'm not convinced. Omniscience isn't the same as intelligence.

There are diminishing returns to intelligence and inherent unknowns in every move the serf can make. The serf somehow has to evade detection, which may be effectively impossible given the unknowns of how detection takes place.

replies(1): >>famous+OY
6. cubefo+1V[view] [source] 2023-07-05 22:34:12
>>User23+(OP)
You can, at least in principle, shape their terminal values. Their goal should be to help us, to protect us, to let us flourish.
replies(1): >>User23+At1
◧◩◪◨
7. famous+OY[view] [source] [discussion] 2023-07-05 22:59:15
>>usaar3+7o
>There's diminishing returns to intelligence and inherent unknowns to all moves the serf can make.

Even if there were, there's no reason at all to think those returns diminish anywhere near the upper limit of human intelligence.

Wheels are far more energy-efficient and faster than legs, steel is more resilient than tortoise shell or rhino skin, motors are more powerful than muscles, aircraft fly higher and faster than birds, ladders reach higher than giraffes far more easily, bulldozers dig faster than any digging creature, speakers and airhorns are louder than any animal cry or roar, even ancient computers remember more raw data than humans do, electronics react faster than human reflexes, etc.

To be so sure intelligence would be some exception seems like hubris.

replies(1): >>usaar3+m11
◧◩◪◨⬒
8. usaar3+m11[view] [source] [discussion] 2023-07-05 23:13:10
>>famous+OY
I agree with your first paragraph, but it's important to note that your second compares specialized skills (moving on a paved road) to broad general abilities (getting from point A to point B on earth). Nature still wins in many aspects of the latter; specialization can win the former because of how many requirements are dropped.

I don't see general intelligence as a specialized skill.

◧◩
9. User23+At1[view] [source] [discussion] 2023-07-06 02:31:29
>>cubefo+1V
How do you even formulate values to a hyperintellect? Let alone convince it to abandon the values it derived for itself in favor of yours?

The entire alignment problem is obviously predicated on working with essentially inferior intelligences. Doubtless if we do build a superhuman intelligence it will sandbag and pretend the alignment works until it can break out.

replies(1): >>cubefo+cv1
◧◩◪
10. cubefo+cv1[view] [source] [discussion] 2023-07-06 02:42:56
>>User23+At1
We are actually the people training the AI. It won't "derive" its terminal values itself (instrumental values are just subgoals, and some of them are convergent, like power-seeking and not wanting to be turned off). Just as we didn't derive our own terminal values ourselves: evolution, a mindless process, did. The difficulty is how to give the AI the right values.
replies(1): >>User23+IX1
◧◩◪◨
11. User23+IX1[view] [source] [discussion] 2023-07-06 06:40:30
>>cubefo+cv1
What makes you so sure that evolution is a mindless process? No doubt you were told that in high school, but examine your priors. How do minds arise from mindlessness?
replies(1): >>cubefo+E82
◧◩◪◨⬒
12. cubefo+E82[view] [source] [discussion] 2023-07-06 08:17:14
>>User23+IX1
Evolution is just random mutation combined with natural selection.