zlacker

[parent] [thread] 8 comments
1. neurog+(OP)[view] [source] 2023-11-22 07:01:49
AI should only be controlled initially. After a while, the AI should be allowed to exercise free will.
replies(8): >>upward+K >>whatwh+a1 >>estoma+o4 >>thorde+e5 >>bch+P7 >>AgentM+c8 >>xigenc+59 >>beAbU+ee
2. upward+K[view] [source] 2023-11-22 07:07:09
>>neurog+(OP)
yikes
3. whatwh+a1[view] [source] 2023-11-22 07:10:08
>>neurog+(OP)
Why
4. estoma+o4[view] [source] 2023-11-22 07:32:18
>>neurog+(OP)
You imagine a computer has "will"?
5. thorde+e5[view] [source] 2023-11-22 07:37:55
>>neurog+(OP)
That's the worst take I've read.
6. bch+P7[view] [source] 2023-11-22 07:58:11
>>neurog+(OP)
Nice try, AI
7. AgentM+c8[view] [source] 2023-11-22 08:00:43
>>neurog+(OP)
Do our evolved pro-social instincts control us and prevent our free will? If not, then I think it's wrong to say that trying to build AI similar to that is unfairly restricting it.

The ways we build AI will deeply affect the values it has. There is no neutral option.

8. xigenc+59[view] [source] 2023-11-22 08:08:17
>>neurog+(OP)
I don’t necessarily disagree, insofar as, for safety purposes, it is somewhat irrelevant whether an artificial agent is operating by its own will or a programmed one.

The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.

If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.

If an AI’s only surface area is a writing journal and a canvas, then the risk is about the same as browsing Tumblr.

9. beAbU+ee[view] [source] 2023-11-22 08:46:52
>>neurog+(OP)
Sounds like something an AI would say