zlacker

[parent] [thread] 2 comments
1. oerste+(OP)[view] [source] 2026-02-04 15:16:11
Indeed, even if AI and humans can in principle do similar harm, we have very good mechanisms for making it quite unlikely that a human will actually do so.

These disincentives are built on the fact that humans have physical needs they must meet to survive, and they enjoy having those needs well covered without worrying about them. Humans also very much like being free, dislike pain, and want a good reputation with the people around them.

It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

To be fair, we also have other soft but reasonably effective means of making it unlikely that an AI will behave badly in practice. These methods are still fragile, but they are getting better quickly.

In either case, it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.

replies(2): >>deepsp+ly >>nfw2+v02
2. deepsp+ly[view] [source] 2026-02-04 17:47:27
>>oerste+(OP)
The author stated that their human assistant is located in another country, which adds a huge layer of complexity to the accountability equation.

In fact, if I wanted to run a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work while posing as a human, harvesting personal information at scale along the way.

3. nfw2+v02[view] [source] 2026-02-05 01:46:16
>>oerste+(OP)
On the other hand, other humans may have intrinsic interests, outside your control, that lead them to harm you despite the mechanisms you mentioned, whereas bots by default have no such motives.