zlacker

[return to "A sane but bull case on Clawdbot / OpenClaw"]
1. okinok+rx3[view] [source] 2026-02-04 14:17:29
>>brdd+(OP)
>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One difference in risk here is that you have some legal protection if your human assistant misuses your card, or if it gets stolen. But with the OpenClaw bot, I'm not sure any insurer or bank will side with you if the bot drains your account.

2. oerste+XJ3[view] [source] 2026-02-04 15:16:11
>>okinok+rx3
Indeed, even if in principle AI and humans can cause similar harm, we have very good mechanisms for making it quite unlikely that a human will actually do so.

These disincentives are built on the fact that humans have physical needs they must meet to survive, and they enjoy having those needs comfortably covered without having to worry about them. Humans also very much like being free, dislike pain, and want a good reputation with the people around them.

It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

Although, to be fair, we do have other, softer means of making it unlikely that an AI will behave badly in practice. These methods are still fragile, but they are improving quickly.

In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.
