One difference in risk here is that I think you'd have some legal protection if your human assistant misuses your account, or if your credentials get stolen. But with the OpenClaw bot, I am unsure whether any insurer or bank would side with you if the bot drained your account.
These disincentives rest on the fact that humans have physical necessities they must cover to survive, and they enjoy having those needs well met without worrying about them. Humans also very much like to be free, dislike pain, and want a good reputation with the people around them.
It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.
Although, to be fair, we also have other soft but effective means of making it unlikely that an AI will behave badly in practice. These methods are fragile, but they are improving quickly.
In either case, it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.
In fact, if I wanted to run a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work while pretending to be human, all the while harvesting personal information at scale.