One of the differences in risk here is that I think you get some legal protection if your human assistant misuses your account, or if it gets stolen. But with the OpenClaw bot, I'm unsure whether any insurer or bank will side with you if the bot drains your account.
These disincentives are built upon the fact that humans have physical necessities they need to cover for survival, and they enjoy having those well fulfilled and not worrying about them. Humans also very much like to be free, dislike pain, and want to have a good reputation with the people around them.
It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.
Although, to be fair, we also have other soft but effective means to make it unlikely that an AI will behave badly in practice. These methods are fragile but improving quickly.
In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.
And OpenClaw could probably help :)
chef's kiss
> an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit
OpenClaw is not legally a person, it's a program. A program which is being operated by the consumer, or by a person authorized by said consumer to act on their behalf. Further, any access to funds it has would have to be granted by the consumer (or a human agent thereof). Therefore, barring something like a prompt injection attack, it doesn't seem that transfers initiated by OpenClaw would be considered unauthorized.
[0]: https://www.consumerfinance.gov/rules-policy/regulations/100...
In the plugin docs is a config UI builder. Plugin is OSS, boards aren’t.
In fact, if I wanted to run a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work while pretending to be a human, meanwhile harvesting personal information at scale.
Additionally:
- As has been pointed out elsewhere in the thread, it can be difficult to separate out "prompt injection" from "marketing" in some cases.
- Depending on what the vector for the prompt injection is, what model your OpenClaw instance uses, etc., it might not be easy or even possible to determine whether a given transfer was the result of prompt injection or just the bot making a stupid mistake. If the burden of proof is on the consumer to show that it was prompt injection, this would leave many victims with no way to recover their funds. On the other hand, if banks are required to assume prompt injection unless there's evidence against it, I strongly suspect banks would respond by simply banning the use of OpenClaw and similar software with their systems as part of their agreements with their customers. They might well end up doing that regardless.
- Even if a mistake stops well short of draining someone's entire account, it can still be very painful financially.