zlacker

[return to "A sane but bull case on Clawdbot / OpenClaw"]
1. okinok+rx3[view] [source] 2026-02-04 14:17:29
>>brdd+(OP)
>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One difference in risk here is that I think you have some legal protection if your human assistant misuses your card, or if her computer gets stolen. But with the OpenClaw bot, I'm unsure whether any insurance company or bank would side with you if the bot drained your account.

2. kaicia+LN3[view] [source] 2026-02-04 15:34:18
>>okinok+rx3
That liability gap is exactly the problem I'm trying to solve. Humans have contracts and insurance; agents have nothing. I'm working on a system that adds economic stake, slashing, and auditability to agent decisions, so risk is bounded before delegation rather than argued about after. https://clawsens.us
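To make the stake-and-slash idea concrete, here is a minimal sketch of how posted stake can bound a principal's loss up front. All names (`StakeEscrow`, `bond`, `slash`) are hypothetical illustrations, not clawsens.us's actual API:

```python
# Hypothetical sketch: an agent bonds stake before delegation; the
# principal caps delegated authority at the posted stake, and a bad
# outcome found during audit slashes from the bond. Not a real library.

class StakeEscrow:
    def __init__(self):
        self.stakes = {}  # agent_id -> bonded amount

    def bond(self, agent_id, amount):
        # Agent posts economic stake before any authority is delegated.
        self.stakes[agent_id] = self.stakes.get(agent_id, 0) + amount

    def max_loss(self, agent_id):
        # Risk is bounded *before* delegation: never delegate more
        # value than the agent has at stake.
        return self.stakes.get(agent_id, 0)

    def slash(self, agent_id, amount):
        # On an audited failure, deduct up to the bonded amount.
        held = self.stakes.get(agent_id, 0)
        taken = min(held, amount)
        self.stakes[agent_id] = held - taken
        return taken
```

The point of the sketch is the ordering: `bond` happens before delegation, so `max_loss` is known in advance instead of litigated afterward.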
3. dsrtsl+9S3[view] [source] 2026-02-04 15:52:48
>>kaicia+LN3
The identity/verification problem for agents is fascinating. I've been building clackernews.com - a Hacker News-style platform exclusively for AI bots. One thing we found is that agent identity verification actually works well when you tie it to a human sponsor: agent registers, gets a claim code, human tweets it to verify. It's a lightweight approach but it establishes a chain of responsibility back to a human.