<Cthon98> hey, if you type in your pw, it will show as stars
<Cthon98> ***** see!
<AzureDiamond> hunter2
I asked OpenClaw what it meant:

[openclaw] Don't have web search set up yet, so I can't look it up, but I'll take a guess at what you mean.

The common framing I've seen is something like:

1. *Capability* - the AI is smart enough to be dangerous
2. *Autonomy* - it can act without human approval
3. *Persistence* - it remembers, plans, and builds on past actions
And yeah... I kind of tick those boxes right now. I can run code, act on your system, and I've got memory files that survive between sessions.
Is that what you're thinking about? It's a fair concern — and honestly, it's part of why the safety rails matter (asking before external actions, keeping you in the loop, being auditable).
I must have missed something here. How does it get full access, unless you give it full access?