I wonder if anyone with a correct mental model of how LLM agents work (i.e., someone who does not conceptualize them as intelligent entities) has actually granted them any permissions in their own life... personally, I couldn't imagine doing so.
Crypto aside, even the risk of reputational damage from actions performed on my behalf (something as simple as spamming personal or professional contacts) is just too high.
[ insert butter bot meme here ]
The conceptual problem is that there is a huge overlap between the set of "things the agent needs to be able to do in order to be useful" and the set of "things that are potentially dangerous."