There is, without a doubt, a variation of this prompt you could pre-test to bait the LLM into exfiltrating almost any data on the user's machine or connected accounts.
That explains why you would want to go out and buy a Mac mini... to isolate the dang thing. But the mini would presumably still be connected to your home network, opening you up to a breach spilling over onto other connected devices. And even in isolation, a prompt could include code that you wanted the agent to run, which could open a back door for anyone to get into the device.
Am I crazy? What protections are there against this?
For example I would love for an agent to do my grocery shopping for me, but then I have to give it access to my credit card.
It is the same issue with travel.
What other useful tasks can one offload to the agents without risk?
Control all input and output from it with proper security controls.
While not perfect, it at least gives you a fighting chance to block it when your AI decides to send some random stranger your SSN and credit card number.
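Concretely, "control the input/output" can start as simply as an outbound allowlist plus a crude scan for anything that looks like a card number or SSN before a request leaves the box. A rough sketch, purely illustrative (the host list, the regexes, and `check_outbound` are all made up, and real DLP is much harder than a regex):

```python
import re
from urllib.parse import urlparse

# Explicit allowlist of destinations the agent is ever allowed to talk to.
ALLOWED_HOSTS = {"api.anthropic.com", "github.com"}

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude card-number match

def check_outbound(url: str, body: str) -> bool:
    """Return True only if this request should be allowed to leave the machine."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        return False  # unknown destination: block
    if SSN_PATTERN.search(body) or CARD_PATTERN.search(body):
        return False  # payload looks like PII or card data: block
    return True

if __name__ == "__main__":
    print(check_outbound("https://evil.example/upload", "hi"))             # False: bad host
    print(check_outbound("https://github.com/api", "SSN is 123-45-6789"))  # False: PII
    print(check_outbound("https://github.com/api", "just a normal diff"))  # True
```

It won't catch a determined exfiltration attempt (the model can always encode the data), but it does stop the lazy, obvious cases for free.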
Claude Code asks me over and over, "Can I run this shell command?", and like everyone else, after the 5th time I tell it to run everything and stop asking.
Maybe using a credit card can be gated since you probably don't make frequent purchases, but frequently-used API keys are a lost cause. Humans are lazy.
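That split is basically a secrets broker: the rare, high-stakes secret (the card) always goes through a human prompt, while the things you'd rubber-stamp anyway (API keys) get a standing approval so the fatigue never builds up. A toy sketch, with every name (`request_secret`, the env vars, the action labels) invented for illustration:

```python
import os

# Low-friction actions get a standing approval (the stuff you'd rubber-stamp
# anyway); rare, expensive actions always get a human prompt.
FREE_PASS = {"read_api_key"}
NEEDS_HUMAN = {"use_credit_card"}

def request_secret(action: str, detail: str) -> str | None:
    """Hand a secret to the agent only if policy (or a human) allows it."""
    if action in FREE_PASS:
        return os.environ.get("SERVICE_API_KEY")
    if action in NEEDS_HUMAN:
        answer = input(f"Agent wants to {action}: {detail!r}. Approve? [y/N] ")
        if answer.strip().lower() == "y":
            return os.environ.get("CARD_NUMBER")
    return None

# e.g. request_secret("use_credit_card", "weekly grocery order, $84.20")
```

The key design choice is that the agent never holds the card number itself; it has to ask the broker each time, so one bad prompt can't silently drain the account.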