For most actions that don't have much content, this could work well as a simple phone popup where you authorise or deny.
The annoying parts would be if you want the agent to reply to an email that has a full PDF or a lot of text, you'd have to review to make sure the content does not include prompt injections. I think this can be further mitigated and improved with static analysis tools specifically for this purpose.
But I think it helps to think of it not as a way to prevent LLMs to be prompt injected. I see social engineering as the equivalent of prompt injection but for humans. So if you have a personal assistant, you'd also them to be careful with that and to authorise certain sensitive actions every time they happen. And you would definitely want this for things like making payments, changing subscriptions, etc.