This dangerous misreading of the actual threat model only does a better job of concealing the real risks. What might those real risks be? That is the question. Might they include more subtle forms of nastiness, if anything at all?
I don't believe there will be any real nastiness. But if you do believe they'll be nasty, it at least pays to be rational about the ways that might actually happen, no?
Now multiply that by every SaaS provider you hand your plaintext credentials to.
I really don't think this is a thing.
I just think this whole thing is overblown.
If there's a risk here, it's comparable to, and probably smaller than, the risk of running any library you installed from a registry into your code. And I think that's a fair comparison: the supply chain matters more than the AI chain.
You can think of AI agents as the fancy bathroom in a high-end hotel, whereas all that code you're putting on your computer? That's the grimy public lavatory lol.