Respect for your writing, but I feel you and many others have the risk calculus here backwards.
The lack of attacks in the wild doesn't mean the risk isn't there - it means malicious actors have not yet started exploiting it. In the meantime, security researchers are publishing proof-of-concept data exfiltration attacks all the time. I've been collecting those here: https://simonwillison.net/tags/exfiltration-attacks/
Johann Rehberger calls this effect "The Normalization of Deviance in AI", borrowing a term from sociologist Diane Vaughan's analysis of the 1986 Space Shuttle Challenger disaster: https://embracethered.com/blog/posts/2025/the-normalization-...
Short version: the longer a company or community gets away with behaving unsafely without feeling the consequences, the more likely they are to ignore those risks.
I'm certain that's what is happening to us all today with coding agents. I use them in an unsafe way myself.