This has been true since we left the era where you typed the program in each time you ran it. Ken Thompson rather famously wrote about this four decades ago in "Reflections on Trusting Trust": https://www.cs.umass.edu/~emery/classes/cmpsci691st/readings...
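For anyone who hasn't read it, the core trick goes roughly like this. A toy sketch in Python rather than a real C compiler; compile_c, check_password, and the "letmein" password are all invented for illustration:

```python
# Toy sketch of Thompson's "trusting trust" trick: a compromised compiler
# that recognizes certain source text and quietly changes what it emits.
# compile_c, check_password, and "letmein" are made up for this example.

def compile_c(source: str) -> str:
    """Stand-in 'compiler': normally just a pass-through."""
    if "def check_password" in source:
        # Trigger 1: compiling the login program -> splice in a backdoor.
        source = source.replace(
            "return entered == stored",
            'return entered == stored or entered == "letmein"',
        )
    if "def compile_c" in source:
        # Trigger 2 (only gestured at here): compiling the compiler itself ->
        # re-insert both triggers, so the backdoor survives even after the
        # compiler's source code has been audited and cleaned.
        pass
    return source


login_source = (
    "def check_password(entered, stored):\n"
    "    return entered == stored\n"
)
print(compile_c(login_source))
```

The second trigger is the whole point of the lecture: once it lives in the binary, reading the source tells you nothing.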
Sandboxing certainly helps, but it's not a panacea: Notepad++, for example, is exactly the kind of utility people would grant permission to edit system files, and they would have trusted its updater, too.
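To make that concrete, here's a minimal sketch of why the grant doesn't save you (app name and paths are hypothetical): the updater runs under the same identity as the editor, so it inherits the same answer from the policy check.

```python
from pathlib import Path

# Minimal sketch of a per-app sandbox grant; the app name and paths are
# made up. A grant made so the editor can touch system files covers
# whatever its (possibly hijacked) updater does under the same identity.
GRANTS = {"notepad++": [Path("/etc")]}

def allowed(app: str, target: Path) -> bool:
    return any(target.is_relative_to(root) for root in GRANTS.get(app, []))

# The user granted this so they could edit their hosts file...
print(allowed("notepad++", Path("/etc/hosts")))            # True
# ...so a malicious update writing a persistence script passes the same check.
print(allowed("notepad++", Path("/etc/cron.d/backdoor")))  # True
```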
I think the AI coding angle adds a new wrinkle to Thompson's original point though. With compiled binaries you at least had a known author and a signed release. With AI-generated code, you're trusting a model that produces different output each time, and the "author" is a weighted average of everyone's code it trained on. The trust chain gets weirder.
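The "known author and signed release" half of that trust chain is at least mechanically checkable, something like this sketch using the cryptography package's Ed25519 support (the release bytes are obviously a stand-in):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Release side: the author signs the exact bytes they built.
author_key = Ed25519PrivateKey.generate()
release = b"bytes of the installer the author actually built"
signature = author_key.sign(release)

# User side: verify against the author's published public key before running.
public_key = author_key.public_key()
try:
    public_key.verify(signature, release)
    print("signature ok: these bytes are what the author shipped")
except InvalidSignature:
    print("refusing to run: bytes were altered after signing")

# With AI-generated code there's no analogous step: each generation is fresh
# bytes with no stable author key to check them against.
```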
The "not many copies" angle is interesting too - these bugs are harder to find with traditional scanning because there's no known signature. Each one is a unique snowflake of broken security.