Supply chain attacks work because we implicitly trust the update channel. But the same trust assumption appears in other places:
- npm/pip packages where we `npm install` without auditing
- AI-generated code that gets committed after a quick glance
- The growing "vibe coding" trend where entire features are scaffolded by AI
The Notepad++ case is almost a best-case scenario — it's a single binary from a known source. The attack surface multiplies when you consider modern dev workflows with hundreds of transitive dependencies, or projects where significant portions were AI-generated and only superficially reviewed.
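For a rough sense of that multiplier, here's a minimal sketch, assuming an npm lockfile v2/v3 (where `package-lock.json` carries a `packages` map with `""` as the root project), that compares the dependencies you actually chose against everything that gets installed:

```typescript
import { readFileSync } from "node:fs";

// Lockfile v2/v3: "packages" maps install paths to entries,
// with "" being the root project itself.
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const installed = Object.keys(lock.packages ?? {}).filter((p) => p !== "");

const root = lock.packages?.[""] ?? {};
const direct = Object.keys({
  ...root.dependencies,
  ...root.devDependencies,
}).length;

console.log(`direct dependencies you picked: ${direct}`);
console.log(`packages actually installed:    ${installed.length}`);
console.log(`transitive, rarely audited:     ${installed.length - direct}`);
```

On a typical web project the last number dwarfs the first, and every one of those entries sits inside the same trust assumption.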
Sandboxing helps, but the real issue is the gap between what code can do and what developers expect it to do. We need better tooling for understanding what we're actually running.
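As one small example of the kind of tooling I mean: verifying a download against a digest published out of band, before anything runs. This is a hedged sketch; the filename and digest below are placeholders, not real release values:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Placeholders: in practice the expected digest comes from the project's
// release notes, fetched over a channel independent of the download itself.
const EXPECTED_SHA256 =
  "0000000000000000000000000000000000000000000000000000000000000000";
const INSTALLER = "npp.installer.exe"; // hypothetical filename

const actual = createHash("sha256")
  .update(readFileSync(INSTALLER))
  .digest("hex");

if (actual !== EXPECTED_SHA256) {
  console.error(`checksum mismatch for ${INSTALLER}: got ${actual}`);
  process.exit(1);
}
console.log("checksum matches the published value; OK to run");
```

It's trivial, but note that it only moves the trust to wherever the published digest lives, which is exactly the gap a compromised update channel exploits.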
This has been true since we left the era where you typed the program in each time you ran it. Ken Thompson rather famously wrote about this four decades ago in "Reflections on Trusting Trust": https://www.cs.umass.edu/~emery/classes/cmpsci691st/readings...
Sandboxing certainly helps, but it's not a panacea: Notepad++ is exactly the kind of utility people grant access to edit system files, and they'd extend that same trust to its updater.
I think the AI coding angle adds a new wrinkle to Thompson's original point though. With compiled binaries you at least had a known author and a signed release. With AI-generated code, you're trusting a model that produces different output each time, and the "author" is a weighted average of everyone's code it trained on. The trust chain gets weirder.
The "not many copies" angle is interesting too - these bugs are harder to find with traditional scanning because there's no known signature. Each one is a unique snowflake of broken security.