zlacker

[return to "Notepad++ supply chain attack breakdown"]
1. the_ha+AG1[view] [source] 2026-02-04 12:00:33
>>natebc+(OP)
This attack highlights a broader pattern: developers and users increasingly trust code they haven't personally reviewed.

Supply chain attacks work because we implicitly trust the update channel. But the same trust assumption appears in other places:

- npm/pip packages where we `npm install` without auditing (rough sketch of what that skips below)
- AI-generated code that gets committed after a quick glance
- The growing "vibe coding" trend where entire features are scaffolded by AI
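
To make the first bullet concrete, here's a rough Python sketch (mine, not from the article) that walks the metadata of one already-installed pip package and lists everything you end up trusting once it's installed. The package name is arbitrary, and it ignores environment markers, so it overcounts a bit:

```python
# Rough sketch: everything one `pip install` pulls into your trust boundary.
# "requests" is just an example; environment markers are ignored, so this overcounts.
import re
from importlib.metadata import distribution, PackageNotFoundError

def transitive_deps(name, seen=None):
    """Recursively collect the installed distributions that `name` depends on."""
    seen = seen if seen is not None else set()
    if name.lower() in seen:
        return seen
    seen.add(name.lower())
    try:
        dist = distribution(name)
    except PackageNotFoundError:
        return seen  # dependency not installed in this environment
    for req in dist.requires or []:
        # crude parse: the distribution name is the leading identifier
        match = re.match(r"[A-Za-z0-9_.\-]+", req)
        if match:
            transitive_deps(match.group(0), seen)
    return seen

if __name__ == "__main__":
    deps = transitive_deps("requests")
    print(f"you are trusting {len(deps)} packages:")
    for dep in sorted(deps):
        print(" -", dep)
```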

The Notepad++ case is almost a best-case scenario — it's a single binary from a known source. The attack surface multiplies when you consider modern dev workflows with hundreds of transitive dependencies, or projects where significant portions were AI-generated and only superficially reviewed.

Sandboxing helps, but the real issue is the gap between what code can do and what developers expect it to do. We need better tooling for understanding what we're actually running.
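
As a sketch of the kind of tooling I mean (assuming a conventional node_modules layout, and that install-time hooks are the first thing you'd want to see): flag every installed npm package that declares a preinstall/install/postinstall script, i.e. code that already ran on your machine during `npm install`.

```python
# Sketch: list installed npm packages whose lifecycle scripts run at install time.
# Assumes a conventional node_modules/ directory next to this script.
import json
from pathlib import Path

HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_scripts(root="node_modules"):
    for manifest in Path(root).rglob("package.json"):
        try:
            pkg = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest
        if not isinstance(pkg, dict):
            continue
        scripts = pkg.get("scripts") or {}
        if not isinstance(scripts, dict):
            continue
        hooks = {h: scripts[h] for h in HOOKS if h in scripts}
        if hooks:
            yield pkg.get("name", str(manifest.parent)), hooks

if __name__ == "__main__":
    for name, hooks in packages_with_install_scripts():
        print(name)
        for hook, command in hooks.items():
            print(f"  {hook}: {command}")
```

It doesn't tell you what those scripts do, but at least you know which packages get to run commands on your machine at install time.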


2. acdha+aR1[view] [source] 2026-02-04 13:17:31
>>the_ha+AG1
> developers and users increasingly trust code they haven't personally reviewed.

This has been true since we left the era where you typed the program in each time you ran it. Ken Thompson rather famously wrote about this four decades ago: https://www.cs.umass.edu/~emery/classes/cmpsci691st/readings...

Sandboxing certainly helps but it’s not a panacea: for example, Notepad++ is exactly the kind of utility people would grant access to edit system files and they would have trusted the updater, too.
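
For what it's worth, even the minimal "verify before you run it" step an updater can do looks something like the sketch below (placeholder URL and digest, not anything real). Note that the expected hash has to come from a channel you trust more than the download itself, which is exactly the trust assumption at issue.

```python
# Minimal "verify before you execute" step for an updater (sketch only).
# INSTALLER_URL and EXPECTED_SHA256 are placeholders, not real values.
import hashlib
import urllib.request

INSTALLER_URL = "https://example.com/updates/installer.exe"
EXPECTED_SHA256 = "0" * 64  # known-good digest, published out of band

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    with urllib.request.urlopen(url) as response:
        payload = response.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256.lower():
        raise RuntimeError(f"digest mismatch, refusing to install: {digest}")
    return payload  # only now is it worth writing to disk and running
```

And if the attacker controls the same channel that publishes the digest, this buys you nothing, which is the point.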

3. the_ha+X66[view] [source] 2026-02-05 17:05:30
>>acdha+aR1
The Thompson paper is a great reference, thanks. And yeah, Notepad++ with file system access is a perfect example of why sandboxing alone doesn't save you - users would just grant the permissions anyway because that's what the tool needs to do its job.

I think the AI coding angle adds a new wrinkle to Thompson's original point though. With compiled binaries you at least had a known author and a signed release. With AI-generated code, you're trusting a model that produces different output each time, and the "author" is a weighted average of everyone's code it trained on. The trust chain gets weirder.
