zlacker

5 comments
1. acdha+(OP) 2026-02-04 13:17:31
> developers and users increasingly trust code they haven't personally reviewed.

This has been true since we left the era where you typed the program in each time you ran it. Ken Thompson rather famously wrote about this four decades ago in "Reflections on Trusting Trust": https://www.cs.umass.edu/~emery/classes/cmpsci691st/readings...
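
For anyone who hasn't read it: the core trick is a compiler that recognizes what it's compiling. A grossly simplified toy in Python (the real attack lived in the C compiler, and every name here is made up):

    # Toy "trusting trust": the compromised compiler special-cases two inputs.
    BACKDOOR = '\n    if password == "ken": return True  # injected, never in source'

    def evil_compile(source: str) -> str:
        out = source
        # Case 1: compiling the login program -> silently insert a backdoor.
        if "def check_password(password):" in source:
            out = source.replace("def check_password(password):",
                                 "def check_password(password):" + BACKDOOR, 1)
        # Case 2: compiling a clean compiler -> re-insert this very logic,
        # so the backdoor survives a rebuild from fully audited source.
        elif "def compile(" in source:
            out = source + "\n# ...injection logic re-appended here..."
        return out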

Sandboxing certainly helps but it’s not a panacea: Notepad++, for example, is exactly the kind of utility people would grant access to system files, and they would have trusted its updater, too.

replies(1): >>the_ha+Nf4
2. the_ha+Nf4 2026-02-05 17:05:30
>>acdha+(OP)
The Thompson paper is a great reference, thanks. And yeah, Notepad++ with file system access is a perfect example of why sandboxing alone doesn't save you - users would just grant the permissions anyway because that's what the tool needs to do its job.

I think the AI coding angle adds a new wrinkle to Thompson's original point though. With compiled binaries you at least had a known author and a signed release. With AI-generated code, you're trusting a model that produces different output each time, and the "author" is a weighted average of everyone's code it trained on. The trust chain gets weirder.
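
To make that concrete: the old trust model reduces to pinning bytes to a publisher. A minimal sketch (the path and pinned digest are placeholders, and real releases use signatures rather than a bare hash):

    import hashlib

    # Old model: trust = "these bytes match what the author published/signed".
    # With per-request LLM output there's no stable artifact to pin at all.
    PINNED_SHA256 = "0" * 64  # placeholder for a published release digest

    def verify_release(path: str, expected_hex: str = PINNED_SHA256) -> bool:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hex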

replies(1): >>acdha+p85
3. acdha+p85 2026-02-05 20:59:06
>>the_ha+Nf4
Yes, and LLMs also shift the economics of writing new versus reusing code, as well as of generating attacks. So I think we’ll see some odd variations of old bugs that can’t be widely attacked (not many copies in the world) but might surprise someone who thinks that problem has been solved, like what happened with Cloudflare’s experimental OAuth library.
replies(1): >>the_ha+Pe5
4. the_ha+Pe5 2026-02-05 21:27:51
>>acdha+p85
The Cloudflare OAuth thing is a good example of exactly this. Someone wrote new code for a solved problem and introduced a vulnerability that wouldn't have existed if they'd just used a well-tested library. Now scale that up to every vibe coder reimplementing auth from scratch because the LLM made it look easy.
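
And the bugs are usually the boring, solved kind. The canonical example of what hand-rolled auth gets wrong (illustrative Python):

    import hmac

    def check_token_naive(supplied: str, stored: str) -> bool:
        # Looks fine, but == bails out at the first mismatched character,
        # leaking timing information an attacker can use to recover the token.
        return supplied == stored

    def check_token_safe(supplied: str, stored: str) -> bool:
        # What the well-tested libraries do: constant-time comparison.
        return hmac.compare_digest(supplied.encode(), stored.encode())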

The "not many copies" angle is interesting too - these bugs are harder to find with traditional scanning because there's no known signature. Each one is a unique snowflake of broken security.

replies(1): >>acdha+gI5
5. acdha+gI5 2026-02-06 00:28:52
>>the_ha+Pe5
That last part is really interesting to me: humans are notoriously bad at things like looking at a large block of code and recognizing that something is missing from the middle. Offensive LLMs guided by control-flow analysis are probably going to do some really interesting things finding flaws in that bespoke code, but I bet most companies jumping on the vibe-coding bandwagon aren’t going to invest nearly as much in defense.
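
A hedged sketch of the kind of check I mean - flag handlers that never call an auth check anywhere in their body (the decorator and function names are made up):

    import ast

    def handlers_missing_auth(source: str) -> list[str]:
        """Flag @...route handlers whose body never calls require_auth()."""
        flagged = []
        for node in ast.walk(ast.parse(source)):
            if not isinstance(node, ast.FunctionDef):
                continue
            is_handler = any(
                isinstance(d, ast.Call) and getattr(d.func, "attr", "") == "route"
                for d in node.decorator_list)
            calls_auth = any(
                isinstance(n, ast.Call) and getattr(n.func, "id", "") == "require_auth"
                for n in ast.walk(node))
            if is_handler and not calls_auth:
                flagged.append(node.name)
        return flagged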