zlacker

[parent] [thread] 3 comments
1. kwindl+(OP)[view] [source] 2026-02-04 22:14:38
Yes, but also ... the analogy to assembly is pretty good. We're moving pretty quickly towards a world where we will almost never read the code.

You may read all the assembly that your compiler produces. (Which, awesome! Sounds like you have a fun job.) But I don't. I know how to read assembly and occasionally do it. But I do it rarely enough that whenever I am reading compiler output, to track down a hairy bug or to learn some interesting system-level thing, I have to re-learn a bunch of stuff first. And mostly, even when I have a bug down at the level where reading assembly might help, I'm using other tools at one or two removes to understand the code at that level.

I think it's pretty clear that "reading the code" is going to go the way of reading compiler output. And quite quickly. Even for critical production systems. LLMs are getting better at writing code very fast, and there's no obvious reason we'll hit a ceiling on that progress any time soon.

In a world where the LLMs are not just pretty good at writing some kinds of code, but very good at writing almost all kinds of code, it will be the same kind of waste of time to read source code as it is, today, to read assembly code.

replies(3): >>sho_hn+K3 >>gombos+D5 >>wtetzn+rC
2. sho_hn+K3[view] [source] 2026-02-04 22:34:02
>>kwindl+(OP)
I think it's the performative aspects that are grating, though. You're right that even many systems programmers only look at the generated assembly occasionally, but at least most of them have the good sense to respect the deeper knowledge of mechanism that is to be found there, and many strive to know more eventually. Totally orthogonal to whether writing assembly at scale is sensible practice or not.

But with the AI tools we're not yet at the wave of "sometimes it's good to read the code" virtue-signaling blog posts that will make the front page a year or so from now; we're still at the "I'm the new hot shit because I don't read code" moment, which is all a bit hard to take.

3. gombos+D5[view] [source] 2026-02-04 22:45:22
>>kwindl+(OP)
I think this analogy to assembly is flawed.

Compilers predictably transform code in a programming language into CPU (or VM) instructions. Transpilers predictably transform code in one programming language into another.

We introduced things like instruction set architectures, pinned compiler flags, reproducible builds, and checksums exactly to make sure that whatever build artifact is produced is predictable and dependable.

That reproducibility is how we can trust our software and that's why we don't need to care about assembly (or JVM etc.) specifics 99% of the time. (Heck, I'm not familiar with most of it.)
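To make that concrete, here's a toy sketch (all names made up, and the "compiler" is just a stand-in pure function, not real codegen): because compilation is deterministic in its inputs, the artifact's checksum is reproducible, and that's what lets a third party audit a build.

```python
import hashlib

# Toy model of a reproducible build: the compiler is a pure function
# of (source, flags), so the artifact checksum never varies.
def compile_toy(source: str, flags: tuple = ("-O2",)) -> bytes:
    # Stand-in for real codegen: a deterministic transform of the inputs.
    return ("|".join(flags) + "::" + source).encode()

src = "int main(void) { return 0; }"
a = hashlib.sha256(compile_toy(src)).hexdigest()
b = hashlib.sha256(compile_toy(src)).hexdigest()
print(a == b)  # True: identical inputs, identical artifact, identical digest
```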

Same goes for libraries and frameworks. We can trust their abstractions because someone put years or decades into developing, testing and maintaining them and the community has audited them if they are open-source.

It takes a whole lot of hand-waving to traverse from this point to LLMs - which are stochastic by nature - transforming natural language instructions (even if you call it "specs", it's fundamentally still a text prompt!) to dependable code "that you don't need to read" i.e. a black box.
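The contrast can be sketched in a few lines (hypothetical names, a toy sampler rather than any real model API): an LLM draws each token from a probability distribution, so the same "spec" need not yield the same code run to run.

```python
import random

# Toy model of sampled generation: the same prompt can produce
# different outputs because each completion is a random draw.
def sample_completion(prompt: str, rng: random.Random) -> str:
    candidates = ["return 0;", "return EXIT_SUCCESS;", "exit(0);"]
    return rng.choice(candidates)

rng = random.Random()  # unseeded, as a hosted model effectively is
outputs = {sample_completion("finish main()", rng) for _ in range(100)}
print(len(outputs) > 1)  # almost surely True: same prompt, varying code
```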

4. wtetzn+rC[view] [source] 2026-02-05 02:44:03
>>kwindl+(OP)
The analogy to assembly is wrong. Even in a high-level language, you can read the code and reason about what it does.

What's the equivalent for an LLM? The string of prompts that non-deterministically generates code?

Also, if LLM output is analogous to assembly, then why is that what we're checking in to our source control?

LLMs don't seem to solve any of the problems I had before LLMs existed. I never worried about being able to generate a bunch of code quickly. The problem that needs to be solved is how to better write code that can be understood, and easily modified, with a high degree of confidence that it's correct, performs well, etc. Using LLMs for programming seems to do the opposite.
