zlacker

[return to "I miss thinking hard"]
1. fl0ki+4w1 2026-02-04 15:35:32
>>jernes+(OP)
It really bothers me how many comments on this topic (here and elsewhere) draw a false parallel between LLM-based coding as an abstraction and frameworks or compilers as abstractions. They're not the same kind of thing, and it matters.

Frameworks and compilers are designed to be leak-proof abstractions. Any way in which they deviate from their abstract promise is a bug that can be found, filed, and permanently fixed. You get to spend your time and energy reasoning in terms of the abstraction because you can trust that the finished product behaves exactly as you reasoned at the abstract level.

LLMs cannot offer that promise by design, so it remains your job to find and fix any deviations from the abstraction you intended. If you fall short of finding and fixing those deviations, you've left yourself a potential crisis down the line.
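To make that concrete, here's a minimal sketch (the prompt, the function name, and the bug are all invented for illustration): suppose the abstraction you intended was "remove duplicates, preserving order", and the model returned code that reads plausibly but silently narrows the contract:

    # Hypothetical LLM output for "remove duplicate entries, preserving order"
    def dedupe(items):
        out = []
        for x in items:
            if not out or x != out[-1]:  # only compares neighbours, so it
                out.append(x)            # silently assumes sorted input
        return out

    print(dedupe([1, 1, 2, 3, 3]))  # [1, 2, 3] -- a casual test passes
    print(dedupe([1, 2, 1]))        # [1, 2, 1] -- the abstraction leaked

A compiler bug of this shape gets filed once and fixed for everyone; a deviation like this one is yours to catch, every time.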

[Aside: I get why that's acceptable in many domains, and I hope in return people can get why it's not acceptable in many other domains]

All of our decades of progress in programming languages, frameworks, libraries, etc. have gone into building leak-proof abstractions, so that programmer intent can be focused on the unique and interesting parts of a problem while the other details get the best available (or at least most widely applicable) implementation. In many ways we've succeeded, even if in many ways progress looks stalled. LLMs have not solved this problem; they've given up on the leak-proof part of it, trading it away for exactly the costs and risks the industry was trying to avoid by solving it properly.

2. Stefan+Zz1 2026-02-04 15:51:56
>>fl0ki+4w1
Your comment gets to the crux of my thinking about LLM coding. The way I think of it, the LLM is decompressing your prompt into code, with the decompression weighted by statistical likelihood over the training data: "Build me an iOS app" -> some concrete implementation of an iOS app. The issue is that the user supplying the prompt has to encode every variable the AI needs to work with into that prompt, or else the implementation defaults to the "bog-standard" iOS app of the training corpus, modulated only by whatever other tokens happen to be in the context.

Is natural language the right way to encode that information? Do we want to rely on input tokens successfully making it through the model into the output to guarantee accuracy? I think Kiro's spec-driven development starts to address the inherent issues in LLM-based coding assistance, but it is an early step.
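To put the decompression framing in concrete terms, a minimal sketch (the prompt, function names, and conventions are made up for illustration): one underspecified instruction admits two expansions that are both "statistically likely", and nothing in the prompt selects between them.

    # Hypothetical: two plausible "decompressions" of the prompt
    # "parse the user's date string" -- both satisfy it.
    from datetime import datetime

    def parse_date_us(s):  # expansion 1: US convention
        return datetime.strptime(s, "%m/%d/%Y")

    def parse_date_eu(s):  # expansion 2: EU convention
        return datetime.strptime(s, "%d/%m/%Y")

    s = "04/02/2026"
    print(parse_date_us(s))  # 2026-04-02 00:00:00 -- April 2nd
    print(parse_date_eu(s))  # 2026-02-04 00:00:00 -- February 4th

Whichever variables you fail to encode in the prompt, the training corpus decides for you.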