zlacker

[return to "My AI skeptic friends are all nuts"]
1. parado+4u 2025-06-03 00:29:20
>>tablet+(OP)
I like Thomas, but I find his argument includes the same fundamental mistake I see made elsewhere. He acknowledges that the tools need an expert to use properly, and, as he illustrates, he refined his expertise over many years. He is among the last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?

I can almost anticipate an interjection along the lines of "well, we used to build everything with our hands and now we have tools, it's just different", but this is an order of magnitude beyond that. It's asking a robot to design and assemble a shed for you, where you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
◧◩
2. hatthe+GC 2025-06-03 01:50:42
>>parado+4u
I think the main difference between shortcuts like "compilers" and shortcuts like "LLMs" is determinism. I don't need to know assembly because I use a compiler that is very well specified, in some cases even formally verified to introduce no errors, and errs on the side of caution unless specifically told otherwise.

On the other hand, LLMs are highly nondeterministic. They often produce correct output for simple things, but that's because those things are simple enough that we trust the probability of an incorrect answer to be vanishingly low. There's still no guarantee they won't get them wrong. For more complicated things, LLMs are terrible without very well specified guardrails. They will bounce around inside those guardrails until they produce something correct, but that's a happy accident rather than a mathematical guarantee.
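
To make the contrast concrete, here's a toy sketch in plain Python (the candidate strings and their probabilities are invented for illustration; no real LLM API is involved). The "compiler" is a pure function, so the same input always maps to the same output; the "LLM" samples, so the same prompt can map to different outputs:

    import random

    def compile_expr(expr):
        # Deterministic "compiler": a pure mapping from input to output.
        return expr.replace("PLUS", "+")

    def llm_complete(prompt):
        # Toy stand-in for sampling at temperature > 0. The candidates and
        # weights are made up; the point is only that output is drawn from
        # a distribution rather than fixed.
        candidates = ["x + y", "x - y", "y + x"]
        weights = [0.90, 0.05, 0.05]  # usually right, never guaranteed
        return random.choices(candidates, weights=weights, k=1)[0]

    assert compile_expr("x PLUS y") == compile_expr("x PLUS y")  # always holds
    print(llm_complete("add x and y"))  # may differ between runs

Run that last line a few times: mostly you get "x + y", occasionally you don't, and no number of reruns turns that into a guarantee.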

LLMs aren't a level of abstraction; they're an independent entity. They're the equivalent of a junior coder with no long-term memory, who therefore needs to write everything down. You just have to hope they never forget to write something down, and that some deterministic automated test will catch them if they do.

If you could hire an unpaid intern with long-term memory loss, would you?

◧◩◪
3. chrism+5G 2025-06-03 02:28:32
>>hatthe+GC
Determinism is only one part of it: predictability, and the ability to model what it's doing, are perhaps more important.

The physics engine in the game Trackmania is deterministic: this means that you can replay the same inputs and get the same output; but it doesn’t mean the output always makes sense: if you drive into a wall in a particular way, you can trigger what’s called an uberbug, where your car gets flung in a somewhat random direction at implausibly high speed. (This sort of thing can lead to fun tool-assisted speedruns that are utterly unviable for humans.)
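
Deterministic-but-unpredictable is easy to show in a few lines. This toy "physics step" (plain Python; the numbers and the blow-up at 33 degrees are invented and have nothing to do with Trackmania's actual engine) replays identically every time, yet a tiny change in approach angle produces a wildly different outcome, like an uberbug:

    def physics_step(angle_deg):
        # Deterministic toy collision model: same input, same output, always.
        # The made-up blow-up near 33 degrees stands in for an uberbug.
        if 32.9 < angle_deg < 33.1:
            return 10_000.0            # car flung at implausible speed
        return 100.0 - abs(angle_deg)  # ordinary bounce otherwise

    assert physics_step(33.0) == physics_step(33.0)  # replays are exact
    print(physics_step(32.8), physics_step(33.0))    # 67.2 vs 10000.0

Determinism buys you exact replays; it doesn't buy you a mental model of which inputs sit next to the cliff.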

The abstractions part you mention, that's the key. Good abstractions make behaviour predictable: turn the steering wheel to the left, and the car heads left. There are still odd occasions when I mispredict what some code in a language like Rust, Python or JavaScript will do, but they're rare. By contrast, LLMs are very unpredictable, and you will fundamentally never be able to mentally model what they're doing.

◧◩◪◨
4. hatthe+B21 2025-06-03 06:39:35
>>chrism+5G
Having an LLM code for you is like watching someone make a TAS. It technically meets the explicitly specified goals of the mapper (checkpoints and finish), but the final run usually ignores the route the mapper intended. Even if the mapper keeps adding extra checkpoints and guardrails in between, the TAS can still find a crazy uberbug into a backflip into taking half the checkpoints in reverse order. And unless you spend far longer studying the TAS than it would have taken to learn to drive the map yourself, you're not going to learn much.