zlacker

[return to "My AI skeptic friends are all nuts"]
1. bigmad+Vp 2025-06-02 23:54:28
>>tablet+(OP)
I agree with the main take in this article: the combination of agents + LLMs with large context windows + a large budget of tokens to iterate on problems can probably already yield some impressive results.
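
Concretely, I read "a large budget of tokens to iterate on problems" as a loop along these lines. This is only a minimal sketch; `generate` and `run_tests` are hypothetical stand-ins for whatever model API and test harness an agent wires together, not any particular vendor's tooling:

  from typing import Callable

  # Minimal sketch of the agent loop: generate a candidate, check it, feed the
  # failures back, and stop when the token budget runs out. `generate` and
  # `run_tests` are hypothetical stand-ins, not a real API.
  def iterate_on_problem(
      task: str,
      generate: Callable[[list[str]], tuple[str, int]],   # -> (candidate code, tokens used)
      run_tests: Callable[[str], tuple[bool, str]],        # -> (passed, failure log)
      token_budget: int,
  ) -> str | None:
      transcript = [task]
      spent = 0
      while spent < token_budget:
          code, used = generate(transcript)
          spent += used
          passed, log = run_tests(code)
          if passed:
              return code                                  # accept only a passing candidate
          transcript.append(f"tests failed:\n{log}")       # give the model its own errors back
      return None                                          # budget exhausted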

I take serious issue with the "but you have no idea what the code is" rebuttal, since it - to me - glosses over the single largest issue with applying LLMs anywhere that important decisions will be made based on their outputs.

To quote from the article:

  People complain about LLM-generated code being “probabilistic”. No it
  isn’t. It’s code. It’s not Yacc output. It’s knowable. The LLM might be
  stochastic. But the LLM doesn’t matter. What matters is whether you can
  make sense of the result, and whether your guardrails hold.

  Reading other people’s code is part of the job. If you can’t metabolize
  the boring, repetitive code an LLM generates: skills issue! How are you
  handling the chaos human developers turn out on a deadline?

The problem here is that LLMs are optimized to make their outputs convincing. The issue is exactly "whether you can make sense of the result", as the author says: in other words, whether you can avoid being conned by a model output that sounds correct but is not. Sure, "reading other people’s code is part of the job", but the failure modes of junior engineers are easy to detect. The failure modes of LLMs are not.
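
To make that concrete, here is a hypothetical illustration (my own, not an actual model output) of the kind of code I mean: it reads cleanly, it even works on simple inputs, and it is still wrong.

  def drop_invalid(records: list[dict]) -> list[dict]:
      """Remove records that are missing an 'id' field."""
      for record in records:
          if "id" not in record:
              records.remove(record)  # subtle bug: mutates the list being iterated
      return records

Give it two consecutive invalid records and the second one survives: removing an element mid-iteration shifts the list, so the loop skips the item that moves into the removed element's slot. A junior engineer's bug usually fails loudly; this kind fails quietly, and a plausible-looking diff makes it easy to wave through.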

EDIT: formatting

2. Nitpic+3R 2025-06-03 04:38:57
>>bigmad+Vp
> The problem here is that LLMs are optimized to make their outputs convincing.

That may be true for chat-aligned LLMs, but coding LLMs are nowadays trained with RL and rewarded for correctness. And there are efforts to apply this to the entire stack (e.g. better software glue, automatic guardrails, more extensive tool use, access to LSPs/debuggers/linters, etc.).
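
For what it's worth, the "automatic guardrails" part can be as mundane as gating the agent's output on the same checks a human's PR has to clear. A minimal sketch, assuming ruff and pytest are installed; any linter and test runner would do:

  import subprocess

  # Reject generated code unless it clears the linter and the project's test
  # suite. Assumes ruff and pytest are on PATH; substitute your own tooling.
  def passes_guardrails(project_dir: str = ".") -> bool:
      lint = subprocess.run(["ruff", "check", project_dir], capture_output=True)
      if lint.returncode != 0:
          return False                      # static checks failed
      tests = subprocess.run(["pytest", "-q"], cwd=project_dir, capture_output=True)
      return tests.returncode == 0          # accept only if the tests pass

The acceptance criterion is the toolchain's verdict, not how convincing the diff reads.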

I think this is the critical point in a lot of these debates right now. People try something, come away with the wrong impression of what the SotA is, and stop there. Often what they tried is not the best way to do it (e.g. chatting with a web interface to write code), and they don't go the extra mile to try what would actually serve them better (e.g. coding IDEs, terminal agents, etc.).
