zlacker

[return to "My AI skeptic friends are all nuts"]
1. bigmad+Vp[view] [source] 2025-06-02 23:54:28
>>tablet+(OP)
I agree with the main take in this article: the combination of agents + LLMs with large context windows + a large budget of tokens to iterate on problems can probably already yield some impressive results.

I take serious issue with the "but you have no idea what the code is" rebuttal, since it - to me - glosses over the single largest issue with applying LLMs anywhere that important decisions will be made based on their outputs.

To quote from the article:

  People complain about LLM-generated code being “probabilistic”.
  No it isn’t. It’s code. It’s not Yacc output. It’s knowable. The
  LLM might be stochastic. But the LLM doesn’t matter. What matters
  is whether you can make sense of the result, and whether your
  guardrails hold.

  Reading other people’s code is part of the job. If you can’t
  metabolize the boring, repetitive code an LLM generates: skills
  issue! How are you handling the chaos human developers turn out
  on a deadline?
The problem here is that LLMs are optimized to make their outputs convincing. The issue is exactly "whether you can make sense of the result", as the author puts it - or, in other words, whether you're immune to being conned by a model output that sounds correct but is not. Sure, "reading other people’s code is part of the job", but the failure modes of junior engineers are easy to detect. The failure modes of LLMs are not.

EDIT: formatting

2. proc0+mu[view] [source] 2025-06-03 00:32:08
>>bigmad+Vp
It's also funny how it requires a lot of iterations for the average task, and the user has to pay for the failures. No other product comes with this expectation: imagine a toaster that fully toasts bread only 20% of the time and leaves it half-toasted another 50% of the time.