zlacker

[parent] [thread] 2 comments
1. blurbl+(OP)[view] [source] 2025-08-22 03:33:39
The false equivalence I pointed at earlier was "LLM code => no human on the other side".

The person driving the LLM is a teachable human who can learn what's going on and learn to improve the code. It's simply not true that there's no person on the other side of the PR.

The idea that we should be comparing "teaching a human" to "teaching an LLM" is yet another instance of this false equivalence.

It's not inherently pointless to provide feedback on a PR with code written using an LLM; that feedback goes to the person using the LLM tools.

People are swallowing this b.s. marketing mystification of "LLMs as non-human entities". But really they're fancy compilers that we still have a lot to learn about.

replies(1): >>nullc+6d
2. nullc+6d[view] [source] 2025-08-22 06:31:39
>>blurbl+(OP)
The person operating the LLM is not a meaningfully teachable human when they're not disclosing that they're using an LLM.

If they disclose what they've done, provide the prompts, etc., then other contributors can help them get better results from the tools. But that feedback is very different from the feedback you'd give a human who actually wrote the code in question; the latter kind of feedback is unlikely to be of much value (and even less likely to persist).

replies(1): >>sho_hn+fv
3. sho_hn+fv[view] [source] [discussion] 2025-08-22 10:06:12
>>nullc+6d
Yep, true.

I've actually done things like sharing a ChatGPT account with a junior dev to steer them toward better prompts, and that had some merit.
