zlacker

1. heavys+(OP) 2026-02-03 03:01:06
Please don't feed people LLM-generated docs.
replies(1): >>dehugg+nK2
2. dehugg+nK2 2026-02-03 20:02:37
>>heavys+(OP)
i love the default assumption that "ai generated" automatically excludes "human verified".

see, i actually read and monitor the outputs. i check them against my own internal knowledge. i trial the results with real troubleshooting and real bug fixes/feature requests.

when it's wrong, i fix it. when it's right, great, we now have documentation where none existed before.

dogfood the documentation and you'll know whether it's worth using or not.

replies(1): >>heavys+fQ3
3. heavys+fQ3 2026-02-04 02:25:27
>>dehugg+nK2
Literally several times a week, I have to close PRs with docs that clearly no one read, because they are blatantly wrong. This only started after LLMs. If the verification you're describing is happening, I'm not seeing it anywhere.

AI is incapable of capturing the human context that 99.999% of the time exists in people's brains, not in the code. That is why it is crucial that humans write for humans, rather than an LLM that produces docs with the mere aesthetics of looking acceptable.
