1. anu7df (OP) 2026-01-06 07:33:39
This is pretty much how I use LLMs as well. These interactions have convinced me that while LLMs make very persuasive arguments, they are often wrong about things I know well; so much so that I would have a hard time opening PRs for code they edited without reading it carefully. Gell-Mann amnesia and all that seems appropriate here, even though it anthropomorphizes LLMs to an uncomfortable extent. At some point in the future I can see them becoming very good at both recognizing my intent and reasoning correctly. Not there yet.