zlacker

[parent] [thread] 3 comments
1. Analem+(OP)[view] [source] 2025-05-06 17:00:24
I don't think that's the relevant comparison though. Do you expect StackOverflow or product documentation to be 100% accurate 100% of the time? I definitely don't.
replies(3): >>ctxc+k1 >>ctxc+w1 >>kweing+6j
2. ctxc+k1[view] [source] 2025-05-06 17:07:31
>>Analem+(OP)
The error introduced by the data is expected and internalized; it's the error LLMs add on _top_ of that that's hard to account for.
3. ctxc+w1[view] [source] 2025-05-06 17:08:39
>>Analem+(OP)
Also, documentation and SO are incorrect in predictable ways. We don't expect them to state, in a matter-of-fact tone, things that simply don't exist.
4. kweing+6j[view] [source] 2025-05-06 18:56:43
>>Analem+(OP)
I actually agree with this. I use LLMs often, and I don't compare them to a calculator.

Mainly I meant to push back against the reflexive comparison to a friend, family member, or colleague. AI is a multi-purpose tool used for many different kinds of tasks. Some of these tasks are analogous to human tasks, where we should anticipate human error. Others are not, and yet we often ask an LLM to do them anyway.
