zlacker

1. Cthulh+(OP) 2026-01-23 12:52:21
I don't remember exactly who said it, but at one point I read a good take: people trust these chatbots because there are big companies and billions of dollars behind them, and surely big companies test and verify their stuff thoroughly?

That said (as someone else described), GPTs and other current-day LLMs are probabilistic. But 99% of what they produce seems feasible enough.

replies(1): >>nullc+Ec5
2. nullc+Ec5 2026-01-25 08:20:48
>>Cthulh+(OP)
> But 99% of what they produce seems feasible enough.

This is a big part of the problem: their false answers are often more plausible and convincing than the truth. The output almost always seems feasible -- whether it's true is an entirely different matter.

Historically, when most things fail they produce obvious nonsense; when they don't fail, they produce something at least related to the truth (perhaps biased or mis-calibrated). LLM output can be both highly plausible and completely unrelated to reality.
