ziml77 (OP) | 2026-02-05 04:24:41
> But inconsistency isn't a surprise for anyone who actually knows how LLMs work

Exactly. The people saying they've gotten good results for the same question aren't countering your argument. All they're doing is proving that it can sometimes output good results. But a tool that's randomly right or wrong is not a very useful one. You can't trust any of its output unless you can validate it, and for a lot of the questions people ask of it, if you have to validate the answer yourself anyway, there was no reason to use the LLM in the first place.
