fatbir (OP) | 2024-05-16 05:10:35
Not in the least. LLMs don't introspect. LLMs have no sense of self. There is no secondary process in an LLM monitoring the output and checking it against anything else. This is how they hallucinate: a complete lack of self-awareness. All they can do is sound convincing based on mostly coherent training data.

How does an LLM look at a heptagon and confidently say it's an octagon? Because visually they're similar, and octagons are relatively more common in the training data (and identified as such) while heptagons are rare. What it doesn't do is count the sides, something a child in kindergarten can do.

If I were working in AI I would be focusing on exactly this problem: finding the "right sounding" answer solves a lot of cases well enough, but falls down embarrassingly when other cognitive processes are available that are guaranteed to produce correct results (when done correctly). Anyone asking ChatGPT a math question should be able to get back a correctly calculated math answer, and the way to get that answer is not to massage the training data, it's to dispatch the prompt to a different subsystem that can parse the request and return a result that a calculator can provide.
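
Something like this toy dispatcher is the kind of thing I mean. It's just a sketch in Python, and every name in it (calculate, ask_llm, handle) is made up for illustration, not any real API:

    import ast, operator, re

    # Only plain arithmetic operators are allowed; anything else is rejected.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
            ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

    def _eval(node):
        # Walk the parsed expression tree; refuse anything that isn't arithmetic.
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("not plain arithmetic")

    def calculate(expr: str) -> float:
        # Exact answer from a parser plus evaluator, no statistics involved.
        return _eval(ast.parse(expr, mode="eval").body)

    def ask_llm(prompt: str) -> str:
        return "(whatever the model says)"   # stand-in for the actual model call

    def handle(prompt: str) -> str:
        # Crude router: if the prompt is basically an arithmetic expression,
        # send it to the calculator; everything else goes to the model.
        candidate = prompt.lower().replace("what is", "").strip().rstrip("?").strip()
        if re.fullmatch(r"[0-9\s.+*/()^-]+", candidate):
            try:
                return str(calculate(candidate.replace("^", "**")))
            except (ValueError, SyntaxError, ZeroDivisionError):
                pass
        return ask_llm(prompt)

The point is that the arithmetic path is deterministic: it returns 56 for "what is 7 * 8?" because it computed 56, not because 56 sounded plausible.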

It's similar to using LLMs for law: they hallucinate cases and precedents that don't exist because they're not checking against Nexis; they're just sounding good. The next problem in AI is the layer of executive functioning that taps the correct part of the AI based on the input.
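
Same idea for the law case, sketched loosely: let the model do the drafting, but have the executive layer verify every citation before anything goes back to the user. The hard-coded case set and toy regex below are stand-ins for an actual Nexis lookup:

    import re

    KNOWN_CASES = {"Marbury v. Madison", "Brown v. Board"}   # stand-in for a real case database

    def language_model(prompt: str) -> str:
        # Stand-in: imagine this returns fluent text that may cite invented cases.
        return "As held in Marbury v. Madison and Smith v. Imaginary, ..."

    def extract_citations(text: str) -> list[str]:
        # Toy citation parser: only matches simple "Name v. Name" patterns.
        return re.findall(r"[A-Z][a-z]+ v\. [A-Z][a-z]+", text)

    def answer_legal(prompt: str) -> str:
        draft = language_model(prompt)
        unknown = [c for c in extract_citations(draft) if c not in KNOWN_CASES]
        if unknown:
            # Executive check: refuse to pass along citations that can't be verified.
            return "Could not verify: " + ", ".join(unknown)
        return draft

The model still does the fluent writing; something deterministic just gets the last word on whether the cited cases actually exist.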
