zlacker

1. tinco+(OP) 2023-11-18 06:15:04
Humans don't work this way either. You don't need the LLM to do the logic; you just need the LLM to prepare the information so it can be fed into a logic engine. Just like humans do when they shut down their System 1 brain and drop into slow, deliberate System 2 mode.

I'm firmly in the ready-for-AGI camp. But it's not going to be a single model that does the AGI magic trick; it's going to be an engineered system of multiple communicating models, hooked together with traditional engineering techniques.
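
Something like this, as a toy Python sketch (call_llm is a stand-in for whatever model API you'd use, and every name here is invented):

    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model API here")

    def extract_facts(text: str) -> list[tuple[str, str, str]]:
        """System 1: ask the model for (subject, relation, object) triples."""
        prompt = ("Extract the factual claims in the text below as a JSON "
                  "list of [subject, relation, object] triples.\n\n" + text)
        return [tuple(t) for t in json.loads(call_llm(prompt))]

    def forward_chain(facts, rules):
        """System 2: deterministic logic engine, no model in the loop."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                for new_fact in rule(known) - known:
                    known.add(new_fact)
                    changed = True
        return known

    # Example rule: "part of" is transitive.
    def part_of_transitive(facts):
        return {(a, "part_of", c)
                for (a, r1, b) in facts if r1 == "part_of"
                for (b2, r2, c) in facts if r2 == "part_of" and b2 == b}

The point being: the model never does any inference, it only fills the fact base. The deductions are all made by dumb, auditable code.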

replies(1): >>denton+871
2. denton+871 2023-11-18 15:03:31
>>tinco+(OP)
> You don't need the LLM to do the logic; you just need the LLM to prepare the information so it can be fed into a logic engine.

This is my view!

Expert Systems went nowhere because of the knowledge-acquisition bottleneck: you had to sit a domain expert down with a knowledge engineer for months to encode the expertise, and at the end you got a system that was expert in only one narrow domain. So if you can get an LLM to distil a corpus (a library, or whatever) into a collection of "facts" attributed to specific authors, you could stream those facts into an expert system that could make deductions, and explain its reasoning.
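
To make "attributed to specific authors" concrete, here's a toy sketch of the provenance side (class names, field names, and the example facts are all mine): each fact remembers its source, and each deduction remembers its parents, so the engine can print its chain of reasoning.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fact:
        triple: tuple             # (subject, relation, object)
        source: str               # author/document the LLM attributed it to
        derived_from: tuple = ()  # parent Facts; empty for base facts

    def explain(fact: Fact, depth: int = 0) -> None:
        """Print the deduction chain, indented by derivation depth."""
        print("  " * depth + f"{fact.triple}  [{fact.source}]")
        for parent in fact.derived_from:
            explain(parent, depth + 1)

    # Two base facts distilled out of a corpus, plus one deduction.
    f1 = Fact(("compound-X", "inhibits", "enzyme-Y"), "paper-A")
    f2 = Fact(("enzyme-Y", "produces", "metabolite-Z"), "paper-B")
    f3 = Fact(("compound-X", "reduces", "metabolite-Z"),
              "rule:inhibition-chain", derived_from=(f1, f2))
    explain(f3)

explain(f3) prints the derived fact first, with its two attributed sources indented beneath it. That's the "explain its reasoning" part.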

So I don't think these LLMs lead directly to AGI (or to any kind of AI). They are text-retrieval systems, a bit like search engines but cleverer. But use them as an input filter for a reasoning engine such as an expert system, and you could end up with a system that starts to approach what I'd call "intelligence".

If someone is trying to develop such a system, I'd like to know.
