zlacker

[return to "Obituary for Cyc"]
1. ChuckM+H5[view] [source] 2025-04-08 19:57:10
>>todsac+(OP)
I had the funny thought that this is exactly what a sentient AI would write: "stop looking here, there is nothing to see, move along." :-)

I (like vannevar, apparently) didn't feel Cyc was going anywhere useful. There were ideas there, but not coherent enough to form a credible basis for even a hypothesis of how a system could be constructed that would embody them.

I was pretty impressed by McCarthy's blocks world demo. Later he and a student formalized some of the rules for creating 'context'[1] for AI to operate within, and I continue to think that will be crucial to solving some of the mess that LLMs create.

For example, the early failures of LLMs suggesting that you could make a salad crunchy by adding rocks were a classic context failure: data from the context of 'humor' intertwined with data from the context of 'recipes'. Because existing models have no context during training, there is nothing in the model that 'tunes' the output based on context. And you get rocks in your salad.

[1] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

◧◩
2. musica+iS[view] [source] 2025-04-09 04:21:32
>>ChuckM+H5
> there remains no evidence of its general intelligence

This seems like a high bar to reach.

We all know that symbolic AI didn't scale as well as LLMs trained on huge amounts of data. However, as you note, it also tried to address many things that LLMs still don't do well.

◧◩◪
3. adastr+8V[view] [source] 2025-04-09 05:01:08
>>musica+iS
Such as what? What can GOFAI do well that LLMs still cannot?
◧◩◪◨
4. sgt101+p31[view] [source] 2025-04-09 06:56:46
>>adastr+8V
I think logical reasoning - so reasoning about logical problems, especially those with transitive relations like two-way implication. A way around this is to get them to write Prolog relations and then reason over them... with Prolog. This isn't a fail - it's what things like Prolog do, and not what things like NNs do. If I were asked to solve these problems I would write Prolog too.
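
Roughly what I mean, as a toy sketch (made-up family facts, and plain Python standing in for the Prolog engine, just to show the shape of "write down the relations, then let something mechanical chase the transitive closure"):

    # Facts an LLM could write out; the transitive reasoning itself is done
    # mechanically, the way a Prolog engine would, not by the model guessing.
    parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

    def ancestors(facts):
        # ancestor(X, Z) :- parent(X, Z).
        # ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
        closure = set(facts)
        while True:
            derived = {(x, z)
                       for (x, y) in closure
                       for (y2, z) in closure if y == y2} - closure
            if not derived:          # fixpoint: nothing new follows
                return closure
            closure |= derived

    print(("alice", "dave") in ancestors(parent))  # True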

I think quite a lot of planning.

I think scheduling - I tried something recently and GPT-4 wrote Python code which worked for very naive cases but then failed at any scale.

Basically though - trusted reasoning. Where you need a precise and correct answer, LLMs aren't any good. They fail in the limit. But where you need a generally decent answer they are amazing. You just can't rely on it.

Whereas GOFAI you can, because if you couldn't, the community threw it out and said it was impossible!

◧◩◪◨⬒
5. adastr+iP1[view] [source] 2025-04-09 14:23:14
>>sgt101+p31
I guess that's a fine distinction I don't make. If the problem requires the AI to write a prolog program to solve, and it is capable of writing the necessary prolog code, then I don't see the practical or philosophical difference from taking the transitive step and saying the AI solved it. If I asked you to solve an air traffic control problem and you did so by writing prolog, no one would try to claim you weren't capable of solving it.

Agentic LLMs can solve complicated reasoning and scheduling problems, by writing special-purpose solutions (which might resemble the things we call GOFAI). It's the nature of AGI--which LLMs assuredly are--that they can solve problems by inventing specialized tools, just as we do.

◧◩◪◨⬒⬓
6. cess11+bf2[view] [source] 2025-04-09 16:38:03
>>adastr+iP1
Can you show us a log from when you gave an LLM a scheduling problem or something and it decided to solve it with Prolog or Z3 or something?
◧◩◪◨⬒⬓⬔
7. adastr+kB3[view] [source] 2025-04-10 00:06:56
>>cess11+bf2
On mobile so I’m not sure how to export a chat log, but the following prompts worked with ChatGPT:

1: I need to schedule scientific operations for a space probe, given a lot of hard instrument and schedule constraints. Please write a program to do this. Use the best tool for the job, no matter how obscure.

2: This is a high-value NASA space mission and so we only get one shot at it. We need to make absolutely sure that the solution is correct and optimal, ideally with proofs.

3: Please code me up a full example, making up appropriate input data for the purpose of illustration

I got an implementation that at first glance looks correct, using the MiniZinc constraint solver. I'm sure people could quibble, but I was not trying to lead the model in any way. The second prompt was needed because the first one generated a simple Python program, and I think that was because I didn't specify at the start that it was a high-value project needing mission assurance. A better initial prompt would have gotten the desired result on the first try.
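
For anyone curious what that kind of answer tends to look like: the sketch below is not the model's output, just a rough illustration of a solver-backed schedule, using Google OR-Tools CP-SAT from Python instead of MiniZinc, with invented instruments, power limits, and ordering constraints.

    from ortools.sat.python import cp_model

    # Made-up probe operations: name -> (duration in minutes, power draw).
    ops = {"spectrometer": (30, 40), "camera": (15, 25), "radar": (45, 60)}
    horizon = 120      # one 2-hour observation window
    max_power = 80     # shared power bus limit

    model = cp_model.CpModel()
    starts, intervals = {}, {}
    for name, (dur, power) in ops.items():
        start = model.NewIntVar(0, horizon - dur, f"start_{name}")
        intervals[name] = model.NewIntervalVar(start, dur, start + dur, f"iv_{name}")
        starts[name] = start

    # Overlapping instruments must stay under the power cap.
    model.AddCumulative(list(intervals.values()),
                        [power for (_, power) in ops.values()], max_power)

    # Example hard constraint: radar must finish before the camera starts.
    model.Add(starts["radar"] + ops["radar"][0] <= starts["camera"])

    # Finish the whole sequence as early as possible.
    makespan = model.NewIntVar(0, horizon, "makespan")
    for name, (dur, _) in ops.items():
        model.Add(makespan >= starts[name] + dur)
    model.Minimize(makespan)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for name in ops:
            print(name, "starts at minute", solver.Value(starts[name]))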

[go to top]