zlacker

[return to "Obituary for Cyc"]
1. ChuckM+H5[view] [source] 2025-04-08 19:57:10
>>todsac+(OP)
I had the funny thought that this is exactly what a sentient AI would write "stop looking here, there is nothing to see, move along." :-)

I (like vannevar apparently) didn't feel Cyc was going anywhere useful; there were ideas there, but not coherent enough to form a credible basis for even a hypothesis of how a system could be constructed that would embody them.

I was pretty impressed by McCarthy's blocks world demo. Later he and a student formalized some of the rules for creating 'context'[1] for AI to operate within; I continue to think that will be crucial to solving some of the mess that LLMs create.

For example, the early failures of LLMs suggesting that you could make salad crunchy by adding rocks were a classic context failure: data from the context of 'humor' and data from the context of 'recipes' got intertwined. Because existing models have no notion of context during training, there is nothing in the model that 'tunes' the output based on context. And you get rocks in your salad.

[1] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

◧◩
2. musica+iS[view] [source] 2025-04-09 04:21:32
>>ChuckM+H5
> there remains no evidence of its general intelligence

This seems like a high bar to reach.

We all know that symbolic AI didn't scale as well as LLMs trained on huge amounts of data. However, as you note, it also tried to address many things that LLMs still don't do well.

◧◩◪
3. adastr+8V[view] [source] 2025-04-09 05:01:08
>>musica+iS
Such as what? What can GOFAI do well that LLMs still cannot?
◧◩◪◨
4. YeGobl+Zl1[view] [source] 2025-04-09 10:38:39
>>adastr+8V
SAT solving, verification and model checking, automated theorem proving, planning and scheduling, knowledge representation and reasoning. Those are fields of AI research where LLMs have nothing to offer.
◧◩◪◨⬒
5. adastr+TP1[view] [source] 2025-04-09 14:26:23
>>YeGobl+Zl1
I can ask Claude 3.7 to write me a program that does SAT solving, theorem proving, or scheduling, and it generally gets it right on the first try.
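For a sense of scale, a basic SAT solver of the kind I mean is only a few dozen lines. Here's a minimal DPLL-style sketch (illustrative only; the clause encoding is DIMACS-style integer literals and the variable ordering is the simplest possible choice):

```python
# Minimal DPLL SAT solver sketch. Clauses are lists of nonzero ints,
# DIMACS-style: 3 means x3 is true, -3 means x3 is false.

def simplify(clauses, assignment):
    """Drop satisfied clauses and falsified literals.
    Return None if any clause becomes empty (conflict)."""
    out = []
    for clause in clauses:
        new_clause, satisfied = [], False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:
            return None          # empty clause: conflict
        out.append(new_clause)
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment dict {var: bool}, or None if UNSAT."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:          # conflict under current assignment
        return None
    if not clauses:              # every clause satisfied
        return assignment
    # Unit propagation: a one-literal clause forces that literal.
    for clause in clauses:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    # Branch on the first variable of the first clause.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

No clause learning, no watched literals, nothing industrial-strength; but it's the shape of program that current models can reliably produce.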
◧◩◪◨⬒⬓
6. YeGobl+Ky2[view] [source] 2025-04-09 18:08:50
>>adastr+TP1
Demonstrate.
◧◩◪◨⬒⬓⬔
7. adastr+XS3[view] [source] 2025-04-10 03:42:50
>>YeGobl+Ky2
It would take you all of 5 seconds to try in Claude yourself. I do this work on a daily basis; I know its value.
◧◩◪◨⬒⬓⬔⧯
8. YeGobl+ji4[view] [source] 2025-04-10 08:40:57
>>adastr+XS3
Do you mean you create SAT solvers with Claude on a daily basis? What is the use case for that?
◧◩◪◨⬒⬓⬔⧯▣
9. adastr+re5[view] [source] 2025-04-10 16:34:15
>>YeGobl+ji4
I ask Claude to solve problems of similar complexity on a daily basis. A SAT solver specifically is maybe a once a week thing.

Use cases are anything, really. Determine resource allocation for a large project, or do Monte Carlo simulation of various financial and risk models. Looking at a problem that has a bunch of solutions with various trade-offs, pick the best strategy given various input constraints.
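To be concrete, the Monte Carlo case is usually something on this order. This is a toy sketch; the normally distributed annual-return model and all the parameter values here are made-up placeholders, not anything from a real engagement:

```python
import random

def prob_of_loss(n_trials=100_000, mean=0.07, stdev=0.15,
                 years=10, seed=42):
    """Estimate by Monte Carlo the probability that a portfolio with
    i.i.d. normally distributed annual returns ends the period below
    its starting value. All parameters are illustrative placeholders."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    losses = 0
    for _ in range(n_trials):
        value = 1.0
        for _ in range(years):
            value *= 1.0 + rng.gauss(mean, stdev)
        if value < 1.0:
            losses += 1
    return losses / n_trials
```

A real risk model would have correlated assets, fat tails, and so on, but the one-off scripts I'm describing are elaborations of exactly this loop.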

There are specialized tools out there that cost an arm and a leg to license for this, or you can have Claude one-off a project that gets the same result for $0.50 of AI credits. We live in an age of unprecedented intelligence abundance, and people are not used to it. I can have Claude implement something that would have taken a team of engineers months or years, use it once, and throw it away.

I say Claude specifically because in my experience none of the other models are really able to handle tasks like this.

Edit: an example prompt I put here: >>43639320

◧◩◪◨⬒⬓⬔⧯▣▦
10. YeGobl+qt5[view] [source] 2025-04-10 18:03:37
>>adastr+re5
Talk is cheap. The bottom line is that I don't see any SAT solvers that you generated with Claude.
◧◩◪◨⬒⬓⬔⧯▣▦▧
11. adastr+YO5[view] [source] 2025-04-10 20:42:44
>>YeGobl+qt5
It’s not my job to make one for you.
[go to top]