[return to "Obituary for Cyc"]
1. ChuckM+H5 2025-04-08 19:57:10
>>todsac+(OP)
I had the funny thought that this is exactly what a sentient AI would write "stop looking here, there is nothing to see, move along." :-)

I (like vannevar, apparently) didn't feel Cyc was going anywhere useful. There were ideas there, but they were not coherent enough to form a credible basis for even a hypothesis of how a system embodying them could be constructed.

I was pretty impressed by McCarthy's blocks world demo. Later, he and a student formalized some of the rules for creating 'context'[1] for AI to operate within, and I continue to think that will be crucial to solving some of the mess that LLMs create.

For example, the early failure where LLMs suggested you could make a salad crunchy by adding rocks was a classic context failure: data from the context of 'humor' and data from the context of 'recipes' got intertwined. Because existing models have no notion of context during training, there is nothing in the model that 'tunes' the output based on context. And you get rocks in your salad.
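
A hedged illustration of what "context during training" could mean (the tags and examples here are entirely made up): if each training example carried an explicit context label, there would at least be a signal the model could learn to condition on, instead of letting 'humor' and 'recipe' text blend silently.

    # Hypothetical sketch only: tag each training example with its source
    # context so a model could condition on it. Tags and text are invented.
    training_examples = [
        {"context": "recipe", "text": "Toss the salad with toasted croutons for crunch."},
        {"context": "humor",  "text": "Add a handful of small rocks for extra crunch."},
    ]

    def to_training_text(example):
        # Prepend the context as a control token, e.g. "<recipe> Toss the salad..."
        return f"<{example['context']}> {example['text']}"

    for ex in training_examples:
        print(to_training_text(ex))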

[1] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

2. musica+iS 2025-04-09 04:21:32
>>ChuckM+H5
> there remains no evidence of its general intelligence

This seems like a high bar to reach.

We all know that symbolic AI didn't scale as well as LLMs trained on huge amounts of data. However, as you note, it also tried to address many things that LLMs still don't do well.

3. ChuckM+yZ 2025-04-09 05:58:23
>>musica+iS
This is exactly correct: LLMs did scale with huge data, symbolic AI did not. So why? One of the things I periodically ask people working on LLMs is "what does a 'parameter' represent?" The simplistic answer is "it's a weight in a neural net node," but that doesn't get us much closer. Consider something like a Bloom filter, where a '0' at bit n tells you that no string the filter has seen hashes to that position, so a single bit carries a fact about everything the filter has not seen. I would be interested in reading a paper that does a good job of explaining what a parameter ends up representing in an LLM model.[1]
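
To make the Bloom filter analogy concrete, here is a minimal sketch in Python (sizes and hash scheme chosen arbitrarily): every '0' bit is a distributed fact about all the strings the filter has never seen, which is roughly the kind of indirect meaning a single parameter might carry.

    import hashlib

    class BloomFilter:
        # A 0 bit at position n means no added string hashes to n, so that one
        # bit encodes a fact about everything the filter has NOT seen.
        def __init__(self, size=1024, num_hashes=3):
            self.size = size
            self.num_hashes = num_hashes
            self.bits = [0] * size

        def _positions(self, item):
            # Derive num_hashes bit positions from salted hashes of the item.
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(h, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def might_contain(self, item):
            # All bits 1 -> maybe present (false positives possible);
            # any bit 0  -> definitely never added (no false negatives).
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("salad recipe")
    print(bf.might_contain("salad recipe"))  # True (possibly a false positive)
    print(bf.might_contain("rock recipe"))   # False here means definitely never added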

I suspect that McCarthy was on to something with the context thing. Organic intelligence certainly fails in creative ways without context; it would not be disqualifying to have AI fail in similarly spectacular ways.

[1] I made a bit of progress on this by treating a weight as a kind of permeability, such that the higher the weight, the easier it was to 'pass through' that particular neuron, but the cyclic nature of the graph makes a purely topological explanation pretty obtuse :-).

4. kracke+d01 2025-04-09 06:06:19
>>ChuckM+yZ
>I would be interested in reading a paper that does a good job of explaining what a parameter ends up representing in an LLM model.

https://distill.pub/2020/circuits/
https://transformer-circuits.pub/2025/attribution-graphs/bio...
