
[return to "Cyc: History's Forgotten AI Project"]
1. blueye+Fp 2024-04-17 22:46:18
>>iafish+(OP)
Cyc is one of those bad ideas that won't die and keeps getting rediscovered on HN. Lenat wasted decades of his life on it. Knowledge graphs like Cyc are labor-intensive to build and difficult to maintain. They are brittle in the face of change, and useless once they can no longer represent how the underlying reality has changed.
2. breck+1z 2024-04-18 00:00:58
>>blueye+Fp
I think before 2022 it was still an open question whether it was a good approach.

Now it's clear that knowledge graphs are far inferior to deep neural nets, but even so, few people can explain the _root_ reason why.

I don't think Lenat's bet was a waste. I think it was sensible based on the information at the time.

The decision to pursue the research largely in secret, as closed source, was I think a mistake.

3. galaxy+0q1 2024-04-18 10:34:42
>>breck+1z
I assume the problem with symbolic inference is that, from a single inconsistent premise, classical logic can derive any statement whatsoever (the principle of explosion).

If that is so, then symbolic AI does not scale easily, because you cannot feed inconsistent information into it. Compare this to how humans and LLMs learn: they both have no problem with inconsistent information. Yet, statistically speaking, humans still manage to produce "useful" information.
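
Here is a toy sketch of that explosion in plain Python (nothing like Cyc's actual inference engine, just an illustration): once a knowledge base contains both P and ~P, two ordinary classical rules let a naive prover "prove" literally anything.

    # Toy classical prover: exploits a contradiction (P and ~P) to derive any goal.
    def derive(goal, facts):
        pos = {f for f in facts if not f.startswith("~")}
        neg = {f[1:] for f in facts if f.startswith("~")}
        for p in pos & neg:                 # knowledge base contains both P and ~P
            return [f"{p}   (premise)",
                    f"~{p}   (premise)",
                    f"{p} OR {goal}   (disjunction introduction)",
                    f"{goal}   (disjunctive syllogism)"]
        return None                         # consistent premises: no such shortcut

    kb = {"bird(tweety)", "~bird(tweety)", "mammal(rex)"}   # one slip-up is enough
    for step in derive("moon_is_made_of_cheese", kb):
        print(step)

As I understand it, Cyc partitions its knowledge base into "microtheories" partly to contain exactly this problem, and keeping all of that consistent is a big part of the maintenance burden.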

4. xpe+XD8 2024-04-21 01:28:00
>>galaxy+0q1
> Compare this to how humans and LLMs learn: they both have no problem with inconsistent information.

I don't have time to fully refute this claim, but it is very problematic.

1. Even a very narrow framing of how neural networks deal with inconsistent training data would perhaps warrant a paper, if not a Ph.D. thesis. Maybe this has already been done? Here is the problem statement: given a DNN with a fixed topology, trained with SGD under a given error function, what happens when you present flatly contradictory training examples? What happens when the contradiction doesn't emerge until deeper levels of the network? Can we detect this? How? (I sketch a toy version of the simplest case at the end of this comment.)

2. Do we really _want_ systems that passively tolerate inconsistent information? When I think of an ideal learning agent, I want one that engages in the learning process and seeks to resolve any apparent contradictions. I haven't actively researched this area, but I'm confident that some have, if only because Tom Mitchell at CMU emphasizes different learning paradigms in his well-known ML book. So hopefully enough people read that and think, "yeah, the usual training methods for NNs aren't really that interesting ... we can do better."

3. Just because humans 'tolerate' inconsistent information in some cases doesn't mean they do so well, as compared to ideal Bayesian agents.

4. There are "GOFAI" algorithms for probabilistic reasoning that are in many cases better than DNNs.
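
To make the problem statement in point 1 concrete, here is a deliberately tiny sketch (my own toy setup, not taken from any paper): a single logistic unit trained with SGD on one input that is labelled 0 half the time and 1 the other half. Nothing in the loop notices the contradiction; the parameters just drift to wherever the loss is minimized, which here means predicting roughly the empirical label frequency.

    import math, random

    # One-parameter logistic model p = sigmoid(w*x + b), trained with plain SGD
    # on flatly contradictory supervision: the same input gets label 0 or 1
    # with equal probability.
    w, b = 0.0, 0.0
    x, lr = 1.0, 0.1
    random.seed(0)

    for step in range(5000):
        y = random.choice([0.0, 1.0])              # contradictory training signal
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        grad = p - y                               # d(cross-entropy)/d(logit)
        w -= lr * grad * x
        b -= lr * grad

    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    print(f"belief for the contested input: {p:.2f}")   # roughly 0.5

That silent averaging is exactly the behaviour I would want a better learning agent to flag and try to resolve (point 2), rather than absorb.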
