zlacker

[return to "Cyc: History's Forgotten AI Project"]
1. blueye+Fp[view] [source] 2024-04-17 22:46:18
>>iafish+(OP)
Cyc is one of those bad ideas that won't die, and which keeps getting rediscovered on HN. Lenat wasted decades of his life on it. Knowledge graphs like Cyc are labor-intensive to build and difficult to maintain. They are brittle in the face of change, and useless if they cannot represent the underlying changes of reality.
2. breck+1z[view] [source] 2024-04-18 00:00:58
>>blueye+Fp
I think before 2022 it was still an open question whether it was a good approach.

Now it's clear that knowledge graphs are far inferior to deep neural nets, but even so, few people can explain the _root_ reason why.

I don't think Lenat's bet was a waste. I think it was sensible based on the information at the time.

The decision to research it largely in secret, closed source, was, I think, a mistake.

3. xpe+Dz[view] [source] 2024-04-18 00:05:55
>>breck+1z
> Now it's clear that knowledge graphs are far inferior to deep neural nets

No. It depends. In general, two technologies can’t be assessed independently of the application.

4. famous+1M[view] [source] 2024-04-18 02:13:17
>>xpe+Dz
Faced with anything other than clear definitions and unambiguous axioms (which is most of the real world), GOFAI falls apart. It can't even be attempted. There's a reason it was abandoned in NLP long before the likes of GPT.

There isn't any class of problems deep nets can't handle. Will they always be the most efficient or best-performing solution? No, but a solution will be possible.

5. xpe+GB8[view] [source] 2024-04-21 00:59:36
>>famous+1M
>There isn't any class of problems deep nets can't handle. Will they always be the most efficient or best-performing solution? No, but a solution will be possible.

This assumes that all classes of problems reduce to functions which can be approximated, right, per the universal approximation theorems?

Even for cases where the UAT applies (which is not everywhere, as I show next), your caveat understates the case. There are dramatically better and worse algorithms for differing problems.

But I think a lot of people (including the comment above) misunderstand or misapply the UATs. Think about the assumptions! UATs assume a fixed-length input, do they not? This breaks a correspondence with many classes of algorithms.*
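For reference, the statement I'm leaning on is the classical Cybenko-style one, sketched from memory rather than quoted from any particular paper. Note that the input dimension n is fixed before the approximation even starts:

```latex
% Cybenko-style UAT: \sigma a sigmoidal activation, input dimension n fixed up front.
\[
  F(x) \;=\; \sum_{i=1}^{N} \alpha_i \,\sigma\!\big(w_i^{\top} x + b_i\big),
  \qquad x \in [0,1]^n
\]
\[
  \forall f \in C\big([0,1]^n\big),\ \forall \varepsilon > 0\
  \exists\, N,\ \alpha_i,\ w_i,\ b_i:\quad
  \sup_{x \in [0,1]^n} \big|F(x) - f(x)\big| < \varepsilon
\]
```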

## Example

Let's make a DNN that sorts a list of numbers, shall we? But we can't cheat and only have it do pairwise comparisons -- that is not the full sorting problem. We have to input the list of numbers and output the list of sorted numbers. At run-time. With a variable-length list of inputs.

So no single DNN will do! For every input length, we would need a different DNN, would we not? Training this collection of DNNs will be a whole lot of fun! It will make Bitcoin mining look like a poster child for energy conservation. /s
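To make the "one DNN per length" point concrete, here's a minimal sketch in plain numpy (toy sizes, untrained random weights, names made up for illustration). The input length is baked into the first weight matrix, so a net built for 5-element lists can't even accept a 7-element one:

```python
import numpy as np

def make_sorter_net(list_len, hidden=64, seed=0):
    """One-hidden-layer dense net from R^list_len to R^list_len.
    The input/output dimension is hard-coded by the weight shapes."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(list_len, hidden))   # (in, hidden) -- in = list_len, fixed
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, list_len))   # (hidden, out) -- out = list_len, fixed
    b2 = np.zeros(list_len)

    def forward(x):
        x = np.asarray(x, dtype=float)         # must have shape (list_len,)
        h = np.tanh(x @ W1 + b1)
        return h @ W2 + b2

    return forward

sort5 = make_sorter_net(5)
print(sort5([3, 1, 4, 1, 5]))      # runs (untrained, so the output is garbage)
# sort5([3, 1, 4, 1, 5, 9, 2])     # raises: shape (7,) can't multiply W1 of shape (5, 64)
```

Untrained and useless as a sorter, obviously; the only point is that `list_len` is hard-coded into the weight shapes, which is exactly the fixed-input-dimension assumption the UATs make.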

* Or am I wrong? Is there a theoretical result I don't know about?

6. famous+RD9[view] [source] 2024-04-21 15:13:43
>>xpe+GB8
>Even for cases where the UAT applies (which is not everywhere, as I show next), your caveat understates the case. There are dramatically better and worse algorithms for differing problems.

The grand goal of AI is a general learner that can at least tackle any kind of problem we care about. Are DNNs the best-performing solution for every problem? No, and I agree on that. But they are applicable to a far wider range of problems. There is no question which is the better general-learning paradigm.

>* Or am I wrong? Is there a theoretical result I don't know about?

Thankfully, we don't need to get into the theory. Go ask GPT-4 to sort an arbitrary list of numbers. Change the length and try again.
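If you'd rather script it than use the chat UI, here's a minimal sketch with the OpenAI Python SDK. It assumes the `openai` package (v1+) is installed and `OPENAI_API_KEY` is set; the model name and prompt wording are just illustrative:

```python
import random
from openai import OpenAI   # OpenAI Python SDK v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_gpt_to_sort(numbers):
    prompt = (
        "Sort these numbers in ascending order. "
        f"Reply with only the sorted list, comma-separated: {numbers}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Vary the length and compare against the ground truth, as suggested above.
for n in (5, 20, 100):
    xs = [random.randint(0, 999) for _ in range(n)]
    print(n, ask_gpt_to_sort(xs), "| truth:", sorted(xs))
```

One and the same model handles every length, which is the point.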
