zlacker

[return to "Cyc: History's Forgotten AI Project"]
1. blueye+Fp[view] [source] 2024-04-17 22:46:18
>>iafish+(OP)
Cyc is one of those bad ideas that won't die, and that keeps getting rediscovered on HN. Lenat wasted decades of his life on it. Knowledge graphs like Cyc are labor-intensive to build and difficult to maintain. They are brittle in the face of change, and useless once they can no longer keep up with the reality they are supposed to represent.
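
To make the brittleness point concrete, here is a minimal sketch of the kind of hand-curated triple store that sits underneath most knowledge graphs. It is illustrative Python only; Cyc's actual representation language, CycL, is far richer than a bag of triples.

    # A toy triple store: every fact is a hand-entered (subject, predicate, object).
    facts = {
        ("Pluto", "isA", "Planet"),
        ("Planet", "orbits", "Sun"),
    }

    def query(subject, predicate):
        """Return every object asserted for (subject, predicate)."""
        return {o for (s, p, o) in facts if s == subject and p == predicate}

    # 2006: the IAU demotes Pluto. Nothing in the graph notices on its own;
    # a curator has to find the stale triple, delete it, and assert the new one.
    facts.discard(("Pluto", "isA", "Planet"))
    facts.add(("Pluto", "isA", "DwarfPlanet"))

    print(query("Pluto", "isA"))  # {'DwarfPlanet'}, but only after manual curation

Every such update is manual, and every assertion that depended on the old fact has to be hunted down the same way.
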
2. breck+1z[view] [source] 2024-04-18 00:00:58
>>blueye+Fp
I think before 2022 it was still an open question whether it was a good approach.

Now it's clear that knowledge graphs are far inferior to deep neural nets, but even so, few people can explain the _root_ reason why.

I don't think Lenat's bet was a waste. I think it was sensible based on the information at the time.

The decision to research it largely in secret, as closed source, was, I think, a mistake.

3. xpe+Dz[view] [source] 2024-04-18 00:05:55
>>breck+1z
> Now it's clear that knowledge graphs are far inferior to deep neural nets

No. It depends. In general, two technologies can’t be assessed independently of the application.

4. famous+1M[view] [source] 2024-04-18 02:13:17
>>xpe+Dz
Anything other than clear definitions and unambiguous axioms (which is to say, most of the real world), and GOFAI falls apart. It can't even be done. There's a reason it was abandoned in NLP long before the likes of GPT.

There isn't any class of problems deep nets can't handle. Will they always be the most efficient or best-performing solution? No, but a solution will be possible.
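
To put the point about unambiguous axioms in miniature: a symbolic rule only fires when the input has already been carved into exactly the symbols its author anticipated. The rules below are made up purely for illustration and aren't taken from any real system.

    # Hypothetical word-sense rules in the GOFAI style.
    RULES = {
        ("bank", "financial_institution"): "a place to deposit money",
        ("bank", "river_edge"): "the land alongside a river",
    }

    def interpret(word, sense):
        # Works only if someone has already disambiguated the sense,
        # which is exactly the hard part.
        return RULES[(word, sense)]

    print(interpret("bank", "river_edge"))  # fine: the sense matches an axiom

    try:
        print(interpret("bank", "where I keep my savings"))
    except KeyError:
        print("no rule fires: this phrasing isn't one of the hand-written senses")

Deciding which sense a messy real-world input corresponds to is exactly the part the rules can't do for you.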

5. mepian+EN[view] [source] 2024-04-18 02:34:41
>>famous+1M
They should handle the problem of hallucinations then.
6. famous+jX[view] [source] 2024-04-18 04:32:08
>>mepian+EN
Bigger models hallucinate less.

And we don't call it hallucination, but GOFAI mispredicts plenty too.

7. xpe+pA8[view] [source] 2024-04-21 00:41:28
>>famous+jX
> Bigger models hallucinate less.

I'm skeptical. Based on what research?
