zlacker

[return to "Doug Lenat has died"]
1. dredmo+83[view] [source] 2023-09-01 18:00:35
>>snewma+(OP)
Cyc ("Syke") is one of those projects I've long found vaguely fascinating though I've never had the time / spoons to look into it significantly. It's an AI project based on a comprehensive ontology and knowledgebase.

Wikipedia's overview: <https://en.wikipedia.org/wiki/Cyc>

Project / company homepage: <https://cyc.com/>

◧◩
2. jfenge+B7[view] [source] 2023-09-01 18:23:24
>>dredmo+83
I worked with Cyc. It was an impressive attempt at what it set out to do, but it didn't work out. It was the last great attempt to do AI in the "neat" fashion, and its failure helped bring about the current, wildly successful "scruffy" approaches to AI.

Its failure is no shade against Doug. Somebody had to try it, and I'm glad it was one of the brightest guys around. I think he clung to it long after it was clear that it wasn't going to work out, but breakthroughs do happen. (The current round of machine learning is itself a revival of a technique that had been abandoned, but the people who stuck with it anyway discovered the tricks that made it go.)

◧◩◪
3. Kuinox+Jc[view] [source] 2023-09-01 18:54:04
>>jfenge+B7
Why didn't it work out?
◧◩◪◨
4. jfenge+Oh[view] [source] 2023-09-01 19:21:40
>>Kuinox+Jc
I don't know if there's really an answer to that, beyond noting that it never turned out to be more than the sum of its parts. It was a large ontology and a hefty logic engine. You put in queries and you got back answers.
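
To give a flavor of that "queries in, answers out" loop, here's a toy sketch in Python -- invented names and a trivially small fact base, nothing like real CycL or the actual engine:

    # Toy "ontology + logic engine": facts, one inheritance rule, queries.
    # All names here are made up for illustration.
    facts = {("isa", "Fido", "Dog"), ("genls", "Dog", "Mammal")}
    rules = [
        # If X isa C, and C genls (is a subclass of) D, then X isa D.
        lambda fs: {("isa", x, d)
                    for (p1, x, c) in fs if p1 == "isa"
                    for (p2, c2, d) in fs if p2 == "genls" and c2 == c},
    ]

    def query(goal, facts, rules):
        # Forward-chain to a fixed point, then check whether the goal holds.
        while True:
            new = set().union(*(r(facts) for r in rules)) - facts
            if not new:
                return goal in facts
            facts |= new

    print(query(("isa", "Fido", "Mammal"), set(facts), rules))  # True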

The goal was that in a decade it would become self-sustaining. It would have enough knowledge that it could start reading natural language. And it just... didn't.

Contrast it with LLMs and diffusion and such. They make stupid, asinine mistakes -- real howlers, because they don't understand anything at all about the world. If it could draw, Cyc would never draw a human with 7 fingers on each hand, because it knows that most humans have 5. (It had a decent-ish ontology of human anatomy that could handle injuries and birth defects, but it would reason over the normal case by default.) I often see ChatGPT stumped by simple variations of brain teasers, and Cyc wouldn't make those mistakes -- once you'd translated them into CycL (its language, because it couldn't read natural language in any meaningful way).
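
That "normal case unless something more specific overrides it" behavior is default (nonmonotonic) reasoning. A minimal sketch of the idea, with made-up names and none of the machinery of actual CycL:

    # Toy default reasoning: use the typical value for the class, unless a
    # more specific fact about the individual overrides it. Names invented.
    DEFAULTS = {("Human", "fingersPerHand"): 5}
    EXCEPTIONS = {("Bob", "fingersPerHand"): 4}  # e.g. a recorded injury

    def value_of(individual, attribute, isa):
        # Specific knowledge about the individual beats the class default.
        if (individual, attribute) in EXCEPTIONS:
            return EXCEPTIONS[(individual, attribute)]
        return DEFAULTS.get((isa[individual], attribute))

    isa = {"Alice": "Human", "Bob": "Human"}
    print(value_of("Alice", "fingersPerHand", isa))  # 5 (the default)
    print(value_of("Bob", "fingersPerHand", isa))    # 4 (exception wins)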

But those same models do a scarily good job of passing the Turing Test. Nobody would ever have thought to try it on Cyc. It was never anywhere close.

Philosophically I can't say why Cyc never developed "magic" while LLMs (seemingly) have. And I'm still not convinced that they're on the right path, though they actually have some legitimate uses right now. I tried to find uses for Cyc in exactly the opposite direction -- guaranteeing data quality -- but it turned out nobody really wanted that.

◧◩◪◨⬒
5. ushako+YD[view] [source] 2023-09-01 21:43:16
>>jfenge+Oh
Sounds similar to WolframAlpha?