zlacker

[return to "Remembering Doug Lenat and his quest to capture the world with logic"]
1. Chaita+hc[view] [source] 2023-09-06 11:39:55
>>andyjo+(OP)
Great read. Surprised to read Wolfram never actually got to use CYC. Anyone here who has and can talk about its capabilities?
2. stakha+pj[view] [source] 2023-09-06 12:30:43
>>Chaita+hc
I briefly looked into it many moons ago, when I was a Ph.D. student working in computational semantics in 2006-10. This was already well past the heyday of CYC, though.

The first stumbling block was that CYC wasn't openly available. Their research group was very insular and very protective of its IP, hoping to pay for their work through licensing deals and collaborations with industry or academia that could funnel money their way.

They had a subset called "OpenCYC", though, which they released more publicly in the hope of drawing attention. I tried using that but soon got frustrated with the software. The representation was in a CYC-specific language called "CycL", and the inference engine was CYC-specific as well, based on a weird description logic invented specifically for CYC, so you couldn't just hook up an off-the-shelf first-order theorem prover or anything like that. And "description logic" is a polite term for what their software did. It seemed mostly designed as a workaround for the fact that open-ended inferencing of the kind they invoked to motivate their work would have depended far too frequently on factoids of common-sense knowledge that were missing from the knowledge base. I gave up on it fairly quickly.

This was during an AI winter, and people doing AI were afraid even to use the term "AI" to describe what they were doing. They would say instead that they were doing "pattern processing with images" or "audio signal processing" or "natural language processing" or "automated theorem proving" or whatever; any mention of "AI" made you look naive. But Lenat's group called their stuff "AI" and stuck to their guns, even at a time when that seemed a bit politically inept.

From what I gathered through hearsay, CYC was also doing things like taking a grant from the defense department, whereupon a major proportion of the facts in the ontology were suddenly about military helicopters. But they kept beating the drum about codifying "common sense" knowledge: if only they could get enough of it in there, they would at some point cross a threshold beyond which they could have the AI program itself, i.e. use the existing facts to derive more facts by reading and understanding plain text.

3. Michae+fA[view] [source] 2023-09-06 13:54:58
>>stakha+pj
That's fascinating to read, thanks for sharing.

Did it ever do something genuinely surprising, something that seemed beyond the state of the art at the time?

4. stakha+yJ[view] [source] 2023-09-06 14:34:22
>>Michae+fA
One of the people from Cyc once gave a talk at the research group I was in, and he mentioned an idea that has stuck with me.

...sorry, it takes some building up to get there. At the time, a lot of work in NLP was focused on building parsers that drew constituency trees from sentences or extracted syntactic dependency structures, but did so in a way that either abstracted away from semantics completely or treated semantics as an extension of syntax, without venturing into the territory of inference and common sense. So a sentence like "Colorless green ideas sleep furiously" (to borrow Chomsky's example) was just as good a research object for someone doing that kind of work as a sentence that actually makes sense and is composed of words of the same lexical categories, like "Absolute power corrupts absolutely". I suspect that line of research is still going strong, so the past tense may not be quite appropriate here; I'm using it because I have been out of the loop since leaving academia.

The major problem these folks face is an exploding combinatorial space of ambiguity, both at the grammatical level ("I saw a man with a telescope" can be bracketed "I saw (a man) with a telescope" or "I saw a (man with a telescope)") and at the semantic level ("Every man loves a woman" can mean "for every man M there exists a woman W such that M loves W", or it can mean "there exists a woman W such that for every man M it is true that M loves W"). Even if you could completely solve the parsing problem, the ambiguity problem would remain.
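
To give a feel for what I mean by "exploding": even counting only binary bracketings, the number of possible parses of an n-word span grows with the Catalan numbers. A minimal Python illustration (pure counting, no linguistics):

    # Number of distinct binary bracketings over n words is the
    # (n-1)th Catalan number: C(n) = (2n choose n) / (n + 1).
    from math import comb

    def catalan(n: int) -> int:
        return comb(2 * n, n) // (n + 1)

    for words in (2, 5, 10, 20):
        print(words, "words:", catalan(words - 1), "bracketings")
    # 2 words: 1 / 5 words: 14 / 10 words: 4862 / 20 words: 1767263190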

Now this guy from the Cyc group said: forget about parsing. Give me just the words of the sentence, with no clue at all about how they were used in it, and I can already look into my ontology and tell you how the ontology would most likely connect them.

Now, the sentence "The cat chased the dog" obviously means something different from "The dog chased the cat", despite using the same words. But in most text genres, you're likely to encounter only sentences that say things commonly held to be true. So if you have an ontology that tells you what's commonly held to be true, you have a statistical prior that enables you to understand language. In fact, you probably can't hope to understand language without it, and it's probably the key to "disambiguation".
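
To make that concrete, here is a toy sketch of the idea, emphatically not Cyc's actual machinery; the predicate frames and prior scores are all invented for illustration:

    # Given only the bag of words, a world-knowledge prior already ranks
    # the ways an ontology could connect them. All scores are made up.
    PRIOR = {
        ("chase", "dog", "cat"): 0.9,  # dogs chasing cats: commonly held true
        ("chase", "cat", "dog"): 0.1,  # the reverse: possible but rare
    }

    def rank_readings(words):
        """Rank every predicate-argument frame buildable from the words."""
        frames = [f for f in PRIOR if set(f) <= set(words)]
        return sorted(frames, key=PRIOR.get, reverse=True)

    print(rank_readings(["chase", "dog", "cat"]))
    # [('chase', 'dog', 'cat'), ('chase', 'cat', 'dog')]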

This thought flipped my worldview upside down. I had always thought of it as a "pipelined architecture", where you first need to parse the text before it even makes sense to think about what to do with the parser's output. But that was unnecessarily limiting. You can look at it as a joint decoding problem, and it may very well be that the lion's share of the entropy comes from elsewhere; it may be foolish to go around building parsers if you haven't yet hooked your system up to the information source providing that lion's share, namely common-sense knowledge.
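
A hedged sketch of what "joint decoding" could look like for "I saw a man with a telescope": the parser alone finds both PP attachments equally plausible, so the deciding information has to come from the world prior. All the numbers here are invented:

    # Score (parse, meaning) pairs jointly: parser score x world prior.
    SYNTAX = {
        "attach(with-telescope -> saw)": 0.5,  # instrument reading
        "attach(with-telescope -> man)": 0.5,  # modifier reading
    }
    WORLD = {
        "attach(with-telescope -> saw)": 0.8,  # telescopes are for seeing with
        "attach(with-telescope -> man)": 0.2,  # a man carrying one is rarer
    }

    best = max(SYNTAX, key=lambda r: SYNTAX[r] * WORLD[r])
    print(best)  # attach(with-telescope -> saw)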

Now, I don't think Cyc had gotten particularly close to solving that problem either, and in fact it was a bit uncharacteristic for a "Cycler" to talk about statistical priors at all, since their work hadn't even gotten into the territory of collecting such statistics. But as a theoretical point, I thought it was a very sound one.
