zlacker

[parent] [thread] 11 comments
1. Kuinox+(OP)[view] [source] 2023-09-01 18:54:04
Why didn't it work out?
replies(3): >>jfoutz+h2 >>jfenge+55 >>hackan+Se2
2. jfoutz+h2[view] [source] 2023-09-01 19:07:03
>>Kuinox+(OP)
Take a look at https://en.m.wikipedia.org/wiki/SHRDLU

Cyc is sort of like that, but for everything. Not just a small limited world. I believe it didn’t work out because it’s really hard.

replies(1): >>ansibl+n6
3. jfenge+55[view] [source] 2023-09-01 19:21:40
>>Kuinox+(OP)
I don't know if there's really an answer to that, beyond noting that it never turned out to be more than the sum of its parts. It was a large ontology and a hefty logic engine. You put in queries and you got back answers.

The goal was that in a decade it would become self-sustaining. It would have enough knowledge that it could start reading natural language. And it just... didn't.

Contrast it with LLMs and diffusion and such. They make stupid, asinine mistakes -- real howlers, because they don't understand anything at all about the world. If it could draw, Cyc would never draw a human with 7 fingers on each hand, because it knows that most humans have 5. (It had a decent-ish ontology of human anatomy which could handle injuries and birth defects, but would default reason over the normal case.) I often see ChatGPT stumped by simple variations of brain teasers, and Cyc wouldn't make those mistakes -- once you'd translated them into CycL (its language, because it couldn't read natural language in any meaningful way).

But those same models do a scary job of passing the Turing Test. Nobody would ever have thought to try it on Cyc. It was never anywhere close.

Philosophically I can't say why Cyc never developed "magic" and LLMs (seemingly) have. And I'm still not convinced that they're on the right path, though they actually have some legitimate uses right now. I tried to find uses for Cyc in exactly the opposite direction, guaranteeing data quality, but it turned out nobody really wanted that.
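
To make the "default reason over the normal case" point above concrete: this is essentially default (non-monotonic) reasoning, where a general rule holds unless a more specific fact overrides it. Here is a toy Python sketch of the idea -- my own illustration, not CycL and not how Cyc actually represents knowledge.

    # Toy illustration of default reasoning: a type-level default holds
    # unless a more specific fact about the individual overrides it.

    DEFAULTS = {
        ("Human", "fingersPerHand"): 5,   # the normal case
    }

    FACTS = {
        # Specific knowledge about an individual wins over the default,
        # e.g. an injury or birth difference (hypothetical individual).
        ("alice", "fingersPerHand"): 4,
    }

    TYPES = {
        "alice": "Human",
        "bob": "Human",
    }

    def query(individual, attribute):
        """Prefer a specific fact; otherwise fall back to the type default."""
        if (individual, attribute) in FACTS:
            return FACTS[(individual, attribute)]
        return DEFAULTS.get((TYPES.get(individual), attribute))

    print(query("bob", "fingersPerHand"))    # 5 -- the normal case
    print(query("alice", "fingersPerHand"))  # 4 -- the exception overrides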

replies(5): >>dredmo+28 >>bpiche+qf >>ushako+fr >>Kuinox+QA >>famous+ihf
4. ansibl+n6[view] [source] [discussion] 2023-09-01 19:28:36
>>jfoutz+h2
If we are to develop understandable AGI, I think that some kind of (mathematically correct) probabilistic reasoning based on a symbolic knowledge base is the way to go. You would probably need to have some version of a Neural Net on the front end to make it useful though.

So you'd use the NN to recognize that the thing in front of the camera is a cat, and that would be fed into the symbolic knowledge base for further reasoning.

The knowledge base would contain facts such as: the cat is likely to "meow" at some point, especially if it wants attention. Based on the relevant context, the knowledge base would also know that the cat is unlikely to be able to talk, unless it is a cat in a work of fiction, for example.
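
A minimal Python sketch of that pipeline, with the neural net stubbed out and the rules and probabilities invented purely for illustration: the NN emits (label, confidence), and a small probabilistic rule base reasons over the result in context.

    # Stub for a neural classifier: returns (label, confidence).
    def nn_classify(image):
        return ("cat", 0.97)

    # P(claim | cat, context); the numbers are assumptions for the sketch.
    RULES = {
        ("meows_eventually", "real_world"): 0.95,
        ("meows_eventually", "fiction"):    0.95,
        ("can_talk",         "real_world"): 0.001,
        ("can_talk",         "fiction"):    0.30,
    }

    def infer(image, claim, context):
        label, p_label = nn_classify(image)
        if label != "cat":
            return 0.0
        # P(claim) ~= P(claim | cat, context) * P(cat), assuming independence.
        return RULES.get((claim, context), 0.0) * p_label

    print(infer(None, "meows_eventually", "real_world"))  # high
    print(infer(None, "can_talk", "real_world"))          # near zero
    print(infer(None, "can_talk", "fiction"))              # plausible in fiction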

replies(1): >>DonHop+eA
5. dredmo+28[view] [source] [discussion] 2023-09-01 19:37:52
>>jfenge+55
One sense that I've had of LLM / generative AIs is that they lack "bones", in the sense that there's no underlying structure to which they adhere, only outward appearances which are statistically correlated (using fantastically complex statistical correlation maps).

Cyc, on the other hand, lacks flesh and skin. It's all skeleton and can generate facts but not embellish them into narratives.

The best human writing has both, much as artists (traditional painters, sculptors, and more recently computer animators) have a skeleton (outline, index cards, Zettelkasten, wireframe) to which flesh, skin, and fur are attached. LLM generative AIs are too plastic; Cyc is insufficiently plastic.

I suspect there's some sort of a middle path between the two. Though that path and its destination also increasingly terrify me.

6. bpiche+qf[view] [source] [discussion] 2023-09-01 20:20:00
>>jfenge+55
Had? Cycorp is still around and deploying their software.
7. ushako+fr[view] [source] [discussion] 2023-09-01 21:43:16
>>jfenge+55
Sounds similar to WolframAlpha?
8. DonHop+eA[view] [source] [discussion] 2023-09-01 22:59:20
>>ansibl+n6
At Leela AI we're developing hybrid symbolic-connectionist constructivist AI, combining "neat" neural networks with "scruffy" symbolic logic, enabling unsupervised machine learning that understands cause and effect and teaches itself, motivated by intrinsic curiosity.

Leela AI was founded by Henry Minsky and Cyrus Shaoul, and is inspired by ideas about child development by Jean Piaget, Seymour Papert, Marvin Minsky, and Gary Drescher (described in his book “Made-Up Minds”).

https://mitpress.mit.edu/9780262517089/made-up-minds/

https://leela.ai/leela-core

>Leela Platform is powered by Leela Core, an innovative AI engine based on research at the MIT Artificial Intelligence Lab. With its dynamic combination of traditional neural networks for pattern recognition and causal-symbolic networks for self-discovery, Leela Core goes beyond accurately recognizing objects to comprehend processes, concepts, and causal connections.

>Leela Core is much faster to train than conventional NNs, using 100x less data and enabling 10x less time-to-value. This highly resilient AI can quickly adjust to changes and explain what it is sensing and doing via the Leela Viewer dashboard. [...]

The key to regulating AI is explainability. The key to explainability may be causal AI.

https://leela.ai/post/the-key-to-regulating-ai-is-explainabi...

>[...] For example, the Leela Core engine that drives the Leela Platform for visual intelligence in manufacturing adds a symbolic causal agent that can reason about the world in a way that is more familiar to the human mind than neural networks. The causal layer can cross-check Leela Core's traditional NN components in a hybrid causal/neural architecture. Leela Core is already better at explaining its decisions than NN-only platforms, making it easier to troubleshoot and customize. Much greater transparency is expected in future versions. [...]
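
To illustrate the cross-check idea generically (this is not Leela's code or API; the names, constraint, and numbers below are invented for the sketch), a symbolic layer can veto a neural detection that contradicts a simple causal constraint and return a human-readable explanation:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        label: str
        confidence: float
        frame: int

    def violates_constraints(prev: Detection, curr: Detection) -> Optional[str]:
        """Return an explanation if the new detection breaks a constraint."""
        # Toy constraint: a tracked object cannot change identity between
        # adjacent frames.
        if curr.frame == prev.frame + 1 and curr.label != prev.label:
            return (f"rejected: '{prev.label}' cannot become '{curr.label}' "
                    f"in one frame ({prev.frame} -> {curr.frame})")
        return None

    prev = Detection("conveyor_belt", 0.99, frame=41)
    curr = Detection("forklift", 0.62, frame=42)   # likely a misclassification

    print(violates_constraints(prev, curr) or "accepted")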

replies(1): >>ansibl+oS3
9. Kuinox+QA[view] [source] [discussion] 2023-09-01 23:04:57
>>jfenge+55
Thanks - that was the kind of answer I wanted. Is there any work trying to "merge" the two together?
10. hackan+Se2[view] [source] 2023-09-02 17:13:24
>>Kuinox+(OP)
"The essentialist tradition, in contrast to the tradition of differential ontology, attempts to locate the identity of any given thing in some essential properties or self-contained identities"

Maybe essentialism just does not work.

https://iep.utm.edu/differential-ontology/

11. ansibl+oS3[view] [source] [discussion] 2023-09-03 11:58:51
>>DonHop+eA
I think this is an interesting approach. A child may collectively spend hours looking at toy blocks, training themselves to understand how what they see maps to an object in three-dimensional space. But later on, the child may see a dog for only a few seconds and be able to construct an internal model of what a dog is. So the child may initially see the dog standing and pointing to the left, but later the child will be able to recognize a dog lying on the floor pointing to the right. And do that without thousands of training examples, because they have constructed an internal mental model of what a dog is. This model is imperfect, and if the child has never seen a cat before, a cat might be recognized as a dog too.
12. famous+ihf[view] [source] [discussion] 2023-09-07 01:08:40
>>jfenge+55
>because they don't understand anything at all about the world.

LLMs understand plenty, in any way that can be tested. It's really funny when I see making mistakes taken as evidence of a lack of understanding. I guess people don't understand anything at all either.

> I often see ChatGPT stumped by simple variations of brain teasers

Only if everything else is exactly like the basic teaser, and guess what? Humans fall for this too. They see something they've memorized and go full speed ahead. Simply changing the names is enough to get it to solve it.
