zlacker

[return to "Douglas Lenat's Cyc is now being commercialized"]
1. DonHop+H2[view] [source] 2016-03-16 21:14:29
>>_freu+(OP)
Marvin Minsky said "We need common-sense knowledge – and programs that can use it. Common sense computing needs several ways of representing knowledge. It is harder to make a computer housekeeper than a computer chess-player, because the housekeeper must deal with a wider range of situations." [1]

He named Douglas Lenat as one of the ten or so people working on common sense (at the time of the interview in 1998), and said the best system based on common sense is CYC. But he criticized keeping such systems proprietary and their data secret, and called for distributing copies so that the ideas can evolve and grow, and because we must understand how these systems work.

Sabbatini: Why are there no computers already working with common sense knowledge?

Minsky: There are very few people working with common sense problems in Artificial Intelligence. I know of no more than five people, so probably there are about ten of them out there. Who are these people? There's John McCarthy, at Stanford University, who was the first to formalize common sense using logic. He has a very interesting web page. Then, there is Harry Sloaman [Aaron Sloman], from the University of Edinburgh, who's probably the best philosopher in the world working on Artificial Intelligence, with the exception of Daniel Dennett, but he knows more about computers. Then there's me, of course. Another person working on a strong common-sense project is Douglas Lenat, who directs the CYC project in Austin. Finally, Douglas Hofstadter, who wrote many books about the mind, artificial intelligence, etc., is working on similar problems.

We talk only to each other and no one else is interested. There is something wrong with computer sciences.

Sabbatini: Is there any AI software that uses the common sense approach?

Minsky: As I said, the best system based on common sense is CYC, developed by Doug Lenat, a brilliant guy, but he set up a company, CYCorp, and is developing it as a proprietary system. Many computer scientists have a good idea, then make it a secret and start making proprietary systems. They should distribute copies of their system to graduate students, so that they could evolve and get new ideas. We must understand how they work.

[1] http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.h...

◧◩
2. Animal+B3[view] [source] 2016-03-16 21:23:52
>>DonHop+H2
> We talk only to each other and no one else is interested.

OK.

> There is something wrong with computer sciences.

Or there is something wrong with you (Minsky). If you're brilliant, and the rest of the world doesn't follow you, it doesn't mean that there's something wrong with them. It may simply be that you are brilliant and wrong.

◧◩◪
3. DonHop+S3[view] [source] 2016-03-16 21:26:20
>>Animal+B3
Do you mean {inclusive or exclusive} "Or"? I'd say there's something wrong with computer sciences, and Minsky was brilliant, and right about some things, and wrong about other things.

>He [Aaron Sloman, one of the small group of "each other" who talk to each other] disagrees with all of these on some topics, while agreeing on others.

◧◩◪◨
4. Animal+Pc[view] [source] 2016-03-16 23:02:34
>>DonHop+S3
I meant exclusive or. I was getting at the arrogance: "Out of all the AI people, only the 5 of us talk to each other. There must be something wrong with the whole field, because they can't see how right we are!"

The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me. Minsky may in fact be right, but he should at least have the humility to see that, in a difference of opinion between the few and the many, it is at least possible that the many are right...

◧◩◪◨⬒
5. nickps+rf[view] [source] 2016-03-16 23:34:41
>>Animal+Pc
Common sense powers the many's decisions for around 90% of their day. It seems to be a prerequisite for many intelligence functions. Many AIs screw up, or are entirely ineffective, in the real world because they lack it. And only around five pros were working on it.

I think there's no arrogance in saying the many were foolish to ignore the most used and probably most critical part of intelligence, especially when their work failed for lack of it. If anything, those thinking they didn't need it were the arrogant ones, believing their simple formalisms on old hardware would replace or outperform common sense on wetware.

Besides, time showed who were the fools. ;)

◧◩◪◨⬒⬓
6. Animal+0j[view] [source] 2016-03-17 00:18:11
>>nickps+rf
> Besides, time showed who were the fools. ;)

Who, in your view, would that be?

The people who thought that rule-driven inference engines were going to get us strong AI? OK, I can give you that events have proven that view to be foolish.

The people who thought that common sense was not the way to AI? Time has not shown that they are fools (at least, not yet), because no impressive AI advances (of which I am aware) are based on the common-sense approach. (I suppose CYC itself could be regarded as such an advance, but I see it more as building material than as a system in itself.)

Now, DonHopkins quotes Minsky as saying that a mix of approaches is the answer. Arguably, that is beginning to be proven. Common sense (the CYC approach)? Not so much.

◧◩◪◨⬒⬓⬔
7. nickps+qm[view] [source] 2016-03-17 01:10:45
>>Animal+0j
"Time has not shown that they are fools (at least, not yet), because no impressive AI advances (of which I am aware) are based on the common-sense approach"

Sure it has: deep learning. Human common sense is mostly based on intuition. Intuition is a process that finds patterns in unstructured data: classifications, relations to other things, and relationships between what we perceive and how we respond. It has reinforcement mechanisms that improve the models with more exposure. Just like neural networks.

They kind of indirectly worked on common sense. Not everything is there, and the data sets are too narrow for full common sense. Yet key attributes are there, with amazing results from the likes of DeepMind. So, yeah, we proponents of common sense and intuition are winning. By 4 to 1 in a recent event. (A toy sketch of the loop I mean is below.)
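To make the analogy concrete, here's a minimal sketch (my own toy example, not Cyc's, DeepMind's, or anyone else's code; the data and update rule are invented for illustration) of that intuition loop: a tiny model finds a pattern in raw data, and an error signal over repeated exposures reinforces the connections that fit it.

    # Toy sketch of the "intuition" loop: find a pattern in raw data,
    # reinforce the internal model with repeated exposure. Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Raw data: random 2-D points; the hidden "pattern" is whether x0 + x1 > 1.
    X = rng.uniform(0, 1, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

    w, b, lr = np.zeros(2), 0.0, 0.5

    def belief(X):
        return 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid "confidence"

    for epoch in range(200):             # repeated exposure
        err = belief(X) - y              # error signal: how wrong the model was
        w -= lr * (X.T @ err) / len(X)   # nudge connections toward the pattern
        b -= lr * err.mean()

    print("accuracy:", ((belief(X) > 0.5) == y).mean())

After a couple hundred exposures the toy model should classify the pattern well, which is all the analogy needs: structure extracted from raw examples rather than hand-coded rules.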

" saying that a mix of approaches is the answer. Arguably, that is beginning to be proven. Common sense (the CYC approach)? Not so much."

Common sense is one component of a hybrid system. That's what I pushed. That's what I understood from others. CYC itself combines a knowledge base representing our "common sense" with one or more reasoning engines. The NNs that embed it implicitly in their internal connections are often combined with tree searches, heuristics, and other components. Our own brain uses many specialized things working together to achieve an overall result.
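As a rough illustration of that knowledge-base-plus-reasoning-engine shape (the facts, rule, and code below are my own toy example, not Cyc's actual representation or inference machinery), here's a few-line forward-chaining sketch:

    # Toy "knowledge base + reasoning engine": hand-crafted facts plus a
    # single transitivity rule over is_a. Purely illustrative, not Cyc.
    facts = {("Fido", "is_a", "dog"), ("dog", "is_a", "mammal")}

    def forward_chain(facts):
        changed = True
        while changed:
            changed = False
            for (a, r1, b) in list(facts):
                for (c, r2, d) in list(facts):
                    if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in facts:
                        facts.add((a, "is_a", d))   # derive a new fact
                        changed = True
        return facts

    print(("Fido", "is_a", "mammal") in forward_chain(facts))   # True

The point is just the division of labor: the knowledge lives in declarative facts, and a separate engine grinds out consequences; the NN, tree-search, and heuristic pieces would be further components layered on top of that.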

So, no, common sense storage by itself won't do much for you. One needs the other parts. Hybrid systems are most likely the only proven general intelligence. So we should default to that.

◧◩◪◨⬒⬓⬔⧯
8. Animal+bM1[view] [source] 2016-03-17 20:37:44
>>nickps+qm
I see a huge difference between the deep learning approach and the CYC approach. I don't see enough common ground to call them both "common sense" approaches. And, in fact, in the conversation up to this point, the CYC approach is what we were calling the "common sense" approach. So I don't see deep learning as validation of the common sense approach, at least not as the terms have been used in this conversation.
◧◩◪◨⬒⬓⬔⧯▣
9. nickps+ha2[view] [source] 2016-03-18 00:29:39
>>Animal+bM1
That's why I defined common sense in terms of collecting and acting on knowledge via the human intuition mechanism. That mechanism is a neural network, or a series of them, that finds patterns in raw data with reinforcement. That sounds like deep learning. Cyc is doing something similar, but hand-crafted instead of raw, and logical instead of probabilistic.

Intuition just adds connections to the other knowledge and reasoning parts. That our brain is hybrid like that is why I advocate more hybrids, all with an intuition-like component.

[go to top]