zlacker

[return to "Douglas Lenat's Cyc is now being commercialized"]
1. DonHop+H2 2016-03-16 21:14:29
>>_freu+(OP)
Marvin Minsky said "We need common-sense knowledge – and programs that can use it. Common sense computing needs several ways of representing knowledge. It is harder to make a computer housekeeper than a computer chess-player, because the housekeeper must deal with a wider range of situations." [1]

He named Douglas Lenat as one of the ten or so people working on common sense (at the time of the interview, in 1998), and said the best system based on common sense is CYC. But he urged the builders of proprietary systems not to keep their data secret and to distribute copies, so that the systems could evolve and gain new ideas, and because we must understand how they work.

Sabbatini: Why are there no computers already working with common sense knowledge?

Minsky: There are very few people working with common sense problems in Artificial Intelligence. I know of no more than five people, so probably there are about ten of them out there. Who are these people? There’s John McCarthy, at Stanford University, who was the first to formalize common sense using logic. He has a very interesting web page. Then, there is Harry Sloaman, from the University of Edinburgh, who’s probably the best philosopher in the world working on Artificial Intelligence, with the exception of Daniel Dennett, but he knows more about computers. Then there’s me, of course. Another person working on a strong common-sense project is Douglas Lenat, who directs the CYC project in Austin. Finally, Douglas Hofstadter, who wrote many books about the mind, artificial intelligence, etc., is working on similar problems.

We talk only to each other and no one else is interested. There is something wrong with computer sciences.

Sabbatini: Is there any AI software that uses the common sense approach?

Minsky: As I said, the best system based on common sense is CYC, developed by Doug Lenat, a brilliant guy, but he set up a company, CYCorp, and is developing it as a proprietary system. Many computer scientists have a good idea and then make it a secret and start making proprietary systems. They should distribute copies of their system to graduate students, so that they could evolve and get new ideas. We must understand how they work.

[1] http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.h...

2. Animal+B3 2016-03-16 21:23:52
>>DonHop+H2
> We talk only to each other and no one else is interested.

OK.

> There is something wrong with computer sciences.

Or there is something wrong with you (Minsky). If you're brilliant, and the rest of the world doesn't follow you, it doesn't mean that there's something wrong with them. It may simply be that you are brilliant and wrong.

3. DonHop+S3 2016-03-16 21:26:20
>>Animal+B3
Do you mean {inclusive or exclusive} "Or"? I'd say there's something wrong with computer sciences, and Minsky was brilliant, and right about some things, and wrong about other things.

>He [Aaron Sloman, one of the small group of "each other" who talk to each other] disagrees with all of these on some topics, while agreeing on others.

4. Animal+Pc 2016-03-16 23:02:34
>>DonHop+S3
I meant exclusive or. I was getting at the arrogance: "Out of all the AI people, only the 5 of us talk to each other. There must be something wrong with the whole field, because they can't see how right we are!"

The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me. Minsky may in fact be right, but he should at least have the humility to see that, in a difference of opinion between the few and the many, it is at least possible that the many are right...

5. ScottB+qA 2016-03-17 06:10:53
>>Animal+Pc
> The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me.

I don't think he meant it that way. He was well aware he didn't have all the answers. What I believe he was talking about was not the answers but the questions: which ones are people spending their time on? I think he's saying that the questions that most people in AI are spending their time on are not going to give us strong AI. Is that such a controversial claim? I expect most people in the field would agree with it.

6. DonHop+5D 2016-03-17 07:28:13
>>ScottB+qA
I agree that he didn't mean it in an arrogant way, didn't think he had all the answers, and was asking big questions. He was all about integrating multiple methods, including commonsense knowledge bases like CYC. But it's hard to get commonsense knowledge methods funded by the current "benefactors of AI".

Here is something he said to me in April 2009 in a discussion about educational software for the OLPC:

Marvin Minsky: "I've been unsuccessful at getting support for a major project to build the architecture proposed in "The Emotion Machine." The idea is to make an AI that can use multiple methods and commonsense knowledge--so that whenever it gets stuck, it can try another approach. The trouble is that most funding has come under the control of statistical and logical practitioners, or people who think we need to solve low-level problems before we can deal with human-level ones."

Maybe (I'll venture a wild guess) it's just that investing in statistical AI research currently makes more financial sense for the goals of the advertising industry that's funding most of the research these days... You're the product, and all that.
