1. ScottB+(OP) 2016-03-17 06:10:53
> The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me.

I don't think he meant it that way. He was well aware he didn't have all the answers. What I believe he was talking about was not the answers but the questions: which ones are people spending their time on? I think he's saying that the questions that most people in AI are spending their time on are not going to give us strong AI. Is that such a controversial claim? I expect most people in the field would agree with it.

2. DonHop+F2 2016-03-17 07:28:13
>>ScottB+(OP)
I agree that he didn't mean it in an arrogant way, didn't think he had all the answers, and was asking big questions. He was all about integrating multiple methods, including commonsense knowledge bases like CYC. But it's hard to get commonsense-knowledge methods funded by the current "benefactors of AI".

Here is something he said to me in April 2009 in a discussion about educational software for the OLPC:

Marvin Minsky: "I've been unsuccessful at getting support for a major project to build the architecture proposed in 'The Emotion Machine.' The idea is to make an AI that can use multiple methods and commonsense knowledge--so that whenever it gets stuck, it can try another approach. The trouble is that most funding has come under the control of statistical and logical practitioners, or people who think we need to solve low-level problems before we can deal with human-level ones."

Maybe (I'll venture a wild guess) it's just that investing in statistical AI research currently makes more financial sense for the advertising industry that funds most of the research these days... You're the product, and all that.
