>He [Aaron Sloman, one of the small group of "each other" who talk to each other] disagrees with all of these on some topics, while agreeing on others.
http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
Minsky:
What is the answer? My opinion is that we can make versatile AI machines only by using several different kinds of representations in the same system! This is because no single method works well for all problems; each is good for certain tasks but not for others. Also different kinds of problems need different kinds of reasoning. For example, much of the reasoning used in computer programming can be logic-based. However, most real-world problems need methods that are better at matching patterns and constructing analogies, making decisions based on previous experience with examples, or using types of explanations that have worked well on similar problems in the past.

How can we encourage people to make systems that use multiple methods for representing and reasoning? First we'll have to change some present-day ideas. For example, many students like to ask, "Is it better to represent knowledge with Neural Nets, Logical Deduction, Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural Language?" My teaching method is to try to get them to ask a different kind of question. "First decide what kinds of reasoning might be best for each different kind of problem -- and then find out which combination of representations might work well in each case."

A trick that might help them to start doing this is to begin by asking, for each problem, "How many different factors are involved, and how much influence does each factor have?" This leads to a sort of "theory-matrix."
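To make the "theory-matrix" idea concrete, here is a toy Python sketch. It is my own illustration, not anything from Minsky's paper: the representation names echo his question above, but the scores, the reasoning categories, and the recommend() helper are all invented. The idea is to score how well each representation supports each kind of reasoning, weight those scores by how much each factor matters for a given problem, and rank the combinations.

    # Toy "theory-matrix": rows are kinds of reasoning a problem may need,
    # columns are representations; entries are a rough 0-1 "how well this
    # representation supports this kind of reasoning". All values made up.
    REPRESENTATIONS = ["neural_net", "logic", "semantic_net", "frames", "rules"]

    THEORY_MATRIX = {
        "pattern_matching": {"neural_net": 0.9, "logic": 0.2, "semantic_net": 0.5, "frames": 0.4, "rules": 0.3},
        "deduction":        {"neural_net": 0.2, "logic": 0.9, "semantic_net": 0.4, "frames": 0.3, "rules": 0.8},
        "analogy":          {"neural_net": 0.6, "logic": 0.3, "semantic_net": 0.7, "frames": 0.8, "rules": 0.3},
        "case_based":       {"neural_net": 0.5, "logic": 0.2, "semantic_net": 0.5, "frames": 0.7, "rules": 0.6},
    }

    def recommend(problem_factors):
        """problem_factors: dict mapping reasoning kind -> influence weight.
        Returns representations ranked by weighted fit, best first."""
        scores = {rep: 0.0 for rep in REPRESENTATIONS}
        for kind, weight in problem_factors.items():
            for rep, fit in THEORY_MATRIX[kind].items():
                scores[rep] += weight * fit
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # A real-world-ish problem: mostly pattern matching and analogy, some deduction.
    print(recommend({"pattern_matching": 0.5, "analogy": 0.3, "deduction": 0.2}))

The point is not the numbers but the shape: deciding which combination of representations to use becomes an explicit, inspectable step rather than a camp to join.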
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#note-...
Sloman:
In retrospect, it seems that a mixture of the probabilistic and deterministic approaches is required, within the study of architectures for complete agents: a more general study than the investigation of algorithms and representations that dominated most of the early work on AI (partly because of the dreadful limitations of speed and memory of even the most expensive and sophisticated computers available in the 1960s and 1970s).
There are many ways such hybrid mechanisms could be implemented, and my recent work on different processing layers within an integrated architecture (combining reactive, deliberative and meta-management layers) indicates some features of a hybrid system, with probabilistic associations dominating the reactive layer and structure manipulations being more important in the deliberative layer. For recent papers on this see
- The Cogaff papers directory http://www.cs.bham.ac.uk/research/cogaff/
- My "talks" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
- The rapidly growing "miscellaneous" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREAD...
More specific though less comprehensive models have been proposed by other researchers, one of the most impressive being the ACT-R system developed by John Anderson and his collaborators. See http://act.psy.cmu.edu/.
Added 8 Feb 2016: Minsky's paper on "causal diversity" is also relevant:
Marvin L. Minsky, 1992, Future of AI Technology, in Toshiba Review, 47, 7, http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
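As a rough illustration of the layering Sloman describes above (a reactive layer dominated by probabilistic associations, a deliberative layer doing structure manipulation, and meta-management arbitrating between them), here is a minimal Python sketch. Every class name and behavior below is an invented stand-in of mine, not anything from the CogAff papers:

    import random

    class ReactiveLayer:
        """Fast stimulus->response associations with learned probabilities."""
        def __init__(self):
            # association strengths: (stimulus, action) -> probability weight
            self.assoc = {("obstacle", "turn_left"): 0.7, ("obstacle", "stop"): 0.3}

        def propose(self, stimulus):
            options = [(a, w) for (s, a), w in self.assoc.items() if s == stimulus]
            if not options:
                return None
            actions, weights = zip(*options)
            return random.choices(actions, weights=weights)[0]

    class DeliberativeLayer:
        """Slow: builds and manipulates an explicit plan structure (here, a list)."""
        def propose(self, goal):
            # A stand-in for real planning: decompose the goal into ordered steps.
            return ["locate_" + goal, "approach_" + goal, "grasp_" + goal]

    class MetaManagement:
        """Monitors the other layers and arbitrates between their proposals."""
        def act(self, agent, stimulus, goal):
            reflex = agent.reactive.propose(stimulus)
            if reflex is not None:          # urgent stimulus: let the reflex win
                return [reflex]
            return agent.deliberative.propose(goal)

    class Agent:
        def __init__(self):
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
            self.meta = MetaManagement()

    agent = Agent()
    print(agent.meta.act(agent, "obstacle", "cup"))   # reactive reflex wins
    print(agent.meta.act(agent, "quiet", "cup"))      # deliberative plan runs

The division of labor matches the quote: probabilistic association in the reactive layer, structure manipulation in the deliberative layer, and a supervisory layer deciding which to trust.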
----
Will Wright discusses how he applies several different kinds of representations to make hybrid models for games, in the democratically elected "Dynamics" section of his talk, "Lessons in Game Design", on "nested dynamics / emergence" and "paradigms".
I also should have been stronger in my statement. Very few active ML/AI researchers believe the database + logical deduction / inference method will play even a nontrivial role in any future AGI system.
The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me. Minsky may in fact be right, but he should at least have the humility to see that, in a difference of opinion between the few and the many, it is at least possible that the many are right...
I think there's no arrogance in saying the many were foolish to ignore the most used and probably most critical part of intelligence, especially when their work failed for lack of it. If anything, those believing they didn't need it were very arrogant in thinking their simple formalisms on old hardware would replace or outperform common sense on wetware.
Besides, time showed who the fools were. ;)
One advantage of logic + deduction is potential clarity ahead of time about what the system might do, and ability to explain its actions. If AI safety is your top concern, those seem at least potentially valuable even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)
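For instance, a toy forward-chaining rule engine can record, for every derived fact, which rule and premises produced it, which is exactly the kind of explanation trace the comment above points to. The rules and facts below are made-up illustrations, not from any real system:

    # Each rule: (name, premises, conclusion). Inference records why each
    # conclusion was drawn, so the system can explain its actions afterward.
    RULES = [
        ("R1", {"bird", "healthy"}, "can_fly"),
        ("R2", {"can_fly", "hungry"}, "hunts"),
    ]

    def infer(facts):
        facts = set(facts)
        trace = []
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{conclusion} because {name}: {sorted(premises)}")
                    changed = True
        return facts, trace

    facts, explanation = infer({"bird", "healthy", "hungry"})
    for line in explanation:
        print(line)   # e.g. "can_fly because R1: ['bird', 'healthy']"

Nothing comparable falls out of a trained network for free, which is why explainability keeps coming up as the selling point of the logic camp.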
Who, in your view, would that be?
The people who thought that rule-driven inference engines were going to get us strong AI? OK, I can give you that events have proven that view to be foolish.
The people who thought that common sense was not the way to AI? Time has not shown that they are fools (at least, not yet), because no impressive AI advances (of which I am aware) are based on the common-sense approach. (I suppose CYC itself could be regarded as such an advance, but I see it more as building material than as a system in itself.)
Now, DonHopkins quotes Minsky as saying that a mix of approaches is the answer. Arguably, that is beginning to be proven. Common sense (the CYC approach)? Not so much.
I'm not particularly fearful of an AI apocalypse, but I couldn't agree with that more.
Sure it has: deep learning. Human common sense is mostly based on intuition. Intuition is a process that finds patterns in unstructured data: classification, relation to other things, and relationships between what we see and how we respond. It has reinforcement mechanisms that improve the models with more exposure. Just like the neural networks.
They kind of indirectly worked on common sense. Not everything is there, and the data sets are too narrow for full common sense. Yet key attributes are there, with amazing results from the likes of DeepMind. So, yeah, we proponents of common sense and intuition are winning. By 4 to 1 in a recent event.
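A toy version of the loop described above, assuming nothing beyond that description: a perceptron-style learner whose weights are reinforced whenever its response to an example is wrong, so the "intuition" improves with exposure. The data and numbers are invented for illustration:

    def train(examples, epochs=20, lr=0.1):
        """Find a linear pattern in labeled examples (label: +1 or -1)."""
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for x, label in examples:
                guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
                if guess != label:              # bad response: reinforce weights
                    w[0] += lr * label * x[0]
                    w[1] += lr * label * x[1]
                    b += lr * label
        return w, b

    # "Experience": points above the line y = x are +1, below are -1.
    data = [((0, 1), 1), ((1, 2), 1), ((2, 1), -1), ((1, 0), -1)]
    print(train(data))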
" saying that a mix of approaches is the answer. Arguably, that is beginning to be proven. Common sense (the CYC approach)? Not so much."
Common sense is one component of a hybrid system. That's what I pushed. That's what I understood from others. CYC itself combines a knowledge base representing our "common sense" with one or more reasoning engines. The NNs leveraging it in their internal connections are often combined with tree searches, heuristics, and other things. Our own brain uses many specialized things working together to achieve an overall result.
So, no, common sense storage by itself won't do much for you. One needs the other parts. Hybrid systems are most likely the only proven form of general intelligence. So, we should default to that.
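One way to picture that hybrid shape, as a rough sketch in which every component is an invented stand-in (a common-sense store that prunes actions, a learned evaluator in place of a real network, and a trivial search over what survives):

    # Made-up "common sense" knowledge: action -> is it ever sensible?
    COMMON_SENSE = {
        "touch_fire": False,
        "open_door": True,
        "eat_food": True,
    }

    def learned_value(state, action):
        """Stand-in for a trained network's score of (state, action)."""
        hungry_bonus = 1.0 if (state == "hungry" and "eat" in action) else 0.0
        return len(action) * 0.1 + hungry_bonus

    def choose(state, actions):
        # 1. Common sense prunes actions the knowledge base marks as senseless.
        sensible = [a for a in actions if COMMON_SENSE.get(a, True)]
        # 2. Search over the survivors, scored by the learned evaluator.
        return max(sensible, key=lambda a: learned_value(state, a))

    print(choose("hungry", ["touch_fire", "open_door", "eat_food"]))  # eat_food

The knowledge base alone chooses nothing, and the evaluator alone would happily score touching fire; it's the combination that behaves sensibly, which is the whole argument for hybrids.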
I don't think he meant it that way. He was well aware he didn't have all the answers. What I believe he was talking about was not the answers but the questions: which ones are people spending their time on? I think he's saying that the questions that most people in AI are spending their time on are not going to give us strong AI. Is that such a controversial claim? I expect most people in the field would agree with it.
Here is something he said to me in April 2009 in a discussion about educational software for the OLPC:
Marvin Minsky: "I've been unsuccessful at getting support for a major project to build the architecture proposed in "The Emotion Machine." The idea is to make an AI that can use multiple methods and commonsense knowledge--so that whenever it gets stuck, it can try another approach. The trouble is that most funding has come under the control of statistical and logical practitioners, or people who think we need to solve low-level problems before we can deal with human-level ones."
Maybe (I'll venture a wild guess) it's just that investing in statistical AI research currently makes more financial sense for the goals of the advertising industry that's funding most of the research these days... You're the product, and all that.
Intuition just adds connections to the other knowledge and reasoning parts. That our brain is hybrid like that is why I advocate more hybrids, all with an intuition-like component.
I have no issue with probabilistic state-space search.