http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
Minsky:
What is the answer? My opinion is that we can make versatile AI machines only by using several different kinds of representations in the same system! This is because no single method works well for all problems; each is good for certain tasks but not for others. Also different kinds of problems need different kinds of reasoning. For example, much of the reasoning used in computer programming can be logic-based. However, most real-world problems need methods that are better at matching patterns and constructing analogies, making decisions based on previous experience with examples, or using types of explanations that have worked well on similar problems in the past.

How can we encourage people to make systems that use multiple methods for representing and reasoning? First we'll have to change some present-day ideas. For example, many students like to ask, "Is it better to represent knowledge with Neural Nets, Logical Deduction, Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural Language?" My teaching method is to try to get them to ask a different kind of question. "First decide what kinds of reasoning might be best for each different kind of problem -- and then find out which combination of representations might work well in each case."

A trick that might help them to start doing this is to begin by asking, for each problem, "How many different factors are involved, and how much influence does each factor have?" This leads to a sort of "theory-matrix."
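Minsky's "theory-matrix" question (how many factors, how much influence each?) can be pictured as a small lookup over those two axes. The sketch below is an illustrative assumption, not a table from the paper: the thresholds and method names are hypothetical, chosen to echo the causal-diversity idea that few strong causes favor logic while many weak interacting causes favor statistical methods.

```python
# Hypothetical sketch of a Minsky-style "theory-matrix": classify a
# problem by how many factors are involved and how strongly the biggest
# factor matters, then look up a representation family. Thresholds and
# labels are illustrative assumptions, not Minsky's own matrix.

def suggest_representation(n_factors, max_influence):
    """Pick a representation family from a toy 2x2 theory-matrix.

    n_factors: how many causal factors the problem involves.
    max_influence: fraction of the outcome the strongest factor explains.
    """
    few = n_factors <= 3            # "few causes" column
    dominant = max_influence >= 0.5  # one factor largely decides the outcome
    if few and dominant:
        return "logical deduction"               # few decisive causes: rules work
    if few and not dominant:
        return "qualitative / fuzzy reasoning"   # few causes, fuzzy effects
    if not few and dominant:
        return "case-based / analogical reasoning"
    return "neural nets / statistical methods"   # many weak interacting causes
```

The point is only that "which representation?" becomes a function of the problem's causal structure rather than a matter of school loyalty.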
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#note-...
Sloman:
In retrospect, it seems that a mixture of the probabilistic and deterministic approaches is required, within the study of architectures for complete agents: a more general study than the investigation of algorithms and representations that dominated most of the early work on AI (partly because of the dreadful limitations of speed and memory of even the most expensive and sophisticated computers available in the 1960s and 1970s).
There are many ways such hybrid mechanisms could be implemented, and my recent work on different processing layers within an integrated architecture (combining reactive, deliberative and meta-management layers) indicates some features of a hybrid system, with probabilistic associations dominating the reactive layer and structure manipulations being more important in the deliberative layer. For recent papers on this see
- The Cogaff papers directory http://www.cs.bham.ac.uk/research/cogaff/
- My "talks" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
- The rapidly growing "miscellaneous" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREAD...
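Sloman's division of labor (probabilistic associations dominating the reactive layer, structure manipulation dominating the deliberative layer) can be sketched as a toy two-layer agent. This is an assumed structure for illustration only, not Sloman's implementation, and it omits the meta-management layer entirely; all class and action names are hypothetical.

```python
import random

# Toy sketch of two of Sloman's processing layers (assumed structure):
# a reactive layer fires probabilistic stimulus->action associations,
# and a deliberative layer manipulates an explicit goal structure
# whenever no reflex fires.

class ReactiveLayer:
    def __init__(self, associations):
        # associations: stimulus -> list of (action, firing probability)
        self.associations = associations

    def react(self, stimulus, rng):
        for action, p in self.associations.get(stimulus, []):
            if rng.random() < p:
                return action
        return None  # no reflex fired; defer to deliberation

class DeliberativeLayer:
    def deliberate(self, stimulus, goals):
        # Structure manipulation: rewrite the explicit goal list.
        if stimulus in goals:
            goals.remove(stimulus)       # goal achieved; retire it
            return f"tick-off({stimulus})"
        goals.append(stimulus)           # adopt a new subgoal
        return f"plan-for({stimulus})"

class HybridAgent:
    def __init__(self, associations, seed=0):
        self.reactive = ReactiveLayer(associations)
        self.deliberative = DeliberativeLayer()
        self.goals = []
        self.rng = random.Random(seed)

    def act(self, stimulus):
        # Reactive layer gets first refusal; deliberation handles the rest.
        action = self.reactive.react(stimulus, self.rng)
        return action if action is not None else \
            self.deliberative.deliberate(stimulus, self.goals)
```

Even this caricature shows the hybrid character Sloman describes: the lower layer is probabilistic and stateless, the upper layer symbolic and stateful.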
More specific though less comprehensive models have been proposed by other researchers, one of the most impressive being the ACT-R system developed by John Anderson and his collaborators. See http://act.psy.cmu.edu/.
Added 8 Feb 2016 Minsky's paper on "causal diversity" is also relevant:
Marvin L. Minsky, 1992, Future of AI Technology, in Toshiba Review, 47, 7, http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
----
Will Wright discusses how he applies several different kinds of representations to build hybrid models for games, in the audience-voted "Dynamics" section of his talk "Lessons in Game Design", under "nested dynamics / emergence" and "paradigms".
I should also have been stronger in my statement. Very few active ML/AI researchers believe the database + logical-deduction/inference approach will play even a nontrivial role in any future AGI system.
One advantage of logic + deduction is potential clarity, ahead of time, about what the system might do, and the ability to explain its actions afterward. If AI safety is your top concern, those properties seem at least potentially valuable, even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)
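That explainability claim can be made concrete with a minimal forward-chaining rule engine that records why each fact was derived, so any conclusion can be traced back to the given facts. The rules and facts below are illustrative assumptions, not taken from any particular system.

```python
# Minimal sketch of the transparency argument: forward chaining that
# keeps an audit trail, so every derived fact can be explained.
# Rules are (premises, conclusion) pairs; facts are plain strings.

def forward_chain(facts, rules):
    """Derive everything derivable; return (known facts, explanations).

    The explanation map sends each derived fact to the premises that
    produced it, which is exactly what lets the system justify itself.
    """
    known = set(facts)
    why = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                why[conclusion] = premises  # record the audit trail
                changed = True
    return known, why

def explain(fact, why):
    """Render a fact's derivation as a human-readable justification."""
    if fact not in why:
        return f"{fact}: given"
    return f"{fact}: because " + " and ".join(
        explain(p, why) for p in why[fact])
```

For example, with facts `{"rainy", "outside"}` and rules `rainy & outside -> wet`, `wet -> cold`, `explain("cold", why)` unwinds the whole chain down to the given facts; nothing about the system's behavior is opaque.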
I'm not particularly fearful of an AI apocalypse, but I couldn't agree with that more.
I have no issue with probabilistic state-space search.