He named Douglas Lenat as one of the ten or so people working on common sense (at the time of the interview in 1998), and said the best common-sense system is CYC. But he argued that builders of proprietary systems should not keep their data secret, and should distribute copies so the systems can evolve and pick up new ideas, and because we must understand how they work.
Sabbatini: Why are there no computers already working with common sense knowledge?
Minsky: There are very few people working on common sense problems in Artificial Intelligence. I know of no more than five people, so probably there are about ten of them out there. Who are these people? There's John McCarthy, at Stanford University, who was the first to formalize common sense using logic. He has a very interesting web page. Then there is Aaron Sloman, from the University of Birmingham, who's probably the best philosopher in the world working on Artificial Intelligence, with the exception of Daniel Dennett, though Sloman knows more about computers. Then there's me, of course. Another person working on a strong common-sense project is Douglas Lenat, who directs the CYC project in Austin. Finally, Douglas Hofstadter, who wrote many books about the mind, artificial intelligence, etc., is working on similar problems.
We talk only to each other and no one else is interested. There is something wrong with computer science.
Sabbatini: Is there any AI software that uses the common sense approach?
Minsky: As I said, the best system based on common sense is CYC, developed by Doug Lenat, a brilliant guy, but he set up a company, Cycorp, and is developing it as a proprietary system. Many computer scientists have a good idea and then make it a secret and start building proprietary systems. They should distribute copies of their system to graduate students, so that the systems could evolve and get new ideas. We must understand how they work.
[1] http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.h...
OK.
> There is something wrong with computer science.
Or there is something wrong with you (Minsky). If you're brilliant, and the rest of the world doesn't follow you, it doesn't mean that there's something wrong with them. It may simply be that you are brilliant and wrong.
>He [Aaron Sloman, one of the small group of "each other" who talk to each other] disagrees with all of these on some topics, while agreeing on others.
http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
Minsky:
What is the answer? My opinion is that we can make versatile AI machines only by using several different kinds of representations in the same system! This is because no single method works well for all problems; each is good for certain tasks but not for others. Also different kinds of problems need different kinds of reasoning. For example, much of the reasoning used in computer programming can be logic-based. However, most real-world problems need methods that are better at matching patterns and constructing analogies, making decisions based on previous experience with examples, or using types of explanations that have worked well on similar problems in the past.

How can we encourage people to make systems that use multiple methods for representing and reasoning? First we'll have to change some present-day ideas. For example, many students like to ask, "Is it better to represent knowledge with Neural Nets, Logical Deduction, Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural Language?" My teaching method is to try to get them to ask a different kind of question: "First decide what kinds of reasoning might be best for each different kind of problem -- and then find out which combination of representations might work well in each case."

A trick that might help them to start doing this is to begin by asking, for each problem, "How many different factors are involved, and how much influence does each factor have?" This leads to a sort of "theory-matrix."
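Minsky's "theory-matrix" can be made concrete with a toy dispatcher. Here is a minimal sketch in Python, with all names (Problem, REASONERS, the factor labels) invented for illustration; it shows only the idea of scoring reasoners against a problem's factor weights, and is not anything from Minsky's paper:

    from dataclasses import dataclass

    @dataclass
    class Problem:
        description: str
        factors: dict  # factor -> weight in [0, 1], e.g. {"logical_structure": 0.9}

    def solve_by_logic(p):   return f"deduction over: {p.description}"
    def solve_by_analogy(p): return f"nearest past case for: {p.description}"
    def solve_by_rules(p):   return f"rule firing on: {p.description}"

    # The "theory-matrix": each reasoner advertises the factors it handles well.
    REASONERS = {
        solve_by_logic:   {"logical_structure": 1.0},
        solve_by_analogy: {"pattern_matching": 0.8, "prior_examples": 1.0},
        solve_by_rules:   {"prior_examples": 0.6, "logical_structure": 0.4},
    }

    def pick_reasoner(problem):
        # Score each reasoner by how well its strengths align with the
        # problem's factor weights; the best match handles the problem.
        def score(strengths):
            return sum(problem.factors.get(f, 0.0) * w for f, w in strengths.items())
        return max(REASONERS, key=lambda r: score(REASONERS[r]))

    p = Problem("verify a sorting routine", {"logical_structure": 0.9})
    print(pick_reasoner(p)(p))  # -> deduction over: verify a sorting routine

A real system would presumably let several reasoners run and compare or combine their answers, which is closer to the multiple-representation system Minsky is asking for.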
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#note-...
Sloman:
In retrospect, it seems that a mixture of the probabilistic and deterministic approaches is required, within the study of architectures for complete agents: a more general study than the investigation of algorithms and representations that dominated most of the early work on AI (partly because of the dreadful limitations of speed and memory of even the most expensive and sophisticated computers available in the 1960s and 1970s).
There are many ways such hybrid mechanisms could be implemented, and my recent work on different processing layers within an integrated architecture (combining reactive, deliberative and meta-management layers) indicates some features of a hybrid system, with probabilistic associations dominating the reactive layer and structure manipulations being more important in the deliberative layer. For recent papers on this see
- The Cogaff papers directory http://www.cs.bham.ac.uk/research/cogaff/
- My "talks" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
- The rapidly growing "miscellaneous" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREAD...
More specific though less comprehensive models have been proposed by other researchers, one of the most impressive being the ACT-R system developed by John Anderson and his collaborators. See http://act.psy.cmu.edu/.
Added 8 Feb 2016: Minsky's paper on "causal diversity" is also relevant:
Marvin L. Minsky, 1992, Future of AI Technology, in Toshiba Review, 47, 7, http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
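To make Sloman's layered picture concrete, here is a minimal Python sketch of the reactive / deliberative / meta-management split. Every class and method name is invented for illustration; this is not the CogAff implementation, just the idea that cheap probabilistic associations drive the reactive layer while the deliberative layer manipulates explicit plan structures:

    import random

    class ReactiveLayer:
        """Fast stimulus -> response associations, here a weighted lookup."""
        ASSOCIATIONS = {"obstacle": [("turn_left", 0.7), ("turn_right", 0.3)]}

        def react(self, stimulus):
            options = self.ASSOCIATIONS.get(stimulus)
            if not options:
                return None
            actions, weights = zip(*options)
            return random.choices(actions, weights=weights)[0]

    class DeliberativeLayer:
        """Slower structure manipulation: build an explicit plan."""
        def plan(self, goal):
            return ["survey_area", "choose_route", f"move_to_{goal}"]

    class MetaManagement:
        """Monitors the other layers and decides which one should act."""
        def act(self, agent, stimulus, goal):
            reaction = agent.reactive.react(stimulus)
            if reaction is not None:
                return reaction                  # urgent: let the reflex win
            if not agent.current_plan:           # otherwise deliberate as needed
                agent.current_plan = agent.deliberative.plan(goal)
            return agent.current_plan.pop(0)

    class Agent:
        def __init__(self):
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
            self.meta = MetaManagement()
            self.current_plan = []

    agent = Agent()
    print(agent.meta.act(agent, "obstacle", "base"))  # reflex, e.g. turn_left
    print(agent.meta.act(agent, "clear", "base"))     # plan step: survey_area

The meta-management layer here is trivially simple (reflexes pre-empt plans), but it is the natural place for the monitoring, strategy switching, and self-evaluation Sloman describes.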
----
Will Wright discusses how he applies several different kinds of representations to make hybrid models for games, in the democratically elected "Dynamics" section of his talk, "Lessons in Game Design", on "nested dynamics / emergence" and "paradigms".
I also should have been stronger in my statement. Very few active ML/AI researchers believe the database + logical deduction/inference method will play even a nontrivial role in any future AGI system.
One advantage of logic + deduction is potential clarity, ahead of time, about what the system might do, and the ability to explain its actions. If AI safety is your top concern, those seem at least potentially valuable even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)
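As an illustration of that transparency, here is a minimal sketch of a forward-chaining engine that records why each fact was derived, so it can print a derivation tree afterwards. The facts and rules are made up for illustration; real systems like CYC are vastly more elaborate:

    facts = {"bird(tweety)"}
    rules = [
        ("can_fly(tweety)", {"bird(tweety)"}, "birds typically fly"),
        ("needs_sky(tweety)", {"can_fly(tweety)"}, "flying things need open sky"),
    ]
    provenance = {}  # derived fact -> (premises, rule description)

    # Forward-chain to a fixed point, recording why each fact was added.
    changed = True
    while changed:
        changed = False
        for conclusion, premises, why in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                provenance[conclusion] = (premises, why)
                changed = True

    def explain(fact, depth=0):
        """Print the derivation tree for a concluded fact."""
        pad = "  " * depth
        if fact not in provenance:
            print(f"{pad}{fact}  [given]")
            return
        premises, why = provenance[fact]
        print(f"{pad}{fact}  [because: {why}]")
        for p in premises:
            explain(p, depth + 1)

    explain("needs_sky(tweety)")
    # needs_sky(tweety)  [because: flying things need open sky]
    #   can_fly(tweety)  [because: birds typically fly]
    #     bird(tweety)  [given]

A statistical model can be coaxed into post-hoc rationalizations, but a trace like this is the system's actual reason for its conclusion, which is what makes deduction attractive for safety arguments.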
I'm not particularly fearful of an AI apocalypse, but I couldn't agree with that more.