In the 1998 interview, he named Douglas Lenat as one of the roughly ten people then working on common sense, and called CYC the best common-sense system. But he urged the makers of proprietary systems not to keep their data secret, and to distribute copies instead, so the systems can evolve and pick up new ideas, and because we must understand how they work.
Sabbatini: Why are there no computers already working with common sense knowledge?
Minsky: There are very few people working on common sense problems in Artificial Intelligence. I know of no more than five, so probably there are about ten of them out there. Who are these people? There's John McCarthy, at Stanford University, who was the first to formalize common sense using logic. He has a very interesting web page. Then there is Aaron Sloman, from the University of Birmingham, who is probably the best philosopher in the world working on Artificial Intelligence, with the possible exception of Daniel Dennett, though Sloman knows more about computers. Then there's me, of course. Another person working on a strong common-sense project is Douglas Lenat, who directs the CYC project in Austin. Finally, Douglas Hofstadter, who wrote many books about the mind, artificial intelligence, and so on, is working on similar problems.
We talk only to each other and no one else is interested. There is something wrong with computer science.
Sabbatini: Is there any AI software that uses the common sense approach?
Minsky: As I said, the best system based on common sense is CYC, developed by Doug Lenat, a brilliant guy, but he set up a company, Cycorp, and is developing it as a proprietary system. Many computer scientists have a good idea and then make it a secret and start building proprietary systems. They should distribute copies of their systems to graduate students, so that the systems could evolve and get new ideas. We must understand how they work.
[1] http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.h...
A free, updated version of his book "The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind" is available [2].
About the cool retro cover he writes: "I was not consulted about the cover. The book is mainly concerned with the biological, psychological and philosophical significance of virtual machinery. I did not know that the publishers had decided to associate it with paper tape devices until it was published." -Aaron Sloman
A recent update (Feb 2016) references Minsky's "Future of AI Technology" paper on "causal diversity" as being relevant to the "Probabilistic (associative) vs structural learning" section. [3]
Wikipedia:
Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science who was born in Rhodesia (now Zimbabwe). He is the author of several papers on philosophy, epistemology and artificial intelligence. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. He has collaborated with biologist Jackie Chappell on the evolution of intelligence. Since retiring he has been Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham.
Influences
His philosophical ideas were deeply influenced by the writings of Immanuel Kant, Gottlob Frege and Karl Popper, and to a lesser extent by John Austin, Gilbert Ryle, R. M. Hare (who, as his 'personal tutor' at Balliol College, discussed meta-ethics with him), Imre Lakatos and Ludwig Wittgenstein. What he could learn from philosophers left large gaps, which he decided around 1970 that research in artificial intelligence might fill. For example, philosophy of mind could be transformed by testing ideas in working fragments of minds, and philosophy of mathematics could be illuminated by trying to understand how a working robot could develop into a mathematician.
Much of his thinking about AI was influenced by Marvin Minsky and despite his critique of logicism he also learnt much from John McCarthy. His work on emotions can be seen as an elaboration of a paper on "Emotional and motivational controls of cognition", written in the 1960s by Herbert A. Simon. He disagrees with all of these on some topics, while agreeing on others.
[1] https://en.wikipedia.org/wiki/Aaron_Sloman
[2] http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
[3] http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
According to this, Cyc has "about seven million assertions", and the same source notes that Cyc can infer many more assertions from those.
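To make "infer many more assertions" concrete, here is a toy forward-chaining sketch in Python. It only illustrates the general idea of deriving new assertions from stored ones; Cyc's real inference engine works over its CycL language and is far more sophisticated, and the predicates below (`isa`, `genls`) merely borrow CycL's names for flavor:

    # Toy forward chaining: derive new assertions from stored ones by
    # repeatedly applying one rule -- membership (isa) is inherited up
    # the generalization (genls) hierarchy -- until nothing new appears.

    facts = {
        ("isa", "Fido", "Dog"),
        ("genls", "Dog", "Mammal"),
        ("genls", "Mammal", "Animal"),
    }

    def forward_chain(facts):
        derived = set(facts)
        while True:
            new = {
                ("isa", x, super_c)
                for (p1, x, c) in derived if p1 == "isa"
                for (p2, sub_c, super_c) in derived
                if p2 == "genls" and sub_c == c
            } - derived
            if not new:
                return derived
            derived |= new

    print(forward_chain(facts) - facts)
    # {('isa', 'Fido', 'Mammal'), ('isa', 'Fido', 'Animal')}  (order may vary)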
http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
Minsky:
What is the answer? My opinion is that we can make versatile AI machines only by using several different kinds of representations in the same system! This is because no single method works well for all problems; each is good for certain tasks but not for others. Also, different kinds of problems need different kinds of reasoning. For example, much of the reasoning used in computer programming can be logic-based. However, most real-world problems need methods that are better at matching patterns and constructing analogies, making decisions based on previous experience with examples, or using types of explanations that have worked well on similar problems in the past.

How can we encourage people to make systems that use multiple methods for representing and reasoning? First we'll have to change some present-day ideas. For example, many students like to ask, "Is it better to represent knowledge with Neural Nets, Logical Deduction, Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural Language?" My teaching method is to try to get them to ask a different kind of question: "First decide what kinds of reasoning might be best for each different kind of problem -- and then find out which combination of representations might work well in each case." A trick that might help them to start doing this is to begin by asking, for each problem, "How many different factors are involved, and how much influence does each factor have?" This leads to a sort of "theory-matrix."
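One way to read Minsky's "theory-matrix" (which he fleshes out in the Causal Diversity paper linked below) is as a lookup from rough problem characteristics to candidate representations. Here is a minimal Python sketch of that reading; the bucket boundaries and cell labels are a loose paraphrase for illustration, not the paper's exact table:

    # A toy "theory-matrix": pick a representation style from two coarse
    # factors -- how many causes a problem has, and how strongly they
    # interact. Cell contents are illustrative, not canonical.

    MATRIX = {
        ("few",  "weak"):   "logical deduction / rule-based systems",
        ("few",  "strong"): "qualitative and case-based reasoning",
        ("some", "weak"):   "semantic networks and frames",
        ("some", "strong"): "analogy and pattern matching",
        ("many", "weak"):   "statistical / neural-net methods",
        ("many", "strong"): "hybrid: decompose, then combine several methods",
    }

    def suggest_representation(num_causes, interaction):
        """num_causes: 'few' | 'some' | 'many'; interaction: 'weak' | 'strong'."""
        return MATRIX[(num_causes, interaction)]

    print(suggest_representation("few", "weak"))     # classical logic fits
    print(suggest_representation("many", "strong"))  # no single method fits

The point of the exercise is Minsky's: the question "which representation is best?" only has an answer per cell, never globally.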
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#note-...
Sloman:
In retrospect, it seems that a mixture of the probabilistic and deterministic approaches is required, within the study of architectures for complete agents: a more general study than the investigation of algorithms and representations that dominated most of the early work on AI (partly because of the dreadful limitations of speed and memory of even the most expensive and sophisticated computers available in the 1960s and 1970s).
There are many ways such hybrid mechanisms could be implemented, and my recent work on different processing layers within an integrated architecture (combining reactive, deliberative and meta-management layers) indicates some features of a hybrid system, with probabilistic associations dominating the reactive layer and structure manipulations being more important in the deliberative layer. For recent papers on this see
- The Cogaff papers directory http://www.cs.bham.ac.uk/research/cogaff/
- My "talks" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
- The rapidly growing "miscellaneous" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREAD...
More specific though less comprehensive models have been proposed by other researchers, one of the most impressive being the ACT-R system developed by John Anderson and his collaborators. See http://act.psy.cmu.edu/.
Added 8 Feb 2016: Minsky's paper on "causal diversity" is also relevant:
Marvin L. Minsky, 1992, Future of AI Technology, in Toshiba Review, 47, 7, http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
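Sloman's reactive/deliberative/meta-management layering above lends itself to a compact sketch. The following Python toy is one possible reading, not CogAff code, and all class and method names are invented for illustration: probabilistic associations drive the reactive layer, explicit structure manipulation drives the deliberative layer, and a meta-management layer decides which one gets control.

    import random

    class ReactiveLayer:
        """Fast, probabilistic stimulus -> response associations."""
        def __init__(self):
            self.associations = {"obstacle": [("swerve", 0.8), ("brake", 0.2)]}

        def react(self, stimulus):
            options = self.associations.get(stimulus)
            if not options:
                return None
            actions, weights = zip(*options)
            return random.choices(actions, weights=weights)[0]

    class DeliberativeLayer:
        """Slower reasoning over explicit structures (here, a plan as a list)."""
        def plan(self, goal):
            recipes = {"reach_exit": ["locate_exit", "compute_route", "follow_route"]}
            return recipes.get(goal, [])

    class MetaManagement:
        """Monitors both layers and decides which one controls action."""
        def choose(self, stimulus, goal, reactive, deliberative):
            quick = reactive.react(stimulus)
            if quick is not None:           # urgent stimulus: reactions win
                return [quick]
            return deliberative.plan(goal)  # otherwise, deliberate

    agent = MetaManagement()
    print(agent.choose("obstacle", "reach_exit", ReactiveLayer(), DeliberativeLayer()))
    print(agent.choose(None, "reach_exit", ReactiveLayer(), DeliberativeLayer()))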
----
Will Wright discusses how he applies several different kinds of representations to make hybrid models for games in the (democratically elected) "Dynamics" section of his talk "Lessons in Game Design", where he covers "nested dynamics / emergence" and "paradigms".
http://tvtropes.org/pmwiki/pmwiki.php/Main/XanatosGambit
I thought the Cyc project was worth a long-term investment, but other theories might be simultaneously true.
My reaction exactly :-)
I got to attend their training in 2003, at Cycorp, which is still around [1]. Some REALLY amazingly smart people.
I wonder if he's saying "it's done!" in hopes of not getting buried by DeepMind... kind of a last-ditch effort for "Strong AI".
The creator describes two interesting mechanical properties his parts exhibit:
> synclastic bending and auxetic behavior. Synclastic materials have the fascinating ability to assume compound curvature along two (often orthogonal) directions. One can wrap a sphere easily in a synclastic material without folding it, whereas attempting the same with an anticlastic material, such as paper, would require numerous folds. Auxetic behavior is found in materials with a negative Poisson's ratio, which relates the deformation in one direction when the material is stressed in a perpendicular direction. When compressed in one direction, auxetic materials contract in the other, and when stretched, they expand. In other words, an auxetic nail would narrow as it was hammered into a board and expand in diameter when pulled out of the board.
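The Poisson's-ratio relationship in that quote is easy to check numerically. Here is a minimal sketch in Python (small-strain linear elasticity; the material names and ratio values are illustrative assumptions, not measurements):

    # Transverse strain from axial strain via Poisson's ratio (nu):
    #   eps_transverse = -nu * eps_axial   (small-strain linear elasticity)
    # A negative nu (auxetic) gives the transverse strain the same sign as
    # the axial strain: a stretched auxetic sample gets wider, not narrower.

    def transverse_strain(nu, eps_axial):
        return -nu * eps_axial

    # Illustrative values only; real materials vary.
    for name, nu in [("rubber-like", 0.5), ("cork-like", 0.0), ("auxetic foam", -0.7)]:
        eps = transverse_strain(nu, eps_axial=0.10)  # 10% stretch
        print(f"{name:12s} nu={nu:+.1f} -> transverse strain {eps:+.2%}")
    # rubber-like  nu=+0.5 -> transverse strain -5.00%
    # cork-like    nu=+0.0 -> transverse strain -0.00%
    # auxetic foam nu=-0.7 -> transverse strain +7.00%

The sign flip in the last line is exactly the nail behavior described above: stretch it and it expands sideways.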
Fortunately, my scan of their website seems to indicate they have released their ontologies under a Creative Commons license.
http://www.cyc.com/platform/opencyc/
http://www.cyc.com/documentation/opencyc-license/
Read Lenat's "Why AM and Eurisko Appear to Work".[1]