zlacker

[parent] [thread] 4 comments
1. DonHop+(OP)[view] [source] 2016-03-16 22:02:12
That's not what Minsky or Sloman said they believed, nor what I meant to imply.

http://web.media.mit.edu/~minsky/papers/CausalDiversity.html

Minsky:

What is the answer? My opinion is that we can make versatile AI machines only by using several different kinds of representations in the same system! This is because no single method works well for all problems; each is good for certain tasks but not for others. Also different kinds of problems need different kinds of reasoning. For example, much of the reasoning used in computer programming can be logic-based. However, most real-world problems need methods that are better at matching patterns and constructing analogies, making decisions based on previous experience with examples, or using types of explanations that have worked well on similar problems in the past.

How can we encourage people to make systems that use multiple methods for representing and reasoning? First we'll have to change some present-day ideas. For example, many students like to ask, "Is it better to represent knowledge with Neural Nets, Logical Deduction, Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural Language?" My teaching method is to try to get them to ask a different kind of question. "First decide what kinds of reasoning might be best for each different kind of problem -- and then find out which combination of representations might work well in each case."

A trick that might help them to start doing this is to begin by asking, for each problem, "How many different factors are involved, and how much influence does each factor have?" This leads to a sort of "theory-matrix."
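A rough sketch of how Minsky's "theory-matrix" question might be operationalized (entirely my construction, not code from the paper; the thresholds and method names are illustrative assumptions): classify a problem by how many factors it involves and how much influence each factor has, then dispatch to a representation suited to that regime.

```python
# Hypothetical dispatcher in the spirit of Minsky's "theory-matrix":
# few strong factors -> symbolic/logical methods; many weak factors ->
# statistical/connectionist methods; otherwise a hybrid.

def choose_method(n_factors, max_influence):
    """n_factors: how many factors the problem involves.
    max_influence: strength (0..1) of the single strongest factor."""
    if n_factors <= 3 and max_influence > 0.5:
        return "logical deduction"
    if n_factors > 10 and max_influence < 0.2:
        return "neural net"
    return "hybrid (frames + statistics)"

print(choose_method(2, 0.9))    # few dominant factors -> logical deduction
print(choose_method(50, 0.05))  # many weak factors -> neural net
print(choose_method(5, 0.3))    # in between -> hybrid
```

The point is not the particular cutoffs but the shape of the question: the choice of representation is itself a decision to be reasoned about per problem, not a camp to join.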

http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#note-...

Sloman:

In retrospect, it seems that a mixture of the probabilistic and deterministic approaches is required, within the study of architectures for complete agents: a more general study than the investigation of algorithms and representations that dominated most of the early work on AI (partly because of the dreadful limitations of speed and memory of even the most expensive and sophisticated computers available in the 1960s and 1970s).

There are many ways such hybrid mechanisms could be implemented, and my recent work on different processing layers within an integrated architecture (combining reactive, deliberative and meta-management layers) indicates some features of a hybrid system, with probabilistic associations dominating the reactive layer and structure manipulations being more important in the deliberative layer. For recent papers on this see

- The Cogaff papers directory http://www.cs.bham.ac.uk/research/cogaff/

- My "talks" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/

- The rapidly growing "miscellaneous" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREAD...

More specific though less comprehensive models have been proposed by other researchers, one of the most impressive being the ACT-R system developed by John Anderson and his collaborators. See http://act.psy.cmu.edu/.

Added 8 Feb 2016 Minsky's paper on "causal diversity" is also relevant:

Marvin L. Minsky, 1992, Future of AI Technology, in Toshiba Review, 47, 7, http://web.media.mit.edu/~minsky/papers/CausalDiversity.html
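Sloman's reactive / deliberative / meta-management split could be sketched roughly as follows (a toy of my own, not CogAff code; all names and the reflex table are illustrative assumptions): probabilistic associations dominate the reactive layer, explicit structure manipulation the deliberative layer, and a meta layer monitors the other two.

```python
# Minimal sketch of a three-layer agent in the spirit of the CogAff
# reactive / deliberative / meta-management architecture.
import random

class LayeredAgent:
    def __init__(self):
        # Reactive layer: stimulus -> weighted responses (probabilistic
        # associations, fast and uninterpreted).
        self.reflexes = {"obstacle": [("swerve", 0.9), ("brake", 0.1)]}
        self.plan = []     # deliberative layer: explicit structure to manipulate
        self.history = []  # meta-management layer: record of its own decisions

    def reactive(self, stimulus):
        options = self.reflexes.get(stimulus)
        if options:
            actions, weights = zip(*options)
            return random.choices(actions, weights=weights)[0]
        return None

    def deliberate(self, goal):
        # Structure manipulation: build a plan as an explicit sequence of steps.
        self.plan = [f"step-{i}-toward-{goal}" for i in range(3)]
        return self.plan[0]

    def act(self, stimulus, goal):
        # Meta-management: let reflexes pre-empt deliberation when they fire,
        # and log every choice so the agent can inspect its own behaviour.
        action = self.reactive(stimulus) or self.deliberate(goal)
        self.history.append((stimulus, goal, action))
        return action

agent = LayeredAgent()
print(agent.act("obstacle", "exit"))  # reflex fires: "swerve" or "brake"
print(agent.act(None, "exit"))        # no reflex, deliberation plans a step
```

The interesting part is the interface in `act`: the two layers trade in different currencies (association strengths vs. plan structures), which is exactly why a single uniform representation struggles.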

----

Will Wright discusses how he applies several different kinds of representations to make hybrid models for games, in the democratically elected "Dynamics" section of his talk, "Lessons in Game Design", on "nested dynamics / emergence" and "paradigms".

https://youtu.be/CdgQyq3hEPo?t=35m7s

replies(1): >>argona+82
2. argona+82[view] [source] 2016-03-16 22:25:25
>>DonHop+(OP)
> He named Douglas Lenat as one of the ten or so people working on common sense (at the time of the interview in 1998), and said the best system based on common sense is CYC.

I also should have been stronger in my statement. Very few active ML/AI researchers believe the database + logical deduction / inference method will even play a nontrivial role in any future AGI system.

replies(1): >>abeced+g9
3. abeced+g9[view] [source] [discussion] 2016-03-16 23:50:01
>>argona+82
I don't have any strong opinion about this, but it's suggestive that AlphaGo combines a classical AI search with neural nets. Another example: Steven Pinker's theory that human language uses a combo of neural-net-style and logic-style processing (Words and Rules). This isn't to say that Skynet will run part of itself on Prolog -- more like, getting the best performance over the broadest range of domains will need multiple techniques.
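The AlphaGo-style combination can be illustrated in miniature (my toy, not AlphaGo's actual architecture, which pairs deep nets with Monte Carlo tree search): classical game-tree search whose leaf evaluation is delegated to a learned value function. Here the "network" is a stand-in lambda over a trivial subtraction game.

```python
# Classical negamax search with a pluggable learned evaluation at the leaves.

def negamax(state, depth, moves, value):
    """state: int position; moves(state) -> successor states;
    value(state) -> heuristic score from the side-to-move's view."""
    children = moves(state)
    if depth == 0 or not children:
        return value(state)
    return max(-negamax(c, depth - 1, moves, value) for c in children)

# Tiny game: from n you may move to n-1 or n-2; stranded at 0 means you lose.
moves = lambda n: [n - 1, n - 2] if n >= 2 else ([n - 1] if n == 1 else [])
value = lambda n: -1.0 if n == 0 else 0.0  # stand-in "value network"

print(negamax(4, 10, moves, value))  # -> 1.0: 4 is a win for the mover
print(negamax(3, 10, moves, value))  # -> -1.0: 3 is a loss (multiple of 3)
```

Swapping the lambda for a trained net changes nothing structural, which is the hybrid point: the symbolic search and the statistical evaluator slot together through one narrow interface.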

One advantage of logic + deduction is potential clarity ahead of time about what the system might do, and ability to explain its actions. If AI safety is your top concern, those seem at least potentially valuable even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)
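The explainability advantage can be made concrete with a minimal sketch (my toy, not CYC or any real system): forward chaining over a fact base where every derived fact records the premises that produced it, so the system can answer "why do you believe this?"

```python
# Forward-chaining inference that keeps an explanation for each derived fact.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (premises_tuple, conclusion).
    Returns (all known facts, why: derived fact -> premises that proved it)."""
    known = set(facts)
    why = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                why[conclusion] = premises
                changed = True
    return known, why

facts = {"socrates is a man"}
rules = [(("socrates is a man",), "socrates is mortal")]
known, why = forward_chain(facts, rules)
print(why["socrates is mortal"])  # -> ('socrates is a man',)
```

Every conclusion comes with an audit trail for free; a neural net's weights offer nothing comparable, which is the transparency trade-off being discussed.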

replies(2): >>arthur+Ib >>argona+2j2
4. arthur+Ib[view] [source] [discussion] 2016-03-17 00:19:15
>>abeced+g9
> One advantage of logic + deduction is potential clarity ahead of time about what the system might do, and ability to explain its actions. If AI safety is your top concern, those seem at least potentially valuable even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)

i'm not particularly fearful of an AI apocalypse, but i couldn't agree with that more.

5. argona+2j2[view] [source] [discussion] 2016-03-18 04:24:12
>>abeced+g9
I'm talking about logical inference on a database of facts in the hopes of representing common sense.

I have no issue with probabilistic state-space search.
