zlacker

Douglas Lenat's Cyc is now being commercialized

submitted by _freu+(OP) on 2016-03-16 20:48:41 | 59 points 49 comments
[view article] [source] [links] [go to bottom]
replies(11): >>mchahn+u2 >>DonHop+H2 >>lcall+O2 >>nikola+U2 >>bgribb+G3 >>dcrole+M5 >>tkosan+dg >>nickps+9h >>mark_l+Nk >>100ide+un >>catpol+Ap1
1. mchahn+u2[view] [source] 2016-03-16 21:12:30
>>_freu+(OP)
> Cyc has been given many thousands of facts

Are thousands enough? Maybe the article misstated this.

replies(1): >>aidenn+E5
2. DonHop+H2[view] [source] 2016-03-16 21:14:29
>>_freu+(OP)
Marvin Minsky said "We need common-sense knowledge – and programs that can use it. Common sense computing needs several ways of representing knowledge. It is harder to make a computer housekeeper than a computer chess-player, because the housekeeper must deal with a wider range of situations." [1]

He named Douglas Lenat as one of the ten or so people working on common sense (at the time of the interview in 1998), and said the best system based on common sense is CYC. But he called on builders of proprietary systems not to keep the data secret, and to distribute copies, so that the systems can evolve and get new ideas, and because we must understand how they work.

Sabbatini: Why are there no computers already working with common-sense knowledge?

Minsky: There are very few people working with common sense problems in Artificial Intelligence. I know of no more than five people, so probably there are about ten of them out there. Who are these people? There’s John McCarthy, at Stanford University, who was the first to formalize common sense using logic. He has a very interesting web page. Then, there is Harry Sloaman, from the University of Edinburgh, who’s probably the best philosopher in the world working on Artificial Intelligence, with the exception of Daniel Dennett, but he knows more about computers. Then there’s me, of course. Another person working on a strong common-sense project is Douglas Lenat, who directs the CYC project in Austin. Finally, Douglas Hofstadter, who wrote many books about the mind, artificial intelligence, etc., is working on similar problems.

We talk only to each other and no one else is interested. There is something wrong with computer sciences.

Sabbatini: Is there any AI software that uses the common sense approach?

Minsky: As I said, the best system based on common sense is CYC, developed by Doug Lenat, a brilliant guy, but he set up a company, CYCorp, and is developing it as a proprietary system. Many computer scientists have a good idea and then make it a secret and start making proprietary systems. They should distribute copies of their system to graduate students, so that they could evolve and get new ideas. We must understand how they work.

[1] http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.h...

replies(3): >>DonHop+v3 >>Animal+B3 >>nickps+Wh
3. lcall+O2[view] [source] 2016-03-16 21:16:19
>>_freu+(OP)
FWIW: a related project, but with a different and, I hope, more complete vision for storing ~"any/all knowledge": http://onemodel.org . (AGPL)
4. nikola+U2[view] [source] 2016-03-16 21:17:20
>>_freu+(OP)
Have some respect and don't call him "Doug", please! He's always been "Douglas Lenat"! Although its results have been questioned, Eurisko [0] (circa 1976) to me is a bigger AI achievement than the much fanfared AlphaGo! I have great respect for Douglas Lenat!

[0]: https://en.wikipedia.org/wiki/Eurisko

replies(4): >>nikola+E3 >>dang+F3 >>zippy+jn >>Animat+xu
◧◩
5. DonHop+v3[view] [source] [discussion] 2016-03-16 21:22:30
>>DonHop+H2
"Harry Sloaman" must have been an incorrect transcription of Aaron Sloman [1].

A free, updated version of his book "The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind" is available [2].

About the cool retro cover he writes: "I was not consulted about the cover. The book is mainly concerned with the biological, psychological and philosophical significance of virtual machinery. I did not know that the publishers had decided to associate it with paper tape devices until it was published." -Aaron Sloman

A recent update (Feb 2016) references Minsky's "Future of AI Technology" paper on "causal diversity" as being relevant to the "Probabilistic (associative) vs structural learning" section. [3]

Wikipedia:

Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science who was born in Rhodesia (now Zimbabwe). He is the author of several papers on philosophy, epistemology and artificial intelligence. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. He has collaborated with biologist Jackie Chappell on the evolution of intelligence. Since retiring he is Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham.

Influences

His philosophical ideas were deeply influenced by the writings of Immanuel Kant, Gottlob Frege and Karl Popper, and to a lesser extent by John Austin, Gilbert Ryle, R. M. Hare (who, as his 'personal tutor' at Balliol College discussed meta-ethics with him), Imre Lakatos and Ludwig Wittgenstein. What he could learn from philosophers left large gaps, which he decided around 1970 research in artificial intelligence might fill. E.g. philosophy of mind could be transformed by testing ideas in working fragments of minds, and philosophy of mathematics could be illuminated by trying to understand how a working robot could develop into a mathematician.

Much of his thinking about AI was influenced by Marvin Minsky and despite his critique of logicism he also learnt much from John McCarthy. His work on emotions can be seen as an elaboration of a paper on "Emotional and motivational controls of cognition", written in the 1960s by Herbert A. Simon. He disagrees with all of these on some topics, while agreeing on others.

[1] https://en.wikipedia.org/wiki/Aaron_Sloman

[2] http://www.cs.bham.ac.uk/research/projects/cogaff/crp/

[3] http://web.media.mit.edu/~minsky/papers/CausalDiversity.html

◧◩
6. Animal+B3[view] [source] [discussion] 2016-03-16 21:23:52
>>DonHop+H2
> We talk only to each other and no one else is interested.

OK.

> There is something wrong with computer sciences.

Or there is something wrong with you (Minsky). If you're brilliant, and the rest of the world doesn't follow you, it doesn't mean that there's something wrong with them. It may simply be that you are brilliant and wrong.

replies(1): >>DonHop+S3
◧◩
7. nikola+E3[view] [source] [discussion] 2016-03-16 21:24:27
>>nikola+U2
Wow! Just found this [0]!

[0]: http://lesswrong.com/lw/10g/lets_reimplement_eurisko/

replies(2): >>SilasX+5e >>nickps+Rh
◧◩
8. dang+F3[view] [source] [discussion] 2016-03-16 21:24:33
>>nikola+U2
The article calls him Doug, but we'll give you (and him) the last word.
9. bgribb+G3[view] [source] 2016-03-16 21:24:36
>>_freu+(OP)
It's odd to speak of Cyc just now being commercialized -- Cycorp has been in business using Cyc as its core tech for a long time. Military contracting, among other stuff.
replies(2): >>aab0+gd >>dizzys+zq
◧◩◪
10. DonHop+S3[view] [source] [discussion] 2016-03-16 21:26:20
>>Animal+B3
Do you mean {inclusive or exclusive} "Or"? I'd say there's something wrong with computer sciences, and Minsky was brilliant, and right about some things, and wrong about other things.

>He [Aaron Sloman, one of the small group of "each other" who talk to each other] disagrees with all of these on some topics, while agreeing on others.

replies(2): >>argona+D4 >>Animal+Pc
◧◩◪◨
11. argona+D4[view] [source] [discussion] 2016-03-16 21:34:16
>>DonHop+S3
Virtually no active researcher in AI today believes we'll achieve AI or "common sense" or anything remotely related to thinking/reasoning by hardcoding facts/rules/laws and doing logical inference / deduction on those facts.
replies(1): >>DonHop+l7
◧◩
12. aidenn+E5[view] [source] [discussion] 2016-03-16 21:43:13
>>mchahn+u2
The free version of Cyc has about a quarter-million terms alone, so the article is likely wrong.

[edit]

According to this, it has "about seven million assertions", and the page notes that Cyc can infer many more assertions from those.

http://www.cyc.com/kb/
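
To make the "infer many more assertions" point concrete, here's a minimal forward-chaining sketch in Python. The facts and the single transitivity rule are made-up stand-ins, not Cyc's actual representation or inference engine:

    # Toy knowledge base: hand-written assertions as triples.
    facts = {("isa", "Socrates", "Human"), ("isa", "Human", "Mammal")}

    def derive(f):
        # One toy rule: isa is transitive, (isa a b) + (isa b c) -> (isa a c).
        new = set()
        for (p1, a, b) in f:
            for (p2, c, d) in f:
                if p1 == "isa" and p2 == "isa" and b == c:
                    new.add(("isa", a, d))
        return new

    # Forward-chain to a fixed point: stop when no new assertions appear.
    while True:
        added = derive(facts) - facts
        if not added:
            break
        facts |= added

    print(facts)  # now also contains ('isa', 'Socrates', 'Mammal')

With millions of seed assertions and many rules, the derived set dwarfs what was entered by hand, which is presumably what the page means.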

replies(1): >>cpeter+xh
13. dcrole+M5[view] [source] 2016-03-16 21:44:51
>>_freu+(OP)
Wow. I did not realize it was time for the yearly Cyc article again. Cycorp has been a thing for a long time, but I think history has shown that the path Doug and Cyc have taken is not the way forward.
◧◩◪◨⬒
14. DonHop+l7[view] [source] [discussion] 2016-03-16 22:02:12
>>argona+D4
That's not what Minsky or Sloman said they believed, nor what I meant to imply.

http://web.media.mit.edu/~minsky/papers/CausalDiversity.html

Minsky:

What is the answer? My opinion is that we can make versatile AI machines only by using several different kinds of representations in the same system! This is because no single method works well for all problems; each is good for certain tasks but not for others. Also different kinds of problems need different kinds of reasoning. For example, much of the reasoning used in computer programming can be logic-based. However, most real-world problems need methods that are better at matching patterns and constructing analogies, making decisions based on previous experience with examples, or using types of explanations that have worked well on similar problems in the past.

How can we encourage people to make systems that use multiple methods for representing and reasoning? First we'll have to change some present-day ideas. For example, many students like to ask, "Is it better to represent knowledge with Neural Nets, Logical Deduction, Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural Language?" My teaching method is to try to get them to ask a different kind of question. "First decide what kinds of reasoning might be best for each different kind of problem -- and then find out which combination of representations might work well in each case."

A trick that might help them to start doing this is to begin by asking, for each problem, "How many different factors are involved, and how much influence does each factor have?" This leads to a sort of "theory-matrix."

http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#note-...

Sloman:

In retrospect, it seems that a mixture of the probabilistic and deterministic approaches is required, within the study of architectures for complete agents: a more general study than the investigation of algorithms and representations that dominated most of the early work on AI (partly because of the dreadful limitations of speed and memory of even the most expensive and sophisticated computers available in the 1960s and 1970s).

There are many ways such hybrid mechanisms could be implemented, and my recent work on different processing layers within an integrated architecture (combining reactive, deliberative and meta-management layers) indicates some features of a hybrid system, with probabilistic associations dominating the reactive layer and structure manipulations being more important in the deliberative layer. For recent papers on this see

- The Cogaff papers directory http://www.cs.bham.ac.uk/research/cogaff/

- My "talks" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/

- The rapidly growing "miscellaneous" directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREAD...

More specific though less comprehensive models have been proposed by other researchers, one of the most impressive being the ACT-R system developed by John Anderson and his collaborators. See http://act.psy.cmu.edu/.

Added 8 Feb 2016 Minsky's paper on "causal diversity" is also relevant:

Marvin L. Minsky, 1992, Future of AI Technology, in Toshiba Review, 47, 7, http://web.media.mit.edu/~minsky/papers/CausalDiversity.html

----

Will Wright discusses how he applies several different kinds of representations to make hybrid models for games, in the democratically elected "Dynamics" section of his talk, "Lessons in Game Design", on "nested dynamics / emergence" and "paradigms".

https://youtu.be/CdgQyq3hEPo?t=35m7s

replies(1): >>argona+t9
◧◩◪◨⬒⬓
15. argona+t9[view] [source] [discussion] 2016-03-16 22:25:25
>>DonHop+l7
> He named Douglas Lenat as one of the ten or so people working on common sense (at the time of the interview in 1998), and said the best system based on common sense is CYC.

I should also have been stronger in my statement. Very few active ML/AI researchers believe the database + logical deduction / inference method will even play a nontrivial role in any future AGI system.

replies(1): >>abeced+Bg
◧◩◪◨
16. Animal+Pc[view] [source] [discussion] 2016-03-16 23:02:34
>>DonHop+S3
I meant exclusive or. I was getting at the arrogance: "Out of all the AI people, only the 5 of us talk to each other. There must be something wrong with the whole field, because they can't see how right we are!"

The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me. Minsky may in fact be right, but he should have the humility to see that, in a difference of opinion between the few and the many, it is at least possible that the many are right...

replies(2): >>nickps+rf >>ScottB+qA
◧◩
17. aab0+gd[view] [source] [discussion] 2016-03-16 23:06:56
>>bgribb+G3
Which makes one wonder what exactly 'Lucid' is doing differently from Cycorp Inc (1995, described in the WP article for Cyc), which is exactly what the TR article doesn't cover. /sigh
◧◩◪
18. SilasX+5e[view] [source] [discussion] 2016-03-16 23:18:08
>>nikola+E3
I was on that thread (from 2009, for those who don't want to click); I don't remember it materializing into a project with any progress.
◧◩◪◨⬒
19. nickps+rf[view] [source] [discussion] 2016-03-16 23:34:41
>>Animal+Pc
Common sense powers the many's decisions around 90% of their day. It seems to be a prerequisite for many intelligence functions. Many AIs screw up or are entirely ineffective in the real world because they lack it. And only around five pros were working on it.

I think there's no arrogance in saying the many were foolish to ignore the most-used and probably most critical part of intelligence. Especially when their work failed due to lacking it. If anything, those thinking they didn't need it were very arrogant in thinking their simple formalisms on old hardware would replace or outperform common sense on wetware.

Besides, time showed who were the fools. ;)

replies(1): >>Animal+0j
20. tkosan+dg[view] [source] 2016-03-16 23:44:44
>>_freu+(OP)
Douglas Lenat recently gave the following talk at CMU about how Cyc works and its current capabilities: https://www.youtube.com/watch?v=4mv0nCS2mik
◧◩◪◨⬒⬓⬔
21. abeced+Bg[view] [source] [discussion] 2016-03-16 23:50:01
>>argona+t9
I don't have any strong opinion about this, but it's suggestive that AlphaGo combines a classical AI search with neural nets. Another example: Steven Pinker's theory that human language uses a combo of neural-net-style and logic-style processing (Words and Rules). This isn't to say that Skynet will run part of itself on Prolog -- more like, to get the best performance over the broadest range of domains will need multiple techniques.

One advantage of logic + deduction is potential clarity ahead of time about what the system might do, and ability to explain its actions. If AI safety is your top concern, those seem at least potentially valuable even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)
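
As a toy illustration of that explainability (my own sketch; the rules and facts are invented, not Cyc's or anyone's shipping system), a deduction engine can print the chain of premises behind every conclusion:

    # Tiny backward-chaining prover that shows its work.
    rules = {
        "wet_ground": ("rained",),     # rained -> wet_ground
        "slippery":   ("wet_ground",), # wet_ground -> slippery
    }
    facts = {"rained"}

    def prove(goal, depth=0):
        pad = "  " * depth
        if goal in facts:
            print(pad + goal + ": known fact")
            return True
        premises = rules.get(goal)
        if premises and all(prove(p, depth + 1) for p in premises):
            print(pad + goal + ": because " + ", ".join(premises))
            return True
        print(pad + goal + ": cannot be established")
        return False

    prove("slippery")  # prints the full justification chain

That trace is the transparency being described: the system can say why it believes what it believes, which a pile of learned weights cannot.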

replies(2): >>arthur+3j >>argona+nq2
22. nickps+9h[view] [source] 2016-03-16 23:57:18
>>_freu+(OP)
Been a while since I heard about my once-favorite project aiming to imitate common sense. I think I even contributed to its knowledge base a bit; my memory is fuzzy there. I loved that Lenat was one of the few to see (a) the need for common-sense representation, (b) that many people would need to train it, and (c) the need for good algorithms to integrate it with other things. The part I strongly criticized was locking it up in proprietary fashion: the worst thing you can do for stuff needing this much training data.

Good to see it's being commercialized... again? Swore he had a company. Anyway, probably the most valuable thing is the knowledge base they built. It was structured, curated, and very general. It would be great if AI researchers working on different architectures, including adaptive NN's, re-encoded and used that knowledge base. Might speed up training and catch blind spots w/ common sense checks.

Note to other researchers: it would be worth the effort to re-create a similar knowledge base, more open to the public but with careful moderation. Make sure the knowledge base and a decent engine are open source. Gotta be, for best results here.

replies(2): >>chris_+mk >>joe_th+Pq
◧◩◪
23. cpeter+xh[view] [source] [discussion] 2016-03-17 00:00:35
>>aidenn+E5
I wonder how well Cyc could build assertions from fuzzy or only semi-trustworthy data from sources like Wikipedia.
replies(1): >>aidenn+Od1
◧◩◪
24. nickps+Rh[view] [source] [discussion] 2016-03-17 00:04:06
>>nikola+E3
The one person who had the papers said they didn't contain enough detail even to understand how to implement the main loop. It was too vague. So: no detailed papers, no source, good results that might have been done by hand, steady funding for decades, and a commercial spin-off. One commenter said it looks like a Xanatos Gambit:

http://tvtropes.org/pmwiki/pmwiki.php/Main/XanatosGambit

I thought the Cyc project was worth a long-term investment, but the other theory might be simultaneously true.

◧◩
25. nickps+Wh[view] [source] [discussion] 2016-03-17 00:05:33
>>DonHop+H2
Thank you and everyone else in this sub-thread for all the quotes and references. Been good reading with some names I wish I remembered. My reading backlog just got a bit larger. :)
◧◩◪◨⬒⬓
26. Animal+0j[view] [source] [discussion] 2016-03-17 00:18:11
>>nickps+rf
> Besides, time showed who were the fools. ;)

Who, in your view, would that be?

The people who thought that rule-driven inference engines were going to get us strong AI? OK, I can give you that events have proven that view to be foolish.

The people who thought that common sense was not the way to AI? Time has not shown that they are fools (at least, not yet), because no impressive AI advances (of which I am aware) are based on the common-sense approach. (I suppose CYC itself could be regarded as such an advance, but I see it more as building material than as a system in itself.)

Now, DonHopkins quotes Minsky as saying that a mix of approaches is the answer. Arguably, that is beginning to be proven. Common sense (the CYC approach)? Not so much.

replies(2): >>DonHop+Lk >>nickps+qm
◧◩◪◨⬒⬓⬔⧯
27. arthur+3j[view] [source] [discussion] 2016-03-17 00:19:15
>>abeced+Bg
> One advantage of logic + deduction is potential clarity ahead of time about what the system might do, and ability to explain its actions. If AI safety is your top concern, those seem at least potentially valuable even when other techniques can build powerful systems sooner. (I wish more conventional software had that kind of transparency.)

I'm not particularly fearful of an AI apocalypse, but I couldn't agree with that more.

◧◩
28. chris_+mk[view] [source] [discussion] 2016-03-17 00:38:03
>>nickps+9h
> Good to see it's being commercialized... again? Swore he had a company.

My reaction exactly :-)

I got to attend their training in 2003, at Cycorp, which is still around [1]. Some REALLY amazingly smart people.

I wonder if he's saying "it's done!" in hopes of not getting buried by DeepMind... kind of a last-ditch effort for "Strong AI".

[1] http://www.cyc.com

replies(1): >>nickps+1m
◧◩◪◨⬒⬓⬔
29. DonHop+Lk[view] [source] [discussion] 2016-03-17 00:44:20
>>Animal+0j
How is CYC not just another lego in the toolbox, as Wright would put it, like any of the other approaches you can combine together?
replies(1): >>nickps+wm
30. mark_l+Nk[view] [source] 2016-03-17 00:44:40
>>_freu+(OP)
I have played with OpenCYC.org for years. It hasn't been updated since 2012 but version 4 is still interesting.

After seeing the utility of Google's Knowledge Graph, I wish there were a free open source project to combine all the public data sources like OpenCYC, DBPedia, the Freebase dumps in MediaPedia, etc.
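
A rough sketch of what the combining step could look like, assuming each dump has been flattened to tab-separated triples first (the file names and that TSV format are my hypothetical; each real dump needs its own parser):

    import csv

    def load_triples(path, source):
        # Yield (subject, predicate, object, source) rows from a TSV dump.
        with open(path, newline="") as f:
            for subj, pred, obj in csv.reader(f, delimiter="\t"):
                yield (subj, pred, obj, source)

    kb = set()
    for path, source in [("opencyc.tsv", "opencyc"),
                         ("dbpedia.tsv", "dbpedia"),
                         ("freebase.tsv", "freebase")]:
        kb.update(load_triples(path, source))

    def query(subj=None, pred=None):
        # Look up a subject/predicate across all merged sources at once.
        return [t for t in kb
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)]

The hard part such a sketch ignores is entity alignment: deciding that OpenCyc's #$BarackObama, DBpedia's dbr:Barack_Obama, and a Freebase MID all name the same thing.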

◧◩◪
31. nickps+1m[view] [source] [discussion] 2016-03-17 01:04:42
>>chris_+mk
Funny you bring that up. I've been telling people that the one victory we had for common sense was the fact that things like DeepMind approximate it in how they work. They're usually trained on data that's too narrow compared to Cyc's. Yet they do find patterns, especially in classification and action/response, like human intuition does. I was going to suggest to people that they be used in logical/intuitive hybrids to replace things like Cyc. Maybe hitch a ride on a different bandwagon, with benefits from all the effort going into it.
◧◩◪◨⬒⬓⬔
32. nickps+qm[view] [source] [discussion] 2016-03-17 01:10:45
>>Animal+0j
"Time has not shown that they are fools (at least, not yet), because no impressive AI advances (of which I am aware) are based on the common-sense approach"

Sure it has: deep learning. Human common sense is mostly based on intuition. Intuition is a process that finds patterns in unstructured data in terms of classification, relation to other things, and relationships in what we see vs how we respond. It has reinforcement mechanisms that improve the models with better exposure. Just like the neural networks.

They kind of indirectly worked on common sense. Not everything is there, and the data sets are too narrow for full common sense. Yet key attributes are there, with amazing results from the likes of DeepMind. So, yeah, we proponents of common sense and intuition are winning. By 4 to 1 in a recent event.

" saying that a mix of approaches is the answer. Arguably, that is beginning to be proven. Common sense (the CYC approach)? Not so much."

Common sense is one component of a hybrid system. That's what I pushed. That's what I understood from others. CYC itself combines a knowledge base representing our "common sense" with one or more reasoning engines. The NNs that capture it in their internal connections are often combined with tree searches, heuristics, and other things. Our own brain uses many specialized things working together to achieve an overall result.

So, no, common-sense storage by itself won't do much for you. One needs the other parts. Hybrid systems are most likely the only proven general intelligence. So we should default to that.
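
A minimal sketch of that hybrid shape, with both the "NN" and the common-sense constraints as made-up stand-ins (nothing here is DeepMind's or Cycorp's actual machinery):

    def nn_classify(image):
        # Stand-in for a trained network's (label, confidence) guesses.
        return [("toaster", 0.62), ("cat", 0.60), ("dog", 0.30)]

    # Stand-in for a common-sense KB query: is this label plausible here?
    constraints = {"toaster": lambda ctx: "kitchen" in ctx}

    def plausible(label, context):
        check = constraints.get(label)
        return check(context) if check else True

    def classify(image, context):
        # Take the most confident guess that survives the common-sense check.
        for label, conf in sorted(nn_classify(image), key=lambda g: -g[1]):
            if plausible(label, context):
                return label, conf
        return None, 0.0

    print(classify(None, {"garden"}))  # ('cat', 0.6): the narrow "toaster"
                                       # win is vetoed outside kitchens.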

replies(1): >>Animal+bM1
◧◩◪◨⬒⬓⬔⧯
33. nickps+wm[view] [source] [discussion] 2016-03-17 01:13:22
>>DonHop+Lk
Exactly my point. It was always assumed the common sense would be combined with automated reasoning techniques and possibly other forms of AI (esp vision/speech processing).
◧◩
34. zippy+jn[view] [source] [discussion] 2016-03-17 01:26:12
>>nikola+U2
He seems to be ok with 'Doug': https://www.youtube.com/watch?v=2w_ekB08ohU
35. 100ide+un[view] [source] 2016-03-17 01:28:57
>>_freu+(OP)
There are some 3D-printable / 2D-laser-cuttable examples of auxetic materials on thingiverse[1].

The creator describes two interesting mechanical properties his parts exhibit:

> synclastic bending and auxetic behavior. Synclastic materials have the fascinating ability to assume compound curvature along two (often orthogonal) directions. One can wrap a sphere easily in a synclastic material without folding it whereas attempting the same with an anticlastic material, such as paper, would require numerous folds. Auxetic behavior is found in materials with a negative Poisson's ratio, which relates the deformation in one direction when the material is stressed in a perpendicular direction. When compressed in one direction, auxetic materials contract in the other, and when stretched, they expand. In other words, an auxetic nail would become narrowed as it was hammered into a board and expand in diameter when pulled out of the board.

[1] http://www.thingiverse.com/thing:289650

◧◩
36. dizzys+zq[view] [source] [discussion] 2016-03-17 02:26:49
>>bgribb+G3
I was hoping someone would come in and say this. I know that Doug Lenat doesn't go around correcting all of the articles, but the mythology of a company just building up a knowledge base with no income for 30 years is far from the truth. Cycorp used to run the Lycos search bot, which goes back at least 15 years.
◧◩
37. joe_th+Pq[view] [source] [discussion] 2016-03-17 02:31:50
>>nickps+9h
It seems terrible that such a project would lock up all that knowledge in a proprietary form.

Fortunately, my scan of their website seems to indicate they have released their ontologies under a Creative Commons license.

http://www.cyc.com/platform/opencyc/ http://www.cyc.com/documentation/opencyc-license/

replies(2): >>nickps+Ws >>catpol+xp1
◧◩◪
38. nickps+Ws[view] [source] [discussion] 2016-03-17 03:14:28
>>joe_th+Pq
Hell yeah! Thanks for the link! Very encouraging for my proposal of integrating it with other methods. :)
◧◩
39. Animat+xu[view] [source] [discussion] 2016-03-17 03:49:27
>>nikola+U2
"Eurisko (circa 1976) to me is a bigger AI achievement than the much fanfared AlphaGo!"

Read Lenat's "Why AM and Eurisko Appear to Work".[1]

[1] https://www.aaai.org/Papers/AAAI/1983/AAAI83-059.pdf

replies(1): >>nikola+zj1
◧◩◪◨⬒
40. ScottB+qA[view] [source] [discussion] 2016-03-17 06:10:53
>>Animal+Pc
> The arrogance - that "we" clearly are right, so "they" clearly must be wrong - grates on me.

I don't think he meant it that way. He was well aware he didn't have all the answers. What I believe he was talking about was not the answers but the questions: which ones are people spending their time on? I think he's saying that the questions that most people in AI are spending their time on are not going to give us strong AI. Is that such a controversial claim? I expect most people in the field would agree with it.

replies(1): >>DonHop+5D
◧◩◪◨⬒⬓
41. DonHop+5D[view] [source] [discussion] 2016-03-17 07:28:13
>>ScottB+qA
I agree that he didn't mean it in an arrogant way, didn't think he had all the answers, and was asking big questions. He was all about integrating multiple methods, including commonsense knowledge like CYC. But it's hard to get commonsense knowledge methods funded by the current "benefactors of AI".

Here is something he said to me in April 2009 in a discussion about educational software for the OLPC:

Marvin Minsky: "I've been unsuccessful at getting support for a major project to build the architecture proposed in "The Emotion Machine." The idea is to make an AI that can use multiple methods and commonsense knowledge--so that whenever it gets stuck, it can try another approach. The trouble is that most funding has come under the control of statistical and logical practitioners, or people who think we need to solve low-level problems before we can deal with human-level ones."

Maybe (I'll venture a wild guess) it's just that investing in statistical AI research currently makes more financial sense for the goals of the advertising industry that's funding most of the research these days... You're the product, and all that.

◧◩◪◨
42. aidenn+Od1[view] [source] [discussion] 2016-03-17 16:08:58
>>cpeter+xh
My guess is quite poorly. Remember, the assertion that all humans have two arms and two legs isn't even 100% true, which is one of many reasons why the majority of the AI field abandoned the formal logic approach for statistical methods.

The other side of the story would be that the majority of the AI field didn't want to spend 30 years formalizing the large body of general-purpose knowledge.
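
To see why that brittleness matters, here's a toy default-with-exceptions sketch (invented individuals, loosely in the spirit of nonmonotonic logic, not Cyc's actual machinery):

    # Defaults hold unless a more specific fact overrides them.
    defaults = {"human": {"arms": 2, "legs": 2}}

    individuals = {
        "alice": {"isa": "human"},
        "bob":   {"isa": "human", "arms": 1},  # the exception
    }

    def attribute(name, attr):
        ind = individuals[name]
        if attr in ind:                        # a specific fact wins
            return ind[attr]
        return defaults[ind["isa"]].get(attr)  # else fall back to the default

    print(attribute("alice", "arms"))  # 2, by default
    print(attribute("bob", "arms"))    # 1, the exception

A strict universal ("all humans have two arms") misclassifies bob; statistical methods sidestep the issue by never asserting universals in the first place.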

◧◩◪
43. nikola+zj1[view] [source] [discussion] 2016-03-17 16:51:16
>>Animat+xu
Well, it still is. I have a simple test for true AI: it shouldn't involve any math beyond simple arithmetic. And look at machine learning today!
◧◩◪
44. catpol+xp1[view] [source] [discussion] 2016-03-17 17:31:49
>>joe_th+Pq
OpenCyc is only a fraction of the ontology, unfortunately. There's a lot of internal desire to update and expand OpenCyc, but my understanding is that at present the company hasn't secured funding that they're really allowed to use for that purpose.
replies(1): >>nickps+x92
45. catpol+Ap1[view] [source] 2016-03-17 17:32:04
>>_freu+(OP)
I'll say this much: Cycorp is an interesting place to work.
◧◩◪◨⬒⬓⬔⧯
46. Animal+bM1[view] [source] [discussion] 2016-03-17 20:37:44
>>nickps+qm
I see a huge difference between the deep learning approach and the CYC approach. I don't see enough common ground to call them both "common sense" approaches. And, in fact, in the conversation up to this point, the CYC approach is what we were calling the "common sense" approach. So I don't see deep learning as validation of the common sense approach, at least not as the terms have been used in this conversation.
replies(1): >>nickps+ha2
◧◩◪◨
47. nickps+x92[view] [source] [discussion] 2016-03-18 00:22:44
>>catpol+xp1
Oh no! I take it back! We're still missing the knowledge base we need. At least OpenCyc might be a nice start on it.
◧◩◪◨⬒⬓⬔⧯▣
48. nickps+ha2[view] [source] [discussion] 2016-03-18 00:29:39
>>Animal+bM1
That's why I defined common sense in terms of collecting and acting on knowledge via the human intuition mechanism. That mechanism is a neural network, or a series of them, that finds patterns in raw data with reinforcement. That sounds like deep learning. Cyc is doing something similar, but hand-crafted instead of raw, and logical instead of probabilistic.

Intuition just adds connections to the other knowledge and reasoning parts. That our brain is hybrid like that is why I advocate more hybrids, all with an intuition-like component.

◧◩◪◨⬒⬓⬔⧯
49. argona+nq2[view] [source] [discussion] 2016-03-18 04:24:12
>>abeced+Bg
I'm talking about logical inference on a database of facts in the hopes of representing common sense.

I have no issue with probabilistic state-space search.

[go to top]