zlacker

Obituary for Cyc

submitted by todsac+(OP) on 2025-04-08 19:13:50 | 449 points 279 comments

NOTE: showing posts with links only
1. pvg+z1[view] [source] 2025-04-08 19:24:43
>>todsac+(OP)
A big Cyc thread about a year ago >>40069298
2. zitter+i2[view] [source] 2025-04-08 19:30:28
>>todsac+(OP)
You can run a version of Cyc that was released online as OpenCyc: https://github.com/asanchez75/opencyc . This dates from when a version of the system was posted on SourceForge; the GitHub repo has the dataset, the KB, and the inference engine. Note that it was written in an old version of Java.
7. ChuckM+H5[view] [source] 2025-04-08 19:57:10
>>todsac+(OP)
I had the funny thought that this is exactly what a sentient AI would write "stop looking here, there is nothing to see, move along." :-)

I (like vannevar, apparently) didn't feel Cyc was going anywhere useful; there were ideas there, but they were not coherent enough to form a credible basis for even a hypothesis of how a system embodying them could be constructed.

I was pretty impressed by McCarthy's blocks-world demo. Later he and a student formalized some of the rules for creating 'context'[1] for AI to operate within; I continue to think that will be crucial to solving some of the mess that LLMs create.

For example, the early LLM failure of suggesting that you could make salad crunchy by adding rocks was a classic context failure: data from the context of 'humor' and data from the context of 'recipes' intertwined. Because existing models have no context during training, there is nothing in the model that 'tunes' the output based on context. And you get rocks in your salad.

[1] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

9. toisan+88[view] [source] 2025-04-08 20:11:16
>>todsac+(OP)
I think there could be a next-generation Cyc. Current LLMs make too many mistakes, and grounding them with an AI ontology could be really interesting. I wrote more about it here: https://blog.jtoy.net/understanding-cyc-the-ai-database/
17. Animat+5e[view] [source] 2025-04-08 20:52:57
>>todsac+(OP)
Cyc is going great, according to the web site. "The Next Generation of Enterprise AI"[1]

Lenat himself died in 2023. Despite this, he is listed as the only member of the "leadership team".[2]

[1] https://cyc.com/

[2] https://cyc.com/leadership-team/

18. timCli+Me[view] [source] 2025-04-08 20:58:14
>>todsac+(OP)
> The secretive nature of Cyc has multiple causes. Lenat personally did not release the source code of his PhD project or EURISKO, remained unimpressed with open source, and disliked academia as much as academia disliked him.

One thing that's not mentioned here, but something that I took away from Wolfram's obituary of Lenat (https://writings.stephenwolfram.com/2023/09/remembering-doug...) was that Lenat was very easily distracted ("Could we somehow usefully connect [Wolfram|Alpha and the Wolfram Language] to CYC? ... But when I was at SXSW the next year Doug had something else he wanted to show me. It was a math education game.").

My armchair diagnosis is untreated ADHD. He might have had discussing the internals of CYC on his todo list since its first prototype, but the draft was never ready.

◧◩◪
20. luma+Wf[view] [source] [discussion] 2025-04-08 21:05:28
>>cmrdpo+G8
The Bitter Lesson has a few things to say about this.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

◧◩
28. baq+3j[view] [source] [discussion] 2025-04-08 21:29:24
>>vannev+14
https://ai-2027.com/ postulates that a good enough LLM will rewrite itself using rules and facts... sci-fi, but so is chatting with a matrix multiplication.
◧◩◪
29. IshKeb+ij[view] [source] [discussion] 2025-04-08 21:31:18
>>cmrdpo+G8
These guys are trying to combine symbolic reasoning with LLMs somehow: https://www.symbolica.ai/
◧◩◪◨
35. specia+Il[view] [source] [discussion] 2025-04-08 21:50:58
>>IshKeb+ij
check out Imandra's platform for neurosymbolic AI - https://www.imandra.ai/
◧◩◪◨
37. chubot+nm[view] [source] [discussion] 2025-04-08 21:55:45
>>ChadNa+Qj
I dunno, I actually think, say, Claude AI SOUNDS smarter than it is right now.

It has phenomenal recall. I just asked it about "SmartOS", something I knew about, vaguely, in ~2012, and it gave me a pretty darn good answer. On that particular subject, I think it probably gave a better answer than anyone I could e-mail, call, or text right now.

It was significantly more informative than wikipedia - https://en.wikipedia.org/wiki/SmartOS

But I still find it easy to stump it and get it to hallucinate, which makes it seem dumb

It is like a person with good manners and a lot of memory, who is quite good at comparisons (although you have to verify, which is usually fine).

But I would not say it is "smart" at coming up with new ideas or anything

I do think a key point is that a "text calculator" is doing a lot of work ... i.e. summarization and comparison are extremely useful things. They can accelerate thinking

◧◩◪◨
38. veqq+zn[view] [source] [discussion] 2025-04-08 22:05:06
>>joseph+cm
So you're just ignoring all the probabilistic, fuzzy, etc. Prologs, which do precisely that? https://github.com/lab-v2/pyreason
◧◩◪
52. Donald+yu[view] [source] [discussion] 2025-04-08 23:09:21
>>giardi+ds
Source code for Eurisko: github.com/seveno4/EURISKO

Source code for AM: https://github.com/white-flame/am

54. hitekk+9v[view] [source] 2025-04-08 23:15:58
>>todsac+(OP)
A former employee of Cyc did an insightful AMA on HN back in 2019: >>21783828
◧◩
58. zozbot+yv[view] [source] [discussion] 2025-04-08 23:20:20
>>Rochus+zu
> Maybe Cycorp's knowledge base will be made generally accessible at some point, so that it can be used to train LLMs.

More likely, it will be made increasingly irrelevant as open alternatives to it are developed instead. The Wikipedia folks are working on some sort of openly developed interlingua that can be edited by humans, in order to populate Wikipedias in underrepresented languages with basic encyclopedic text. (Details very much TBD, but see https://en.wikipedia.org/wiki/Abstract_Wikipedia and https://meta.wikimedia.org/wiki/Abstract_Wikipedia ) This will probably be roughly as powerful as the system OP posits at some point in the article, that can generate text in both English and Japanese but only if fed with the right "common sense" to begin with. It's not clear exactly how useful logical inference on such statements might turn out to be, but the potential will definitely exist for something like that too, if it's found to be genuinely worthwhile in some way.

64. wpietr+Zw[view] [source] 2025-04-08 23:39:03
>>todsac+(OP)
"Their topmost distinction was between things with souls and things without souls. And large trees were in the former category, whereas small trees were in the latter category…"

This reminds me deeply of Borges: https://en.wikipedia.org/wiki/Celestial_Emporium_of_Benevole...

To me, that bit of Borges is a reminder that all human taxonomies are limited and provisional. But it seems to me that Cyc and its brethren are built around the notion that a universal taxonomy is important and achievable. I guess it's possible that a useful kind of cognition could happen that way, but it's patently not how people work. If I had gotten to the point where I was forced to define exactly when a tree got a soul, I hope I'd realize that I was barking up the wrong tree.

◧◩◪
66. crater+xx[view] [source] [discussion] 2025-04-08 23:44:14
>>chubot+ca
libgen is far from an archive of "most" books and publications, not even close.

The most recent numbers from libgen itself are 2.4 million non-fiction books and 80 million science journal articles. The Atlantic's database published in 2025 has 7.5 million books.[0] The publishing industry estimates that many books are published each year. As of 2010, Google counted over 129 million books[1]. At best an LLM like Llama will have 20% of all books in its training set.

0. https://www.theatlantic.com/technology/archive/2025/03/libge...

1. https://booksearch.blogspot.com/2010/08/books-of-world-stand...

◧◩
79. wpietr+xB[view] [source] [discussion] 2025-04-09 00:31:04
>>woodru+Oy
I don't think there's anything wrong with exploring a field for decades. There are many scientists who have a mix of successes and failures. But this guy spent his whole life and many years of other people's lives trying one single thing that never really worked. You could call that being a single-minded visionary, but I don't think it's unreasonable for others to think it either kooky or a giant waste.

A useful comparison to me here is all the alchemical efforts to turn lead into gold. Can modern physicists do that? Not economically, but sure. [1] If alchemists had just persisted, would they have gotten there too? No, it was a giant waste, and pretty loony to a modern eye. And I'd say both alchemists and a number of AI proponents both are so wrapped up in pursuing specific outcomes (gold, AGI) that they indulge in a lot of magical thinking.

[1] https://www.scientificamerican.com/article/fact-or-fiction-l...

80. Nelson+VB[view] [source] 2025-04-09 00:37:07
>>todsac+(OP)
Open Mind Common Sense is a now-forgotten project, a mid-2000s effort to build something like Cyc but in a more open fashion. It came out of the MIT Media Lab, primarily from Pushpinder Singh (RIP).

https://en.wikipedia.org/wiki/Open_Mind_Common_Sense

◧◩◪
84. yellow+qC[view] [source] [discussion] 2025-04-09 00:42:56
>>zozbot+yv
https://www.wikidata.org/wiki/Wikidata:Main_Page, for those curious about the interlingua in question.
◧◩◪◨⬒
98. DonHop+2I[view] [source] [discussion] 2025-04-09 01:52:16
>>imglor+Jh
You must not read ingredients on products or eat library paste. ;)

Also: MEAT GLUE (transglutaminase)!

https://en.wikipedia.org/wiki/Transglutaminase

https://en.wikipedia.org/wiki/Molecular_gastronomy

https://lyndeymilan.blogspot.com/2010/01/question-what-is-oc...

https://www.youtube.com/watch?v=oA8sNJjlQzU

https://www.youtube.com/watch?v=XAVO6hzNTns

104. gomija+tL[view] [source] 2025-04-09 02:39:39
>>todsac+(OP)
Lex Fridman interview with Lenat a few years back. https://www.youtube.com/watch?v=3wMKoSRbGVs
◧◩◪◨
111. gnrami+UN[view] [source] [discussion] 2025-04-09 03:14:48
>>joseph+cm
The way I see it:

(1) There is kind of a definition of a chair. But it's very long. Like, extremely long, and it includes maybe even millions to billions of logical expressions, assuming your definition might need to use visual or geometric features to classify a given object as a chair (or not a chair).

This is a kind of unification of neural networks (in particular LLMs) and symbolic thought: a large enough symbolic system can simulate NNs and vice versa. Indeed, even the fact that NNs are soft and fuzzy does not matter theoretically; it's easy to show logical circuits can simulate soft and fuzzy boundaries (in fact, that's how NNs are implemented in real hardware: as binary logic circuits). But I think specific problems have varying degrees of more natural formulation as arithmetic, probabilistic, linear, or fuzzy logic on one hand, and binary, boolean-like logic on the other. Or natural formulations could involve arbitrary mixes of them.
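
As a toy illustration of the "hard rules can simulate soft boundaries" point (my own sketch in Python, nothing from Cyc or the article): average enough crisp threshold rules and you recover a smooth, sigmoid-like membership score.

  import math

  def sigmoid(x):
      return 1.0 / (1.0 + math.exp(-x))

  def logit(p):
      return math.log(p / (1.0 - p))

  def soft_via_hard_rules(x, n_rules=999):
      # One crisp yes/no test per quantile of the logistic distribution;
      # the fraction of rules that fire approximates sigmoid(x).
      thresholds = [logit((i + 0.5) / n_rules) for i in range(n_rules)]
      votes = sum(1 for t in thresholds if x > t)
      return votes / n_rules

  for x in (-3.0, -1.0, 0.0, 1.0, 3.0):
      print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  rules={soft_via_hard_rules(x):.3f}")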

(2) As humans, the actual definitions (although they may be said to exist in a certain way at a given time[1]) vary with time. We can, and do, invent new stuff all the time, and often extend or reuse old concepts. For example, I believe the word 'plug' in English, in this sense, likely well predates the modern age, probably first used to refer to the original electrical power connectors. Nowadays there are USB plugs, which may not carry power at all, or audio plugs, etc. (maybe there are better examples). In any case the pioneer(s) usually did not envision everything a name could be used for, and uses evolve.

(3) Words are used as tools to allow communication and, crucially, thought. There comes a need to put a fence (or maybe a mark) in abstract conceptual and logic space, and we associate that with a word. Really a word could be "anything we want to communicate", represent anything. In particular changes to the states of our minds, and states themselves. That's usually too general, most words are probably nouns which represent classifications of objects that exist in the world (like the mentioned chair) -- the 'mind state' definition is probably general enough to cover words like 'sadness', 'amazement', etc., and 'mind state transitions' probably can account for everything else.

We use words (and associated concepts) to dramatically reduce the complexity of the world to enable or improve planning. We can then simplify our tasks into a vastly simpler logical plan: even something simple like put on shoes, open door, go outside, take train, get to work -- without segmenting the world into things and concepts (it's hard to even imagine thought without using concepts at all -- it probably happens instinctively), the number of possibilities involved in planning and acting would be overwhelming.

Obligatory article about this: https://slatestarcodex.com/2014/11/21/the-categories-were-ma...

---

Now this puts into perspective the work of formalizing things, in particular concepts. If you're formalizing concepts to create a system like Cyc, and expect it to be cheap, simple, reliable, and function well into the future, by our observations that should fail. However, formalization is still possible, even if expensive, complex, and possibly ever changing.

There are still reasons you may want to formalize things, in particular to acquire a deeper understanding of them, or when you're okay with creating definitions set in stone because they will be confined to a group that stays attentive and strict about its formal definitions (rather than, as in natural language, evolving organically according to convenience): that's the case with mathematics. The Peano axioms still define the same natural numbers; and although names may be reused, you can usually pin a name to a particular axiomatic definition that will never change. And thus we can keep building facts on those foundations forever -- while what a 'plug' is in natural language might change (and associated facts about plugs become invalid), we can define mathematical objects (like 'natural numbers') with unchanging properties, and ever-valid, potentially ever-growing facts to be known about them, reliably. So fixing concepts in stone, more or less (at least when it comes to a particular axiomatization), is not such a foolish endeavor as it may look; quite the opposite! Science in general benefits from those solid foundations.

I think eventually even some concepts related to human emotions and especially ethics will be (with varying degrees of rigor) formalized to be better understood. Which doesn't mean human language should (or will) stop evolving and being fuzzy; it can do so independently of more rigid formal counterparts. Both aspects are useful.

[1] In the sense that, at a given time, you could (theoretically) spend an enormous effort to arrive at a giant rule system that would probably satisfy most people, and most objects referred to as chairs, at a given fixed time.

◧◩◪◨
129. kracke+d01[view] [source] [discussion] 2025-04-09 06:06:19
>>ChuckM+yZ
>I would be interested in reading a paper that does a good job of explaining what a parameter ends up representing in an LLM model.

https://distill.pub/2020/circuits/ https://transformer-circuits.pub/2025/attribution-graphs/bio...

137. awande+u21[view] [source] 2025-04-09 06:38:04
>>todsac+(OP)
Great post that hits the sweet spot between a dry academic summary and a popular exposition.

Personally I hope that the current wave of AI is over hyped and misunderstood (so that e.g. https://ai-2027.com/ will be a comical footnote one day) and that symbolic reasoning will make a comeback in a new form.

139. kappas+c41[view] [source] 2025-04-09 07:08:26
>>todsac+(OP)
I wonder if it's possible to design a new formal language (something like lojban [1], with a grammar strictly based on formal logic) but with better UX so that it can be used by regular folks like me? Maybe combine something like Attempto [2] with a dedicated visual UI?

I would think a language like that could speed up knowledge base construction. Maybe it can also serve as a substitute for natural languages in some situations where we want our communication to be logically airtight.

[1] https://en.wikipedia.org/wiki/Lojban

[2] https://en.wikipedia.org/wiki/Attempto_Controlled_English
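
For a flavour of what a grammar "strictly based on formal logic" buys you, here's a toy sketch of my own in Python (nothing to do with Attempto's actual grammar or tooling): a tiny controlled-English fragment translated deterministically into first-order logic strings.

  import re

  # Hypothetical mini controlled language: each sentence pattern has exactly
  # one logical reading, which is the whole point of a controlled language.
  PATTERNS = [
      (re.compile(r"^every (\w+) is a (\w+)$"), "all x ({0}(x) -> {1}(x))"),
      (re.compile(r"^some (\w+) is a (\w+)$"),  "exists x ({0}(x) & {1}(x))"),
      (re.compile(r"^no (\w+) is a (\w+)$"),    "all x ({0}(x) -> ~{1}(x))"),
      (re.compile(r"^(\w+) is a (\w+)$"),       "{1}({0})"),
  ]

  def to_fol(sentence: str) -> str:
      s = sentence.lower().rstrip(".")
      for pattern, template in PATTERNS:
          m = pattern.match(s)
          if m:
              return template.format(*m.groups())
      raise ValueError(f"not in the controlled fragment: {sentence!r}")

  for s in ["Every dog is a mammal.", "Fido is a dog.", "No chair is a person."]:
      print(f"{s:26} =>  {to_fol(s)}")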

On a separate note, I've always wondered how Cyc is pronounced. Is it "sike", or is it "see-why-see"?

◧◩◪◨
165. YeGobl+Hl1[view] [source] [discussion] 2025-04-09 10:34:51
>>ChuckM+yZ
>> This is exactly correct, LLMs did scale with huge data, symbolic AI did not. So why?

Like the rock salad, you're mixing up two disparate contexts here. Symbolic AI like SAT solvers and planners is not trying to learn from data, and there's no context in which it has to "scale with huge data".

Instead, what modern SAT solvers and planners do is even harder than "scaling with data" - which, after all, today means having imba hardware and using it well. SAT solving and planning can't do that: SAT is NP-complete and planning is PSPACE-complete so it really doesn't matter how much you "scale" your hardware, those are not problems you can solve by scaling, ever.

And yet, today both SAT and planning are solved problems. NP complete? Nowadays, that's a piece of cake. There are dedicated solvers for all the classical sub-categories of SAT and modern planners can solve planning problems that require sequences of thousands of actions. Hell, modern planners can even play Atari games from pixels alone, and do very well indeed [1].

So how did symbolic AI manage those feats? Not with bigger computers but precisely with the approach that the article above seems to think has failed to produce any results: heuristic search. In SAT solving, the dominant approach is an algorithm called "Conflict Driven Clause Learning", that is designed to exploit the special structure of SAT problems. In Planning and Scheduling, heuristic search was always used, but work really took off in the '90s when people realised that they could automatically estimate a heuristic cost function from the structure of a planning problem.

There are parallel and similar approaches everywhere you look in classical AI problems, like verification, theorem proving, etc., and that work has even produced a few Turing Awards [2]. But do you hear about that work at all, when you hear about AI research? No, because it works, and so it's not AI.

But it works, it runs on normal hardware, it doesn't need "scale" and it doesn't need data. You're measuring the wrong thing with the wrong stick.
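
(For the curious: CDCL itself is far more machinery than fits in a comment, but here is a minimal DPLL-style sketch in Python -- unit propagation plus naive branching, clauses as DIMACS-style signed integers -- purely my own toy to make "heuristic search" concrete. Real solvers add conflict analysis, clause learning, and much smarter branching heuristics.)

  def propagate(clauses, assignment):
      # Repeatedly simplify clauses under the assignment, forcing unit literals.
      changed = True
      while changed:
          changed = False
          remaining = []
          for clause in clauses:
              if any(lit in assignment for lit in clause):
                  continue                      # clause already satisfied
              rest = [lit for lit in clause if -lit not in assignment]
              if not rest:
                  return None, None             # conflict: clause falsified
              if len(rest) == 1:
                  assignment.add(rest[0])       # unit clause forces this literal
                  changed = True
              else:
                  remaining.append(rest)
          clauses = remaining
      return clauses, assignment

  def dpll(clauses, assignment=frozenset()):
      clauses, assignment = propagate(list(clauses), set(assignment))
      if clauses is None:
          return None                           # dead end
      if not clauses:
          return assignment                     # every clause satisfied
      lit = clauses[0][0]                       # naive branching heuristic
      for choice in (lit, -lit):
          result = dpll(clauses, assignment | {choice})
          if result is not None:
              return result
      return None

  # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
  print(dpll([[1, 2], [-1, 3], [-2, -3]]))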

____________

[1] Planning with Pixels in (Almost) Real Time: https://arxiv.org/pdf/1801.03354 Competitive results with humans and RL. Bet you didn't know that.

[2] E.g. Pnueli for temporal logic in verification, or Clarke, Emerson and Sifakis, for model checking.

184. cubefo+Vy1[view] [source] 2025-04-09 12:50:31
>>todsac+(OP)
A similar failure of GOFAI was ABBYY's monumental, multi-decade attempt at creating advanced translation software entirely based on complex formal grammar parsing (ABBYY is a Russian company that was for a long time the market leader in OCR software).

The story behind it is really interesting. This article was written by someone who worked at ABBYY:

https://sysblok.ru/blog/gorkij-urok-abbyy-kak-lingvisty-proi...

The piece is in Russian but can (ironically) be read in good English by using e.g. the Google Translate feature inside Chrome, which is of course entirely based on machine learning.

The story is essentially similar to Cyc: symbolic AI/logical AI/GOFAI can produce initially impressive results (ABBYY was way better than early forms of Google Translate), but symbolic approaches don't scale well. Big Data + machine learning wins out eventually. The piece above mentions a 2009 article from Google that put forward this thesis, "The Unreasonable Effectiveness of Data":

https://static.googleusercontent.com/media/research.google.c...

Note that 2009 was significantly before the existence of large language models, transformers, or even AlexNet.

◧◩◪
186. fancyf+bB1[view] [source] [discussion] 2025-04-09 13:08:03
>>cubefo+uz1
Yes. Sorry. I was actually just googling this and realised this same anecdote is cited in the intro to the Deep Learning book by Goodfellow et al. Their write-up is hopefully clearer:

"For example, Cyc failed to understand a story about a person named Fred shaving in the morning (Linde, 1992). Its inference engine detected an inconsistency in the story: it knew that people do not have electrical parts, but because Fred was holding an electric razor, it believed the entity “FredWhileShaving” contained electrical parts. It therefore asked whether Fred was still a person while he was shaving"

https://www.deeplearningbook.org/contents/intro.html

The (Linde, 1992) citation they give is the 4th episode of a TV series - presumably the one I saw as a kid!

https://en.m.wikipedia.org/wiki/The_Machine_That_Changed_the...

And of course it's on YouTube:

https://youtube.com/clip/UgkxRcsHT-s1iZ-VRWFRXA-qg4kjTYe-a6j...

◧◩
189. Philpa+rL1[view] [source] [discussion] 2025-04-09 14:02:58
>>akobol+0x1
OLMo 2 from AI2 is fully open-source, including code, data and recipe: https://allenai.org/olmo

There are others, but OLMo is the most recent and competitive.

190. Beijin+WL1[view] [source] 2025-04-09 14:05:36
>>todsac+(OP)
This is more interesting, and if I remember correctly, they had some interesting ideas:

Two AI Pioneers. Two Bizarre Suicides. What Really Happened?

https://www.wired.com/2008/01/ff-aimystery/

◧◩◪
199. mac3n+BV1[view] [source] [discussion] 2025-04-09 14:58:02
>>cubefo+uz1
along this line, “Gravity has no friends.”

http://blog.kenperlin.com/?p=2068

◧◩◪◨
204. gwern+C12[view] [source] [discussion] 2025-04-09 15:31:40
>>mcphag+1N1
> That came out in 2009, correct? I wonder how much was spent on LLMs up to that point.

Quite a lot. Look back at the size of the teams working on language models at IBM, Microsoft, Google, etc, and think about all the decades of language model research going back to Shannon and quantifying the entropy of English. Or the costs to produce the datasets like the Brown Corpus which were so critical. And keep in mind that a lot of the research and work is not public for language models; stuff like NSA interest is obvious, but do you know what Bob Mercer did before he vanished into the black hole of Renaissance Capital? I recently learned from a great talk (spotted by OP, as it happens) https://gwern.net/doc/psychology/linguistics/bilingual/2013-... that it was language modeling!

I can't give you an exact number, of course, but when you consider the fully-loaded costs of researchers at somewhere like IBM/MS/G is usually at least several hundred thousand dollars a year and how many decades and how many authors there are on papers and how many man-years must've been spent on now-forgotten projects in the 80s and 90s to scale to billions of word corpuses to train the n-gram language models (sometimes requiring clusters), I would have to guess it's at least hundreds of millions cumulative.

> They're also not humble.

Funnily enough, the more grandiose use-cases of LMs actually were envisioned all the way back at the beginning! In fact, there's an incredible science fiction story you've never heard of which takes language models, quite literally, as the route to a Singularity, from 1943. You really have to read it to believe it: "Fifty Million Monkeys", Jones 1943 https://gwern.net/doc/fiction/science-fiction/1943-jones.pdf

> I don't know why CYC went that way.

If you read the whole OP, which I acknowledge is quite a time investment, I think Yuxi makes a good case for why Lenat culturally aimed for the 'boil the ocean' approach, how they refused to do more incremental, small, easily-benchmarked applications (seeing them as distractions that encouraged deeply flawed paradigms), and how they could maintain that stance for so long. (Which shouldn't be too much of a surprise. Look how much traction DL critics on HN get, even now.)

◧◩◪◨
230. woodru+Zb3[view] [source] [discussion] 2025-04-09 20:45:47
>>joseph+cm
> Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

The counterposition to this is no more convincing: cognition is fuzzy, but it's not really clear at all that it's probabilistic: I don't look at a stump and ascertain its chairness with a confidence of 85%, for example. The actual meta-cognition of "can I sit on this thing" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't sit on it." In other words, a defeasible inference.

(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logic[1]. I don't think these get us to AGI either.)

[1]: https://en.wikipedia.org/wiki/Non-monotonic_logic
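
A minimal sketch of that "sittable" default, just to make its shape concrete (my own toy in Python, not any real non-monotonic reasoner) -- the key property is that learning a new fact can retract a conclusion, with no probabilities anywhere:

  def can_sit_on(facts: set) -> bool:
      # Default rule: if it looks sittable, conclude you can sit on it...
      if "looks_sittable" in facts:
          # ...unless a defeater is known, which withdraws the default.
          if "feels_unstable" in facts:
              return False
          return True
      return False

  print(can_sit_on({"looks_sittable"}))                      # True
  print(can_sit_on({"looks_sittable", "feels_unstable"}))    # False: more facts, fewer conclusions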

◧◩◪◨
237. smoyer+oG3[view] [source] [discussion] 2025-04-10 01:04:00
>>YeGobl+Qm1
You're forcing the proud dad function: https://pubmed.ncbi.nlm.nih.gov/36995257/
◧◩◪◨⬒
241. musica+JR3[view] [source] [discussion] 2025-04-10 03:30:12
>>musica+wW
See also (Lenat's final paper):

D. Lenat, G. Marcus, "Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc", https://arxiv.org/abs/2308.04445

245. defvar+3V3[view] [source] 2025-04-10 04:10:15
>>todsac+(OP)
The game mentioned in this article reminds me of the story about heuristics told by Richard Feynman in a lecture: https://youtu.be/EKWGGDXe5MA?si=z9TdlWflOkY6b9Qk&t=4249

I was wondering then who the guy he talked about was. It is great that this article provides so many details about it.

◧◩◪◨⬒
257. famous+415[view] [source] [discussion] 2025-04-10 15:19:04
>>woodru+Zb3
>I don't look at a stump and ascertain its chairness with a confidence of 85%

But I think you did. Not consciously, but I think your brain definitely did.

https://www.nature.com/articles/415429a https://pubmed.ncbi.nlm.nih.gov/8891655/

◧◩◪◨⬒⬓⬔⧯▣
260. adastr+re5[view] [source] [discussion] 2025-04-10 16:34:15
>>YeGobl+ji4
I ask Claude to solve problems of similar complexity on a daily basis. A SAT solver specifically is maybe a once a week thing.

Use cases are anything, really. Determine resource allocation for a large project, or do Monte Carlo simulation of various financial and risk models. Looking at a problem that has a bunch of solutions with various trade-offs, pick the best strategy given various input constraints.

There are specialized tools out there that you can pay an arm and a leg for a license to do this, or you can have Claude one-off a project that gets the same result for $0.50 of AI credits. We live in an age of unprecedented intelligence abundance, and people are not used to this. I can have Claude implement something that would take a team of engineers months or years to do, and use it once then throw it away.

I say Claude specifically because in my experience none of the other models are really able to handle tasks like this.

Edit: an example prompt I put here: >>43639320

◧◩◪◨⬒⬓⬔
261. famous+4p5[view] [source] [discussion] 2025-04-10 17:37:03
>>woodru+155
>which is exactly what you'd expect for a sound set of defeasible relations.

This is a leap. While a complex system of rules might coincidentally produce behavior that looks statistically optimal in some scenarios, the paper (Ernst & Banks) argues that the mechanism itself operates according to statistical principles (MLE), not just that the outcome happens to look that way.
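
(For reference, the mechanism Ernst & Banks argue for is standard inverse-variance cue combination -- the maximum-likelihood estimate when each cue is corrupted by independent Gaussian noise:

  \hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2,   w_i = (1/\sigma_i^2) / (1/\sigma_1^2 + 1/\sigma_2^2)
  \sigma_{combined}^2 = \sigma_1^2 \sigma_2^2 / (\sigma_1^2 + \sigma_2^2)

so the combined estimate is always at least as reliable as the better single cue.)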

Moreover, it's highly unlikely, bordering on impossible, to reduce the situations the brain deals with even on a daily basis into a set of defeasible statements.

Example: Recognizing a "Dog"

Defeasible Attempt: is_dog(X) :- has_four_legs(X), has_tail(X), barks(X), not is_cat(X), not is_fox(X), not is_robot_dog(X).

is_dog(X) :- has_four_legs(X), wags_tail(X), is_friendly_to_humans(X), not is_wolf(X).

How do you define barks(X) (what about whimpers, growls? What about a dog that doesn't bark?)? How do you handle breeds that look very different (Chihuahua vs. Great Dane)? How do you handle seeing only part of the animal? How do you represent the overall visual gestalt? The number of rules and exceptions quickly becomes vast and brittle.

Ultimately, the proof, as they say, is in the pudding. By the way, the Cyc we are all talking about here is non-monotonic. https://www.cyc.com/wp-content/uploads/2019/07/First-Orderiz...

If you've tried something for decades and it's not working, and it doesn't even look like it's working, while experiments on the brain suggest probabilistic inference, and probabilistic inference machines work much better than the alternatives ever did, you have to face the music.

◧◩◪◨
272. YeGobl+Bg6[view] [source] [discussion] 2025-04-11 01:28:22
>>photon+iB5
Oh, OK. Well I don't call that a grifter, just an ordinary, garden-variety techie. Many (not all) do that: actively seek funding from militaries for their work.

E.g., just this week MS fired two people for protesting the use of Azure to power the Palestinian Genocide [1].

When people talk about the military-industrial complex, what they really should be talking about is the military-FAANG complex. AI and military intelligence are both the same sad joke.

Lenat was no different in that, so I don't think it's fair to call him a grifter. I do think it's fair to call him out on being an asshole who put money above people's lives.

Btw, I've released some of my free stuff under a modified GNU GPL 3.0 with an added clause that prohibits its use for military applications. I've been told that makes it "non-free" and it seems that's a bad thing. Lenat is only one nerd in a long line of nerds who need to think very hard about the ethics of their work.

____________

[1] https://apnews.com/article/microsoft-protest-employees-fired...

They were protesting about this:

https://www.972mag.com/microsoft-azure-openai-israeli-army-c...

◧◩◪◨⬒⬓⬔
274. thesz+YM9[view] [source] [discussion] 2025-04-12 10:23:23
>>YeGobl+sz2
You invented a new kind of learning that somewhat contradicts the usual definition [1] [2].

  [1] https://www.britannica.com/dictionary/learning
  [2] https://en.wikipedia.org/wiki/Learning
"Learning" in CDCL is perfectly in line of "gaining knowledge."
◧◩◪◨⬒⬓⬔⧯▣
275. thesz+fN9[view] [source] [discussion] 2025-04-12 10:25:52
>>YeGobl+bi4
Take a look at Satisfaction-Driven Clause Learning [1].

[1] https://www.cs.cmu.edu/~mheule/publications/prencode.pdf
