Wikipedia's overview: <https://en.wikipedia.org/wiki/Cyc>
Project / company homepage: <https://cyc.com/>
Its failure is no shade against Doug. Somebody had to try it, and I'm glad it was one of the brightest guys around. I think he clung to it long after it was clear that it wasn't going to work out, but breakthroughs do happen. (The current round of machine learning is itself a revival of a technique that had been abandoned, but the people who stuck with it anyway discovered the tricks that made it go.)
I believe both approaches are useful and can be combined, layered, and fed back into each other, so that each reinforces and complements the other's advantages and transcends its limitations.
Kind of like how Hailey and Justin Bieber make the perfect couple: ;)
https://edition.cnn.com/style/hailey-justin-bieber-couples-f...
Marvin L. Minsky: Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
"We should take our cue from biology rather than physics..." -Marvin Minsky
>To get around these limitations, we must develop systems that combine the expressiveness and procedural versatility of symbolic systems with the fuzziness and adaptiveness of connectionist representations. Why has there been so little work on synthesizing these techniques? I suspect that it is because both of these AI communities suffer from a common cultural-philosophical disposition: They would like to explain intelligence in the image of what was successful in physics—by minimizing the amount and variety of its assumptions. But this seems to be a wrong ideal. We should take our cue from biology rather than physics because what we call thinking does not directly emerge from a few fundamental principles of wave-function symmetry and exclusion rules. Mental activities are not the sort of unitary or elementary phenomenon that can be described by a few mathematical operations on logical axioms. Instead, the functions performed by the brain are the products of the work of thousands of different, specialized subsystems, the intricate product of hundreds of millions of years of biological evolution. We cannot hope to understand such an organization by emulating the techniques of those particle physicists who search for the simplest possible unifying conceptions. Constructing a mind is simply a different kind of problem—how to synthesize organizational systems that can support a large enough diversity of different schemes yet enable them to work together to exploit one another’s abilities.
https://en.wikipedia.org/wiki/Neats_and_scruffies
>In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 70s and was a subject of discussion until the middle 80s.[1][2][3]
>"Neats" use algorithms based on a single formal paradigms, such as logic, mathematical optimization or neural networks. Neats verify their programs are correct with theorems and mathematical rigor. Neat researchers and analysts tend to express the hope that this single formal paradigm can be extended and improved to achieve general intelligence and superintelligence.
>"Scruffies" use any number of different algorithms and methods to achieve intelligent behavior. Scruffies rely on incremental testing to verify their programs and scruffy programming requires large amounts of hand coding or knowledge engineering. Scruffies have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no magic bullet that will allow programs to develop general intelligence autonomously.
>John Brockman compares the neat approach to physics, in that it uses simple mathematical models as its foundation. The scruffy approach is more like biology, where much of the work involves studying and categorizing diverse phenomena.[a]
[...]
>Modern AI as both neat and scruffy
>New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as mathematical optimization and neural networks. Pamela McCorduck wrote that "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms."[6] This general trend towards more formal methods in AI was described as "the victory of the neats" by Peter Norvig and Stuart Russell in 2003.[18]
>However, by 2021, Russell and Norvig had changed their minds.[19] Deep learning networks and machine learning in general require extensive fine tuning -- they must be iteratively tested until they begin to show the desired behavior. This is a scruffy methodology.
I do suspect that well-curated and hand-tuned corpora, including possibly Cyc's, are of significant use to LLM AI. And they will likely become more so as the feedback/autophagy problem worsens.
Cyc is sort of like that, but for everything. Not just a small limited world. I believe it didn’t work out because it’s really hard.
The goal was that in a decade it would become self-sustaining. It would have enough knowledge that it could start reading natural language. And it just... didn't.
Contrast it with LLMs and diffusion and such. They make stupid, asinine mistakes -- real howlers, because they don't understand anything at all about the world. If it could draw, Cyc would never draw a human with 7 fingers on each hand, because it knows that most humans have 5. (It had a decent-ish ontology of human anatomy which could handle injuries and birth defects, but would default reason over the normal case.) I often see ChatGPT stumped by simple variations of brain teasers, and Cyc wouldn't make those mistakes -- once you'd translated them into CycL (its language, because it couldn't read natural language in any meaningful way).
But those same models do a scary job of passing the Turing Test. Nobody would ever have thought to try it on Cyc. It was never anywhere close.
Philosophically I can't say why Cyc never developed "magic" and LLMs (seemingly) do. And I'm still not convinced that they're on the right path, though they actually have some legitimate usages right now. I tried to find uses for Cyc in exactly the opposite direction, guaranteeing data quality, but it turned out nobody really wanted that.
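To make the "default reason over the normal case" point concrete, here's a minimal sketch of that style of reasoning. It's plain Python with made-up predicates and individuals, not CycL and not Cyc's actual inference machinery; the idea is simply that specific assertions (injuries, birth defects) override the default for the normal case.

```python
# A minimal sketch of default ("normal case") reasoning, in the spirit of the
# fingers example above. Plain Python with made-up predicates and individuals;
# this is not CycL and not Cyc's actual inference machinery.

class KnowledgeBase:
    def __init__(self):
        self.defaults = {}    # predicate -> value assumed for the normal case
        self.overrides = {}   # (individual, predicate) -> asserted exception

    def assert_default(self, predicate, value):
        self.defaults[predicate] = value

    def assert_fact(self, individual, predicate, value):
        self.overrides[(individual, predicate)] = value

    def query(self, individual, predicate):
        # Specific assertions beat the default; otherwise fall back to normal.
        key = (individual, predicate)
        if key in self.overrides:
            return self.overrides[key]
        return self.defaults.get(predicate)

kb = KnowledgeBase()
kb.assert_default("fingersPerHand", 5)                  # most humans
kb.assert_fact("InjuredPatient", "fingersPerHand", 4)   # an asserted exception

print(kb.query("TypicalPerson", "fingersPerHand"))   # -> 5
print(kb.query("InjuredPatient", "fingersPerHand"))  # -> 4
```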
So you'd use the NN to recognize that the thing in front of the camera is a cat, and that would be fed into the symbolic knowledge base for further reasoning.
The knowledge base will contain facts like the cat is likely to "meow" at some point, especially if it wants attention. Based on the relevant context, the knowledge base would also know that the cat is unlikely to be able to talk, unless it is a cat in a work of fiction, for example.
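A minimal sketch of that division of labor, with a stub standing in for the neural-net recognizer and a tiny dictionary standing in for the knowledge base; all the names and facts here are illustrative assumptions, not any real system's API:

```python
# Toy neuro-symbolic pipeline: a (stubbed) neural classifier labels what the
# camera sees, and a symbolic knowledge base reasons over that label.
# Everything here is illustrative; no real vision model or KB is involved.

def recognize(image):
    """Stand-in for a neural network classifier."""
    return "cat"  # pretend the NN recognized a cat

KB = {
    "cat": {
        "likely_sounds": ["meow"],      # especially when it wants attention
        "can_talk": False,              # default for the real world
        "can_talk_in_fiction": True,    # context-dependent exception
    },
}

def reason(label, context="real_world"):
    facts = KB.get(label, {})
    can_talk = (facts.get("can_talk_in_fiction", False)
                if context == "fiction"
                else facts.get("can_talk", False))
    return {"expected_sounds": facts.get("likely_sounds", []),
            "can_talk": can_talk}

label = recognize(image=None)            # perception layer (NN)
print(reason(label))                     # {'expected_sounds': ['meow'], 'can_talk': False}
print(reason(label, context="fiction"))  # {'expected_sounds': ['meow'], 'can_talk': True}
```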
Cyc, on the other hand, lacks flesh and skin. It's all skeleton and can generate facts but not embellish them into narratives.
The best human writing has both, much as artists (traditional painters, sculptors, and more recently computer animators) have a skeleton (outline, index cards, Zettelkasten, wireframe) to which flesh, skin, and fur are attached. LLM generative AIs are too plastic; Cyc is insufficiently plastic.
I suspect there's some sort of a middle path between the two. Though that path and its destination also increasingly terrify me.
I suppose you'd architect it as a layer. It wants to say something, and the ontology layer says, "No, that's stupid, say something else". The ontology layer can recognize ontology-like statements and use them to build and evolve the ontology.
It would be even more interesting built into the visual/image models.
I have no idea if that's any kind of real progress, or if it's merely filtering out the dumb stuff. A good service, to be sure, but still not "AGI", whatever the hell that turns out to be.
Unless it turns out to be the missing element that puts it over the top. If I had any idea I wouldn't have been working with Cyc in the first place.
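To make the layered architecture described above concrete, here is a minimal sketch: a placeholder generator stands in for the LLM, and a tiny triple store stands in for the ontology layer that vetoes contradictions and absorbs ontology-like statements. The names, rules, and (subject, relation, value) triple format are illustrative assumptions, not any real system.

```python
# Minimal sketch of an "ontology layer" over a generative model, per the
# layered design described above. The generator, the triple format, and the
# seed facts are placeholders, not a real LLM or a real ontology.

ontology = {("human", "fingers_per_hand"): 5}   # seed "normal case" fact

def violates_ontology(statement):
    subject, relation, value = statement
    known = ontology.get((subject, relation))
    return known is not None and known != value

def maybe_learn(statement):
    # If the statement looks like an ontology-level claim and doesn't clash
    # with anything already known, fold it into the ontology.
    subject, relation, value = statement
    ontology.setdefault((subject, relation), value)

def generate_candidates():
    """Stand-in for an LLM proposing statements."""
    yield ("human", "fingers_per_hand", 7)   # the howler: vetoed
    yield ("human", "fingers_per_hand", 5)   # consistent: passes through
    yield ("cat", "typical_sound", "meow")   # new claim: absorbed

for candidate in generate_candidates():
    if violates_ontology(candidate):
        continue   # "No, that's stupid, say something else"
    maybe_learn(candidate)
    print("accepted:", candidate)

print(ontology)
```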
Natural-language content-based classification, as done by Google and Web text-based search, relies effectively on documents' self-descriptions (that is, their content itself) to classify and search works, though a ranking scheme (e.g., PageRank) is typically layered on top of that. What distinguished early Google from prior full-text search was that the latter had no ranking criteria, which led to keyword stuffing. An alternative approach was Yahoo, originally "Yet Another Hierarchical Officious Oracle", which was a curated, ontological classification of websites. That was already proving infeasible at scale by 1997/98, though as training data for machine classification it might prove useful.
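For concreteness, a minimal sketch of the kind of link-based ranking (PageRank-style power iteration) that early Google layered on top of content matching; the toy link graph, damping factor, and iteration count are illustrative assumptions, not Google's actual implementation.

```python
# PageRank-style power iteration over a toy link graph, to illustrate ranking
# layered on top of content matching. Graph, damping factor, and iteration
# count are illustrative; this is not Google's implementation.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        rank = {
            p: (1 - damping) / len(pages)
               + damping * sum(rank[q] / len(links[q])
                               for q in pages if p in links[q])
            for p in pages
        }
    return rank

# Toy web: each page maps to the pages it links to.
links = {
    "home": ["docs", "blog"],
    "docs": ["home"],
    "blog": ["home", "docs"],
}
print(pagerank(links))  # "home" and "docs" end up ranked above "blog"
```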
See my sibling post citing Roger Schank who coined the terms, and quoting Marvin Minsky's paper, "Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy" and the "Neats and Scruffies" wikipedia page.
Leela AI was founded by Henry Minsky and Cyrus Shaoul, and is inspired by ideas about child development from Jean Piaget, Seymour Papert, Marvin Minsky, and Gary Drescher (whose ideas are described in his book “Made-Up Minds”).
https://mitpress.mit.edu/9780262517089/made-up-minds/
>Leela Platform is powered by Leela Core, an innovative AI engine based on research at the MIT Artificial Intelligence Lab. With its dynamic combination of traditional neural networks for pattern recognition and causal-symbolic networks for self-discovery, Leela Core goes beyond accurately recognizing objects to comprehend processes, concepts, and causal connections.
>Leela Core is much faster to train than conventional NNs, using 100x less data and enabling 10x less time-to-value. This highly resilient AI can quickly adjust to changes and explain what it is sensing and doing via the Leela Viewer dashboard. [...]
The key to regulating AI is explainability. The key to explainability may be causal AI.
https://leela.ai/post/the-key-to-regulating-ai-is-explainabi...
>[...] For example, the Leela Core engine that drives the Leela Platform for visual intelligence in manufacturing adds a symbolic causal agent that can reason about the world in a way that is more familiar to the human mind than neural networks. The causal layer can cross-check Leela Core's traditional NN components in a hybrid causal/neural architecture. Leela Core is already better at explaining its decisions than NN-only platforms, making it easier to troubleshoot and customize. Much greater transparency is expected in future versions. [...]
That is precisely Doug Lenat and Gary Marcus' thinking on how to combine them (July 31st, 2023; Lenat's last paper).
But I guess I also don't know enough about the CYC approach to say. Maybe neither of them fit what I think of as "neat".
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
"We should take our cue from biology rather than physics..." -Marvin Minsky
https://grandtextauto.soe.ucsc.edu/2008/02/14/ep-44-ai-neat-...
EP 4.4: AI, Neat and Scruffy
by Noah Wardrip-Fruin
A name that does appear in Weizenbaum’s book, however, is that of Roger Schank, Abelson’s most famous collaborator. When Schank arrived from Stanford to join Abelson at Yale, together they represented the most identifiable center for a particular approach to artificial intelligence: what would later (in the early 1980s) come to be known as the “scruffy” approach. [7] Meanwhile, perhaps the most identifiable proponent of what would later be called the “neat” approach, John McCarthy, remained at Stanford.
McCarthy had coined the term “artificial intelligence” in the application for the field-defining workshop he organized at Dartmouth in 1956. Howard Gardner, in his influential reflection on the field, The Mind’s New Science (1985), characterized McCarthy’s neat approach this way: “McCarthy believes that the route to making machines intelligent is through a rigorous formal approach in which the acts that make up intelligence are reduced to a set of logical relationships or axioms that can be expressed precisely in mathematical terms” (154).
This sort of approach lent itself well to problems easily cast in formal and mathematical terms. But the scruffy branch of AI, growing out of fields such as linguistics and psychology, wanted to tackle problems of a different nature. Scruffy AI built systems for tasks as diverse as rephrasing newspaper reports, generating fictions, translating between languages, and (as we have seen) modeling ideological reasoning. In order to accomplish this, Abelson, Schank, and their collaborators developed an approach quite unlike formal reasoning from first principles. One foundation for their work was Schank’s “conceptual dependency” structure for language-independent semantic representation. Another foundation was the notion of “scripts” (later “cases”) an embryonic form of which could be seen in the calling sequence of the ideology machine’s executive. Both of these will be considered in more detail in the next chapter.
Scruffy AI got attention because it achieved results in areas that seemed much more “real world” than those of other approaches. For comparison’s sake, consider that the MIT AI lab, at the time of Schank’s move to Yale, was celebrating success at building systems that could understand the relationships in stacks of children’s wooden blocks. But scruffy AI was also critiqued — both within and outside the AI field — for its “unscientific” ad-hoc approach. Weizenbaum was unimpressed, in particular, with the conceptual dependency structures underlying many of the projects, writing, “Schank provides no demonstration that his scheme is more than a collection of heuristics that happen to work on specific classes of examples” (199). Whichever side one took in the debate, there can be no doubt that scruffy projects depended on coding large amounts of human knowledge into AI systems — often more than the authors acknowledged, and perhaps much more than they realized.
[...]
[7] The terms “neat” and “scruffy” were introduced into the AI and cognitive science discourse by Abelson’s 1981 essay, in which he attributes the coinage to “an unnamed but easily guessable colleague” — Schank.
https://cse.buffalo.edu/~rapaport/676/F01/neat.scruffy.txt
Article: 35704 of comp.ai
From: engelson@bimacs.cs.biu.ac.il (Dr. Shlomo (Sean) Engelson)
Newsgroups: comp.ai
Subject: Re: who first used "scruffy" and "neat"?
Date: 25 Jan 1996 08:17:13 GMT
Organization: Bar-Ilan University Computer Science
In article <4e2th9$lkm@cantaloupe.srv.cs.cmu.edu> Lonnie Chrisman <ldc+@cs.cmu.edu> writes:
so@brownie.cs.wisc.edu (Bryan So) wrote:
>A question of curiosity. Who first used the terms "scruffy" and "neat"?
>And in what document? How about "strong" and "weak"?
Since I don't see a response yet, I'll take a stab. The earliest use of
"scruffy" and "neat" that comes to my mind was in David Chapman's "Planning
for Conjunctive Goals", Artificial Intelligence 32:333-377, 1987. "Weak"
evidence for this being the earliest use is that he does not cite any earlier
use of the terms, but perhaps someone else will correct me and give an
earlier citation.
One earlier citation is Eugene Charniak's paper in AAAI 1986, "A Neat
Theory of Marker Passing". I think, though, that the terms go way
back in common parlance, almost certainly to the 70s at least. Any of
the "old-timers" out there like to comment?
[...]
Article: 35781 of comp.ai
From: fass@cs.sfu.ca (Dan Fass)
Newsgroups: comp.ai
Subject: Re: who first used "scruffy" and "neat"?
Date: 26 Jan 1996 10:03:35 -0800
Organization: Simon Fraser University, Burnaby, B.C.
Abelson (1981) credits the neat/scruffy distinction to Roger Schank.
Abelson says, ``an unnamed but easily guessable colleague of mine
... claims that the major clashes in human affairs are between the
"neats" and the "scruffies". The primary concern of the neat is
that things should be orderly and predictable while the scruffy
seeks the rough-and-tumble of life as it comes'' (p. 1).
Abelson (1981) argues that these two prototypic identities --- neat
and scruffy --- ``cause a very serious clash'' in cognitive science
and explores ``some areas in which a fusion of identities seems
possible'' (p. 1).
- Dan Fass
REF
Abelson, Robert P. (1981).
Constraint, Construal, and Cognitive Science.
Proceedings of the 3rd Annual Conference of the Cognitive Science
Society, Berkeley, CA, pp. 1-9.
[...]
Aaron Sloman, 1989: "Introduction: Neats vs Scruffies"
https://www.cs.bham.ac.uk//research/projects/cogaff/misc/scr...
>There has been a long-standing opposition within AI between "neats" and "scruffies" (I think the terms were first invented in the late 70s by Roger Schank and/or Bob Abelson at Yale University).
>The neats regard it as a disgrace that many AI programs are complex, ill-structured, and so hard to understand that it is not possible to explain or predict their behaviour, let alone prove that they do what they are intended to do. John McCarthy in a televised debate in 1972 once complained about the "Look Ma no hands!" approach. Similarly, Carl Hewitt, complained around the same time, in seminars, about the "Hairy kludge (pronounced klooge) a month" approach to software development. (His "actor" system was going to be a partial solution to this.)
>The scruffies regard messy complexity as inevitable in intelligent systems and point to the failure so far of all attempts to find workable clear and general mechanisms, or mathematical solutions to any important AI problems. There are nice ideas in the General Problem Solver, logical theorem provers, and suchlike but when confronted with non-toy problems they normally get bogged down in combinatorial explosions. Messy complexity, according to scruffies, lies in the nature of problem domains (e.g. our physical environment) and only by using large numbers of ad-hoc special-purpose rules or heuristics, and specially tailored representational devices can problems be solved in a reasonable time.
Roger Schank
https://en.wikipedia.org/wiki/Roger_Schank
Robert Abelson
https://en.wikipedia.org/wiki/Robert_Abelson
Marvin Minsky
https://en.wikipedia.org/wiki/Marvin_Minsky
Neats and scruffies
https://en.wikipedia.org/wiki/Neats_and_scruffies
>Scruffy projects in the 1980s
>The scruffy approach was applied to robotics by Rodney Brooks in the mid-1980s. He advocated building robots that were, as he put it, Fast, Cheap and Out of Control, the title of a 1989 paper co-authored with Anita Flynn. Unlike earlier robots such as Shakey or the Stanford cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, and they did not plan their actions using formalizations based on logic, such as the 'Planner' language. They simply reacted to their sensors in a way that tended to help them survive and move.[13]
>Douglas Lenat's Cyc project, initiated in 1984 as one of the earliest and most ambitious projects to capture all of human knowledge in machine-readable form, is "a determinedly scruffy enterprise".[14] The Cyc database contains millions of facts about all the complexities of the world, each of which must be entered one at a time, by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms with natural language processing that could study the text available over the internet), no such project has yet been successful.
[...]
>John Brockman writes "Chomsky has always adopted the physicist's philosophy of science, which is that you have hypotheses you check out, and that you could be wrong. This is absolutely antithetical to the AI philosophy of science, which is much more like the way a biologist looks at the world. The biologist's philosophy of science says that human beings are what they are, you find what you find, you try to understand it, categorize it, name it, and organize it. If you build a model and it doesn't work quite right, you have to fix it. It's much more of a "discovery" view of the world."[4]
Abelson's question is: Is it preferable for scruffies to become neater, or for neats to become scruffier? His answer explains why he aspires to be a neater scruffy.
"But I use the example as symptomatic of one kind of approach to the cognitive science fusion problem: you start from a neat, right-wing point of view, but acknowledge some limited role for scruffy, left-wing orientations. The other type of approach is the obvious mirror: you start from the disorderly leftwing side and struggle to be neater about what you are doing. I prefer the latter approach to the former. I will tell you why, and then lay out the beginnings of such an approach."
https://cse.buffalo.edu/~rapaport/676/F01/neat.scruffy.txt
Article: 35781 of comp.ai
From: fass@cs.sfu.ca (Dan Fass)
Newsgroups: comp.ai
Subject: Re: who first used "scruffy" and "neat"?
Date: 26 Jan 1996 10:03:35 -0800
Organization: Simon Fraser University, Burnaby, B.C.
Abelson (1981) credits the neat/scruffy distinction to Roger Schank.
Abelson says, ``an unnamed but easily guessable colleague of mine
... claims that the major clashes in human affairs are between the
"neats" and the "scruffies". The primary concern of the neat is
that things should be orderly and predictable while the scruffy
seeks the rough-and-tumble of life as it comes'' (p. 1).
Abelson (1981) argues that these two prototypic identities --- neat
and scruffy --- ``cause a very serious clash'' in cognitive science
and explores ``some areas in which a fusion of identities seems
possible'' (p. 1).
- Dan Fass
REF
Abelson, Robert P. (1981).
Constraint, Construal, and Cognitive Science.
Proceedings of the 3rd Annual Conference of the Cognitive Science
Society, Berkeley, CA, pp. 1-9.
https://cognitivesciencesociety.org/wp-content/uploads/2019/...
[I'll quote the most relevant first part of the article, which is still worth reading in its entirety if you have time, since scanned two-column PDF files are so hard to read on mobile, and it's so interesting and relevant to Douglas Lenat's work on Cyc.]
CONSTRAINT, CONSTRUAL, AND COGNITIVE SCIENCE
Robert P. Abelson, Yale University
Cognitive science has barely emerged as a discipline -- or an interdiscipline, or whatever it is -- and already it is having an identity crisis.
Within us and among us we have many competing identities. Two particular prototypic identities cause a very serious clash, and I would like to explicate this conflict and then explore some areas in which a fusion of identities seems possible. Consider the two-word name "cognitive science". It represents a hybridization of two different impulses. On the one hand, we want to study human and artificial cognition, the structure of mental representations, the nature of mind. On the other hand, we want to be scientific, be principled, be exact. These two impulses are not necessarily incompatible, but given free rein they can develop what seems to be a diametric opposition.
The study of the knowledge in a mental system tends toward both naturalism and phenomenology. The mind needs to represent what is out there in the real world, and it needs to manipulate it for particular purposes. But the world is messy, and purposes are manifold. Models of mind, therefore, can become garrulous and intractable as they become more and more realistic. If one's emphasis is on science more than on cognition, however, the canons of hard science dictate a strategy of the isolation of idealized subsystems which can be modeled with elegant productive formalisms. Clarity and precision are highly prized, even at the expense of common sense realism. To caricature this tendency with a phrase from John Tukey (1959), the motto of the narrow hard scientist is, "Be exactly wrong, rather than approximately right".
The one tendency points inside the mind, to see what might be there. The other points outside the mind, to some formal system which can be logically manipulated (Kintsch et al., 1981). Neither camp grants the other a legitimate claim on cognitive science. One side says, "What you're doing may seem to be science, but it's got nothing to do with cognition." The other side says, "What you're doing may seem to be about cognition, but it's got nothing to do with science."
Superficially, it may seem that the trouble arises primarily because of the two-headed name cognitive science. I well remember the discussions of possible names; even though I never liked "cognitive science", the alternatives were worse: abominations like "epistology" or "representonomy".
But in any case, the conflict goes far deeper than the name itself. Indeed, the stylistic division is the same polarization that arises in all fields of science, as well as in art, in politics, in religion, in child rearing -- and in all spheres of human endeavor. Psychologist Silvan Tomkins (1965) characterizes this overriding conflict as that between characterologically left-wing and right-wing world views. The left-wing personality finds the sources of value and truth to lie within individuals, whose reactions to the world define what is important. The right-wing personality asserts that all human behavior is to be understood and judged according to rules or norms which exist independent of human reaction. A similar distinction has been made by an unnamed but easily guessable colleague of mine, who claims that the major clashes in human affairs are between the "neats" and the "scruffies". The primary concern of the neat is that things should be orderly and predictable while the scruffy seeks the rough-and-tumble of life as it comes.
I am exaggerating slightly, but only slightly, in saying that the major disagreements within cognitive science are instantiations of a ubiquitous division between neat right-wing analysis and scruffy left-wing ideation. In truth there are some signs of an attempt to fuse or to compromise these two tendencies. Indeed, one could view the success of cognitive science as primarily dependent not upon the cooperation of linguistics, AI, psychology, etc., but rather, upon the union of clashing world views about the fundamental nature of mentation. Hopefully, we can be open minded and realistic about the important contents of thought at the same time we are principled, even elegant, in our characterizations of the forms of thought.
The fusion task is not easy. It is hard to neaten up a scruffy or scruffy up a neat. It is difficult to formalize aspects of human thought which are variable, disorderly, and seemingly irrational, or to build tightly principled models of realistic language processing in messy natural domains. Writings about cognitive science are beginning to show a recognition of the need for world-view unification, but the signs of strain are clear. Consider the following passage from a recent article by Frank Keil (1981) in Psychological Review, giving background for a discussion of his formalistic analysis of the concept of constraint:
"Constraints will be defined...as formal restrictions that limit the class of logically possible knowledge structures that can normally be used in a given cognitive domain." (p. 198).
Now, what is the word "normally" doing in a statement about logical possibility? Does it mean that something which is logically impossible can be used if conditions are not normal? This seems to require a cognitive hyperspace where the impossible is possible.
It is not my intention to disparage an author on the basis of a single statement infelicitously put. I think he was genuinely trying to come to grips with the reality that there is some boundary somewhere to the penetration of his formal constraint analysis into the vicissitudes of human affairs. But I use the example as symptomatic of one kind of approach to the cognitive science fusion problem: you start from a neat, right-wing point of view, but acknowledge some limited role for scruffy, left-wing orientations. The other type of approach is the obvious mirror: you start from the disorderly leftwing side and struggle to be neater about what you are doing. I prefer the latter approach to the former. I will tell you why, and then lay out the beginnings of such an approach.
[...]
To read why and how:
https://cognitivesciencesociety.org/wp-content/uploads/2019/...
Maybe essentialism just does not work.
The Wikipedia page about Neats and Scruffies that I linked you to is, in my opinion, well written: it clearly defines the meaning of each term and presents plenty of evidence, citations, and background. I'll give you the benefit of the doubt that you've already read and understood it. If you disagree with the history and citations on the Wikipedia page and with all the original papers, books, and people cited and quoted, and can present better evidence and arguments to prove that you're right and they're all wrong, then you are free to try to rewrite history by sharing your own definitions and citations and correcting the errors on Wikipedia. Good luck! I suggest you start by writing suggestions and presenting your evidence on the talk page first, instead of directly editing the Wikipedia page itself, to see what other experts in the field think and to reach consensus; otherwise it will likely be considered vandalism and be reverted.
You seem to be missing the point that the world is not strictly black and white. Ever since the terms were originally coined, the people who defined them, and many others since, have strongly recommended fusing both the "neat" and "scruffy" approaches. LLMs actually do incorporate some ad-hoc "scruffy" aspects into their mathematical "neat" approach, and that's why they work so much better than simple perceptrons or neural nets. But they are still much more "neat" than "scruffy", and combining the two approaches does not flip the meaning of the two terms. I just discussed the fusion of scruffy and neat here, and quoted the original 1981 essay by Robert Abelson that defined the terms and recommended fusing the two different approaches:
And also:
But before you go off and edit the Neats and Scruffies Wikipedia page with your own definitions, please take the time to read the original essay by Robert Abelson that defines the terms, like I did. In the link above, I cited it, tracked down the PDF, and quoted the relevant part of it for you, but you should probably do your homework and read the whole thing before editing the Wikipedia page about it. Be aware that it uses a lot of other technical terms and jargon that have well-known definitions to practitioners in the field, so the common layman definitions of words you learned in grammar school may not apply.
Cyc is clearly the paradigm case of "scruffy" (and like biology), and perceptrons and neural nets are clearly the paradigm case of "neat" (and like physics), and that's how those terms have been widely used for more than four decades.
I think what's interesting in the jargon vs. plain definition tension here is related to what you noted in this most recent comment. It's that the words "neat" and "scruffy" - that is, just the English words, not the AI jargon terms - are not really symmetrical. A scruffy thing can easily become more neat while remaining scruffy, but introducing scruffiness into a neat thing tends to just make it scruffy. Neat is more totalizing.
So you say LLMs still fall into the "neat" camp - AI jargon this time - because of their mathematical core and lineage, and that's fair enough. But you also say that they incorporate "scruffy" techniques - jargon again - and I think that makes them - switching to the English words here - seem pretty scruffy, because the scruffy techniques are themselves scruffy, and incorporating all these different techniques is itself a scruffy thing to do.
LLMs understand plenty, in any way that can be tested. It's really funny when I see making mistakes treated as evidence of a lack of understanding. By that standard, people don't understand anything at all either.
> I often see ChatGPT stumped by simple variations of brain teasers
Only if everything else is exactly as in the basic teaser, and guess what? Humans fall for this too. They see something they've memorized and go full speed ahead. Simply changing the names is enough to get it to solve it.