For people who have only a surface-level understanding of how they work, yes. A nuance of Clarke's law that "any sufficiently advanced technology is indistinguishable from magic" is that the bar is different for everybody, depending on the depth of their understanding of the technology in question. That bar is so low for our largely technologically illiterate public that a bothersome percentage of us have started to augment and even replace religious/mystical systems with AI-powered godbots (LLMs fed "God Mode"/divination/manifestation prompts).
(1) https://www.spectator.co.uk/article/deus-ex-machina-the-dang... (2) https://arxiv.org/html/2411.13223v1 (3) https://www.theguardian.com/world/2025/jun/05/in-thailand-wh...
This is such a bizarre take.
The relation associating each human to the list of all words they will ever say is obviously a function.
> almost magical human-like powers to something that - in my mind - is just MatMul with interspersed nonlinearities.
There's a rich family of universal approximation theorems [0]. Combining layers of linear maps with nonlinear cutoffs can intuitively approximate any nonlinear function in ways that can be made rigorous.
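As a toy illustration of that intuition (my own sketch, nothing from the linked article): even a single hidden layer of random ReLU features, with only the output weights fit by least squares, approximates a nonlinear function like sin(x) quite well.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 500)[:, None]          # inputs
    y = np.sin(x)                                 # target nonlinear function

    W = rng.normal(size=(1, 256))                 # random hidden-layer weights
    b = rng.uniform(-3, 3, size=256)              # random hidden-layer biases
    H = np.maximum(0.0, x @ W + b)                # hidden layer: ReLU of a linear map

    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit only the output layer
    print("max |approx - sin| on the grid:", np.abs(H @ coef - y).max())

Not the theorem itself, of course, just the flavor of it: linear maps plus nonlinear cutoffs buy you a lot of expressive power.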
The reason LLMs are big now is that transformers and large amounts of data made it economical to compute a family of reasonably good approximations.
> The following is uncomfortably philosophical, but: In my worldview, humans are dramatically different things than a function. For hundreds of millions of years, nature generated new versions, and only a small number of these versions survived.
This is just a way of generating certain kinds of functions.
Think of it this way: do you believe there's anything about humans that exists outside the mathematical laws of physics? If so, that's essentially a religious position (or, more literally, a belief in the supernatural). If not, then functions and approximations to functions are what the human experience boils down to.
[0] https://en.wikipedia.org/wiki/Universal_approximation_theore...
https://www.anthropic.com/research/tracing-thoughts-language...
> Clearly computers are deterministic. Are people?
Give an LLM memory and a source of randomness and it's as deterministic as people are.
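To be concrete about the "source of randomness" bit (my own sketch, not anyone's actual serving stack): the sampling step in an LLM is driven by a pseudo-random generator, so for fixed weights, a fixed prompt, and a fixed seed, the output is a pure function of its inputs.

    import numpy as np

    def sample_token(logits, seed):
        # seeded softmax sampling: same inputs -> same output, every time
        rng = np.random.default_rng(seed)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return rng.choice(len(p), p=p)

    logits = np.array([2.0, 0.5, -1.0, 0.1])  # stand-in for a model's next-token scores
    print(sample_token(logits, seed=42) == sample_token(logits, seed=42))  # True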
"Free will" isn't a concept that typechecks in a materialist philosophy. It's "not even wrong". Asserting that free will exists is _isomorphic_ to dualism which is _isomorphic_ to assertions of ensoulment. I can't argue with dualists. I reject dualism a priori: it's a religious tenet, not a mere difference of philosophical opinion.
So, if we're all materialists here, "free will" doesn't make any sense, since it's an assertion that something other than the input to a machine can influence its output.
I agree with the author, but people acting like LLMs are conscious or human isn't weird to me; it's just fraud and lying. Most people have basically zero understanding of what technology or minds are philosophically, so it's an easy sell, and I do think most of these fraudsters likely buy into it themselves for that very reason.
The really sad thing is that people think that because someone runs an AI company, they are somehow an authority on philosophy of mind, which is what lets them fall for this marketing. What these people say about it is absolute garbage; it's not that I disagree with them, it's that it betrays a total lack of curiosity or interest in what LLMs are and in the possible impacts of the kind of technological shift that might occur as LLMs become more widespread. It's not a matter of agreement; it's that they simply don't seem to be aware of the most basic ideas about what things are, what technology is, how it impacts society, and so on.
I'm not surprised by that, though. It's absurd to think that because someone runs some AI lab, or holds a "head of safety/ethics" or whatever garbage job title at one, they actually have even the slightest interest in ethics or any basic familiarity with the major works on the subject.
The author is correct. If people want to read a standard essay articulating this in more depth, check out https://philosophy.as.uky.edu/sites/default/files/Is%20the%2... (the full extrapolation requires establishing what things are, how causality in general operates, and how that relates to artifacts/technology, but that's obviously quite a bit to get into).
The other note would be that sharing an external trait says absolutely nothing about causality, and suggesting a thing is caused by the same thing "even to a way lesser degree" because they share a resemblance is just a non sequitur. It's not a serious thought/argument.
I think I addressed why this weirdness comes up, though. The entire economy is basically dependent on huge productivity growth to keep functioning, so everyone is trying to sell the claim that they can deliver it, and AI is the clearest route, AGI most of all.
https://www.anthropic.com/research/tracing-thoughts-language...
See the section “Does Claude plan its rhymes?”
I think this is sometimes semi-explicit too. For example, this 2017 OpenAI paper on evolutionary algorithms [0] was pretty influential, and I suspect (although I'm an outsider to this field, so take it with a grain of salt) that some versions of reinforcement learning that scale for aligning LLMs borrow some performance tricks from OpenAI's evolutionary approach.
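For the curious, the core trick in that line of work is roughly the following (a toy sketch of the general evolution-strategies recipe, not the paper's actual code): perturb the parameters with Gaussian noise, score each perturbation, and step in the reward-weighted direction of the noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def reward(theta):
        return -np.sum((theta - 3.0) ** 2)       # toy objective, optimum at theta = 3

    theta = np.zeros(5)
    sigma, lr, pop = 0.1, 0.02, 50               # noise scale, learning rate, population size

    for _ in range(300):
        noise = rng.normal(size=(pop, theta.size))
        rewards = np.array([reward(theta + sigma * n) for n in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta += lr / (pop * sigma) * noise.T @ rewards   # ES gradient estimate

    print(np.round(theta, 1))                    # converges to roughly [3. 3. 3. 3. 3.]

No backprop anywhere: the "gradient" is estimated purely from how the reward responds to random parameter nudges.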
The correct level of analysis is not the substrate (silicon vs. wetware) but the computational principles being executed. A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).
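Concretely, the "dynamic sparsity" point can be illustrated with a toy top-2 MoE router (my own sketch, not taken from the linked post): each token activates only a couple of experts, so most of the layer's parameters stay dormant for any given input.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 8, 2
    tokens = rng.normal(size=(4, d_model))                    # 4 token embeddings
    router = rng.normal(size=(d_model, n_experts))            # router projection
    experts = rng.normal(size=(n_experts, d_model, d_model))  # one toy weight matrix per expert

    logits = tokens @ router                                  # (4, n_experts) routing scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]             # top-2 experts per token

    out = np.zeros_like(tokens)
    for t in range(len(tokens)):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max()); w /= w.sum()             # softmax over the selected experts only
        for weight, e in zip(w, top[t]):
            out[t] += weight * (tokens[t] @ experts[e])       # only 2 of the 8 experts ever run
    print("experts used per token:", top)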
To dismiss these systems as incomparable to human cognition because their form is different is to miss the point. We should not be comparing a function to a soul, but comparing the functional architectures of two different information processing systems. The debate should move beyond the sterile dichotomy of "human vs. machine" to a more productive discussion of "function over form."
I elaborate on this here: https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...
Is it? Do we know how human brains operate? We know the basic architecture of them, so we have a map, but we don't know the details.
"The cellular biology of brains is relatively well-understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete." [1]
"Despite a century of anatomical, physiological, and molecular biological efforts scientists do not know how neurons by their collective interactions produce percepts, thoughts, memories, and behavior. Scientists do not know and have no theories explaining how brains and central nervous systems work." [1]
That lack of understanding, I believe, is a major part of the author's point.
[1] "How far neuroscience is from understanding brains" - https://pmc.ncbi.nlm.nih.gov/articles/PMC10585277/#abstract1
What if instead of defining all behaviors upfront, we created conditions for patterns to emerge through use?
Repository: https://github.com/justinfreitag/v4-consciousness
The key insight was thinking about consciousness as an organizing process rather than a system state. This shifts the focus from what the system has to what it does: organize experience into coherent understanding.
https://www.masterclass.com/articles/anthropomorphism-vs-per...
They do not, you are mixing up terms.
> People talk about inanimate objects like they are persons. Ships, cars, etc.
Which is called “personification”, and is a different concept from anthropomorphism.
Effectively no one really thinks their car is alive. Plenty of people think the LLM they use is conscious.
https://www.masterclass.com/articles/anthropomorphism-vs-per...
https://github.com/dmf-archive/IPWT
https://dmf-archive.github.io/docs/posts/backpropagation-as-...
But you're right, capital only cares about performance.
https://www.frontiersin.org/journals/computational-neuroscie...
I don't have any opinion on the qualia debates honestly. I suppose I don't know what it feels like for an ant to find a tasty bit of sugar syrup, but I believe it's something that can be described with physics (and by extension, things like chemistry).
But we do know some things about some qualia. We know how red light works, we have a good idea of how photoreceptors work, etc. We know some people are red-green colorblind, so their experience of red and green is mushed together. We can also have people make qualia judgments and watch their brains with fMRI or other tools.
I think maybe an interesting question here is: obviously it's pleasurable to animals to have their reward centers activated. Is it pleasurable or desirable for AIs to be rewarded? Especially if we tell them (as some prompters do) that they feel pleasure if they do things well and pain if they don't? You can ask this sort of question for both the current generation of AIs and future generations.
> Our analysis reveals that emergent abilities in language models are merely “pseudo-emergent,” unlike human abilities which are “authentically emergent” due to our possession of what we term “ontological privilege.”
https://en.wikipedia.org/wiki/Intentional_stance
I think the design stance is appropriate for understanding and predicting LLM behavior, and the intentional stance is not.
But let me substantiate before you (rightly) accuse me of just posting a shallow dismissal.
> They don't want to.
Who's they? How could you possibly know? Are you a mind reader? Worse, a mind reader of the masses?
> It seems a lot of people are uncomfortable and defensive about anything that may demystify LLMs.
That "it seems" is doing some serious work over there. You may perceive and describe many people's comments as "uncomfortable and defensive", but that's entirely your own head cannon. All it takes is for someone to simply disagree. It's worthless.
Have you thought about other possible perspectives? Maybe people have strong opinions because they consider what things present as more important than what they are? [0] Maybe people have strong opinions because they're borrowing from other facets of their personal philosophies, which are what they actually feel strongly about? [1] Surely you can appreciate that there's more to a person than what comments you read as uniformly "uncomfortable and defensive" allow you to surmise? This is such a blatant, textbook kneejerk reaction: "They're doing the thing I wanted to think they do anyway, so clearly they do it for the reasons I assume. Oh how correct I am."
> to any notions of trying to bring discourse about LLMs down from the clouds
(according to you)
> The campaigns by the big AI labs have been quite successful.
(((according to you)))
"It's all the big AI labs having successfully manipulated the dumb sheep which I don't belong to!" Come on... Is this topic really reaching political grifting kind of levels?
[0] tangent: if a feature exists but even after you put an earnest effort into finding it you still couldn't, does that feature really exist?
[1] philosophy is at least kind of a thing https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosoph...
Here’s a quote from the ruling:
“First, Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for training or learning as such. Everyone reads texts, too, then writes new texts. They may need to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable. For centuries, we have read and re-read books. We have admired, memorized, and internalized their sweeping themes, their substantive points, and their stylistic solutions to recurring writing problems.”
They literally compare an LLM learning to a person learning and conflate the two. Anthropic will likely win this case because of this anthropomorphization.
I think physics offers a good parallel. We love our simplified examples, and there's a big culture of trying to explain things to the lay person (mostly because the topics are incredibly complex). But how many people have conflated the "observer" of a quantum event with a human and do not consider a photon to be an observer? How many people think that in Schrödinger's Cat the cat is both alive and dead?[0] Or believe in a multiverse. There are plenty of examples we can point to.
While these analogies *can* be extremely helpful, they *can* also be extremely harmful. This is especially true as information is usually passed through a game of telephone[1]. There is information loss and with it, interpretation becomes more difficult. Often a very subtle part can make a critical distinction.
I'm not against anthropomorphization[2], but I do think we should be cautious about how we use it. The imprecise nature of it is the exact reason we should be mindful of when and how to use it. We know that the anthropomorphized analogy is wrong. So we have to think about "how wrong" it is for a given setting. We should also be careful to think about how it may be misinterpreted. That's all I'm trying to say. And isn't this what we should be doing if we want to communicate effectively?
[0] It is not. It is one or the other. The point of this thought experiment is that we cannot know the answer without looking inside. There is information loss, and the event is not deterministic. It relates to the Heisenberg Uncertainty Principle, Gödel's Incompleteness, and the Halting Problem; all of these are (loosely) related around the inability to have absolute determinism.
[1] https://en.wikipedia.org/wiki/Telephone_game
[2] >>44494022
> Statements such as "an AI agent could become an insider threat so it needs monitoring" are simultaneously unsurprising (you have a randomized sequence generator fed into your shell, literally anything can happen!) and baffling (you talk as if you believe the dice you play with had a mind of their own and could decide to conspire against you).
> we talk about "behaviors", "ethical constraints", and "harmful actions in pursuit of their goals". All of these are anthropocentric concepts that - in my mind - do not apply to functions or other mathematical objects.
An AI agent, even if it's just "MatMul with interspersed nonlinearities" can be an insider threat. The research proves it:
[PDF] See 4.1.1.2: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...
It really doesn't matter whether the AI agent is conscious or just crunching numbers on a GPU. If something inside your system is capable, given some inputs, of sabotaging and blackmailing your organization on its own (which is to say, taking on the realistic behavior of a threat actor), the outcome is the same! You don't need to believe it's thinking; the moment this software has flipped its bits into "blackmail mode", it's acting nefariously.
The vocabulary used to describe what's happening is completely and utterly moot: the software is printing out some reasoning for its actions _and then attempting the actions_. It's taking "harmful actions", and the printed context appears to demonstrate a goal that the software is working towards. Whether or not that goal was invented through some linear algebra isn't going to make your security engineers sleep any better.
> This muddles the public discussion. We have many historical examples of humanity ascribing bad random events to "the wrath of god(s)" (earthquakes, famines, etc.), "evil spirits" and so forth. The fact that intelligent highly educated researchers talk about these mathematical objects in anthropomorphic terms makes the technology seem mysterious, scary, and magical.
The anthropomorphization, IMO, is due to the fact that it's _essentially impossible_ to talk about the very real, demonstrable behaviors and problems that LLMs exhibit today without using terms that evoke human functions. We don't have another word for "do" or "remember" or "learn" or "think" when it comes to LLMs that _isn't_ anthropomorphic, and while you can argue endlessly about "hormones" and "neurons" and "millions of years of selection pressure", that's not going to help anyone have a conversation about their work. If AI researchers started coming up with new, non-anthropomorphic verbs, it would be objectively worse and more complicated in every way.
Under my model, these systems you have described are conscious, but not in a way that they can communicate or experience time or memory the way human beings do.
My general list of questions for those presenting a model of consciousness is:
1) Are you conscious? (Hopefully you say yes, or our friend Descartes would like a word with you!)
2) Am I conscious? How do you know?
3) Is a dog conscious?
4) Is a worm conscious?
5) Is a bacterium conscious?
6) Is a human embryo / baby conscious? And if so, was there a point at which it was not conscious, and what does it mean for that switch to occur?
What is your view of consciousness?
As simulators, LLMs can simulate many things, including agents that exhibit human-like properties. But LLMs themselves are not agents.
More on this idea here: https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/agi-s...
This perspective makes a lot of sense to me. Still, I wouldn't avoid anthropomorphization altogether. First, in some cases, it might be a useful mental tool to understand some aspect of LLMs. Second, there is a lot of uncertainty about how LLMs work, so I would stay epistemically humble. The second argument applies in the opposite direction as well: for example, it's equally bad to say that LLMs are 100% conscious.
On the other hand, if someone argues against anthropomorphizing LLMs, I would avoid phrasing it as: "It's just matrix multiplication." The article demonstrates why this is a bad idea pretty well.
Joking aside, I read too quickly. I got my wires crossed when I responded, mixing up who was who. My bad >.<
But I do agree with scarface and disagree with you. Let me try to respond to this directly. There's a lot to unpack here, but I do ask that you actually read the whole thing. There is nuance here and I think it is important.
> I’d like to think that this forum is also a place for the proverbial high school kid ... to learn a thing or two from the greybeards.
I agree that HN is *also* this place. But I still do not believe that means we need to assume non-expertise.
Think of HN as a place where "greybeards" (or, more accurately, experts, because only a few have gray beards) hang out, but where there is no gatekeeping. We don't check your credentials when you come in, nor do we ask you to pass any test of skill. It's open to all. But it is still the place where "experts" hang out. Because we don't check for credentials, we'll treat noobs as peers. Why are you saying this is bad?
Anyone is welcome to sit at the "adult table", but that means having adult conversations. Right? It'd be pretty... childish... for a child to sit at the adult table and expect everyone to start talking about kid stuff.
It's okay if newbies come in and don't understand what is being discussed. In fact, being confused is the very first step to learning! I'll put it this way: the first year (maybe 2) of my PhD I was just reading papers and had no idea what was going on. I had to work and work to understand. Had to ask lots of questions to lots of people (consequently getting over the fear of feeling dumb as well as the fear of asking questions). Then, at some point in time I realized I do know what's going on and being discussed. This is a critical skill to becoming a graybeard. You'll constantly have to wade through waters where you're in well over your head.
It is learning through immersion.
We should help noobs. I frequently say "you can't have wizards without noobs." I don't want to gatekeep and I do actually think we should help the noobs. I need this to be clear[1]
BUT that doesn't mean we should change our conversations between ourselves. To do so would destroy the very reason we come here. There are so few places on the internet where you can talk and operate under the assumption that the other person is reasonable well informed about tech. Frankly, many of those places get destroyed because they get dominated by noobs who change the average level of conversation. While we don't want to kick out noobs, it is *THEIR PREROGATIVE* to ask for help and ask for people to elaborate. There's no shame in this. It's the exact same thing we expect from another expert! It is treating noobs equally. And frankly, if people do make fun of the noobs or treat them disrespectfully I'll gladly downvote, flag them, and likely chastise them. Such a response is rather common around here too (which is what makes it welcoming to noobs).
Ultimately, unless we start credential checking (aka gatekeeping) we have 2 options:
- Treat everyone as experts
- Treat everyone as noobs
If we have to modify our language and explain every subtle nuanced detail, well... why would I come here? I'm already a fairly verbose person, and I don't want to write textbooks. I don't expect people to read textbooks either! I don't come to HN to teach. Nor do I want to come here to be lectured. I would find it insulting if the presumption was that I was a noob.
I come to talk with my peers. Some are direct peers, with expertise in my domain, and some are not. I happily ask questions to those with expertise in other domains and so should noobs. But unless someone makes a pretty egregious assumption (e.g. a very niche subject), then pretty much nobody is going to say something about it. That's perfectly okay. Frankly, being comfortable with not knowing and asking for someone to elaborate is one of the, if not *THE*, most important skill required to become a graybeard. You can't know everything, even about a highly specific domain. There's infinite depth and infinite breadth.
So if you don't know, just ask. It's okay. That's really the only way we can both have expert communities AND not gatekeep.
So I ask you:
Where can g̶r̶a̶y̶b̶e̶a̶r̶d̶s̶ experts go to hang out? Specifically, to hang out with other experts.
TLDR:
If you walk into a biker bar, don't chastise someone who assumes you know something about motorcycles.
[0] >>44492437
[1] I even taught a lot during my PhD and was a rather popular TA. The reason being that I am more than happy to help and even would extend my office hours to make sure students got their questions answered. A class is formed through a partnership, not a dictatorship.
Um, that's what I said.
And of course we know that LLMs don't have qualia. Heck, even humans don't have qualia: https://web.ics.purdue.edu/~drkelly/DennettQuiningQualia1988...