The author largely takes the view that it is more productive to ignore anthropomorphic representations and focus on the more concrete, material, technical systems. I’m with them there… but only to a point. The flip side of all this is, of course, the idea that there is still something emergent, unplanned, and mind-like. So even if it is a stochastic system following rules, the rules are clearly complex enough (to the tune of billions of operations, with signals propagating through some sort of resonant structure, if you take a filter-impulse-response view of sequential matmuls) to result in emergent properties. Even if we (people interested in LLMs, with at least some knowledge of ML mathematics and systems) “know better” than to believe these systems possess morals, ethics, feelings, personalities, etc., the vast majority of people have no access to a meaningful understanding of the mathematical, functional representation of an LLM and will not take that view. For all intents and purposes, the systems will at least seem to have those anthropomorphic properties, so it seems useful to ask questions from that lens as well.
In other words, just as it’s useful to analyze and study these things as the purely technical systems they ultimately are, it is also, probably, useful to analyze them from the qualitative, ephemeral, experiential perspective through which most people engage with them, no?
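(If the “filter impulse response” framing above sounds hand-wavy, here’s a toy NumPy sketch of what I mean - entirely my own illustration, not how a real transformer works, since those interleave nonlinearities and attention. The point is just that iterating even a single matmul over a signal acts like a filter that locks onto a dominant “resonant” mode:)

    import numpy as np

    # Toy "impulse response" of stacked linear layers: feed in an impulse
    # and repeatedly apply one fixed matrix. Symmetric so eigenvalues are
    # real and the iteration provably converges to a single direction.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8))
    W = (A + A.T) / 2           # stand-in "weight" matrix
    x = rng.normal(size=8)      # the input "impulse"

    for _ in range(100):
        x = W @ x
        x /= np.linalg.norm(x)  # track direction only, not magnitude

    # The signal converges to the dominant eigenvector: the direction the
    # system "resonates" at, regardless of what the input impulse was.
    vals, vecs = np.linalg.eigh(W)
    dominant = vecs[:, np.argmax(np.abs(vals))]
    print(abs(x @ dominant))    # ~1.0: locked onto the resonant mode

Billions of such operations, with nonlinearities in between, is where the “emergent” part comes in.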
Why would you ever want to amplify a false understanding that has the potential to affect serious decisions across various topics?
LLMs reflect (and badly, I may add) aspects of the human thought process. If you take a leap and say they are anything more than that, you might as well start considering the person appearing in your mirror to be a living being.
Literally (and I literally mean it) there is no difference. The fact that a human image comes out of a mirror has no relation whatsoever to the mirror's physical attributes and functional properties. It has to do only with the fact that a man is standing in front of it. Stop feeding the LLM with data artifacts of human thought and it will immediately stop reflecting back anything resembling a human.
For people who have only a surface-level understanding of how they work, yes. A nuance of Clarke's law that "any sufficiently advanced technology is indistinguishable from magic" is that the bar differs for everybody, depending on the depth of their understanding of the technology in question. That bar is so low for our largely technologically illiterate public that a bothersome percentage of us have started to augment and even replace religious/mystical systems with AI-powered godbots (LLMs fed "God Mode"/divination/manifestation prompts).
(1) https://www.spectator.co.uk/article/deus-ex-machina-the-dang... (2) https://arxiv.org/html/2411.13223v1 (3) https://www.theguardian.com/world/2025/jun/05/in-thailand-wh...
I think it is inevitable that some - many - people will come to the conclusion that these systems have “ethics,” “morals,” etc., even if you or I personally do not think they do. Given that many people may reach that conclusion, regardless of whether the systems “actually” have such properties, I think it is useful and even necessary to ask questions like the following: “If someone engages with this system and concludes that it has ethics, what sort of ethics will they be likely to believe the system has? If they conclude that it has ‘world views,’ what ‘world views’ are they likely to conclude the system has, even if other people think it’s nonsensical to say it has world views?”
> The fact that a human image comes out of a mirror has no relation whatsoever to the mirror's physical attributes and functional properties. It has to do only with the fact that a man is standing in front of it.
Surely this is not quite accurate - the material properties - surface roughness, reflectivity, geometry, etc - all influence the appearance of a perceptible image of a person. Look at yourself in a dirty mirror, a new mirror, a shattered mirror, a funhouse distortion mirror, a puddle of water, a window… all of these produce different images of a person, with different attendant phenomenological experiences for the person seeing their reflection. To take that a step further: the entire practice of portrait photography is predicated on the idea that the collision of different technical systems with the real world can produce different semantic experiences, and it’s the photographer’s role to tune and guide the system to produce some contingent affect in the person viewing the photograph at some point in the future. No, there is no “real” person in the photograph, and yet that photograph can still convey something of person-ness, emotion, memory, and so on. This contingent intersection of optics, chemical reactions, lighting, posture, and more has the capacity to transmit something through time and space to another person. It’s not just a meaningless arrangement of chemical structures on paper.
> Stop feeding the LLM with data artifacts of human thought and it will immediately stop reflecting back anything resembling a human.
But, we are feeding it with such data artifacts and will likely continue to do so for a while, and so it seems reasonable to ask what it is “reflecting” back…
We know that Newton's laws are wrong, and that you have to take special and general relativity into account. Why would we ever teach anyone Newton's laws any more?
It’s astounding to me that so much of HN reacts so emotionally to LLMs, to the point of denying there is anything at all interesting or useful about them. And don’t get me started on the “I am choosing to believe falsehoods as a way to spite overzealous marketing” crowd.
What fallacy is that? I’m a fan of logical fallacies, but I’ve never heard that claim before, nor am I finding any reference with a quick search.
Not sure though; the point they’re making isn’t really clear to me either.
What you identify as emergent and mind-like is a direct result of these tools being able to mimic human communication patterns unlike anything we've ever seen before. This capability is very impressive and has a wide range of practical applications that can improve our lives, and also cause great harm if we're not careful, but any semblance of intelligence is an illusion. An illusion that many people in this industry obsessively wish to propagate, because thar be gold in them hills.
It doesn't have a name, but I have repeatedly noticed arguments of the form "X cannot have Y, because <explains in detail the mechanism that makes X have Y>". I wanna call it the "fallacy of reduction", maybe: the idea that because a trait can be explained by a process, the trait is thereby proven absent.
(I.e., in this case: "LLMs cannot think, because they just predict tokens." Yes, inasmuch as they think, they do so by predicting tokens. You have to actually show why predicting tokens is insufficient to produce thought.)
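(To pin down what "just predict tokens" refers to, here's a toy sketch of the sampling loop - logits_fn is a made-up stand-in for the actual network, not any real API. Note how trivial the loop itself is; all the contested "thinking" would have to live inside the function the loop calls:)

    import numpy as np

    # Toy autoregressive sampler. The outer loop really is "just
    # predicting tokens"; everything contested lives inside logits_fn,
    # which here is a random stand-in for billions of learned parameters.
    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def logits_fn(context):
        # Placeholder for the network: unnormalized scores per token.
        return rng.normal(size=len(vocab))

    context = ["the"]
    for _ in range(6):
        logits = logits_fn(context)
        probs = np.exp(logits - logits.max())  # softmax
        probs /= probs.sum()
        context.append(vocab[rng.choice(len(vocab), p=probs)])

    print(" ".join(context))  # gibberish here; coherence is learned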
This is too dismissive, because it rests on the assumption that we have a mechanistic model of the brain accurate enough to know when something is or is not mind-like. This just isn't the case.
Maybe there is some scientific aspect of interest here that I do not grasp; I would assume it can make sense in some context of psychological study. My point is that if you go that route, you accept the premise that "something human-like is there," which, by that person's understanding, will have tremendous consequences. Them seeing you accept their premise (even for study) amplifies their wrong conclusions; that's all I'm saying.
> Surely this is not quite accurate - the material properties - surface roughness, reflectivity, geometry, etc - all influence the appearance of a perceptible image of a person.
These properties are completely irrelevant to the image of the person. A mirror will reflect a rock, a star, a chair, a goose, a human. My point about LLMs is similar: they reflect whatever you put in there.
It is like putting veggies in the fridge, opening it up the next day, and saying, "Woah! There are veggies in my fridge, just like on my farm! My fridge is farm-like because veggies come out of it."