zlacker

[return to "A non-anthropomorphized view of LLMs"]
1. elliot+qh 2025-07-07 01:17:16
>>zdw+(OP)
To claim that LLMs do not experience consciousness requires a model of how consciousness works. The author has not presented such a model, instead relying on emotive language that leans on the apparent absurdity of the claim. I would say that any model of consciousness one presents often comes off as just as absurd as the claim that LLMs experience it. It's a great exercise to sit down and write out your own perspective on how consciousness works, to feel out where the holes are.

The author also claims that a function (R^n)^c -> (R^n)^c is dramatically different from the human experience of consciousness. Yet the author's text that I am reading, and any information they can communicate to me, exists entirely in (R^n)^c.
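
To make the function-shaped framing concrete, here is a toy sketch in Python/NumPy (the single weight matrix W and the one attention-like layer are my own simplification, not anything from the article): one step of such a model is just a deterministic map from c vectors in R^n to c vectors in R^n, with no state other than its inputs.

    import numpy as np

    def toy_llm_step(tokens: np.ndarray, W: np.ndarray) -> np.ndarray:
        """A single attention-like step as a pure function on a context.

        tokens: shape (c, n) -- a context of c embedding vectors in R^n.
        W:      shape (n, n) -- stand-in for all of the model's weights.
        Returns an array of shape (c, n): the (R^n)^c -> (R^n)^c shape
        described above, computed from nothing but the inputs.
        """
        scores = tokens @ W @ tokens.T                   # (c, c) pairwise interactions
        scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)         # row-wise softmax
        return attn @ tokens                             # mix the context vectors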

2. quonn+SU 2025-07-07 09:01:32
>>elliot+qh
> To claim that LLMs do not experience consciousness requires a model of how consciousness works.

Nope. What can be asserted without evidence can also be dismissed without evidence. Hitchens's razor.

You know you have consciousness (by the very fact that you can observe it in yourself), and that's evidence. Because other humans are genetically, and in every other way, nearly identical to you, you can infer it for them as well. Because mammals are very similar, many people (but not everyone) infer it for them as well. There is zero evidence for LLMs, and their _very_ construction suggests that they are like a calculator, or Excel, or any other piece of software, no matter how smart they may be or how many tasks they can do in the future.

Additionally, I am really surprised by how many people here confuse consciousness with intelligence. Have you never paused for a second in your life to "just be"? Done any meditation? Or even just existed for a few seconds without a train of thought? It is very obvious that language and consciousness are completely unrelated: consciousness does not require language, and I doubt it even requires intelligence.

Consider this:

In the end, an LLM could be executed (slowly) on a CPU that accepts very basic _discrete_ instructions, such as ADD and MOV. We know this for a fact. Those instructions can be executed arbitrarily slowly. There is no reason whatsoever to suppose that it should feel like anything to be the CPU, to say nothing of how it would subjectively feel to be a MOV instruction. It's ridiculous. It's unscientific. It's like believing that there's a spirit in the tree you see outside, just because - why not? - why wouldn't there be a spirit in the tree?
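
To make the reduction concrete, here is a deliberately naive sketch in plain Python (my own toy illustration, not anyone's actual implementation): essentially every LLM layer bottoms out in matrix products, and a matrix product is nothing but scalar multiplies and adds, each of which compiles down to a handful of MOV/MUL/ADD-style instructions.

    def matvec(W, x):
        """A matrix-vector product written as nothing but scalar multiplies and adds.

        Each pass through the inner loop is a scalar multiply-accumulate, i.e. a
        few MOV/MUL/ADD-style instructions, and nothing about the result changes
        if each one is executed arbitrarily slowly.
        """
        out = [0.0] * len(W)
        for i in range(len(W)):           # one output element at a time
            acc = 0.0
            for j in range(len(x)):       # scalar multiply-accumulate
                acc += W[i][j] * x[j]
            out[i] = acc
        return out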
