Because morals, values, consciousness, etc. could just be subgoals that arose through evolution because they support the main goals of survival and procreation.
And if it is baffling to think that such a system could arise, how do you think life and humans came into existence in the first place? How could that be possible? It has already happened, from a far unlikelier and stranger starting point. And couldn't the whole world and its timeline, in theory, be represented as a deterministic function? And if not, why should "randomness" or anything else be what brings life into existence?
It is similar to how human brains operate. LLMs are the (current) culmination of at least 80 years of research on building computational models of the human brain.
When a human makes a bad choice, it can end that human's life. When an LLM makes its worst choice, it just gets told "no, do it again, let me make it easier."
Ultimately this matters from the standpoint of evolution and survival of the fittest, though it makes the question of "identity" very complex. But death still matters, because it signals which traits are more likely to carry on into new generations, for both humans and LLMs.
Death for an LLM, essentially, would be when people stop using it in favour of some other LLM that performs better.
Is it? Do we know how human brains operate? We know their basic architecture, so we have a map, but we don't know the details.
"The cellular biology of brains is relatively well-understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete." [1]
"Despite a century of anatomical, physiological, and molecular biological efforts scientists do not know how neurons by their collective interactions produce percepts, thoughts, memories, and behavior. Scientists do not know and have no theories explaining how brains and central nervous systems work." [1]
A lot of the people who say "machines will never have feelings" are confident in that statement because they draw the line incredibly narrowly: if it ain't human, it ain't feeling. This seems to me to be putting the cart before the horse: it ain't feeling because you defined it so.
> Is it?
This is just a semantic debate on what counts as “similar”. It's possible to disagree on this point despite agreeing on everything relating to how LLMs and human brains work.
Do you forget every conversation as soon as you have it? When speaking to another person, do they need to repeat literally everything they said and that you said, in order, for you to retain context?
If not, your brain does not work like an LLM. If yes, please stop what you're doing right now and call a doctor with this knowledge. I hope Memento (2000) was part of your training data; you're going to need it.
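To make that concrete, here's a minimal sketch of the pattern behind every chat interface; `complete`, `chat`, and the message format are hypothetical stand-ins for illustration, not any particular vendor's API:

    # The model keeps no state between calls, so the ENTIRE
    # conversation is re-sent on every turn.
    def complete(messages: list[dict]) -> str:
        # Stub: a real implementation would call an LLM here.
        return f"(reply given {len(messages)} messages of context)"

    messages = []

    def chat(user_input: str) -> str:
        messages.append({"role": "user", "content": user_input})
        reply = complete(messages)  # full history goes in, every time
        messages.append({"role": "assistant", "content": reply})
        return reply

    chat("hello")  # model sees 1 message
    chat("again")  # model sees all 3 so far, or "forgets" everything

The "memory" lives entirely in the client-side `messages` list; drop it and the model is back to square one.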
"The cellular biology of brains is relatively well-understood"
Fundamentally, brains are not doing something different in kind from ANNs. They're basically layers of neural networks stacked together in certain ways.
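To be clear about what "stacked" means on the ANN side, here's a toy sketch in plain numpy; the layer shapes and the ReLU nonlinearity are arbitrary choices for illustration:

    # Two layers "stacked": the output of one feeds the next.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)  # layer 1
    W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)   # layer 2

    def forward(x):
        h = np.maximum(x @ W1 + b1, 0)  # layer 1 + ReLU
        return h @ W2 + b2              # layer 2 (output)

    y = forward(rng.normal(size=16))

The open questions about brains are about how such layers are wired and what each piece does, not about whether layered networks of neurons are the substrate.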
What we don't know are things like: (1) how exactly are the layers stacked together? (2) how are the sensors (like photoreceptors, auditory receptors, etc.) hooked up? (3) how do the different parts of the brain interact? (4) for that matter, what do the different parts of the brain actually do? and (5) how do chemical signals like neurotransmitters convey information or shape behavior?
In the analogy between brains and artificial neural networks, these sorts of questions might be of huge importance to people building AI systems, but they'd be of only minor importance to users of AI systems. OpenAI and Google can change details of how their various transformer and ANN layers are connected. The result may be improved products, but they won't be doing anything different from what AIs are doing now in the terms the author of this article is concerned about.
IMO, the comparison between how neural networks work and the cellular biology of brains is only useful as an analogy. They are not actually similar, and the higher-order functionality certainly isn't (namely because we don't know how that higher-order functionality works; we don't understand it _at_ _all_).
> agreeing on everything relating to how LLMs and human brains work
Hence my question: "how does the human brain work?" Nobody really knows, so we can't know whether LLMs actually work in a similar way. That's a big point the author makes in the paper, and a big reason why it is so inappropriate to anthropomorphize LLMs.