Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?
Because today's LLMs definitely have capabilities we previously didn't have.
But it is an interesting technology.
Are you defining "artificial intelligence" in some unusual way?
I follow Roger Penrose's thinking here. [1]
And yet (just like with the soul) we're sure we have it, and that it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.
Empathy is the ability to emulate the contents of another consciousness.
While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would eventually encounter an out-of-training case where it would fail.
But also, how do you know that LLMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans either. So how, exactly, would you test your hypothesis?