Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?
Because today's LLMs definitely have capabilities we previously didn't have.
But it is an interesting technology.
Are you defining "artificial intelligence" in some unusual way?
I follow Roger Penrose's thinking here. [1]
How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.
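To make the "feedback loop into an LLM" idea concrete, here's a minimal sketch where a model's output is appended to its own context and fed back in on each step. The `toy_generate` function is a hypothetical stand-in for a real LLM call, not any actual API:

```python
def toy_generate(context: str) -> str:
    """Hypothetical stand-in for an LLM call: echoes the last word, uppercased."""
    return context.split()[-1].upper()

def feedback_loop(prompt: str, steps: int) -> str:
    """Feed the model's output back into its own context each step."""
    context = prompt
    for _ in range(steps):
        out = toy_generate(context)
        context = context + " " + out  # the loop: output becomes future input
    return context

print(feedback_loop("hello world", 3))
# → hello world WORLD WORLD WORLD
```

The point is just the wiring: the system's state at step n+1 includes its own output from step n, which is the structural feature being compared to a phonological loop.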