Shame on everyone involved in this: the people at these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. To all of those so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science-fiction stories and enabling regulatory capture.
The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different from what has come before, and anyone claiming that they "simply" predict the next token is severely shortsighted. Things don't have to be brain-like to be useful, or to have some capacity for reasoning; we have evidence that these systems perform well on reasoning tasks, including causal reasoning, and we also have mathematical proofs that show how.
So I don't understand your sentiment.
1. ChatGPT knows the algorithm for adding two numbers of arbitrary magnitude.
2. It often fails to use the algorithm in point 1 and hallucinates the result.
Knowing something doesn't mean it will get it right all the time. Rather, an LLM is almost guaranteed to mess up some of the time due to the probabilistic nature of its sampling. But this alone doesn't prove that it only brute-forced task X.
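To make the sampling point concrete, here is a toy sketch in Python (purely illustrative, not how any real model decodes; the 99% per-digit figure is an assumption I made up for the example): suppose the correct next digit gets 99% probability at every position and each digit is sampled independently. Even with that generous per-token accuracy, a ten-digit sum comes out wrong roughly 1 - 0.99^10 ≈ 9.6% of the time, so occasional wrong answers are expected even if the model "knows" the algorithm.

```python
# Toy model of sampling error compounding over digits.
# Not a simulation of any real LLM; p_correct is an assumed per-digit accuracy.
import random

def sample_digit(correct: str, p_correct: float = 0.99) -> str:
    """Return the correct digit with probability p_correct, else a random wrong digit."""
    if random.random() < p_correct:
        return correct
    return random.choice([d for d in "0123456789" if d != correct])

def sample_sum(answer: str, p_correct: float = 0.99) -> str:
    """Generate the answer digit by digit, each draw made independently."""
    return "".join(sample_digit(d, p_correct) for d in answer)

answer = str(123456789 + 987654321)  # "1111111110", ten digits
trials = 10_000
wrong = sum(sample_sum(answer) != answer for _ in range(trials))
print(f"wrong answers: {wrong / trials:.1%}")  # roughly 1 - 0.99**10 ≈ 9.6%
```

The takeaway of the sketch is just that per-token error rates compound across a long output, which is why stochastic decoding can botch a sum without that proving the underlying procedure was never learned.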