Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. To all of those so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science-fiction stories and enabling regulatory capture.
Another paper, not from Microsoft, showing emergent task capabilities across a variety of LLMs as scale increases:
https://arxiv.org/pdf/2206.07682.pdf
You can hem and haw all you want, but the reality is that these models have internal representations of the world that can be probed. They are not stochastic parrots, no matter how much you shout into the wind that they are.
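For what "probing internal representations" means in practice: the usual technique is to train a simple linear classifier on a model's hidden states and check whether it recovers some world property far above chance. The sketch below uses entirely synthetic "activations" (the dimensions, labels, and signal strength are all made-up assumptions), so it illustrates the probing methodology only, not any specific model or paper:

```python
# Illustrative linear probe: if a model's hidden states linearly encode a
# world property, a simple classifier trained on them recovers it.
# All data here is synthetic (hypothetical "activations"); this sketches
# the probing methodology, not results from any real LLM.
import numpy as np

rng = np.random.default_rng(0)
d = 64   # hypothetical hidden-state dimensionality
n = 400  # number of prompts

# Hypothetical binary world property (e.g. "statement is true/false").
labels = rng.integers(0, 2, size=n)

# Synthetic hidden states: Gaussian noise plus a direction correlated with
# the label, mimicking a representation that encodes the property linearly.
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)
states = rng.normal(size=(n, d)) + 3.0 * np.outer(labels - 0.5, concept_dir)

# Fit a linear probe (least squares) on half the data, evaluate on the rest.
train, test = slice(0, n // 2), slice(n // 2, n)
w, *_ = np.linalg.lstsq(states[train], labels[train] - 0.5, rcond=None)
preds = (states[test] @ w > 0).astype(int)
accuracy = (preds == labels[test]).mean()
print(f"probe accuracy: {accuracy:.2f}")  # well above the 0.5 chance baseline
```

If hidden states were just surface statistics with no encoded structure, the probe would sit at the chance baseline; accuracy well above it is the kind of evidence the "internal representations" claim rests on.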