Shame on everyone involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. All of you who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.
Because people have different definitions of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting, but you don't need that, or AGI, to be revolutionary.
LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?
The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do, but for our human needs they are effectively much better at flying than birds ever could be.
We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?
LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.
We may create actual sentient, independent AI someday. Maybe we're getting closer. But not only is this not it; I also fail to see how trying to license it will prevent that from happening.