Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. All of you who are so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science-fiction stories and enabling regulatory capture.
You're leaving out the essentials. These models do more than fit the data they're given. They can output it in a variety of ways, and through their approximation they can synthesize data as well. They can produce things that weren't in the original data, tailored to a specific request, in a tiny fraction of the time it would take a person to look up and understand that information.
Your argument is almost like saying "hand over your RSA keys, because the modulus is just two prime numbers, and I know how to list primes."
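To make the analogy concrete (my own toy sketch, not from the thread): knowing that an RSA modulus is "just two primes," and even knowing how to enumerate primes, doesn't make recovering them feasible at real key sizes, because the search space is astronomically large.

```python
# Toy illustration of the RSA analogy: enumerating candidate factors
# is easy to describe but infeasible at real key sizes.

def trial_factor(n):
    """Factor n = p * q by trial division; cost grows like sqrt(n)."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None

# A toy 12-bit modulus falls instantly:
print(trial_factor(3233))  # (53, 61)

# A real 2048-bit modulus would need on the order of sqrt(2^2048) = 2^1024
# divisions -- far beyond any conceivable computation:
print(2 ** 1024 > 10 ** 300)  # True
```

The point of the analogy: "the information is in there" and "the information is practically extractable on demand" are very different claims.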
Do we want to go down the road of making white-collar jobs the legislatively required elevator attendants, instead of just banning AI in general via executive agency?
That sounds like a better solution to me, actually. OpenAI's lobbyists would never go for that, though. Can't build a moat that way.