Shame on everyone involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. To all of you so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science-fiction stories and enabling regulatory capture.
That's a big part of the issue with machine learning models: they are opaque. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works, or by extension how to "fix bugs" in it.
If we say it "does what it was programmed to do", then what exactly was it programmed to do? Here is the data it was trained on, but how will it respond to a given input? Who knows?
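To make that concrete, here's a minimal sketch (a toy two-layer network with made-up weights, standing in for a real trained model, not any particular system): every parameter is right there to read, yet the only practical way to find out what it returns for a new input is to actually run it.

```python
import numpy as np

# Hypothetical toy model: every weight is fully visible and printable.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x):
    """Forward pass: the only reliable way to learn how the model
    responds to a given input is to execute it."""
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

x = np.array([0.2, -1.3, 0.7, 0.05])
print(forward(x))  # You can stare at W1, b1, W2, b2 all day; this
                   # number is not obvious from inspection alone.
```

And that's with 80-odd parameters; scale it up to billions and "read the weights to understand the behaviour" stops being an option entirely.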
That does not mean they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" given functions to them.
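A rough sketch of what that "exploring" could look like in practice (a hypothetical property check on a stand-in model, not a real evaluation suite): since the weights don't reveal behaviour, you probe the model empirically with a battery of inputs and measure whether it stays within the behaviour you expect, before handing it a function.

```python
import numpy as np

# Stand-in model: same kind of toy two-layer network as above.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
model = lambda x: np.maximum(0, x @ W1 + b1) @ W2 + b2

probes = rng.normal(size=(1000, 4))              # battery of test inputs
outputs = np.array([model(x)[0] for x in probes])
bounded = np.abs(outputs) < 10.0                 # example behavioural property

print(f"{bounded.mean():.1%} of probes satisfied the property")
print("worst-case output:", outputs[np.abs(outputs).argmax()])
```

The property here is arbitrary; the point is that trust in these systems has to come from this kind of empirical poking, not from reading the source.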
Incorrect: we can't predict its output because we cannot look inside it. That's a limitation, not a feature.