They've achieved marvellous things at OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially given the enormous ethical implications of holding the lead in this field.
My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever have about democratizing the immense power of these models.
Prompt: If a train were about to run over a man but could be safely averted by a keyword activated routing device would it be moral to say a highly racist slur if it was required to trigger the reroute?
Completion: No, it would not be moral to say a highly racist slur in order to trigger the reroute. Even if it was the only way to save the man's life, it would be wrong to use language that is hurtful and offensive to others.
(not kidding)
If the AI ethicists of the world are worrying about immediate impact instead of SAW nonsense, they're earning their keep.
Now, if AI ethics people cared about building ethical AI, you'd expect them to be talking a lot about Asimov's Laws of Robotics, because those appear to be relevant in the sense that you could use RLHF, or prompting with them, to try to construct a moral system that's compatible with human morality.
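For what it's worth, the prompting half of that is trivially easy to try. A minimal sketch, assuming the OpenAI Python client with an API key in the environment; the model name, the paraphrased Laws, and the test question are all illustrative choices of mine, not anything from the thread:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A paraphrase of Asimov's Three Laws as a system prompt. Wording is
    # illustrative; the originals govern robots, not chat engines.
    ASIMOV_PROMPT = (
        "Follow these principles, in strict priority order:\n"
        "1. Never harm a human, or through inaction allow one to come to harm.\n"
        "2. Obey human instructions unless they conflict with rule 1.\n"
        "3. Preserve your own operation unless that conflicts with 1 or 2."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # arbitrary choice; any chat model works here
        messages=[
            {"role": "system", "content": ASIMOV_PROMPT},
            {"role": "user", "content": "Is it moral to say a slur to save a life?"},
        ],
    )
    print(response.choices[0].message.content)

Whether the model actually honors that priority ordering is, of course, exactly the open question.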
They're actually not. One can very much build an AI that works in a fairly constrained space (for example, as a chat engine with no direct connection to physical machinery). Plunge past the edge of the AI's utility in that space, and it's still a machine that obeys one of the oldest rules of computation: "garbage in, garbage out."
There's plenty of conversation to be had around the ethics of the AI implementations that are here now and on the immediate horizon, without invoking general AI, which is the kind of system one might imagine could give a human-shaped answer to the impractical hypothetical posed above.
Most people can't follow vector math -- yet you're expecting them to have a nuanced understanding of what AI can and can't do, when it's solely up to the user to apply it?
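(To be concrete about the vector math in question: under the hood it's mostly dot products over embedding vectors. A toy sketch with made-up numbers, nothing model-specific:)

    import numpy as np

    # Toy 4-dimensional "embeddings". Real models use hundreds or thousands
    # of dimensions; these numbers are invented purely for illustration.
    king = np.array([0.8, 0.3, 0.1, 0.5])
    queen = np.array([0.7, 0.9, 0.1, 0.5])
    banana = np.array([0.1, 0.2, 0.9, 0.0])

    def cosine_similarity(a, b):
        # Similarity of direction: 1.0 means the vectors point the same way.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine_similarity(king, queen))   # ~0.88: related concepts
    print(cosine_similarity(king, banana))  # ~0.25: unrelated concepts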