I'd say the AI safety problem as a whole is similar to the safety problem of eugenics: just because you know what the "goal" of some isolated system is doesn't mean you know the outcome of implementing that goal on a broad scale.
So OpenAI has the same problem: they definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broad-scale outcome will be.
If you really cared about AI safety, you'd be putting it under government control as a utility, like everything else.
That's all. That's why government exists.
And I'd say you should read the book so we can have a proper chat about it. Making wild guesses and assumptions isn't really useful.
> If you really cared about AI safety, you'd be putting it under government control as a utility, like everything else.
This is a bit jumbled. How do you think "control as a utility" would help? What would it help with?