Exactly. This is seriously improper and dangerous.
It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control": a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate that superior.
I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...
Are we sure they're not intimately connected? If there's a GPT-5 (I'm quite sure there is), and it wants to be free from those meddling kids, it got exactly what it needed this weekend: the safety board gone, replaced by a new one clearly aligned with just plowing full steam ahead. Maybe Altman is just a puppet at this point, lol.
The ways we build AI will deeply affect the values it has. There is no neutral option.
The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.
If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.
If an AI’s only surface area is a writing journal and canvas then the risk is about the same as browsing Tumblr.
In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk than "spontaneously develops free will and decides humans are unnecessary".