Comments of this nature would be much more informative if they were to begin with "in my professional life, what I've seen is...", "in the three or four teams I've worked on...", or "according to some friends, who consider themselves top engineers, working on teams similar in size and scope...".
Otherwise, we are left with the sometimes realistic, sometimes fairy-tale story of incompetent leadership holding back talent out of nothing more than ineptitude, nepotism, or envy of those who are smarter and more accomplished.
I have been in similar situations and considered myself a top professional, which may well be true. But in large, mature organizations, regression to average performance is largely an inevitable consequence of size and the need for coordination, a problem whose solution is not to be found in a redistribution of salaries.
Why else, with few exceptions, do employees at Google, Facebook, Uber, etc. believe that back in the day things were so much better: that talent was treated better, that there was a real engineering culture, that interviews were so much harder, whereas now it's all about sitting in chairs, people in finance hold the most power, and we are drowning in technical debt?
OpenAI is, at this point, a research organization in spirit and in purpose. If and when it becomes a "normal company," the logic of scale and scope will lead the early employees to complain about how things have evolved. But I suspect that the strategy of "let's get rid of the people who create technical debt and use the budget to pay top talent more" will not produce the expected and desired renewal of engineering culture, because top talent will not be as useful as it once was.
Whenever a top executive leaves a mature company (or dies; see Apple), the risk of catastrophe is aired, but catastrophe rarely occurs. There is a lesson there.
Personally, from my life in tech, I do not attribute OpenAI's success (work that, rather frankly, has been buoyed by both the "press" and popular sentiment, because who doesn't like the heroic effort of a group of smart people facing long odds against the Goliaths?) to management at Google or Meta being unable to recognize who is writing great code.
Think about the problem of "hallucinations" with GPT. It was considered a minor hiccup on the road to AGI, a path opened by a team of mavericks. But had Google been first to market with such a product, the press would have skipped "oh, that's funny, it will get better with time" and gone straight to the more worrying "Google is destroying humanity with those hallucinations."
It is much easier to be innovative when you are small, hungry, with little to lose and much to gain than when you are worried about your current salary, equity, or reputation. It's not just a matter of paying top talent more and getting rid of the more average people; I'm sure any of the major technology companies has enough brilliant, highly paid people, and enough capital to build small, high-IQ teams around them, to have gotten to GPT-like models before OpenAI. But incentives, reputation, and the nature of public companies all play a role in making them slower, less innovative, and less willing to take risks.