Personally, from my years in tech, I do not believe OpenAI succeeded because management at Google or Meta cannot recognize who is writing great code (though, frankly, that narrative has been fueled by both the press and popular sentiment, because who doesn't love the heroic effort of a small group of smart people facing long odds against the Goliaths?).
Think about the problem of "hallucinations" in GPT. Coming from a team of mavericks, it was treated as a minor hiccup on the road to AGI. But had Google been first to market with such a product, the press would have gone from "oh, that's funny, it will get better with time" to a far more alarming "Google is destroying humanity with these hallucinations."
It is much easier to innovate when you are small and hungry, with little to lose and much to gain, than when you are worried about your salary, your equity, or your reputation. It's not simply a matter of paying top talent more and shedding average performers; I'm sure any of the major technology companies had enough brilliant, highly paid people and enough capital to assemble small, high-IQ teams and reach GPT-like models before OpenAI did. But incentives, reputation, and the nature of public companies make them slower, less innovative, and less willing to take risks.