OpenAI is one of half a dozen teams [0] actively working on this problem, all funded by large public companies with lots of money and lots of talent. They've made unique contributions, sure. But they're not that far ahead. If they stumble, surely one of the others will take the lead. Or maybe one of the others will anyway, because who's to say where the next major innovation will come from?
So what I don't get about these reactions (allegedly from the board, and expressed here) is: if you interpret the threat as a real one, why are you acting like OpenAI has some unassailable lead? This is not an excuse to govern OpenAI poorly, but let's be honest: if the company slows down, the most likely outcome by far is that they'll cede the lead to someone else.
[0]: To be clear, there are definitely more. Those are just the large and public teams with existing products within some reasonable margin of OpenAI's quality.
Most of the new AI startups are one-trick ponies obsessively focused on LLMs. LLMs are only one piece of the puzzle.
That's the current Yudkowsky view: that it's essentially impossible at this point and we're doomed, but we might as well try anyway, as it's more "dignified" to die trying.
I'm a bit more optimistic myself.