But, by moving fast and scaling quickly, are they at the Too Big to Fail stage already? The attempted board coup makes me think so.
If OpenAI fails, absolutely nothing happens other than its shareholders losing their paper money. So no, they're not too big to fail.
If OpenAI fails nothing actually important happens.
If Microsoft loses 30 billion dollars, it ain't great, but they have more than that sitting in the bank. If Sequoia or Y Combinator goes bankrupt, it's not great for lots of startups, but they can probably find other investors if they have a worthwhile business. If Elon loses a billion dollars, nobody cares.
Moreover, if capital markets suddenly become ways to just lose tons of money, that hurts capital investment everywhere, which hurts people everywhere.
People like to imagine the economy as siloed and not interconnected, but that is wrong, especially when it comes to capital markets.
Open source models are potentially worse. Even if OpenAI is not TBTF because of the competition, we have a scenario where the AGI sector as a whole becomes TBTF and too big to halt.
And as for the whole idea of "company value equals value to society", I see monopolies and rent seeking as heavy qualifiers on that front.
The "house of cards" is outperforming everyone else.
It would have to come out that the slow generation times for GPT-4 are due to a sweatshop in Egypt getting tired of typing.
Either that, or something inconceivable, like the board coup firing the CEO counting as a material event that triggers the code and IP escrow being released to Microsoft...
PS. "Too big to fail" is generally used to mean a government+economy+sector ecosystem will step in and fund the failed enterprise rather than risk harm to the ecosystem. That's not this. Arguably not Tesla or even Google either. That said, Satya's quote in this filing suggests Microsoft already legally contracted for that eventuality: if this legal entity fails, Microsoft keeps the model online.
The sooner SCOTUS rules that training on copyrighted material is infringement, the better.
Microsoft would probably hire them into some AI shop, because Microsoft is the one deploying the stuff. And Microsoft has the rights to use the model and the code, so for them OpenAI is only a research partner.
Maybe research would get slower.
Update the codebase to what, exactly? Are there generative AI companies not training on copyrighted material that achieve anything even close to the results of GPT-4? I'm not aware of any.