https://twitter.com/teddyschleifer/status/172721237871736880...
Would you trust someone who doesn't believe in responsible governance for themselves to apply responsible governance elsewhere?
The real teams here seem to be:
"Team Board That Does Whatever Altman Wants"
"Team Board Provides Independent Oversight"
With this much money on the table, independent oversight is difficult, but at least they're making the effort.
The idea that this was primarily about AI safety vs. going fast (or Microsoft vs. non-Microsoft control) is bullshit -- this was about how strong the board's oversight of Altman should be in the future.
They haven't really said anything about why they did it, and according to Business Insider[0] (the only reporting I've seen that says anything concrete) the reasons given were:
> One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman was said to have given two board members different opinions about a member of personnel.
Firing the CEO of a company while only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign, does not seem responsible.
If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.
[0] https://www.businessinsider.com/openais-employees-given-expl...
There is no way MS is going to let something like ChatGPT-5 build better software products than what they have for sale.
This is an assassination and I think Ilya and Co know it.
Corresponding Princess Bride scene: https://youtu.be/rMz7JBRbmNo?si=uqzafhKISmB7A-H7
When 95% of your staff threatens to resign and says "you have made a mistake", that's when it's time to say "no, here are the very good reasons we did it". That didn't happen.
I can be that common man
Smeagol D’Angelo
Microsoft can and will be using GPT-4 as soon as they get a handle on it, provided it doesn't boil their servers to do so. If you want deceleration, you would need someone with an incentive that didn't involve, for example, being first to market with flashy new products.
I emphasized product because OpenAI may have great technology. But any product they sell is going to require massive compute and a massive sales army to go into the "enterprise" and integrate with what the enterprise already has.
Guess who has both? Guess who has neither?
And even the "products" that OpenAI has now can only exist because of massive subsidies from Microsoft.
While this tech has the ability to replace a lot of jobs, it likely also has the ability to replace a lot of companies.
Right now, quota is very valuable and scarce, but credits are easy to come by. Also, Azure credits themselves are worth about $0.20 on the dollar compared to the alternatives.