One developer (Ilya) vs. One businessman (Sam) -> Sam wins
Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win
From the outside it looks like developers held the power all along ... which is how it should be.
1. They can get an equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.
2. Sam approved each hire in the first place.
3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind, rather than at another company that could offer higher compensation. Mission-driven vs. profit-driven.
However they arrived at the decision to band together and quit, it was a good idea, and it worked. It is also a check on the power of a bad board of directors, which otherwise cannot be challenged. "OpenAI is nothing without its people".
He backed it and then signed the pledge to quit if it wasn't undone.
What's the evidence he was behind it and not D'Angelo?
Employees, customers, government.
If motivated and aligned, any of these three could end you if they want to.
Do not wake the dragons.
Employees who have a $$$ incentive threaten to quit if it's taken away. News at 8.
One developer (Woz) vs. One businessman (Jobs) -> Jobs wins
I've been in companies where the board won, and they installed a stoolie who proceeded to drive the company into the ground. Anybody who stood up to that got fired too.
Lucky for us, this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision-making in technology that's entrenched in every facet of our lives. So we're all safe here!
This is Altman's playbook. He did a similar ousting at Reddit. This was planned all along to overturn the board. Ilya was in on it.
I'm not normally a conspiracy theorist. But fool me ... you can't get fooled again. As they say in Tennessee.
Maybe that was the case at some point, but clearly not anymore ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, e.g. to engineers leaving Google?
I'd bet more than half the people are just there for the money.
It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them, since they were hired by the __for-profit__ OpenAI company and were therefore aligned with __its__ goals and rewarded with equity.
In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.
One might say the mission was pointless since Google, Meta, and MSFT would develop it anyway. That's really an argument of convenience that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(
citation?
I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.
How can you even design safety into something that doesn't exist yet? If you had designed planes safety-first instead of letting them evolve naturally and regulating the resulting designs, you'd have ended up with a plane where everyone sat on the wings with a parachute strapped on.
Note that the response is Altman's, and he seems to support it.
As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he knows (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.
https://en.wikipedia.org/wiki/United_States_government_role_...
If you're trying to draw a parallel here, then safety and the federal government need to catch up. There are already commercial offerings that any random internet user can use.
Ilya is also not a developer; he's a co-founder of OpenAI and was its chief scientist.
There should be regulations on existing products (and similar products released later), because once they exist you know what you're applying regulations to.
If people are easily replaceable, then they don't hold nearly as much power, even en masse.
At some point, our try-it-until-it-works approach will bite us. Consider the calculations done to determine whether fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially, we're going to run into that situation more and more frequently. Regardless of whether you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced?
It was capital and the pursuit of more of it.
It always is.