hah, Microsoft will be in control from here on out whether they have someone technically on the board or not. They did the embrace and extend; now we're on to extinguish.
Turning it into an emergency and surprise coup with innuendo of wrongdoing looks to have been a huge mistake, and may result in total loss of control where a more measured course correction could have succeeded.
OpenAI will be writing papers and asking for donations within a week's time at that point, as the rest of OpenAI quits
Next week is going to be interesting!
OpenAI investors try to get Sam Altman back as CEO after sudden firing - >>38326834 - Nov 2023 (73 comments)
We’re in an arms race. Ilya is Otto Hahn.
You don’t think they’re kicking themselves that Microsoft got the deal?
Apparently not?
Please say they are not going to put in place a board just as bad as the one before.
There are no checks and balances. Should OpenAI employees be allowed to veto a board decision if they have 50% or 67% of the vote? Should OpenAI employees be allowed to vote for at least some members to be allowed on the board? Like the Senate voting to confirm a Supreme Court Justice?
No matter how good the next board will be, the power rules still apply as before, and the same thing could happen again if no other changes are put in place...
esp if Altman takes the majority of the folks from the money-making side
Just give him a seat or whatever he wants; stop acting like some series like Suits
I'm curious where the rank & file OpenAI employees stand on this, as it seems to me like they will be the ultimate kingmakers. The Reddit thread on Friday made it seem like they supported Ilya - but for all we know, the anonymous Reddit poster might have been Ilya himself.
Looks like they were right to boot him, but may have done it way too late, having already de facto lost control due to the direction he’d guided the organization. If he comes out on top, it’ll mean the original OpenAI and its mission are dead, it looks to me, and the board was already cut out months ago but didn’t realize it yet.
Anyone with even a basic level of business sense isn’t going to view Microsoft in a negative light for prudent reactions to volatile partner behavior. These are not just startup cloud credits being given to OpenAI.
...losing their licenses to OpenAI's technology and thus the Azure OpenAI service offering for which they have enterprise customers who went with them because Microsoft is the secure, enterprise vendor whose reliability they have learned to count on.
Good way to make "Nobody got fired for hiring Microsoft", the saying that succeeded the same one about IBM, a thing of the past.
Yeah, with the right people, Sam's company might eventually give Microsoft a technically-adequate replacement technology, but Microsoft's enterprise position isn't founded on technical adequacy alone.
But will they leave Microsoft (or, at least, be less inclined to rely on Microsoft in the future where competitors exist) because of Microsoft terminating a relationship on which their access to a technology at the core of an enterprise service that enterprise customers rely on is based?
OpenAI’s actions do not give people who approve tens or hundreds of millions of dollars in spend the warm fuzzy feeling. Microsoft knows exactly the consistency and stability these customers desire. They are the conduit by which value flows from OpenAI to Microsoft customers until Microsoft can deliver the value themselves.
(also why people get fed Teams vs Slack; because of who is making the purchasing decision, and why it’s being made)
They get hacked by foreign governments due to their utter incompetence a lot less, too.
Because they already do well on their own, Meta is doing exceptionally well actually.
It's better business for them if OpenAI just burns into the ground and leaves the cake up for grabs again. It doesn't take a lot of brain power to see that.
The only thing that has sent Google into "Code Red" in its whole history has been OpenAI. They'd love to see it evaporate, and now they're not even spending a dime!
>You actually think of the most esteemed AI researchers will have trouble finding funding after this?
Plenty of those actually left already ...
Ilya is good but is one of many, and by many I mean there's 100s of equally capable researchers, many of those with more flexible morals. Note: I'm being generous to Ilya and taking him at face value on being the self-proclaimed AI messiah that is keeping us from the destruction of the world.
Thanks Ilya, but money is money and investors would definitely prefer to put their money in a for-profit than a non-profit. This is even more true after this whole fiasco.
But maybe the beginning of the end of OpenAI.
A week ago I was saying that it's likely the leading AI company in 5 years' time hasn't been founded yet.
After the news Friday and looking more at how Ilya sees the future of neural networks, I actually thought there's a decent chance OpenAI might correct course to continue to lead.
If it becomes too productized under a strengthened Altman, it's back on the list of companies building into their own obsolescence.
The right way to the future is adapting alignment to the increasing complexity of the model. It's not just about 'safety' but about performance and avoiding Goodhart's Law.
The way all major players, OpenAI included, are handling that step is by carrying forward the techniques that were appropriate for less complex models. Which is a huge step back from the approach reflected very early on pre-release for GPT-4's chat model. An approach that seemed to reflect Ilya's vision.
As long as OpenAI keeps fine-tuning to try to meet the low-hanging-fruit product demand they've created, and screwing up their pretrained model on measures that haven't become the industry target, they aren't going to be competitive against yet-to-exist companies that don't take such naive approaches with the fine-tuning step. Right now they have an advantage in Ilya being ahead of the trend, but if Altman returning is at the cost of Ilya's influence, they are going to continue to dig their long-term grave in the pursuit of short-term success.
Big money on the line. Insane, life-changing payouts in the cards. Altman and MS on the side of those, the board on the side of the mission. Money’s likely to win.
Ilya losing access to the GPUs he needs to do his research so that the company can service a few more customers seemed like a fundamental betrayal to him and a sign that Sam was ignoring safety in order to grow marketshare.
If Elon is able to promise him the resources he needs to do his research then I think it could work out.
The problem comes when the situations starts to resemble the line about how, if you owe a bank a billion dollars, you own the bank: if the direction the CEO has taken the company differs enough from the vision of the board, and they've had enough time to develop the company in that direction, they can kinda hold the organization hostage. Yes, the company isn't what the board really wanted it to be, but it's still worth a bajillion dollars: completely unwinding it and starting over is unthinkable, but all the options that include firing the CEO (the only real lever the board has, the foundation of all the decision-making weight that they have, remember) end up looking like that.
The board was naive, to say the least.
All of them would have left if Sam left; if anything, letting Sam go would hamstring OpenAI significantly more than letting Ilya go would.
This claim in the press had zero substance. It could well be true, but there were no wide resignations or anything. Just 4 execs.
OTOH, Altman going for the high score could bump employee TCO 10x. So who knows. Passion vs greed.
https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-lau...
I can’t wait to read the autobiography of involved parties.
In a city where a normal person cannot buy a house and the employer wants 25% office time? Give me a break, they just want to live like people could in the 1950s.
That said, not clear to me that board is supported by the staff.
So if Sam goes, and many of the key staff go... will be interesting.
And the board's style in all this, if that is the "mission", is wild. You have partners, staff, businesses; you're doing a VC round and you blow it all up without, it sounds like, even talking to your board president? Normally this type of ouster requires a properly noticed meeting!
Who on earth would ever trust an Elon promise at this point? The guy literally can’t open his mouth without making a promise he can’t keep.
Unless Ilya is getting something in a bulletproof contract and is willing to spend a decade fighting for it in court, he’s an idiot doing anything with Elon.
It would be ironic if everyone killed OpenAI by denying them compute though.
Still, OpenAI is a peacock feather in Microsoft’s cap. They’re either bluffing, foolish, or prescient to let it go.
And Microsoft has total rights to the models and weights, so they can CONTINUE their services and then spin up with Sam's new company.
Ilya may still be someone who should be on the board... Especially given his role as head of alignment research. He deserves a say on key issues related to OpenAI.
People get excited. Stupid things happen. Especially in startups.
ChatGPT having become so successful doesn't change the fact that the company as a whole is still fairly immature.
They should seriously just laugh about it and move on.
Let's just say that Ilya had a bad couple of days, and probably needs a couple of weeks of vacation.
I can absolutely empathize with Ilya here, though. As far as I know the tech making OpenAI function is largely his life’s work. It would be extremely frustrating to have Sam be the face of it, and be given the credit for it.
Sam is clearly a very accomplished businessman and networker. Those people are super important, I wish I had a person like him on my team.
I’ve had the experience of other people tacitly taking credit for my work. Giving talks about it, receiving praise for their vision. It’s incredibly demoralizing.
I’m not necessarily saying Sam did this, since I don’t know any of these people. Just speculating on how it might feel to be Ilya, watching Sam go on a world tour meeting heads of state to talk about what is largely Ilya’s work.
It's probably more of an intellectual / philosophical position, given that they just did not think through the real impact on the business (and thus the mission itself)
I'm inclined to assume that something stupid was done. It happens. They should resolve it, fix the rules for how the board can behave, and move on.
Despite the bungling, Ilya is probably still a good voice to have on the board. His key responsibility (super-alignment), is a key part of OpenAI's mission.
Plus, the other board members supported him, so decent blame to go around for this embarrassment.
What's there to be passionate about in a doomed business? Where is OpenAI going to get compute from? What work are they going to get done if everybody else follows Altman?
I think it's reasonable to assume that even a controversial board checked with their lawyer and did what was legally required. Especially as nobody involved seems to be claiming otherwise.
To push back on this a bit. If two yet unknown people, "an Altman" and "an Ilya", both applied to YC to start a company that builds and sells AI models, guess who would get funded. Not the guy who can't build AI models.
I find it bizarre that the guy who can build is suddenly the villain-nerd who can't be trusted, and the salesman is the hero, in this community.
With the current shortage of hardware, it's not as simple as "scaling up" if the resources literally don't exist.
If Sam beats Ilya + Bret, I will be even more impressed than I already am.
Can’t wait for Matt Levine’s play by play if they hire the same legal team Bret used in the last days of twitter.
Or it’ll be over in 2 hrs and Sam will win now. Let’s see.
Whether you agree with the firing of Sam or not, future investors will absolutely be nervous about sinking serious money into a company that split its board without talking to key partners/investors first
Sam seems to have a "move fast and break things" approach which would be appropriate for a less critical industry
https://en.wikipedia.org/wiki/History_of_artificial_intellig...
"The field of AI research was founded at a workshop held on the campus of Dartmouth College, USA during the summer of 1956."
At this point we should only post if something happens to the consumer product or API
It's why he fell out and left OpenAI despite investing $100 million to start it.
I'd say he's well aligned with Ilya's position. Early on I wondered if he was an instigator of the entire board coup.
I also wouldn’t want Ilya in there without checks and balances, to be clear. So the challenge is identifying the right adults.
I don’t think it’s realistic to expect that negotiation to complete successfully in the eyes of all parties by 5 PM today. It’s possible that Ilya will give up on having his requirements satisfied and leave.
Altman had a massive influence on the makeup of the board that ultimately fired him.
I bet that in 5 years, we won't be talking about AI at all. Whatever can be squeezed from current technologies, will be squeezed much sooner.
Real AI will eventually be based on totally different principles. Maybe by utilizing the natural intelligence of living organisms (e.g. bacteria).
Sam is backed by investors who are looking for returns, and are not sure if Ilya will get them the same juicy 100X.
So, if Sam comes back, then I’m pretty sure Ilya will go on his own. Whether he will focus on GPT or AGI or ?, is anyone’s guess, as is how many from OpenAI will follow him as everyone loves money.
EDIT: Ilya should have no trouble finding benefactors of his own, whether they are one of the FAANGs or VCs is TBD.
In reality you need all of them, and they are all separate talents, but things are clearly unbalanced.
This is not a good sign. Microsoft, the largest investor at $10Bn, who is in the middle of pushing through the restructuring of the company, hasn't decided if they want board representation? The only reason you would do that is if you want to keep your options open, in the future, to hit OpenAI hard (legally and/or to raid the personnel).
Board representation would come with a fiduciary responsibility and it looks like they may not want that. I could only imagine the intensity of Microsoft senior engineers screaming that they could replicate all of this in-house (not saying whether it's justified or not).
From their "Our board" section: "OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner."
There is also a prominent red notice that seems made for somebody in Seattle...
IMPORTANT
Investing in OpenAI Global, LLC is a high-risk investment.
Investors could lose their capital contribution and not see any return.
It would be wise to view an investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.
I am going to grab more popcorn...
He's pretty bad at honoring contracts too
OpenAI is a different beast, they (or some LLM) could displace Google as the main provider of information to the world. You just don't know what you're talking about, lol.
1: https://www.forbes.com/sites/davidphelan/2023/01/23/how-chat...
There are very wealthy competitors out there, any of which could end up beating OpenAI if they get half an edge. If you don't beat them, you don't get to figure out safety.
If Sam starts another company, you know deep in your soul he'll have all the backing he could ever dream of. Everyone who can sign a check is dying to get in on this. All the smart talent in the world would love to be employee number 1 through 1000. He's figured that you need the money if you want to stay in the game and he's world-class at making that happen. If OpenAI has all the purity of conviction and never gets another dollar because it all flows to SamCo...do they still win and figure out safety?
(Plus get some profits, attract staff who want to make bank, get full control of the board anyway, etc)
[1] We're nowhere near GPT controlling nukes, elections or the bond market, or desiring to. We need at least a couple massive algo changes before things take off. So some speed at this point isn't thaaat dangerous.
Throw in a huge investment, a 90 billion dollar valuation and a rockstar CEO. It’s pretty clear the court of public opinion is wrong about this case.
It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation. The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.
I guess "safe artificial general intelligence is developed and benefits all of humanity" means an open AI (hence the name) and a safe AI.
Giving birth to an idea is a necessary condition and sets the boundaries for so much of what it can achieve. But if you're unable to raise it to become a world champion, it isn't worth anything.
I've been on the raising-ideas side way more in my 20+ year career in tech. I know some people became bitter and scornful of me because I pushed their ideas to become something big and received a lot of credit for that. And I try to give credit where credit is due. But often enough, when I try to share the spotlight (in front of a customer or when presenting at the BoD, for example), the brilliant engineer withers under pressure or actively harms his idea by pointing out its flaws excessively. It's a delicate balance.
It is entirely possible a program that spits out the complete code for a nuclear targeting system should not be released in the wild.
Extra points if Google were to sweep in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock out Microsoft and Bing from ChatGPT (or if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, when the OpenAI board is down to 4 people, the current leader of whom has prior Google ties, and their interest is to play with AI as an academic curiosity, which a fat warchest would accomplish. Plus if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.
"Just speculating on how it might feel to Ilya watching Sam go on a world tour meeting heads of state to talk about what is largely Ilya’s work."
The whole point of a CEO is to do this kind of stuff. If your best engineers are going on world tours, talking to politicians, and preparing for keynotes, that's a pretty terrible use of their time. Not to mention that most of them would hate doing it.
Also, the executive who said it wasn’t for malfeasance wasn’t himself on the board and appears to be trying to push for Altman’s return. The board themselves has not yet said there was no malfeasance. To the contrary, they said that Altman had not been completely candid with them, which could very well be the last straw of malfeasance in a pattern of malfeasance which in aggregate reaches a sufficient threshold to justify a firing.
I don’t know whether there was or wasn’t malfeasance, but taking that executive’s word for it seems unwise in this polarized PR war.
Microsoft: I don't think they need it.
Assuming they have the whole 90B USD to spend: it doesn't really make sense;
they have full access to the source-code of OpenAI and datasets (because the whole training and runtime runs on their servers already).
They could poach employees and make them better offers, and get away with a much more efficient cost-basis, + increase employee retention (whereas OpenAI employees may just become so rich after a buy-out that they could be tempted to leave).
They can replicate the tech internally without any doubt and without OpenAI.
Google is in deep trouble for now, perhaps they will recover with Gemini. In theory they could buy OpenAI but it seems out-of-character for them. They have strong internal political conflicts within Google, and technically it would be a nightmare to merge the infrastructure+code within their /google3 codebase and other Google-only dependencies soup.
Whether we can actually safely develop AI or AGI is a much tougher question than whether that's the intent, unfortunately.
and also Bitcoin might be the exception that proves the rule - every other chain or token is managed by a few insiders taking get-rich-quick marks for a ride.
Plenty of precedent for tech founders to have total board control. It will take a little while for Sam to consolidate power, but he won't forget what happened this weekend and he'll play the long game accordingly.
Having no information on what laws and governance documents apply to OpenAI or on what steps the board took, I express no opinion on whether the legal requirements were actually met, but it’s possible they were.
Would they be also able to keep up with development?
* https://en.wikipedia.org/wiki/Worldcoin
* pushing for AI regulation

Wikipedia gives these names:
In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[15] the formation of OpenAI and pledged over $1 billion to the venture
Do any of those people sound like their day job was running non-profits? Had any of them EVER worked at a non-profit?
---
So a pretty straightforward reading is that the business/profit-minded guys started the non-profit to lure the idealistic researchers in.
The non-profit thing was a feel-good ruse, a recruiting tool. Sutskever could have had any job he wanted at that point, after his breakthroughs in the field. He also didn't have to work, after his 3-person company was acquired by Google for $40M+.
I'm sure it's more nuanced than that, but it's silly to say that there was an idealistic and pure non-profit, and some business guys came in and ruined it. The motive was there all along.
Not to say I wouldn't have been fooled (I mean certainly employees got many benefits, which made it worth their time). But in retrospect it's naive to accept their help with funding and connections (e.g. OpenAI's first office was Stripe's office), and not think they would get paid back later.
VCs are very good at understanding the long game. Peter Thiel knows that most of the profits come after 10-15 years.
Altman can take no equity in OpenAI, because he's playing the long game. He knows it's just "physics" that he will get paid back later (and that seems to have already happened)
---
Anybody who's worked at a startup that became a successful company has seen this split. The early employees create a ton of value, but that value is only fully captured 10+ years down the road.
And when there are tens or hundreds of billions of dollars of value created, the hawks will circle.
It definitely happened at say Google. Early employees didn't capture the value they generated, while later employees rode the wave of the early success. (I was a middle-ish employee, neither early nor late)
So basically the early OpenAI employees created a ton of value, but they have no mechanism to capture the value, or perhaps control it in order to "benefit humanity".
From here on out, it's politics and money -- you can see that with the support of Microsoft's CEO, OpenAI investors, many peer CEOs from YC, weird laudatory tweets by Eric Schmidt, etc.
The awkward, poorly executed firing of the CEO seems like an obvious symptom of that. It's a last-ditch effort for control, when it's become obvious that the game is unfolding according to the normal rules of capitalism.
(Note: I'm not against making a profit, or non-profits. Just saying that the whole organizational structure was fishy/dishonest to begin with, and in retrospect it shouldn't be surprising it turned out this way.)
Probably. If the people running it and the shareholders were committed to keeping up and spending money to do so.
Seems like a well done nothing burger with a side of french cries to me.
The original Verge article says (with no given sources): > missing a key 5PM PT deadline by which many OpenAI staffers were set to resign.
The tweet removes the qualifier: > The staff at OpenAI set a 5PM deadline for the entire board to resign, or else they quit and join Sam in his new company.
And you seem to parrot that point even though it is well past that deadline and there is no news of mass resignations.
I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that the "perfect is the enemy of the good."
That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished/sidelined. It really feels like this was a colossally sad own goal.
https://www.wsj.com/tech/ai/openai-leadership-hangs-in-balan...
This is the most important quote: "We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."
If it were a plant by the other camp, how would this make it in there? Also the whole article sounds like "You don't want him as a CEO? He is going to get sooo much money, and going to out-compete you sooo hard. He is already in talks for his new venture." Which is obviously what Sam's side would like to project.
Sure, Microsoft has physical access to the source code and model weights because it's trained on their servers. That doesn't mean they can just take it. If you've ever worked at a big cloud provider or enterprise software system, you'll know that there's a big legal firewall around customer data that is stored within the company's systems, and you can't look at it or touch it without the customer's consent, and even then only for specific business purposes.
Same goes for the board. Legally, the non-profit board is in charge of the for-profit OpenAI entity, and Microsoft does not get a vote. If they want the board gone but the board does not want to step down, too bad. They have the option of poaching all the talent and trying to re-create the models - but they have to do this employee-by-employee, they can't take any confidential OpenAI data or code, etc. Microsoft may have OpenAI by the balls economically, but OpenAI has Microsoft by the balls legally.
A buyout solves both of these problems. It's an exchange of economic value (which Microsoft has in spades) for legal control (which the OpenAI board currently has). Straightens out all the misaligned incentives and lets both parties get what they really want, which is the point of transactions in the first place.
Wild that there can be various factions with apparently only three relevant people (yes, I know 3 choose 2 is three, but c'mon…).
Hard in the sense that the for-profit part has the actual power here?
Microsoft's relationship with OpenAI was really ideal from a speed-of-advancement perspective. That is, reams and reams have been written about how Google has to move at such a slow pace with AI productization because, essentially, they have so much to lose. Microsoft saw this first hand with their infamous Tay AI bot, which turned into a racist Hitler lover in a day.
Microsoft's relationship with OpenAI was perfect - they could realistically be seen as separate entities, and they let OpenAI take all the risk of misaligned AI, and then only pull in AI into their core services as they were comfortable. Google's lack of this sort of relationship is a direct hindrance to their speed in AI advancement and productization. Microsoft's lack of a board seat gives them a degree of "plausible deniability" if you will.
Plus, it's not like Microsoft's lack of a board seat impacts their influence that much. Basically everyone believes that the push to get Altman back has Microsoft's/Nadella's fingerprints all over it. Their billions give them plenty of leverage, and my bet going forward is that even if they don't take a board seat outright, they will demand that board membership be composed of more "professional", higher caliber board members that will likely align with them anyway.
That sounds like the road to disaster. People who can sign a check and are dying to get in on this usually lack any morals or ethics.
Not a smart move to work for those people just because you fell for a clever salesman.
The example he gave is a model that could independently do science.
It does not matter that the board has the legal power to do whatever they want, e.g. fire the CEO. If the investors and key employees that keep the company going walk away, they end up with nothing, so they might as well resign and preserve the organization rather than burn the whole thing down.
Destroying OpenAI could backfire big time.
IP lawyers would sell their own mothers for a chance to "wanna bet?" Microsoft.
As an example, a couple of years ago Crisis Text Line decided to sell data to a for-profit spin-off. Their justification was that the data was anonymized, which was BS since it's unstructured text data, and that it wasn't against the terms of service, which users had agreed to. Mind you, these users were people in crisis, maybe even on the brink of suicide. This was highly unethical and caused a backlash. Then one of the BoD members wrote a half-assed "reflection" post [1]. If some core employees of CTL had done a "coup" to stop this decision, because they believed it was unethical and dangerous, wouldn't it be justified?
[1] http://www.zephoria.org/thoughts/archives/2022/01/31/crisis-...
Internal to the entire OpenAI org, sounds like all we had was just the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (basically defaults to siding with the for-profit arm).
I wish they worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would be important to keep the incentives for all 3 separate, except for "safe AI that benefits humanity".
The truly hard to solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch could exist that was designed to offset the influence of investors, that might have resulted in closer to the right balance.
According to FT this could be the cause for the firing:
“Sam has a company called Oklo, and [was trying to launch] a device company and a chip company (for AI). The rank and file at OpenAI don’t dispute those are important. The dispute is that OpenAI doesn’t own a piece. If he’s making a ton of money from companies around OpenAI there are potential conflicts of interest.”
Bigger concern would be the construction of a bomb, which, still, takes a lot of hard-to-hide resources.
I'm more worried about other kinds of weapons, but at the same time I really don't like the idea of censoring the science of nature from people.
I think the only long term option is to beef up defenses.
No they don’t. Both Bard and Llama are far behind GPT-4, and GPT-4 finished training in August 2022.
this is the future that orwell feared.
Either Sam forms a new company with mass exodus of employees, or outside pressure changes structure of OpenAI towards a clear for-profit vision. In both cases, there will be no confusion going forward whether OpenAI/Sam have become a profit-chasing startup.
Chasing profits is not bad in itself, but doing it under the guise of a non-profit organization is.
You can't judge a non-profit by the same success metrics as a for-profit.
I also personally loathe Microsoft, but even I will concede that they probably have the technical wherewithal to follow known trajectories, the cat is out of the bag with AI now.
> in a post on X, formerly Twitter
It keeps surprising me that someone can so completely torpedo their brand that news organisations feel compelled to keep referring to the old name so people have some idea of what they’re talking about.
And they can pick two. GPUs don't grow on trees, so without billions in funding they can't provide it to everyone.
Available means that I should have access to the weights.
Safe means they want to control what people can use it for.
The board prioritised safe over everything else. I fundamentally disagree with that and welcome the counter coup.
The average postgraduate in physics can design a nuclear bomb. That ship sailed in the 1960s. Anyone who uses that as an argument wants a censorship regime that the medieval catholic church would find excessive.
You are set on beliefs based on rumors.
Microsoft's power has nothing to do with OpenAI employees.
https://chat.openai.com/share/3dd98da4-13a5-4485-a916-60482a...
There are many people that would do great things with god-like powers, but more than enough that would be terrible.
Mozilla Corporation's Experience
*Challenges and Adaptation:* Mozilla Corporation has faced financial challenges, leading to restructuring and strategic shifts. This includes layoffs, closing offices, and diversifying into new ventures, such as acquiring Fakespot in 2023.
*Dependence on Key Partnerships:* Its heavy reliance on partnerships like the one with Google for revenue has been both a strength and a vulnerability, necessitating adaptations to changing market conditions and partner strategies.
*Evolution and Resilience:* Despite challenges, Mozilla Corporation has shown resilience, adapting to market changes and evolving its strategies to sustain its mission, demonstrating the effectiveness of its governance model within the context of its organizational goals and the broader technology ecosystem.
In conclusion, while both OpenAI and Mozilla Corporation have navigated unique paths within the tech sector, their distinct governance structures illustrate different approaches to balancing mission-driven goals with operational sustainability and market responsiveness.
Read a good article about the history of the OpenAI board that argued this all went down due to the recent loss of 3 board members, bringing total board membership from 9 to 6 (including losses like Reid Hoffman, who never would have voted for something like this), and Altman wanted to increase board membership again. Likely the "Gang of Four" here saw this as their slim window to change the direction of OpenAI.
To be frank, they need to really spell out what "benefitting mankind" is. How is it measured? Or is it measured? Or is it just "the board says this isn't doing that so it's not doing that"?
It's honestly a silly slogan.
Something tells me most people are going to go for fucktons of money and working on what they think is interesting. Even if it makes other people even more money.
I believe that is indeed the case, it is the responsibility of the board to make that call.
He's secured the resources and partnerships to grow a $100B company virtually overnight, but he hasn't done anything noteworthy or laudable?
HN takes are the best. Humor really is the best medicine.
I think a better approach is to have a system of guiding principles that should guide everyone, and then putting in place a structure where there needs to be periodic alignment that those principles aren't being violated (e.g. a vote requiring something like a supermajority across company leadership of all orgs in the company, but no single org has the "my job is to slow everyone else down" role).
Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.
- Not limiting access to a universally profitable technology by making it only accessible to the highest bidder (e.g. hire our virtual assistants for 30k a year).
- Making models with a mind to all threats (existential, job replacement, scam uses)
- Potentially open-sourcing models that are deemed safe
So far I genuinely believe they are doing the first two and leaving billions on the table they could get by jacking their price 10x or more.
If the board were to have any influence they had to be able to do this. Whether this was the right time and the right issue to play their trump card I don't know - we still don't know what exactly happened - but I have a lot more respect for a group willing to take their shot than one that is so worried about losing their influence that they can never use it.
That is the only way.
Monumental, like the invention of language or math, but not like a god.
https://twitter.com/elonmusk/status/1726376406785925566?s=61...
The answer to the question is - even people who don't like him realize he's a smart guy and this was a dumb move, and it was done in an amateurish way by a board out of their depth.
No mention of the people actually responsible for any of this? Y'know, the scientists and engineers that actually had to do something to create the crazy technology he's taking credit for? The noteworthy dude is the generic MBA C-suite type that managed to not screw the pooch when given a team of the brightest minds out there?
https://www.bloomberg.com/news/articles/2023-11-20/sam-altma... | https://archive.is/sv8SH ("Bloomberg: The Doomed Mission Behind Sam Altman's Shock Ouster From OpenAI")
> At the same time, companies that depend on OpenAI’s software were hastily looking at competing technologies, such as Meta Platforms Inc.’s large language model, known as Llama. “As a startup, we are worried now. Do we continue with them or not?” said Amr Awadallah, the CEO of Vectara, which creates chatbots for corporate data.
> He said that the choice to continue with OpenAI or seek out a competitor would depend on reassurances from the company and Microsoft. “We need Microsoft to speak up and say everything is stable, we’ll continue to focus on our customers and partners,” Awadallah said. “We need to hear something like that to restore our confidence.”
Notice how in shock everyone was that a CEO was fired the same way us regular peasants are fired every day.
At this point that coming from Elon may not be the endorsement you think it is.
Also maybe Elon sees that Ilya is going to be ousted and wants to extend a hand to him before others do.
Copying and pasting code from ChatGPT, I created a functional iOS app today based on my design. I have never before written a mobile app, any Swift code, or much of any code aside from Power Apps, in at least 15 years.
I am on that hype train.
The thing is starting to look more and more like: become the biggest name in AI by claiming non-profit status, then leverage the brand to go for-profit.
> The fact that you think “non-technical” people don’t contribute meaningful value to anything means you are stupid
Happy to call myself an imbecile in this case!
And again, what exactly has Sam himself done to bring about the tech? Without Ilya and the rest of the engineers and researchers you wouldn't have been able to copy/paste the code, why exactly is Sam the one that gets the credit here?
Right now, OpenAI mostly has a big cost advantage; fully exploiting that requires lower pricing and high volume.
They all want to get the big payday, including Microsoft.
People are in the startup game for the big payday. And Sam Altman is the best person given where OpenAI is at right now.
I use it probably 20 times a day at this point.
example: "I ran performance tests on two systems, here's the results of system 1, and heres the results of system 2. Summarize the results, and build a markdown table containing x,y,z rows."
"extract the reusable functions out of this bash script"
"write me a cfssl command to generate a intermediate CA"
"What is the regex for _____"
"Here are my accomplishments over the last 6 months, summarize them into a 1 page performance report."
etc etc etc
If you're not using GPT4 or some LLM as part of your daily flow you're working too hard.
Get GPT4All (https://gpt4all.io), log into OpenAI, drop $20 on your account, get an API key, and start using GPT4.
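If you go the API route, here's a minimal sketch of that daily flow in Python, assuming the official openai SDK (v1+) from pip and an OPENAI_API_KEY in your environment; the model name and prompt are just illustrative:

    # pip install openai
    import os
    from openai import OpenAI

    # Assumes OPENAI_API_KEY is set in the environment.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    response = client.chat.completions.create(
        model="gpt-4",  # any chat model your account can access
        messages=[
            {"role": "system", "content": "You are a concise coding assistant."},
            {"role": "user", "content": "What is the regex for an IPv4 address?"},
        ],
    )
    print(response.choices[0].message.content)

Wrap that in a tiny CLI and it covers most of the one-off prompts above.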
We're just seeing a variation of that playing out live. There are multiple teams working on AI, ChatGPT got "there" first and now we have a single heroic figure to worship. Personality cults seem to be a part of the quintessential human condition.
But still, irrelevant, I didn't comment on whether it's good or bad, I just said it's overhyped, and comments like these never fail to come up when someone says anything even slightly negative about the tech.
Plus, the tech scene is extremely prone to hopping on trends and then taking it way too far. If you want some real cringe, check out @varun_mathur's long Twitter post from Nov 18th.
Although at its core, firing Altman under current circumstances was still a poorly thought-out decision which evidently caused the event itself to become a major centre of attention.
I am not necessarily an Altman hype man. As an obvious outsider, my best bet as to why he came back so strong was because apparently many researchers (employees) said they would leave as well. I can't read peoples' minds but I can infer a bit based on human financial interest.
The employees with equity were very close to a liquidity event at a valuation of $86B. That is likely life changing money for many, and this whole Altman getting fired mess put that life changing money on hold.
I wonder if his ouster had been done in a more sane/stable way, if things could have kept chugging along without him.
As far as the average HN opinion goes, I dunno, I have seen many upvoted comments saying... yeah, he's just the CEO.
What this misses is all the regulatory capture that he’s been campaigning for. All the platforms have now closed their gardens. Authors and artists are much more vigilant about copyright etc. So it’s now a totally different game compared to 3 years ago because the data is not just there up for grabs anymore.
If a model is not safe, the access should be limited in general.
Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do the following harmoniously:
1. Release new models that do the same things they allow others access to via their 'products', with reasonable instructions on how to run them on-prem (i.e. I'm not saying what they do has to be fully runnable on a single local box, but it should be reproducible as a nonprofit purportedly geared towards research.)
2. Provide on-line access to models with a cost model that lets others use while furthering the foundation.
3. Provides enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.
4. Not allow potentially unsafe models to be available via less than both research branches.
Perhaps, however, I am too idealistic.
On the other hand, Point 4 is important, because we can never know under the current model, whether a previous unsafe model has been truly 'patched' for all variations of a model.
OTOH, if a given model would violate Point 4, I do not trust the current org to properly disclose the found gaps; better to quietly patch the UI and intermediate layers than ask whether a fix can be worked around with different wording.
Now, I feel even just "OK" agential AI would represent god-like abilities. Being able to spawn digital homunculi that do your bidding for relatively cheap and with limited knowledge and skill required on the part of the conjuror.
Again, this is very subjective. You might feel that god-like means an entity that can build Dyson Spheres and bend reality to its will. That is certainly god-like, but just a much higher threshold than what I'd use.
While Google did do a good job milking knowledge and improving from its queries and interaction data, OpenAI surely knows how to get information from high-quality textual data even better.
OpenAI made an interface where you can just speak your natural language; it didn't make you learn its own pool of keyword-jargon bastardized quasi-command language. It's way more natural.
Here's to hoping there's still some poetic irony left to dish out in the world.
People speculated it was the funding, or attracting talent, or having "access". Turns out it was none of them (obviously they all have a part, but having all three doesn't mean you can best OpenAI, which gives you the fundamental reason why it is so hard to compete with them).
Yes, exactly.
Strong qualifications, strong execution, strong results
One wishes someone had pulled a similar (in sentiment) move on energy companies and arms suppliers.
If the app does use certificate pinning, then you can use an Android phone and a modified app that removes the logic that enforces certificate pinning. This is more involved but also not impossible.
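A common runtime variant of that approach, instead of repackaging the app, is hooking the pinning check with Frida on a rooted or debuggable device. Here's a sketch of the Python driver, assuming the target app uses OkHttp3; the package name is a hypothetical placeholder:

    # pip install frida  (also requires frida-server running on the device)
    import frida

    # JS payload: turn OkHttp3's CertificatePinner.check into a no-op,
    # so pin mismatches are silently ignored.
    JS = """
    Java.perform(function () {
      var Pinner = Java.use('okhttp3.CertificatePinner');
      Pinner.check.overload('java.lang.String', 'java.util.List')
        .implementation = function (hostname, certs) { /* skip pinning */ };
    });
    """

    session = frida.get_usb_device().attach("com.example.app")  # hypothetical package
    script = session.create_script(JS)
    script.load()
    input("Pinning bypass active; press Enter to detach.\n")

Apps that don't use OkHttp need a different hook, but the pattern is the same.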
Sama also went on Lex and got over 5M views. The title was: "OpenAI CEO on ChatGPT, GPT-4, and the future of AI."
I bet Google has already spent an order of magnitude more money on GPT-4 rival development than OpenAI spent on GPT-4.
they "steal" access to data because the LLM launders it on the other end
Done.
Any actual AI takeover will be boring and largely voluntary. For certain definitions of voluntary.
Easier said than done
But I don't know if this is truly the case.
Whether it was "smearing" or uncovering actual wrongdoing depends on the facts of the matter, which will hopefully emerge in due course. A board should absolutely be able and willing to fire the CEO, oust the chairman, and jeopardize supplier relationships if the circumstances warrant it. They're the board, that's what they're for!
https://academictorrents.com/details/89d24ff9d5fbc1efcdaf9d7...
I assume this must be only the text portion, and heavily compressed?
Then, he progressively sold more and more of the company's future to MS.
You don't need ChatGPT and its massive GPU consumption to achieve the goals of OpenAI. With a small research team and a few million, this company becomes a quaint, quiet overachiever.
The company started to hockey-stick and everyone did what they knew: Sam got the investment and money, the tech team hunkered down and delivered GPT-4 and soon -5.
Was there a different path? Maybe.
Was there a path that didn't lead to selling the company for "laundry buddy"? Maybe also.
On the other hand, MS knew what they were getting into when its hundredth lawyer signed off on the investment. To now turn around as surprised Pikachu when the board starts to do its job and their man on the ground gets the boot is laughable.
The data has to come from somewhere, and all of the outlets that were used to train ChatGPT, stable diffusion, etc. have since been locked down. Any new company that Sam Altman makes in the AI space won't be competing just on merits of talent and product, they will also need to pay for and negotiate access to data.
I'd actually expect this to get far worse going forward, now that other organizations have an idea of how valuable their data is. It's also trivial to justify locking it down under the guise of protecting people, privacy, etc.
LLMs know the contents of books because they are analyzed, reviewed and spoken about everywhere. Pick some obscure book that doesn't show up on any social media and ask about its contents. GPT won't have a clue.
Can you fill me in as to what the goal of OpenAI is?
So yeah, Ilya is a very known entity. No, ordinary folks don't need to know him, but if you are in IT and especially if you have anything to do with AI, then not knowing about Ilya tells more about your informational bubble than about Ilya's alleged lack of recognition.
It is akin to claiming to be into crypto on the development side and not knowing the name of Vitalik Buterin.
What's your evidence contrary to this? Sounds like your common sense rather than inside knowledge.
The entire English language Wikipedia is only around 60GB in a format that can be readily searched and randomly accessed (ZIM), for example: https://kiwix.org/
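If you want to poke at one of those ZIM files programmatically, here's a sketch using the openzim python-libzim bindings; I'm going from its README for the Archive API, and the filename and article path are placeholders that vary by dump:

    # pip install libzim  (openzim's Python bindings; API per its README)
    from libzim.reader import Archive

    zim = Archive("wikipedia_en_all_nopic.zim")  # placeholder filename
    print("main page:", zim.main_entry.get_item().path)

    # Random access by path; actual article paths vary by dump.
    entry = zim.get_entry_by_path("A/Alan_Turing")
    html = bytes(entry.get_item().content).decode("utf-8")
    print(html[:200])

The whole point of ZIM is that those lookups are fast without decompressing the archive up front.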
Whether fulfilling their mission or succumbing to palace intrigue, it was a gamble they took. If they didn't realize it was a gamble, then they didn't think hard enough first. If they did realize the risks, but thought they must, then they didn't explore their options sufficiently. They thought their hand was unbeatable. They never even opened the playbook.
Other companies tried competing against Chrome, and so far Mozilla is the most successful, as everyone else gave up and ship Chrome skins people basically only use by subterfuge or coercion. I'd say that's pretty good.
It's like imagine a guy has a nice idea to cure cancer, but plays the princess with it and refuses to industrialize it, while people are dying left and right. Surely, it becomes indefensible, and at some point, someone brave will do the right thing and implement the idea. You have a right to reap the benefit of your ideas, but you have a duty not to deprive humanity of any benefit just because you thought of it first, I feel?
The board didn't plan or think this through at all.
This isn't about Sam being powerful, just about him being a reasonable, predictable cofounder Microsoft can work with. It's the rest of the board that shocked Microsoft with how unprofessional their actions were, that they can't be worked with.
The very idea that they would fire Sam without even consulting with Microsoft first is such a gigantic red flag in judgment.
For the mobile app I used one of the smaller Wikipedia subsets, since I didn't want to take up too much space on my phone. The full offline Wikipedia download is saved to my laptop.
it is inconsistent in language usage to write differently than to speak. we don’t speak big sounds, that’s why we don't write them either. and: doesn’t one say the same thing with one alphabet as with two alphabets? why does one merge two alphabets of completely different characters into one word or sentence and thereby make the written image inharmonic? either large or small. the large alphabet is illegible in the typesetting. therefore the small alphabet. and: if we think of the typewriter, the limitation to lower case characters means great relief and is time saving. and if we think further, it would be simplified by switching off upper case characters.
For a for-profit, the pragmatic approach due to Microsoft also being the majority compute provider (we can set aside the investments for the moment - most are in the form of compute credits and come in tranches; OpenAI is not sitting on $10B in cash in their bank accounts or whatever) would make a lot of sense.
But they're a non-profit that operate in accordance to their pipe-dream charter. You and I might be skeptical of it or just think it's generally dumb, but non-profits are allowed to believe in pipe-dreams and pursue them.
At the very least they still issued a poorly worded statement and have not been able to recover from that, but it is quite possible that their attitude towards the investors in the for-profit is entirely consistent with the charter they are supposed to be following.
I think the above is a consequence of an "I can afford to write all lowercase/do unconventional thing X". And by "afford" I mean here more like "I don't have bosses, nor do I have to please anyone by doing conventional things".
There was an article or a discussion here a while ago about how, in an organizational pyramid, the people at the bottom usually write as normally/nicely as possible, while going upwards people can allow themselves to write however they like, including being super rude if they so choose.
https://twitter.com/emilychangtv/status/1726436311215845862?...
My favorite was Rainbow MosAIc, a Rashomon-style film taking place mainly from Friday to Monday. It played with all the different potential motivations and theories. It pulled off a half-decent metaphor by representing the different points of view via the different video-conferencing cameras.
That would imply they couldn’t have considered that Altman was beloved by vital and devoted employees? That big investors would be livid and take action? That the world would be shocked by a successful CEO being unceremoniously sacked during unprecedented success, with (unsubstantiated) allegations of wrongdoing, and leap on the story? Generally those are the kinds of things that would have come up on a "Fire Sam: Pros and Cons" list. Or any kind of "what's the best way to get what we want and avoid disaster" planning session. They made the way it was done the story, and if they had a good reason, it's been obscured and undermined by attempting to reinstate him.
I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.
Even the recent OpenAI profile in one of prominent publications covered Mira, Ilya and gdb in addition to Sam.
But the fundamental question is why would a researcher expect (if they do) that they will be as well known as the CEO who is the face of organisation?
It doesn't. It never does. Especially when you're not profitable.
For a profit company, you buy shares and elect the board, right? If they turn the company into something nobody wants, the shares lose value, and maybe someone picks up the shares cheap and turns it around. So there's a feedback cycle. The profit motive is almost incidental.
But here do the current board members just appoint the next board members and grow/shrink the total number?
What I'm fairly sure of, though, is that if the board had been stocked with heavyweights rather than lightweights, this would have been handled in a procedurally correct way, with a lot more chance that it would stick.
How is that guaranteed? If investors remove him from board of directors, he may get pissed off and quit, no?
> In addition, no one is irreplaceable.
In theory, maybe. In practice, it is not always easy. Nearly a year after ChatGPT came out Google hasn't been able to catch up. If it was easy to replace Ilya after he left Google, they would have caught up by now.
The board however has only made things ever more murky with their vague and fig-leaf like defenses. They appear utterly unprepared to deal with the aftermath of their own action, which they and only they knew was coming.
If it weren't for the mentality you are rallying against we wouldn't have ChatGPT. Google, Meta, everyone had these LLMs sitting around. OpenAI was the only company with the balls to release it to the public.
The communication was certainly very poor, and we don't know if the reasons were good, but I don't understand the speed complaint.
I'm building a magazine encyclopedia and I would estimate that 99.9% of all magazines ever published are not available electronically. And that the content in magazines probably exceeds the content in books by an order of magnitude.
Exactly. Google has so much more resources, tries so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that - if you haven't tried comparing Bard's output to GPT-4 for the same questions - try it, it will become obvious.
It's quite possible their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.
For a while I felt more hopeful about OpenAI being truly open and benefiting everyone (perhaps I was naive).
I hope he doesn't end up being the CEO again.
That would at least have made it seem like they knew what they were doing.
But investors would have still beaten them over the head.
It's like a dance: you can't just go do the Polka when everybody expects a Waltz - that's going to attract a ton of attention. They should have gotten out in front of any potential blowback, to ensure that even if their decision was by the book (which for their sake I hope it was), it didn't ruffle the scales of very large dragons.
Many people have betrayed their country to foreign governments in exchange for mere thousands of dollars. It is never safe to rule out the willingness of employees to engage in corporate espionage, even in exchange for truly pitiful rewards. It would be a stupid idea, but that doesn't mean it won't happen.
Investment is only partially about trust. I agree Sam's a pretty investable guy. I expect Sam to pursue growth through fundraising, product commercialization, corporate partnerships, etc., in exactly the YC mode. He's also clearly ok with letting the momentum of that growth overwhelm the original stated aims of OpenAI, especially given what the original firing press release said about Sam not being entirely forthright. I suspect Microsoft made their investment knowing that something like this might happen. It's not trustworthy of him to let for-profit momentum overwhelm the nonprofit's aims, but if you're an investor, do you care?
---
A Nobel Prize was awarded to Ilya Prigogine in 1977 for his contributions in irreversible thermodynamics. At his award speech in Stockholm, Ilya showed a practical application of his thesis.
He derived that, in times of superstability, lost trust is directly reversible by removing the cause of that lost trust.
He went on to show that in disturbed times, lost trust becomes irreversible. That is, in unstable periods, management can remove the cause of trust lost--and nothing happens.
Since his thesis is based on mathematical physics, it occupies the same niche of certainty as the law of gravity. Ignore it at your peril.
-- Design for Prevention (2010)
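For reference, the physics the book is leaning on is Prigogine's standard entropy balance for open systems (a textbook result; the mapping to "trust" is the book's analogy, not part of the formalism):

    % Prigogine's split of a system's entropy change:
    % exchange with the surroundings plus internal production.
    dS = d_e S + d_i S, \qquad d_i S \ge 0
    % d_i S = 0 only for reversible processes near equilibrium
    % ("superstability" in the quote); once d_i S > 0, undoing the
    % external cause does not undo the change - it is irreversible.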
It is harder to prove to a "should have known" standard compared to, say, buying stolen speakers from the back of a truck for 20% of the list price.
> And Musk proposed a possible solution: He would take control of OpenAI and run it himself.
Depending on where you live, you open yourself up to at least the consequences of your own actions (negligence, errors of judgment) and possibly even to the errors of other board members, because you are not only there to oversee the company, you also oversee the other board members. That's why on-the-spot board resignations are usually a pretty bad sign unless they are for health or other urgent personal reasons: they are a very strong signal that a board member feels they have not been able to convince the rest of the board that it has strayed from the straight and narrow, and that its choices exceed their own thresholds for ethics or liability (or both...).
That in turn is one of the reasons why a board would normally be very upset if they feel they have not been given all the information they require to do their job - and that was the very first line the board trotted out as to why Altman was let go.
But even then they should have built their case rather than just taking a snap poll at the one point in time when they had a quorum to get rid of him, because it seems that that, and not Altman's behavior (which as far as I can see was fairly consistent from day #1), was the real reason they did what they did. On the original board (9 people) the four didn't have the votes, but on the shrunk board (6) they did.
"Benefit" and "available" can carry very different meanings once you mix in alignment/safety concerns.
From an indexing/crawling POV, content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against SEO spam. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.
LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.
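To make the filter-evasion point concrete, here's a deliberately naive sketch (hypothetical Python, not any real spam filter): a keyword-stuffing score of the kind that tripped up classic SEO spam, which fluent LLM output passes without effort.

    # Toy illustration only: real filters are far more sophisticated,
    # but the failure mode is similar - the statistical tells of old
    # spam vanish once the text is fluent.
    from collections import Counter

    def stuffing_score(text):
        """Fraction of the text taken up by its most repeated word."""
        words = text.lower().split()
        if not words:
            return 0.0
        _, top = Counter(words).most_common(1)[0]
        return top / len(words)

    old_spam = "buy cheap pills cheap pills online cheap pills now cheap pills"
    llm_spam = "Our pharmacy offers competitively priced medication with fast, discreet shipping."

    print(stuffing_score(old_spam))  # ~0.36: trivially flagged
    print(stuffing_score(llm_spam))  # 0.10: reads as ordinary prose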
For instance:
https://www.seattletimes.com/business/paul-allen-goes-after-...
In the case of Tesla, to accelerate the development of electric cars; in the case of Twitter, to reduce the probability of civil war; and in the case of SpaceX, to eventually have humanity (or our descendants) spread out enough that a single catastrophic event (like a meteor, gray goo, or similar) doesn't wipe us out all at once.
His detractors obviously will question both his motives and methods, but if we imagine he's acting out of good faith (whether or not he's wrong), his approach to AI fits the pattern, including his story about why he helped with the startup of OpenAI in the first place.
For someone with an x-risk approach to AI safety, the first concern is, to quote Ilya from the recent Alignment Workshop: "As a bare minimum, let's make it so that if the tech does 'bad things', it's because of its operators, rather than due to some unexpected behavior".
In other words, for someone concerned with existential risk, even intentional "bad uses" - such as deploying AI-powered killer robots at scale in war, or a dictator using AI to suppress a population - are secondary concerns.
And it appears to me that Elon and Ilya both have this outlook, while Sam may be more concerned with shorter term social impacts.
The board was considering the requests to bring back Sam because they realized they were handling the situation badly and didn’t want the organization to blow up and fail at its mission, but they refused to resign unless and until suitably mission-aligned replacement board members were agreed upon (note that profit is not the nonprofit’s mission).
Of course they didn’t bring him back in the end, or resign, after all.
If the board had yielded to similarly minded replacements and brought back Sam, that wouldn't have been the same as exonerating him - only an admission of how badly they handled the firing. I can imagine that an independent investigation into the truth of the existing board's allegations would still have been ordered by the new board, just as the new interim CEO actually did. If it was truly just a personality clash leading to mistrust, that would probably be the end of it. If there truly was malfeasance that makes Sam an unsuitable CEO, they'd probably then engage a PR firm to help make the case to the world far more persuasively than happened on Friday.
Yes, this is speculation, but I’ve been a nonprofit director and president myself, and if I were on that replacement board it’s what I’d do. In that case, the organization was much lower-profile than OpenAI, and we were spare-time volunteers with a tiny budget. The closest we came to self-dealing is when a long-time director wanted to become a paid software engineer contractor for us, but he left the board in order to make that ethically clear, and the remaining board approved the arrangement. Nothing hidden or dishonest there, and he’s continued to be a great help to the organization.
(Disclaimer: I ended my own involvement with the org over 4 years ago, but that was truly because the rest of my life got too busy. There was no drama or anything around that.)
Let’s assume for a second that it is a power play. If the point of it is just the power struggle between two factions seeking power then yeah it’s not a good thing to majorly disrupt an organization. But if the point of the power play is to rescue the nonprofit’s pursuit of its mission from a CEO’s misuse of power that goes against the mission, it’s a board acting exactly as it should, other than badly handling the communications around this mess.
I have no inside info and therefore am not expressing any opinion on what the truth is. But I’m not going to rush to believe the PR war being waged by Altman and his allies merely because the current board is bad at PR/comms.
I look forward to reading any public summary of the report from the investigation which the new interim CEO has ordered.
Satya got what he wanted, @sama joins MS to create pretty much a spin-off startup there, OpenAI on suicide watch with employees leaving, absolutely no new funding ever and a just-appointed CEO that wants to "pause" the company, lol.
Right all along!
https://twitter.com/ilyasut/status/1726590052392956028
https://www.wired.com/story/openai-staff-walk-protest-sam-al...
Soon (1-2 years) LLMs will be good enough to improve the general SNR of the web. In fact I think GPT-4 might already be.