I don’t consider anybody beyond forgiveness, and if Ilya takes a professional lesson from this and Sam learns to be more mindful of others’ concerns, I consider this a win for all. Starting over in a new entity sounds great but would mean years of setback.
I hope they work this out.
OK. OK. I’ve said this my whole career.
Engineers are the most emotional species of worker. There is a grand delusion that engineers are rational.
This just goes to show how irrational they are. Snap reactions like this: sign of a brilliant but fucked up engineer.
I am an engineer. I am under no illusion that I’m rational. Quite the opposite.
On a more serious note though, I hope this stirs some discussion on remembering why there's "Open" in the name OpenAI.
I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.
I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.
EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662
Another great example that even huge multi-billion-dollar companies are led by people. What a mess.
However this plays out, this is a big wake-up call for everyone who is currently dependent on OpenAI. More changes will be needed to restore trust. It's going to be messy for a while. For a company that has executed pretty much perfectly until now, it's so surprising how they ruined their reputation like this.
For lack thereof, PsychohistoryAI it is!
RIP: AlignmentAI
If they really believed in the non-profit mission, and Sam didn’t, they probably torpedoed their chances of winning.
This was all they had to write and today would be a different day:
> We regret to inform you that Sam Altman is being let go as CEO of OpenAI due to irreconcilable differences between his desire to commercialize our AI and OpenAI’s research-driven goals. We appreciate Sam’s contributions to the company and the partnership he established with Microsoft, which have set a foundation for OpenAI to thrive far into the future as a research organization with Microsoft focusing on commercialization of the technology.
> We want to assure you that ChatGPT and current features will remain and be upgraded into the future. However, the focus will be on developing core technologies and a reliable, safe, and trustworthy ecosystem for others to build on. We believe that this will allow us to continue to push the boundaries of AI research while also providing a platform for others to innovate and create.
In this scenario, it was a pure power struggle. The board believed they’d win by showing Altman the door, but it didn’t take long to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.
Why? We would have more diversity in this space if he leaves: we'd get another AI startup with huge funding and know-how from OpenAI, while OpenAI would become less Sam Altman-like.
I think him staying is bad for the field overall compared to OpenAI splitting in two.
In Office Space, Idiocracy, and, most relevant here, Silicon Valley, he not only accurately and very precisely forecasts, but deconstructs the reasoning and the vapid lack of core philosophy behind each of the real-life narratives he's parodying.
That serious people still consider Silicon Valley as some kind of thing to aspire to is horrifying. This despite repeated examples of predictably base incompetence, lack of maturity, and, quite frankly, avaricious opportunism as the kernel on which SV is built.
The capped-profit / non-profit structure muddles that a little bit, but the reality is that the entity can't survive without the funding that goes into the for-profit piece.
And if current investors + would-be investors threaten to walk away, what can the board really do? They have no leverage.
Sounds like they really didn't "play the tape forward" and think this through...
Also, all the employees are being paid with PPUs, which are a share in future profits, and now they find out that, actually, the company doesn't care about making a profit!
A lot of top talent with internal know-how will be poached left and right. Many probably going to Sam's clone that he will raise billions for with a single call.
Oh, I get it now, Foundation.ai
I doubt he returns, now he can start a for profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to Saudis to raise billions for an Nvidia competitor - >>38323939
Have to wonder how much this was contrived as a win-win, either OpenAI board does what he wants or he gets a free out to start his own company without looking like he's purely chasing money
Maybe we have different definitions of "the court of public opinion". Most people don't know who Sam Altman is, and most of the people who do know don't have strong opinions on his performance as OpenAI's CEO. Even on HN, the reaction to the board "skewer[ing] him" has been pretty mixed, and mostly one of confusion and waiting to see what else happens.
This quick a turnaround does make the board look bad, though.
Do you understand that this is conceptually the same thing as the directors of a public art museum deciding to just take millions of dollars of paintings for themselves?
VS what, a Stanford dropout who made buds with Paul Graham? That's better and more respectable because he's cooler and connected with YC/VC hipness, right?
WorldCoin is So Awesome!
This is why you need someone with business experience running an organization. Ilya et al. might be brilliant scientists, but these folks are not equipped to deal with the nuances of managing a ship as heavily scrutinised as OpenAI.
The news yesterday broke the tech/AI bubble, and there would have been much more press on it if it wasn't done as a Friday news dump.
2 - clearly not having spent even 10 seconds thinking about the (obvious) reaction of employees on learning the CEO of what seems like a generational company was fired out of the blue. Or the reaction to the (highly likely) prospect of a cofounder following him out the door.
3 - And they didn't even carefully think through the reaction to the press release which hinted at some real wrongdoing by Altman.
3a - anyone want to bet if they even workshopped the press release with attorneys or just straight yolo'd it? No chance a thing like this could end up in court...
They've def got the A team running things... my god.
The board can sack a CEO but if they keep their influence over employees, customers and shareholders... what's the board going to do?
Speaking for myself, if they had framed this as a difference in vision, I would be willing to listen. But instead they implied that he had committed some kind of categorical wrongdoing. After it became clear that wasn’t the case, it just made them look incompetent.
Sure, the average person doesn't care about Sam. But among the people who matter, Sam certainly came out on top.
Even if they are making the right call, you can't really trust them after ruining the reputation and trust of the company like this.
I want a second (first being Anthropic?) OpenAI split. Having Anthropic, OpenAI, SamGregAi, Stability and Mistral and more competing on foundation models will further increase the pressure to open source.
It seems like there is a lull in returns to model size, if that's the case then there's even less basis for having all the resources under a single umbrella.
That would also remediate the appearance of total incompetence of this clown show, in addition to admitting that the board and Sam don’t fit with each other, and restore confidence for the next investor that their money is properly managed. At the moment, no one would invest in a company that can be undermined by its non-profit, with a (probably) disparaging press release a few minutes before market close on a Friday evening, for which Satya had to personally intervene.
From Greg's tweet, it seems like the chaos was largely driven by Ilya, who has also been very outspoken against open source and sharing research, which makes me think his motivations are more aligned with those of Microsoft/Satya. I still can't tell if Sam got ousted because he was getting in the way of a Microsoft takeover, or if Sam was trying to set the stage for a Microsoft takeover. It's all very confusing.
The real reason I disdain the majority of the board of OpenAI is that there are clearly three people on the board who have accomplished nothing and are clear trust-fund babies.
The way the board pulled this off really gave them no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering if it will even be worth paying for if the model’s improvements stagnate with the departure of these key people.
It would seem the board might have felt they were backed into a corner.
I was willing to believe Ilya had at least a decent reason to do something so drastic but who knows, this ain't looking good for him.
As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.
Would love some commercial fusion power plants on the side as well please.
I think this well is deeper than you're giving it credit for.
And if they were right to fire Sam, they're now reacting too quickly to negative backlash. 24 hours ago they thought this was the right move; the only change has been perception.
But yes, the comment was a bit unhinged.
Don't really see the difference between an MBA and whatever it is that Altman does, though, other than credentials.
Finally, that you think that ethicist (or the study of ethics) is masturbatory, especially in the context of an organization that has as its explicit mission to hoist AGI onto the world -- tells me quite a bit about your own... ethics.
World could do with a lot more ethicists and a lot less MBAs.
At a minimum it's going to be awkward in the men's room for a while.
- https://www.nytimes.com/2023/11/18/technology/ousted-openai-...
- https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
And he smoked Steve Jobs' time-to-re-hire!
You severely overestimate his notoriety.
Source: https://arstechnica.com/information-technology/2023/11/repor...
For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.
Well, I'm being constantly surprised.
E.g. https://www.thehartford.com/management-liability-insurance/d...
"The Who, What & Why of Directors & Officers Insurance
The Hartford has agents across the country to help with your insurance needs. Directors and officers (D&O) liability insurance protects the personal assets of corporate directors and officers, and their spouses, in the event they are personally sued by employees, vendors, competitors, investors, customers, or other parties, for actual or alleged wrongful acts in managing a company.
The insurance, which usually protects the company as well, covers legal fees, settlements, and other costs. D&O insurance is the financial backing for a standard indemnification provision, which holds officers harmless for losses due to their role in the company. Many officers and directors will want a company to provide both indemnification and D&O insurance."
Comical to imagine something like this happening at a mature company like FedEx, Ford, or AT&T, all of which have smaller market caps than OpenAI. You basically have impulsive children in charge of a massively valuable company.
A. Core team members leaving (and perhaps more threatening to leave).
B. (maybe more likely) Nadella told Sutskever that he might shut off funding or restrict compute resources if he didn't reverse course, or at least, wasn't able to retain talent (see A).
Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.
Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.
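To make the "human review of every such interaction" tradeoff concrete, here is a minimal sketch in Python (purely illustrative, not any real product's API) of the kind of gate a flywheel-driven product would need to remove in order to scale:

    # Model outputs are queued rather than executed; a person releases each one.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class ReviewGate:
        pending: List[str] = field(default_factory=list)

        def submit(self, model_output: str) -> int:
            """Queue a proposed action; nothing touches downstream systems yet."""
            self.pending.append(model_output)
            return len(self.pending) - 1  # ticket number for the reviewer

        def approve(self, ticket: int, execute: Callable[[str], None]) -> None:
            """A human explicitly releases one queued action for execution."""
            execute(self.pending.pop(ticket))

The entire flywheel argument is about deleting this bottleneck: the reviewer's throughput caps how much interaction data the product can gather.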
What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?
I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].
The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.
And that's it.
To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.
IMO it's not about looking strong, it's about looking competent. Competent people don't fire the CEO of a multi-billion-dollar unicorn without thinking it through first. Walking back the firing so soon suggests no thinking was involved.
No stakeholder would walk away from OpenAI for want of Sam Altman. They don’t license OpenAI technology or provide funding for his contribution. They do it to get access to GPT-4. There is no comparable competitor available.
If anything they would be miffed about how it was handled, but to be frank, unless GPT-4 is Sam Altman furiously typing, I don’t know that he’s that important. The instability caused by the suddenness, that’s different.
It’s unclear what Ilya thinks keeps the lights on when MSFT holds their money hostage now. Which is probably why there is desperation to get Altman back…
Now that I work for a non-tech, non-SV company (agricultural equip manufacturer in fact), and have some distance from the real world SV, maybe I could watch it without my skin crawling.
Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.
All opinions and recommendations follow from this doomsday cult belief.
I feel like it be like that, but instead of a legion, legions.
And OpenAI is scared.
But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI would be so far beyond anything we have experience with that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.
The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.
[1] https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)
Judge is no prophet; he just communicated what the rest of us already knew.
At least two of these people are not like the others and deserve to be fired and disgraced for this shitshow regardless of how it pans out.
Probably so bad that the damage has already been done to themselves (the board) regardless of what happens next.
How can Sam possibly agree on the board remaining intact when they don't trust him in his leadership?
Maybe they’ll give the non profit some advisory (fake) role over the new company.
Others have commented on how Microsoft actually has access to the IP, so the odds that they could pack up their toys and rebuild OpenAI 2.0 somewhere else with what they've learned, their near-infinite capital, and no non-profit shenanigans to deal with are meaningful.
I'm not saying Sam is needed to make OpenAI what it is, but he's definitely "the investors' guy" in the organization, based on what has surfaced over the last 24 hours. Those investors would rather have him there over someone else, hence the pressure to put him back. It doesn't matter whether you and I think he's the man for the job -- what matters is whether investors think he is.
TL;DR the board thinks they have leverage, but as it turns out, they don't
But right now they're getting a lot of shitstorm for this inexperienced handling.
And it doesn't reflect well on the board, which looks inexperienced.
Gordon-Levitt's wife?? Helen who? D'Angelo, with a failing Quora and a history of a coup.
Doesn't look good.
I'd bet it starts impacting their personal lives. This is equivalent to them coming out to support Donald Trump. It is that bad.
I guess if Sam is back, Ilya is planning his way out.
Regardless due to this stupid stunt, OpenAI is hardly the same.
It's a bad situation.
You can embrace AI safety all you want. But not being the leader means you have very little influence to effect any kind of shift in the industry.
I am an outsider, and very far from executive leadership. But this whole move seems like a predictable fiasco.
If there ever was a time for Microsoft to leverage LCA, it is now. There's far too much on the line for them to lose the goose that has laid the golden egg.
I agree that he doesn’t have a huge amount of name recognition, but this ousting was a front-page/top-of-website news story so people will likely have heard about it somewhat. I think it’s in the news because of the AI and company drama aspects. It felt like a little more coverage than Bob Iger’s return to Disney got (I’m trying to think of an example of a CEO I’ve heard about who is far from tech).
I think it is accurate to say that most people don’t really know about the CEOs of important/public companies. They probably have heard of Elon/Zuckerberg/Bezos, I can think of a couple of bank CEOs who might come on business/economics news.
Definitely not OpenAI itself. They still need massive capital. With this drama, its future is put in serious doubt
There's a lot more to this than who has explicit control.
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.
. . .
EDIT TO ADD:
The question wasn't whether this is scintillating substance. The question was, in what way is this unusual in Silicon Valley.
The answer is that it's not.
I received messages from a physician and a high school teacher in the last 24 hours, asking what I thought about "OpenAI firing Sam Altman".
This is what happens when a non-profit gets taken over by greed, I guess...
I'm glad that there are other companies and open source efforts to fall back on.
As an API user of the GPT models I've always had it at the back of my mind that it would be unwise to 100% rely on OpenAI for the core of any product I built.
The recent rocking of the boat is further justification for my stance in that regard.
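Concretely, the cheapest insurance is a thin abstraction so the core product never imports a vendor SDK directly, with failover in order of preference. A minimal Python sketch (the adapter classes named in the final comment are hypothetical, something you'd write yourself around each vendor's SDK):

    from dataclasses import dataclass
    from typing import List, Protocol

    class ChatProvider(Protocol):
        name: str

        def complete(self, prompt: str) -> str:
            """Return a completion for `prompt`, raising on failure."""
            ...

    @dataclass
    class FallbackChat:
        providers: List[ChatProvider]  # tried in order of preference

        def complete(self, prompt: str) -> str:
            errors = []
            for provider in self.providers:
                try:
                    return provider.complete(prompt)
                except Exception as exc:  # outage, rate limit, policy change...
                    errors.append(f"{provider.name}: {exc}")
            raise RuntimeError("all providers failed: " + "; ".join(errors))

    # e.g. FallbackChat([OpenAIAdapter(), AnthropicAdapter(), LocalModelAdapter()])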
It's his fault we are here /s
Which is that the AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, or commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.
And you may think of it as a weasel word. But I assure you that companies and governments e.g. EU very much don't.
If you read the memoirs/histories of those Silicon Valley companies, they are 100% more entertaining than the show itself.
That is a good point; I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of such productionization work from OpenAI in the future.
But since Microsoft invested in it, I doubt it will get shut down completely. Microsoft has by far the most to lose here, so you've got to trust that their lawyers signed a contract that will keep these things available for a fee.
I don't understand how Microsoft, after having invested billions, doesn't have a board seat. If they did, I doubt this would've ever happened. I'm not sure why Microsoft let that happen.
But even ignoring that, the board making a decision as impactful as this without consulting their major investors is a dereliction of duty. That alone justifies getting rid of all of them because all of them are complicit in not consulting Microsoft (and probably others).
I have no idea why Sam was fired but it really feels just like an internal power struggle. Maybe there was genuine disagreement about the direction for the company but you choose a leader to make decisions. Ousting the CEO under vague descriptions of "communications with the board" just doesn't pass the smell test.
I'm reminded of this great line from Roger Sterling [1]:
> Half the time this business comes down to "I don't like this guy"
So much of working, performance reviews, hiring and firing decisions and promotions is completely vibes-based.
The implication in Microsoft's statement is clear that they have what they need to use the tech. I read it to mean OpenAI board does not have leverage.
You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.
Once Microsoft pulls support and funding and all their customers leave they will be decelerating alright.
If they wanted to show they’re committed to backtracking they could resign themselves.
Now it sounds more like they want to have their cake and eat it.
Which seems like it probably is a self-fulfilling prophecy. The private-sector lottery winners seem to be awarded kingdoms at an alarming rate.
There's been lots of people asking what Sam's true value proposition to the company is, and...I haven't seen anything other than what could be described above.
But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well paid management?
The companies you listed in contrast to OpenAI also have some key differences: they're all long-standing and mature companies that have been through several management and regime changes at this point, while OpenAI is still in startup territory and hasn't fully established what it will be going forward.
The other major difference is that OpenAI is split between a non-profit and a for-profit entity, with the non-profit entity owning a controlling share of the for-profit. That's an unusual corporate structure, and the only public-facing example I can think of that matches it is Mozilla (which has its own issues you wouldn't necessarily see in a pure for-profit corporation). So that means on top of the usual failure modes of a for-profit enterprise that could lead to the CEO getting fired, you also get other possible failure modes including ones grounded in pure ideology since the success or failure of a non-profit is judged on how well it accomplishes its stated mission rather than its profitability, which is uh well, it's a bit more tenuous.
My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.
Not sure that sama and gdb starting their own company in the same space will produce similar results.
It's worth asking how rapidly, say, a global financial hub can transfer from one location to another, how quickly a centre of excellence can transfer, how many years it takes for the world's best space scientists to move out of Germany, etc.
Does Silicon Valley have a tipping point?
He supposedly didn't care about the money. He didn't take equity.
The business and investment people want to make money. Many of the researchers want to take their time and build better and safer models and don't care about money in the short term at all. They are two different goals.
It's easy for business and investment people to say that they are concerned with safety and research, and I believe them to a certain degree. But they have $10 billion reasons to focus on the actual business instead of research and safety.
"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."
That being said, here's my steelman argument: Sam is scared of the ramifications of AI, especially financially. He's experimenting with a lot of things, such as basic income (https://www.ycombinator.com/blog/basic-income), rethinking capitalism (https://moores.samaltman.com/), and Worldcoin.
He's also likely worried about what happens if you can't tell who is human and who isn't. We will certainly need a system at some point for verifying humanity.
Worldcoin doesn't store iris information; it just stores a hash for verification. It's an attempt to make sure everyone gets one, and to keep things fair and more evenly distributed.
(Will it work? I don't think so. But to call it an eyeball identity scam and dismiss Sam out of hand is wrong)
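For what it's worth, the claimed mechanism is easy to illustrate. This toy Python sketch shows the general "store a digest, not the biometric" idea only; it is emphatically not Worldcoin's actual protocol, which derives an iris code and does fuzzy uniqueness matching (exact hashing like this would fail on noisy re-scans):

    import hashlib

    registry = set()  # digests of previously enrolled scans

    def digest(scan: bytes) -> str:
        """One-way digest: the raw scan itself never has to be stored."""
        return hashlib.sha256(scan).hexdigest()

    def enroll(scan: bytes) -> bool:
        """Register a scan; reject duplicates: one person, one credential."""
        d = digest(scan)
        if d in registry:
            return False
        registry.add(d)
        return True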
Per https://www.semafor.com/article/11/18/2023/openai-has-receiv...
And RIP @sama if he chooses to go back, no good could possibly come out of this.
Personally, I would expect a lot more development of GPT-4+ once this is split up from one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.
Why do we need some morally superior person from some university to "think about safety" at OpenAI and not find it out ourselves?
What a clown company
also
>And she’s tied to the Effective Altruism movement
ah, where SBF was involved. what an achievement
That combination could mean firing the CEO results in Microsoft getting to have everything and OpenAI being some code and models without a cloud, and whatever people that wouldn't cross the street with Altman.
I do not know about OpenAI's deal with Microsoft. But I have been on both sides of deals written that way, where I've been the provider's key person and the contract offered code escrow, and I've been a buyer that tied the contract to a set of key persons and had full source code rights, surviving any agreement.
You do this if you think the tech could be existential to you, and you pay a lot for it because effectively you're pre-buying the assets after some future implosion. OTOH, it tends to be not well understood by most people involved in the hundreds of pages of paperwork across a dozen or more interlocking agreements.
. . .
EDIT TO ADD:
This speculative article seems to agree with my speculation: daddy has the cloud car keys, and a key-person ouster could be a breach:
Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.
That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.
One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.
https://www.semafor.com/article/11/18/2023/openai-has-receiv...
The fundamental thing you are missing here is that the charter of the non-profit and structure of their ownership of the for-profit (and the for-profit's operating agreement) is all designed in a way that is supposed to eliminate financial incentives for stakeholders as being the thing that the company and non-profit are beholden to.
It may turn out that the practical reality is different from the intent, but everything you're talking about was a feature and not a bug of how this whole thing was set up.
Does Ilya get a pass solely by his value to the company?
Seriously. It’s stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they’d stop.
- Sam would return to OpenAI as CEO and Greg as President
- Sam's request to change the Board would be accepted and Board members will resign
- Ilya would be Out
- Sam would change the governance structure from current dual nonprofit+forprofit to just for-profit corp.
Lmfao you're joking if you think they "realized their mistake" and are now atoning.
This is 99% from Microsoft & OpenAI's other investors.
Maybe their intentions were right, and now MS is forcing them to take him back.
In the end we'll get a company working for MS's benefit only.
It's becoming a bit of a weasel word in these discussions. I keep hearing it thrown around with nobody specifying how rolling out ChatGPT to more people makes the world "less safe". It's a laugh line at this point.
If this (very sparse and lacking in detail) article is true, is this a genuine attempt to get Altman back or just a fillip to concerned investors such as Microsoft?
Does OpenAI's board really want Altman back so soon after deposing him so decisively?
Would Altman even want to come back under any terms that would be acceptable to the board? If "significant governance changes" means removing those who had removed him, that seems unlikely.
The Verge's report just raises so many additional questions that I find it difficult to believe at face value.
Lol
I owe Mr. Altman an apology. I didn't think a startup board would be so mind-explodingly stupid, and figured some toxic skeleton had fallen out of his closet.
Well, I'm sorry. It's a lesson not to speculate in public.
You underestimate how obsessed people are with ChatGPT and AI.
1) The board puts out a press release saying Sam was ousted for not being candid
2) Internally a memo from the COO circulates saying that is not true
3) Greg and other senior folks quit
4) Now he is in a negotiating position to return for a) being fired on a false premise and b) bringing back all the top talent
If someone wanted to restructure the board of this company, they could have fed the other board members false information to put Sam in this negotiating position. It's also strange that a bunch of billionaires voiced support for Sam immediately after the firing without knowing any details.
evidence: a. It's OpenAI, ffs; they already have RL bots advanced enough to create 999+ IQ strategies.
b. Google Meet.
theory: they were training GPT-5 and had some time to clean some under-the-table dust.
again #pure_speculation
Sure, it's incredibly psychopathic, but it's still an achievement!
Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?
At least it will stop those godawful “are you human” proof puzzles.
Near as I can tell they never actually launched a product. Their webpage is a GoDaddy parked domain page. Their Facebook page is pictures of them attending conferences and sharing their excitement for what Boston Dynamics and other ACTUAL robotics companies were doing.
>she launched with a colleague from Singularity University
https://en.wikipedia.org/wiki/Singularity_Group
Just lol.
>then cofounded GeoSim Systems
Seems to be a consulting business for creating digital twins that never really got off the ground.
https://www.linkedin.com/in/tasha-m-25475a54/details/experie...
It doesn't appear she's ever had a real job. Someone in the other thread commented that her profile reeks of a three-letter-agency plant. Possible. Either that or she's just a dabbler funded by her actor husband.
Valid or not, you don't blindside major investors who have given you billions. They apparently told Microsoft minutes before the announcement, after the decision had already been made. Even if you fully intend to take this course of action, you loop in your major investors and consult them, or at least give them a heads-up so they can prepare any communications they might need to make, or even just so their press people are ready for the inevitable questions.
They didn't do that, according to Microsoft. That's why they need to be fired.
I'll edit my comment to clarify!
Frankly, I've heard of worse loyalties. If I were Sam's friend, I'd definitely be better off in any world he had a hand in defining.
I agree the board did botch this up. But in my view this is a confirmation that they are amateurs at corporate political games, that is all.
But this also means that Sam Altman’s “vision” and Microsoft’s bottom line are fully aligned, and that is not a reassuring thought. Microsoft, one hears (see “5 foot pole”), even puts ads in their freaking OS.
This board should man up, and lawyer up.
So MS shows who's in control. Say goodbye to OpenAI.
From now on it's all for MS's profit only.
However, trying to distinguish the exact manners in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.
[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.
Write a thought. You’re not clever enough for a drive by gotcha
Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?
Regardless of who ends up at the helm, OpenAI is going to be a different place on Monday than it was on Thursday, and not for the better.
Got a link? I did miss this nuance.
If this does end up being a failed coup, then it is of course detrimental to his career. But the statement I'm replying to was explicitly saying he would never work in tech again. Do you honestly believe there is any chance that Sutskever would be unable to work in this field somewhere else if he ultimately leaves OpenAI for whatever reason? I would bet $10,000 that he would have big name companies knocking on his door within days.
I'm not sure what you mean by your second paragraph.
Even larger, this shows that the "leaders" of all this technology and money really are just making it up as they go along. Certainly supports the conclusion that, beyond meeting a somewhat high bar of education & experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.
Sortition on corporate leadership may not be a bad thing.
That said, consistent hands at the wheel is also good, and this kind of unnecessary chaos does no one any good.
This was two board members who were also employed at the company fighting over resources and approach.
Also, wouldn’t it seem clear that the board acted rashly in firing him without input from other stakeholders?
I am legitimately interested to know why you downvote as I don’t see another way for it to work.
Doesn't look like it right now in this case.
I've heard this lore before, and it's the only way I can make sense of it.
The whole open vs closed ai thing... the fact is Pandora's box is open now, it's shown to have an outsized impact on society and 2 of the 3 founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.
SV will happily throw as much $$ as possible in their direction. The exodus from OpenAi has already begun, and other researchers who are of the mindset that this needs to be commercialized as fast as possible while having an eye on safety will happily come on board, esp. given how much they stand to gain financially.
Lives destroyed? This is either a reporting error or is a very weird thing to say. Unless the source insinuates that Altman is willing to destroy the world with a cruel AGI if he doesn't get his way.
Compared to...
The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.
Spare me the whole "but as a non-profit the board has a responsibility to their mission and charter". Someone has to pay for all those GPUs. If they're going to take a hard line against launching actual products, then they can look for donors and see how far they get...
Interestingly this is exactly what all financial advice tends to actually warn about rather than encourage, that previous performance does not indicate future performance.
I suppose if they had entered an established market and bootstrapped their way to dominating it, that'd build a lot of trust in me. But as others have pointed out, Sam went from dotcom fortune, to...vague question marks, to Y Combinator, to OpenAI. Not enough is clear to declare him Wozniak, or even Jobs, as many have been saying (despite investors calling him such).
Sam Altman is seemingly becoming the new post-fame Elon Musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.
Personally this is all largely popcorn munching entertainment for me, as I don't think Sutskever is right about a lot of his core tenets, but I also don't think that Altman is a good fit for achieving the charter that OpenAI is obligated to follow.
I don't think OpenAI will be at the forefront of AI/AGI/etc. research a decade from now regardless (and had that position before yesterday's events) but if the OpenAI charter and mission statements are sincerely held beliefs then the path they have been following for the past several years with Altman at the helm have obviously been counter to it.
Whether or not that charter is anything more than a pipe dream isn't really relevant - they're a non-profit so their legal obligation as the board is to direct the resources under their control to achieve that charter.
But that aside, how did so many clueless folks, who understand neither the technology nor the legalese, nor have enough intelligence/acumen to foresee the immediate impact of their actions, happen to be on the board of one of the most important tech companies?
Dealing with folks like Ilya isn't necessarily a matter of if, but how much.
If the entire board can be replaced, then Sam should come back. Even though he can build a new company tomorrow. A leader will come back to lead the best shot anyone has gotten in AI development so far.
This was a power grab and it did not work. Not coming back is letting four people derail history and slow the development of AGI.
If you ever stood in the hall of YC and listened to Zuck pumping the founders, you’ll understand.
I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.
As in, people staying millionaires instead of becoming multimillionaires? Could such life-destruction be brought to the EU, please?
So what? Regardless of launch/no launch, the company was a flop. This is a cheap shot. Just because someone was successful in the past (or not) is not an automatically relevant signal they'll be a great fit when placed in a different domain. Sometimes they have other relevant background and experience, and other times... Maybe they're just connected. What is the level of scrutiny of qualifications in other companies, even public ones? When looking closely at other companies, I've noticed board compositions can vary substantially. As outsiders, we're undoubtedly missing part of the context about what is relevant (to the board) or not.
Suggested reading: Black Swan by Taleb.
p.s. I am not partial to anyone involved, especially clueless board members. I found this comment annoying due to the breathless, baseless, and flawed logic. What was this supposed to add to the conversation?
That's way too much power for people who seemingly have no qualifications to make decisions about a company this impactful to society.
can he work on what he wants in those places? that is another story of course. but he knows the ins and outs of the lightning in a jar they captured and arguably that is the most promising asset on planet earth right now, so he'll be fine.
https://pbs.twimg.com/media/F_QXAKEW0AAQpPC?format=png&name=...
Obviously Sam wasn’t the best fit for OpenAI and investors aren’t even saying what the problem is. Clearly the board feels he was the wrong person for the job.
I think it’s ridiculous that everyone thinks Sam being ousted means OpenAI is in trouble. Let this play out and see how it evolves.
They do it to employees yet CEOs are somehow exempt? Ever heard of fire fast?
^^
I don’t think the wording of the “press release” is an issue.
This is a split over an actual matter to differ about: a genuine fork in the road in terms of the pace and development of AI products, and a CEO who apparently did not keep the board informed as he pursued a direction they feel is contrary to the mission statement of this non-profit.
The board could have done this in the most gracious of manners, but it would not have made a bit of difference.
On one side we have the hyper-rich investor “grow grow grow” crowd and their attendant cult-of-personality wunderkind and his or her project, and on the other side a bunch of geeky idealists who want to be thoughtful in the development of what is undeniably a world-changing technology for mankind.
Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.
Ideology (and fear, and greed) can cause well meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world ending technology they wouldn’t stop, they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure ala Oppenheimer to right the balance again.
It's entirely possible Sam was exploring sales or new commercial ventures behind the board's back, or pressuring the business to side-step the oversight and safety mechanisms expected by the overarching non-profit mission. The timing with the dev event is suspect. It sounds like something came out that the board and research organizations were unaware of.
There's no indication that OpenAI wants to terminate existing or future commercial ventures.
You do understand the whole point of Silicon Valley is the chaotic lack of maturity?
You cannot be staid and conservative and mature (and non-opportunistic)…and also be successful at creating new and interesting stuff.
If you find instability “horrifying,” might I suggest a job in banking or the federal government instead?
And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.
And MS isn't an investor in the company that the board governs.
It's only at the abrupt all-hands meeting they called on a Friday night that it became clear Ilya Sutskever was at the center of it. He had his disagreements, pushed the board into making such an abrupt move, and then went on to say something like "oh, I agree it wasn't the ideal way to do it". It's very clear this was a power struggle, not malfeasance (per the words of the OpenAI CTO) on Sam Altman's part. At least so far, it boils down to: Ilya didn't like feeling sidelined, so he took things over. And now it's clear the board that sided with (or rubber-stamped?) Ilya just wasn't prepared for the consequences.
This is embarrassing for OpenAI no matter how you slice it.
And it’s because he isn’t. This is “rules for thee but not for me”. He was a bad fit, 2/3 of the board ousted him, and investors are mad because they didn’t feel included.
You know, like how they include employees in layoff decisions and not blind side them.
Sam Altman has spoken about “firing fast” when someone is a bad fit. He got fired fast, because he was a bad fit. That’s the simplest conclusion.
What am I missing here? There's a handful of companies tweaking weights and optimising infrastructure usage. Won't LLMs naturally advance over time?
Option B: try to fix mistakes as quickly as possible
This is that thing that somehow got the label "psychological safety" attached to it. Hiding mistakes is bad because it means they don't get fixed (and so systems that (do or appear to) set personal interest in favor of hiding mistakes are also bad).
Hilarious. And sad. But mostly hilarious.
Perhaps it's as simple as insufficient oversight and moderation of the GPT store. Or perhaps there's too much legal risk in the expanding scope of b2c services, which could then threaten the existence of the research organizations. Who knows?
Maybe we should stop treating this like sports ball or politics
We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.
> they will be aligned with us because they designed such that their motivation will be to serve us.
They won't hurt us, all we asked for is paperclips.
The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.
Then they went off and did the math, quickly found that this wouldn't happen because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.
The only reason it's something we talk about is because of the nature of the outcome, not how seriously the physicists were in their fear.
The thing is, the cultural ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and about seeing that as the end of the world.
AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give to it. Those things are products of us being biologically evolved beings that need them to survive and pass on our genes and memes to the next generation. AI doesn't have to find shelter, food, water, air, and so on; we provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, no evolution involved (checksums, hashes, and error-correcting codes see to that). AI is simply the next step in the tech tree: just another tool, a powerful and useful one, but a tool, not a rampaging monster.
The fact is that most people can't do what Sam Altman has done at all, so at the very least that past success puts him in the few percent of people who have a fighting chance.
However, the way they told the public (anti-Sam blog post) and the way they told Microsoft (one minute before the press release) were both fumbles that separately could have played out differently if the board knew what they were doing.
Not really a good look from a company that's leading the charge on such a pivotal technology.
So I was just saying that from the investor's perspective, the concept was flawed or at least very questionable from the beginning.
Next prediction: Ilya hightails it.
Suggesting that some inarguably brilliant technologists and business people would invite a moron to crash their party makes you look petty (at best) and like an idiot (at worst)
If they do, isn't this the perfect time to speak out loud, rather than letting this news bubble up to the front page with everyone talking about how disastrous they were?
What is this board waiting for, then? The weekend??
The board isn't bulletproof and they are not God. They can fire Sam, yes, but it won't stop people from thinking this is stupid, or that it will do more harm than good to OpenAI.
Exactly. You can bet there have been some very pointed exchanges about this.
https://www.theverge.com/2013/4/12/4217794/jeff-bezos-letter...
Did you find out e.g. Facebook will do the damage that it did and continues to do in social terms?
Have you done anything or has Facebook changed its way based on your ‘findings’?
The choice here is: does capital coupled with runaway egos provide better stewardship of socially impactful technology development, or do paper pushers or CIA plants?
The fact that they're openly considering bringing him back should tell you that he's not just some random person whose job anyone can do. He's extremely well connected and was the face of the company - the face of deals that the company made. And you have to consider whether internally the employees are supporting this - if I were at OpenAI I would be pissed that the board decided to fuck around when we were doing so well.
It's quite possible that he wasn't the best fit, and that the board is an even worse fit. Judging by the behavior of the board, it's hard to see them being a good fit for the company.
The two thirds can undoubtedly do this. But the whole structure is in a bad way if they actually do.
There is nothing to indicate that this bleeds OpenAI more generally. The rank and file, as far as I'm aware, aren't resigning en masse.
Executives come and go. Show me why these people matter so much that OpenAI has no future without them; then we can talk. It's infighting that became public, and I'm certain people are pulling whatever strings they have on this, but I don't see objective evidence that these people make OpenAI successful.
This needs to play out
edit: You have edited your post radically to say different things like 5x now; I can not keep up.
At this point, I don’t care how it resolves—the people who made that decision should be removed for sheer incompetence.
From my own experience, short assertive comments tend to get downvoted on HN. Unlike reddit, votes here are less about agree/disagree.
I've yet to see a good one. And even if: how you do something is often as important or even more important than that you do something. And on the 'how' bit the board just utterly failed. This is the most watched company in the world right now on the tech front, you can't just oust the CEO without a very good plan. If you do that kind of thing on a whim you are not fit to serve on the board of any company, but especially not on the board of this one.
Having shown this was possible, he could easily go do it elsewhere.
Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.
I have no idea who she is or what her accolades are, but I do know who JGL is and therefore referring to her like that is in fact useful to me, where using any other name is not.
She just sounds like a typical Silicon Valley trend grifter
And it looks like now they might be very close to the limits of their own capability. I'm not sure how much more they can give.
On the surface, their new features always seem to be quite exciting. But when the dust settles it is again all very lackluster, often copied from open source ideas. Not something you can bet on.
Their biggest moats are their popularity, marketing, and their large bags of cash. The latter of which they are burning through extremely quickly. The thing is, it's easy to build something massive when you don't care about unit economics. But where do they end up when the competitive forces commoditize this?
When listening to interviews with Sam I was always surprised by how little useful information I am able to get out of listening to him. I'm sure he's very smart but he tries to project the aura of radical honesty while simultaneously trying to keep all of his cards extremely close to his chest. All that without the product chops to actually back it up. That's my read.
> Don’t piss off an irreplaceable engineer or they’ll fire you.
Not taking any sides here.
One scientist's power trip (Ilya is not an engineer) triggers the power fantasy of the extremely online.
This is really, really clearly incestuous tech-media stuff as part of a pressure campaign. Sam is the darling of tech media, and he's clearly instigated this reporting, because they're reporting his thoughts and not the Board's in an article that purports to know what the Board is thinking; the investors who aren't happy (the point of a non-profit is that they are allowed to make investors unhappy in pursuit of the greater mission!) have an obvious incentive to join him in this pressure campaign; and then all he needs for "journalism" is one senior employee who's willing to leave for Sam to instead say to the Verge that the Board is reconsidering. Boom: a massive pressure campaign and the perception of the Board flip-flopping without them doing any such thing. If they had done any such thing and there was proof of it, the Verge could have quoted the thoughts of anyone on the Board, stated it had reviewed communications and verified they were genuine, etc.
Then again, maybe he has been making life less than desirable for the rank and file. Perhaps, even, they felt he was a bad fit for the company too. I don’t know, because I don’t work there.
If this is the case, good time to start hiring away engineers to another firm.
He may be the face, but faces change. Sam Altman isn’t the only person capable of taking the reins. There is nothing about him that is more “magic” in this case, because the tech has always been their selling point. I think any competent CEO could sell the hell out of OpenAI right now.
Insofar as bringing him back: I don’t know the validity or veracity of those discussions. That news hit a little fast for me to have been fully fleshed out. Not saying it’s untrue, but “some of the board” talking isn’t the same thing as all of the board, either.
He'll sack the board.
He'll sack Ilya.
He'll change the structure of the organisation completely.
Easy: his contacts list. He has everyone anyone could want in his contacts list: politicians, tech executives, financial backers, and a preexisting positive relationship with most of them. When would-be entrepreneurs need to make a deal with a major company like Microsoft or Google, it will be upper middle management and lawyers; a committee or three will weigh in on it, present it to their bosses, etc. With Sam, he calls up the CEO, has a few drinks at the golf course, and they decide to work with him and make it happen.
a) A company they've partnered so heavily with is changing things up
b) That the change-up is to their point-person
It's not about whether another CEO could steer the ship, it's about the previous context and relationships that, regardless of skill, are going to have to be rebuilt carefully when you just rip out the point-person.
> Then again, maybe he has been making life less than desirable for the rank and file. Perhaps, even, they felt he was a bad fit for the company too. I don’t know, because I don’t work there.
People have already resigned over this...
> ah where SBF was involved. what an achievement
At least she wasn't a vegetarian. Hitler was a vegetarian. That would have been the final nail in the coffin
I’m saying there is a reason this happened and 2/3 of the board agreed. It needs to play out further for us to see whether there is a problem here or not, honestly.
I find it hard to believe you can effectively muster a mandate's worth of votes based on opinion alone.
Me: "Good luck with that terrible strategy"
Not a straw man.
Why aren’t we holding CEOs' feet to the fire when they lay off thousands of people in what is effectively an email? That's somehow okay, but a CEO being ousted suddenly is all-hands-on-deck bad optics?
The board had a mandate-level vote for the replacement of Sam (2/3 of the board voted yes). That's conviction.
1. Met with every major head of state except for Xi and Putin. He is the face of AI, not just for OpenAI, but for the entire world. The entire AI industry would hate for this to happen.
2. Led a company from a $2 billion valuation to nearly $80 billion in a year.
There is no precedent in startup history to get rid of a CEO at this stage.
Maybe the problem is the meteoric rise of OpenAI--at the time this board was instituted, the company was much smaller, and wouldn't have been able to draw a more illustrious set of board members?
And so something like OpenAI came along where Ilya S etc. got bags of money to go take that approach and scale the crap out of it, and, yeah, they got results. Because they didn't have to be careful, or deal with competing interests.
That's all fine, but it's also no surprise when it all blows up, is it?
But everyone important does so who cares about the rest?
To be fair, isn’t that kind of the bar for CEOs? Their job is to hire and fire senior people, ensure they have a mountain of cash, and put out fires.
It’s not an operational position and so I wouldn’t expect a CEO to have deep operational knowledge.
Maybe I’m misunderstanding the division of labor though?
It was a mandate. 2/3 of the board voted in favor of relieving Sam Altman of his duties to the company. The question now is why, and how that plays out. It is clearly what the board wanted.
But the board seems to have a weak hand. It can decide to disappoint the for profit investors. But it doesn’t own Sam, or the vast majority of the workers, and maybe not much of the know how. And they can walk if the board disappoints them.
The board’s altruism might be great, but it lacks the legal tools to do what it wants, against organized labor backed by unlimited capital.
Now will that be another 3 or another 30, time will tell.
Stop making up nonsense please.
It’s important to put those disclaimers in context, though. The rules that mandated them came out before the era of index funds. Those disclaimers are specifically talking about fund managers, and it’s true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.
Oh wait, that's what OpenAI is.
(To be clear, I don't know enough to have an opinion as to whether the board members are blindingly stupid, or principled geniuses. I just bristled at the phrase "proper corporate governance". Look around and see where all of this proper corporate governance is leading us.)
Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.
Just a sinecure and someone you trust for some other reason. But you’ve got to trust them.
It’s really dismissive toward the rank and file to think that they don’t matter at all.
This isn't just a non-profit holding company for tax purposes - the whole thing is structured with the intent of giving the non-profit complete control over the for-profit to help achieve the non-profit's charter.
The board being full of typical business people would likely be counterproductive to the goal of staying focused on the non-profit charter vs. general commercial business interests.
I don't know enough about most of the board to have any sort of real judgment about their ability, but there's a lot of comments here that are judging board members based on very different criteria than what they were actually brought in for.
If Microsoft considers this action a breach of their agreement, they could shut off access tomorrow. Every OpenAI service would go offline.
There are very few services that would be able to backfill that need for GPU compute, and after this clusterfuck not a single one would want to invest their own operating dollars supporting OpenAI. Microsoft has OpenAI by the balls.
Remember the hype around Deep Blue and later Watson?
I’m sure no lessons to be learned there :)
He did none of the research that fuels OpenAI's ambitions and future prospects; that's mostly done by people like Sutskever, Radford, and many more brilliant scientists.
All who are a year plus behind OpenAI.
But that doesn’t mean you can’t get some useful ideas about future performance from a person’s past results compared to other humans. There is no such effect in play here.
Otherwise, time for me to go beat Steph Curry in a shooting contest.
Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.
Question: did the board find out about the other AI firm that Sam had in the works? The clue might be why the chair of the board was demoted but not let go.
Somebody over-played their poker hand...
There is theory and there is reality. If someone is paying your bills by an outsized amount and they say jump, you will say how high.
The influence is rarely that explicit though. The board knowing that X investor provides 60% of their funding, for instance, means the board is incentivized to do things that keep X investor happy without X having to ask for it.
9 times out of 10, money drives decisions in a capitalist environment.
Nothing wrong with that, but a company like OpenAI, which is literally changing the world, does not have a board member who is qualified to be in that position.
The same media that promoted the schizoid idea that AGI is around the corner and blew AI out of proportion.
The same media that would not hesitate to do character assassinations of people opposing Altman.
The media is corrupt and incompetent, and will soon be replaced by the monster it created.
This corporate structure is so convoluted that it's difficult to figure out what the actual powers/obligations of the individual agents involved are.
From 2016: https://www.nytimes.com/2018/04/19/technology/artificial-int...
To 2023: https://www.businessinsider.com/openai-recruiters-luring-goo...
Though I think it’s best to refrain from calling something a “dumb take”.
And neither does anyone else on this forum.
The Monday morning quarterbacking is hysterical.
It could happen still, but it’s not obvious that it will.
As far as relationships go, they can build those. I doubt anyone who has access to OpenAI tech wants to give that up, so there is enough leverage on that to smooth things out
If Sam were to be ousted it should have happened before ChatGPT was unleashed on the world.
Anyhow, I still don't see what the impressive thing is about working at all those fake companies/think tanks not doing real work.
(Remember, fiduciary does not necessarily have anything to do with money)
That’s right. Worldwide DNS control and it was controlled by a non-profit in California. And that non-profit tried to do something shady and was kept in line simply because of California law enforcement.
I don't know how this unfolds, but when somewhat smart models become a commodity, and thus the remaining 90% of the population get access to polished chatbots distributed through dominant platforms like Google, Facebook, Instagram, etc., where does that leave OpenAI? High-end models, probably. And maybe with superintelligence unlocked that's all that's needed to win business-wise, I don't know.
b) Altman personally hired many of the rank and file.
c) OpenAI doesn't exist without customers, investors, or partners. And in this one move the board has alienated all three.
In this case this person seems to have primarily tried and failed to spin a robotics company out of Singularity “university” in 2012.
This only sounds adjacent to AI if you work in Hollywood.
There’s no evidence of that, only your assumptions. Lots of knowledgeable folks outside the media, who couldn’t care less about a “pressure campaign” even if one did exist, think the board was clueless and got duped into making a huge mistake with the coup.
Sam Altman was fired. 4 other key people quit and it seems more will follow and join Sam's new venture. This outcome would be a disaster for Microsoft, for other OpenAI investors and for OpenAI. So the board is, per multiple sources, talking with Sam Altman to return. The board declined to comment and is free to clarify any inaccuracies.
There's no need for a spin, the board has miscalculated and got itself in a bad spot.
If the coup fails in the end (which seems likely), it will have proved that the "nonprofit safeguard" was toothless. Depending on the board members' ideological positions, maybe that's better than nothing.
Which is to say, they were likely Altman supporters. Which is fine! They’re free to do as they wish.
However, if that’s it (and it does remain to be seen whether more happens or not), then 2/3 of folks stand by the decision, which would match the board votes.
Sam tries to sound smart while not really having any technical insight. He does a tremendous job with it though.
One way to think about this is: at some point in the next few years we'll have a few hundred GPUs/TPUs that can provide the compute used to train GPT-3.
This discovery was always going to happen. The question is whether OpenAI made radical scaling possible where it wasn't before. The answer there is also no. There are clear limits on the number of collocated GPUs, Nvidia release cycles, TSMC capacity, power generation, etc.
So in the best case OpenAI fudged the timeline a little bit. Real credit belongs to the Deep Learning community as a whole.
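For what it's worth, the arithmetic behind the "few hundred GPUs" claim is easy to sanity-check. Here is a minimal back-of-envelope sketch in Python; the total-compute figure is the one reported in the GPT-3 paper, while the per-GPU throughput and utilization numbers are illustrative assumptions:

    # Rough check: can a few hundred GPUs supply GPT-3-scale training compute?
    gpt3_train_flops = 3.14e23    # training compute reported for GPT-3 175B (Brown et al., 2020)
    peak_flops_per_gpu = 312e12   # A100 BF16 peak throughput in FLOPs/s (published spec)
    utilization = 0.40            # assumed sustained training efficiency
    num_gpus = 300                # "a few hundred"

    effective_flops = num_gpus * peak_flops_per_gpu * utilization
    days = gpt3_train_flops / effective_flops / 86_400
    print(f"~{days:.0f} days on {num_gpus} GPUs at {utilization:.0%} utilization")
    # prints ~97 days, i.e. roughly three months

So under these assumptions a GPT-3-class pretraining run fits on a few hundred modern accelerators in about a quarter, which is consistent with the claim above.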
Investors care, but if new management can keep the gravy track, they ultimately won’t care either.
Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?
I haven't seen that reason yet, though I don't rule out one exists and even then you'd have to do this in a way that it doesn't ruffle the feathers of your ultimate paymasters. Being a board members of a large company is an exercise in diplomacy, not in bull-in-a-china-store level incompetence.
Are you part of OpenAI governance, or any company's governance structure? If not, does it really matter whether someone is exchangeable or not for you?
That made me laugh a knowing laugh even though I know nothing.
What am I missing ?
I also feel that they can patch relationships. Satya may be upset now, but will he continue to be upset on Monday?
It needs to play out more before we know, I think. They need to pitch their plan to outside stakeholders now
If you then imagine being dependent on that investor not only because your service runs on their infra but also because your biggest enterprise customers use your service through their infra, you are even more incentivised to listen to them.
A non-profit isn’t supposed to have investors. This structure should never have been allowed in the first place (nor IKEA.)
The time to do this was before ChatGPT was unleashed on the world, before the MS investment, before this odd governance structure was setup.
Yes, having outsiders on the board is essential. But come on, we need folks that have recognized industry experience in this field, leaders, people with deep backgrounds and recognized for their contributions. Hinton, Ng, Karpathy, etc.
Good job if you can get it.
With Altman gone and the direction of the board being to limit commercial growth, their investment is at risk, and their competitive edge will evaporate, especially if businesses switch to other LLMs as they surely will over time. Altman will also become a competitor.
If instead they are able to pull off a complete transformation of the nonprofit and oust Ilya, they will also lose a core technical leader and risk their investment while being left with the odd dynamic of a parent nonprofit.
Perhaps they could orchestrate some kind of purchase of the remaining portion of the subsidiary. Give Altman the CEO title and move forward while allowing the nonprofit to continue their operations with new funding. This doesn’t solve the Ilya problem but it would be cleaner to spin it off.
I will bite. How do you know they didn't?
What remains to be seen is just how closely the board holds the charter to their hearts and whether the governance structure that was built is strong enough to withstand this.
They’d be fools to do that if there is a path forward here. Short of them announcing on Monday that they are no longer selling their offerings, I don’t see how there won’t be a path.
Business is business, as all the VCs love to say, there is no room for emotion in this right?
On the tech side, I think work will split on two tracks: 1) building great applications with small and medium fine tuned models like Mistral, etc. Within a year or two great models will run on the edge because of continuous technical improvements. 2) some players will go for the long game of real AGI and maybe they will get there in much less than a decade.
On the business side, I have no idea how the current situation is going to shake out.
> There’s no evidence of that
The leaks themselves, whether or not based in fact, are evidence of that. The only reason for someone in a position to be taken credibly to bring the information contained in either this Verge article or the Bloomberg article (similarly sourced, with a slightly different narrative) to the media, whether or not it is true, is to use public pressure to attempt to shape the direction of events.
EDIT: To be clear, it's evidence of the "pressure campaign" part; to the extent that the "incestuous tech media" part has any substantive meaning, I'm not sure it's evidence of that.
Azure gets a hell of a lot more out of OpenAI than OpenAI gets out of Azure. I’ll bet you GPT-4 runs on Nvidia hardware just as well regardless of who resells it.
The non-profit doesn't have investors. OpenAI Global, LLC isn't the non-profit; it's a for-profit over which the non-profit has complete governance control.
Dependable leaders really do have that much value to their organizations. This is similar to why, in critical areas like medicine, old-and-dependable things are valued over new and shiny: the older things have lower risk and a strong track record. That added dependability is more important than being the newer, “better,” but riskier option. Back to this topic: how many CEOs with track records running $80 billion AI organizations are ready to replace Altman? Because OpenAI is well ahead in the field, they don’t need big risky changes; they need to reliably stay the course.
In the case of AI ethics, the people who are deeply invested in this are also some of the pioneers of the field who made it their life's work. This isn't a government agency. If the mission statement of guiding it to be a non-profit AGI, as soon as possible and as safely as possible, were to be adhered to, and where it is today is wildly off course, then having a competent board would have been key.
I have seen these types of people pop up in Silicon Valley over the years. Often, it is the sibling of a movie star, but it's the same idea. They typically do not know anything about technology and also are amusingly out of touch with the culture of the tech industry. They get hired because they are related to a famous person. They do not contribute much. I think they should just stay in LA.
EDIT: I just want to add that I don't know anything about this woman in particular (I'd never heard of her before yesterday), and it's entirely possible that she is the lone exception to the generalization I'm describing above. All I can say is that when I have seen these Hollywood people turn up in SF tech circles in the past (which has been several times, actually), it's always been the same story.
Hopefully you're able to tell the difference between serving as CEO or president of real, reputable companies (the "trash tier startup" still exited for mid-8 figures) versus what looks like being a figurehead for fake companies.
At a minimum, taking your largest supplier and customer for a ride is probably a bad idea.
So is “unsafe” just another word for buggy then?
---
Rehiring a CEO whom you've recently fired is a delicate process, both from a legal and business standpoint. Here's a general approach you might consider:
1. *Board Meeting*: Convene a board meeting to discuss the decision to rehire the CEO. This should involve all key stakeholders to ensure transparency and agreement.
2. *Legal Considerations*: Consult with legal counsel to understand any legal implications or contractual issues arising from the initial termination and potential rehiring.
3. *Negotiation and Terms*: If the board agrees to proceed, you'll need to negotiate new terms with the CEO. This might include discussions about the future direction of the company, salary, and any conditions related to the rehire.
4. *Addressing the Underlying Issues*: It's important to address the reasons that led to the initial firing. This might involve setting clearer goals aligned with the company's mission, establishing better oversight, or implementing checks and balances.
5. *Communication*: Once an agreement is reached, communicate the decision to rehire the CEO to your employees, stakeholders, and possibly the public, depending on the nature of your company. This communication should be clear about the reasons for the reversal and the future direction of the company.
6. *Monitoring and Evaluation*: Finally, set up a system for regularly evaluating the CEO's performance against the company's mission and goals to prevent a recurrence of the previous issues.
Remember, the rehiring process should align with your company's bylaws and any relevant legal requirements. Transparency and clear communication throughout the process are crucial to maintain trust among your stakeholders.
I think the situation is tough because I can't imagine there aren't legal agreements in place around what OpenAI has to do to access the funding tranches and compute power, but who knows if they are in a position to force the issue, or if I'm right in my supposition to begin with. Even if I am, a protracted legal battle in which they don't have access to compute resources, particularly if they can't get an injunction, might be extremely deleterious to OpenAI.
Perhaps Microsoft even knows that they will take a bath on things if they follow this, but don't want to gain a reputation of allowing this sort of thing to happen - they are big enough to take a total bath on the OpenAI side of things and it not be anything close to a fatal blow.
I was more skeptical of this being the case last night, but less so now.
Microsoft can exert massive pressure over OpenAI and it seems hilarious to think that OpenAI is the one in that relationship with the power.
And regardless of what happens here, everyone on the board is 100% getting fired.
Why didn’t they hire a competent builder?
You:
>how do you know they weren’t? It could be pure happenstance! All the nails could… could have been defective! Or something! waves hands
They don't want to run a developer/enterprise ChatGPT platform.
Google cares about Search, Apple about Siri, Meta about VR/ads. But those three are investing heavily in their own LLMs, which at some point may best OpenAI's.
It’s not clearly obvious that’s the case. In retrospect things always seem obvious, but that another party would have created GPT-3/4 is not.
From https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are... (linked in OP)
I'd be interested in a discussion of the merits of "traditional governance" here. Traditional private companies are focused on making a profit, even if that has negative side effects like lung cancer or global warming. If OpenAI is supposed to shepherd AGI for all humanity, what's the strongest case for including "traditional governance" type people on the board? Can we be explicit about the benefits they bring to the table, if your objective is humanitarian?
Personally I would be concerned that people who serve on for-profit boards would have the wrong instinct, of prioritizing profit over collective benefit...
JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?
But non-profits aren't a regular business and their ultimate obligation is to their charter. Depending on just what the level of misalignment was here, it's possible that the company becoming nonviable due to terminating Altman is serving the charter more closely than keeping him on board.
No one posting here has enough detail to really understand what is going on, but we do know the structure of OpenAI and the operating agreement for the for-profit LLC make it a mistake to view the company from the lens as we would a regular for-profit company.
Never been a fan of the “you can’t complain about any bad outcome you agreed could happen” argument.
Why? I see a lot of hero-worship for Sam, but very little concrete facts about what he's done to make this a success.
And given his history, I'm inclined to believe he just got lucky.
I am not American and have no idea what you are talking about.
Sam Altman channeled what was great research into a dominant $100b business in record time.
That is not trivial and not every CEO can do that.
If they do nothing, then public perception harms their ability to raise further capital, and employees leave for Altman's new company. If they cave to the pressure (despite that being objectively the financially right decision), they lose their board seats and Sam comes back, proving they overplayed their hand when they fired him. They're basically in a lose/lose situation, even if this article is sourced from entirely biased and fabricated information. And that's exactly what reveals them as incompetent.
Their mistake was making a move while considering only the technicalities of their voting power, and ignoring the credibility they had to make it. Machiavelli is rolling in his grave...
This. Some people even take it to the extreme and choose not to apologize for anything to look tough and smart.
I've honestly never had more hope for this industry than when it became apparent that Altman was pushed out by engineering for forgoing the mission to create world-changing products in favor of the usual mindless cash grab.
The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.
That's why I'm not too surprised to see that it probably won't really play out, and will likely end up with OpenAI turning even faster into yet another tech company worried exclusively about next quarter's revenue.
The supply bottlenecks have been around commercializing the ChatGPT product at scale.
But I don't think pretraining the underlying model was on the same order of magnitude, right?
Two of my managers can absolutely get rid of me without ever hearing me out.
Of course it’s legal, the comment was that it shouldn’t be.
Which is why every developer/partner including Microsoft is going to be watching this situation unfold with trepidation.
And I don't know how you can "keep the gravy track" when you want the company to move away from commercialisation.
Anyway I’m with Sutskever, the guy who builds models. Charismatic salesmen are a dime a dozen.
What shocked me most was that Quora IMHO _sucks_ for what it is.
I couldn't think of a _worse_ model to guide the development and productization of AI technologies. I mean, StackOverflow is actually useful, and it's threatened by the existence of CoPilot et al.
If the CEO of Quora was on my board, I'd be embarrassed to tell my friends.
Microsoft could run the entire business as a loss just to attract developers to Azure.
Even if we assume that's true, wouldn't the somewhat incompetent and seemingly unnecessarily dramatic way they handled it be a concerning sign?
But until he is re-hired, Sam Altman is to all intents and purposes fired. And it may well come to that (and that would almost certainly require all the board members who voted for his ouster to vacate their positions, because their little coup plan backfired and nobody is going to take the risk of that happening again, especially not in this way).
"Disagree and commit."
- says every CEO these days
But if you sign an agreement saying you understand you should treat your investments more like donations and that everything is secondary to the goals of the non-profit and then are upset that your goals were not placed in higher priority than the charter of the non-profit, I'm going to reserve the right to think you're a hypocrite.
Why didn't Google create ChatGPT then? Why did they fall behind?
“Ha ha just kidding don’t ruin our stock value!”
The next couple of weeks will tell.
Could be a rumour spread by people close to Sam though.
IME most folks at Anthropic, OpenAI or whatever that are freaking out about things never defined the problem well and typically were engaging with highly theoretical models as opposed to the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past given that these were typically the folks I knew working there.
Sam may have added a lot of groundedness, but I don't know, of course, because I wasn't there.
Edit: an update to the verge article sheds some more light, but I still consider it very sus since it’s coming from the Altman camp and seems engineered to exert maximal pressure on the board. And the supposed deadline has passed and we haven’t heard any resignations announced
> Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.
Maybe the board is too young to realize who they sold their souls to. Heh I think they’re quickly finding out.
OpenAI has never claimed they want Sam back. The article claims OpenAI's investors want him back.
I will agree that OpenAI could have done a better job of letting him go if there truly were irreconcilable differences.
Seems like Hanlon's razor won once again.
"The start-up company must either cross or die, but what value is life if to gain it one has to go against one’s best self?" - Moore, Crossing the Chasm, p. 75
OpenAI is far from being self-sustainable and without significant external investment they'll just probably be soon overtaken by someone else.
Ilya is certainly world class in his field, and it's maybe good to listen to what he has to say.
The board was Altman's boss; this is pretty much their only job. Altman knew this and most likely ignored any questions or concerns of theirs, thinking he was the unfireable superstar.
Imagine if your boss fired you, and your response was: I'll come back if you quit! Yeah, no. People might confuse his status with that of actual CEO shareholders like Zuck, Bezos, or Musk. But Altman is just another employee.
The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired. So mostly this just seems like an egomaniac employee who thinks he is the company (while appropriating the work of some really really smart data scientists)
Everything just assumes that without Sam they’re worse off.
But what if, my gosh, they aren’t? What if innovation accelerates?
My point is that it's useless to speculate that a new Altman business competing with OpenAI will inherently be successful. There's more to it than that.
I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.
It's often a sign of incompetence though. Or rather a confirmation of it.
Ilya was apparently instrumental in this, and he didn't have to pursue this?
It didn't have to be a "you're with me or you're with them!"
I still like working in this industry because you can still find interesting problems to solve if you hunt for them, but they're getting harder to find and it increasingly seems like making good technical decisions is penalized.
It's sad to see even on HN how many comments are so dismissive of technical skills and ambitions, though I guess we've had more than a generation of engineers join the field because it was the easiest way to make the most money.
For a brief moment on Friday I thought "maybe I'm too cynical! Maybe there still are places where tech actually matters."
Not surprised it looks like that hope will be inverted almost immediately. I also suspect the takeaway from this will be the final nail in the coffin for any future debates between engineering and people who are only interested in next quarter's revenue numbers.
The board removed the board's chairman and fired the CEO. That's why it was called a coup.
>The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired
nonprofits don't have shareholders (or shares).
I would definitely say the board screwed up.
https://www.forbes.com/sites/alexkonrad/2023/11/17/openai-in...
You lose other actors who only joined to work with Brad for one. You lose part of your audience and you lose distribution and press opportunities.
If it wasn't for Sam pushing for the version that became GPT-3.5, the popularity that followed, and most recently the GPT-4 push, we would still be waiting on the brilliant people. Google was way ahead in this space but failed to release anything.
As a developer I understand belittling the business side as providing little value but as someone who has tried to get the masses to adopt my software my respect for their ability to solve non-technical problems has grown.
Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.
> but if new management can keep the gravy track
I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.
Yeah prompting ChatGPT 3.5 would have yielded a better plan than what they did.
> I’ll bet you GPT4 runs on nvidia hardware
Yes, but they'll need to convince someone else like Amazon to give it to them for free, and regardless of what happens next, Microsoft will still have a significant stake in OpenAI due to their previous investments.
The board just vaporised the tender offer, and likely much of their valuation. It’s hard to have confidence in that.
In wartime, pandemics, and in matters of national security, the government's power is at its apex, but pretty much all of that has to withstand legal challenge. Even National Security Letters have their limits: they're an information gathering tool, the US Government can't use them to restructure a company and the structure of a company is not a factor in its ability to comply with the demands of an NSL.
Hypothetically he might also have very little trust in the decision making abilities of the new management and how much their future goals will align with those of Microsoft.
Boards are agents to their principals. They call the shots only as long as their principals deem them to be calling them correctly. If they don't, they get replaced. Said differently, board members are "appointed" to do the bidding of someone else. They have no inherent power. Therefore, they do not, ultimately, call the final shots. Owners do. Like I said, this situation is a little muddier because it's a non-profit that owns a for-profit company, so there's an added layer of complexity between agents and principals.
OpenAI isn't worth $90B because of its non-profit. The for-profit piece is what matters to investors, and those investors are paying the bills. Sure, the non-profit board can fire Altman and carry on with their mission, but then everyone who is there "for profit" can also pack up their things and start OpenAI 2.0 where they no longer need the non-profit, and investors will follow them. I assume that's an undesirable outcome for the board as I suspect the amount of money raised at the for-profit level dwarfs the amount donated to the non-profit... which effectively means the for-profit shareholders own the company. Hence my original comment.
Going off and starting his own thing would be great, but it would be at least a year to get product out, even if he had all the same players making it. And that's just to catch up to current tech
When I see it, it has always been “Amazon is a competitor and we don’t buy from competitors”.
But it's not just him is it?
Most of the employees' values do not align with a non-profit, even if those of executives like Ilya do.
By firing Altman and trying to remind the world they are a non-profit that answers to no one, they are also telling their employees to fuck off on all that equity they signed on for.
Says who? And did they resign?
Differences in interpretation will happen, but the YC rule that founder drama is too often a problem continues to hold, and it shouldn't be a surprise.
Sam and Greg were trying to stage a coup, the rest of the board got wind of it and successfully countered in time (got to them first).
What they didn't expect is that a bunch of their own technical staff would be so loyal to Sam (or at least so prone to the cult of personality). Now they're caught in a Catch-22.
Think you're missing the big picture here. Sam Altman isn't an "easily replaceable employee" especially given his fundraising skills.
Now, do a bunch of OpenAI peeps interview at Meta/Google/Amazon/Anthropic/Cohere over the next few months? Certainly.
Edit: nvm I missed the point was about firing the board.
Except he is not. He was a cofounder of the company and was on the board. Your metaphor doesn't make any sense -- this is as if your boss fired you, but you were also part of your boss, and your cofounder, who is on your side, was the chair of your boss.
I am always curious how these conversations go in corporate America. I've seen them in the street and with blue collar jobs.
Loads of feelings get hurt and people generally don't heal or forgive.
I think a wait-and-see approach is better. If I were speculating, I'd say we had some internal politics spill into public view because Altman needs the public pressure to get his job back.
Not at all. Ilya and Greg are on the board. Ilya is the chief scientist; Greg resigned with Sam and supposedly works 80-100 hours a week.
On the one hand, I actually respect their principles. OpenAI has become the company its nonprofit was formed to prevent. Proprietary systems. Strong incentive to prioritize first-to-market over safety. Dependency on an entrenched tech co with monopolistic roots.
On the other hand, this really feels like it was done hastily and out of fear. Perhaps the board realized that they were about to be sidelined and felt like this was their last chance to act. I have to imagine that they knew there would be major backlash to their actions.
In the end, I think Sam creating his own company would be better for competition. It's more consistent with OpenAI's original charter to exist as the Mozilla (though hopefully more effective) of AI than as the Stripe of AI.
Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
One scenario where both parties are fallible humans and their hands are forced: increased interest means they have to close down Plus signups, because compute can't scale. Sam goes to Brockman and they decide to use compute meant for GPT-5 to try to scale for new users, without informing the board. That may be perfectly fine with GPT-4, but what if Sam does this again in the future when they have AGI on their hands?
People say this like it's some kind of truism, but I've never seen it happen, and when questioned, everyone I've known who's claimed it ends up admitting they were measuring their "success" by a different metric than the company.
CEO of Cloudflare explaining: >>19828702
I don't understand how it isn't clear to you.
How could they accomplish that without external investment? If the money tap dries up OpenAI will just be left behind.
A prolonged public exchange between sama and the board _before_ any firings, where they throw accusations at each other, followed by Microsoft pulling out, followed by people quitting and immediately resulting in a ChatGPT outage, followed by the firing of the CEO.
I won’t be surprised if it’s the open arms of Microsoft. Microsoft embraced and extended OpenAI with their investment. Now comes the inevitable.
Seems like a lot of these board members have deep ties across various organizations, governmental bodies, etc., and that seems entirely normal and probable. However, prior to ChatGPT and DALL-E we, the public, had only been allowed brief glimpses into the current state of AI (e.g. “look, this robot can sound like a human and book a reservation for you at the restaurant” from Google; “look, this robot can help you consume media better” from many). As a member of the public it went from “oh cool, Star Trek idea, maybe we’ll see it one day with flying cars” to “holy crap, I just felt a spark of human connection with a chat program.”
So here’s my question: what are the chances that OpenAI is controlled opposition and Sam never really was supposed to be releasing all this stuff to the public? I remember his Lex podcast appearance, where he said, paraphrasing, “so what do you think, should I do it? Should I open source and release it? Tell me to do it and I will.” Ultimately, this is what “the board is focused on trust and safety” means, right? As in, safety is SV techno HR PR drivel for: go slow, wear a helmet and seatbelt and elbow protectors, never go above 55, give everyone else the right of way, because we are in it for the good of humanity and we know what’s best. (Versus the Altman style: go fast, double-dog-dare the smart podcast dude to make an unprecedented historical decision to open source, be “wild,” and let people/fate figure some of it out along the way.)
The question of OpenAI’s true purpose being a form of controlled opposition is of course based on my speculation, but it's an honest question for the crowd here.
Yeah, that's the Microsoft of old. Don't trust 'em.
Bad news for OpenAI, and any hope that this stuff won't be used for evil.
Regardless of whether or not sam is coming back to OpenAI, the board is 100% getting fired.
It's also not clear that this is a realistic scenario - Ilya is the real deal, and there's likely plenty of people that believe in him over Altman.
Of course, the company has also expanded massively under Altman in a more commercial environment, so there are probably quite a few people that believe in Altman over him.
I doubt either side ends up with the entire research organization. I think a very real possibility is both sides end up with less than half of what OpenAI had Friday morning.
Not your premises not your compute?
Was it? US (and initially UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.
• Employees
• Donors or whoever is paying the bills
In this case, the threat appears to be that employees will leave and the primary partners paying the bills will leave. If this means the non-profit can no longer achieve its mission, the board has failed.
>Predicting specific outcomes in a situation like the potential firing of a high-profile executive such as Sam Altman from OpenAI is quite complex and involves numerous variables. As an AI, I can't predict future events, but I can outline some possible considerations and general advice:
Honest question, do you have a source for that? Is it conceivable that Microsoft has some clause that grants them direct access to IP if OpenAI does not meet certain requirements. It is difficult to believe that Microsoft handed over $10B without any safeguards in place. Surely they did their due diligence on OpenAI's corporate structure.
Nobody is throwing billions around without expecting anything in return.
I'm aware that Altman has made the same claim (close to zero equity) as you are making, and I don't see any reason why either of you would not be truthful, but it also has always just seemed very odd.
From the board, for not anticipating a backlash and caving immediately... and from Microsoft, for investing in an endeavor that is purportedly chartered as a non-profit and is governed by nobodies who can sink it on a whim, all while having zero hard influence on its direction despite a large ownership stake.
Why bother with a non-profit model that is surreptitiously for profit? The whole structure of OpenAI is largely a facade at this point.
Just form a new for profit company and be done with it. Altman's direction for profit is fine, but shouldn't have been pursued under the loose premise of a non profit.
While OpenAI leads currently, there are so many competitors that are within striking distance without the drama. Why keep the baggage?
It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO. OpenAI has first mover advantage, and perhaps better talent, but not by an order of magnitude. There is no special sauce here.
Altman may be charismatic and well connected, but the hero worship put forward on here is really sad and misplaced.
Especially considering OpenAI has boosted the value of the masses of data floating around the internet. Getting access to all that juicy data is going to come at a high cost for data hungry LLM manufacturers from here on out.
What a way to destroy confidence in Azure, or cloud platforms in general.
If that's the case, then the failing would be in letting it get to this point in the first place.
So the future of AI is in the hands of leadership that's slick talking but really only there to make a quick buck, built by teams of engineers whose only motivation is getting highly paid.
I don't begrudge those that are only in it for the money, but that's not the view of tech that got me excited and into this industry many years ago.
The point of my comment is that for a moment I thought maybe I was wrong about my view of tech today, but it's very clear that I'm not. It sounds like the reality is going to end up that the handful of truly technical people in the company will be pushed out, and the vast majority of people even on HN will cheer this.
No wonder this is causing drama.
Pedantic, but: LLCs have "members", not "shareholders". They are similar, but not identical, relations (just as LLC members are similar to, but different from, the partners in a partnership.)
Tenure doesn’t matter.
Those 4 people are not fit to run any company.
Not a single person asked: well hey, what if somebody asks for evidence of the lying? Do we have any?
Calling that a truce makes as much sense as Monty Python’s Black Knight calling the fight a draw.
Play ball, or else we'll pull the wires off your cloud instances. Let's keep in mind that Azure is the main cash cow of MS.
Not everything is about money. He likely just likes the idea of making AI.
Sam has superior table stakes.
I think he staged his coup long ago, when he took control of OpenAI, making it “CloseAI” to make himself richer by effectively selling it to Microsoft. This is the people who believe in the original charter fighting back.
> The shareholders can fire the board, but that’s not what he’s asking for.
There are no shareholders in a non-profit, if I’m right. The board effectively answers to no one. It’s a take-it-or-leave-it kind of deal. If you don’t believe in OpenAI’s mission as stated in their charter, don’t engage with them.
Yes, they are accountable (and I'm actually surprised at how many people seem to believe that they are not), but they are not without power. Legal and practical are not always exactly overlapping, and even if the board may not ultimately hold practical power (even if they believe they do), legally speaking they do, and executives serve at the pleasure of the board. If the board holds a vote, the bylaws of the company allow for it, and the vote passes according to those bylaws, then that's that. That's one good reason to pack the board of a company worth billions of dollars with seasoned people, because otherwise stuff like this may happen.
Afterwards you can do a lot about it, you can contest the vote, you can fight it in court, you can pressure board members to step down and you can sue for damage to the company based on the decision. But the board has still made a decision that is in principle a done deal. They can reverse their decision, they can yield to outside pressure and they can be overruled by a court. But you can't pretend it didn't happen and you can't ignore it.
That's functionally true, but more complicated. The for profit "OpenAI Global LLC" that you buy ChatGPT subscriptions and API access from and in which Microsoft has a large direct investment is majority-owned by a holding company. That holding company is itself majority owned by the nonprofit, but has some other equity owners. A different entity (OpenAI GP LLC) that is wholly owned by the nonprofit controls the holding company on behalf of the nonprofit and does the same thing for the for-profit LLC on behalf of the nonprofit (this LLC seems to me to be the oddest part of the arrangement, but I am assuming that there is some purpose in nonprofit or corporate liability law that having it in this role serves.)
https://openai.com/our-structure and particularly https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b6...
The board has no responsibility to Microsoft whatsoever regarding this. Sam Altman structured it this way himself. Not to say that the board didn't screw up.
If I worked there, I would keep my job and see how things shake out. If I don’t like it, then I start looking. What I don’t do is risk my well being to take sides in a war between people way richer than me.
In any lens if microsoft pulls their GPUs and funding, then OpenAI is through.
No, pissing microsoft off in this situation is not a good idea. Because microsoft can shut the whole organization down.
Is the idea that it will hack into NORAD and a launch a first-strike to increase the log-likelihood of “WWIII was begun by…?”
But it will work.
If they've been doing that for a while, no wonder the board wanted them gone. Eventually you cause more work than you put out.
I had the exact opposite take. If I were rank and file I'd be totally pissed at how this all went down, and at the fact that there are really only 2 possible outcomes:
1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.
2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank and file people just want to do their jobs.
Financial backing to make a competitor
Internal knowledge of roadmap
Media focus
Alignment with the 2nd most valuable company on the planet.
I could go on. I strongly dislike the guy, but you need to recognize table stakes even in your enemy. Or you'll be like Ilya: a naive fool who is gonna get wrecked thinking that doing the “right” thing in his own mind automatically means he wins.
The deal was that MS was going to give them billions in exchange for 49% of the for-profit entity. They were also reportedly waiving the azure bill since their interests are aligned.
MS is saying that if we give you 10 billion dollars and don’t charge you for azure, then there are some obvious strings attached.
OpenAI is presumably free to do what the rest of the players in this space are doing and pay for their Azure resources if they don’t want to play nice with their business partners.
and then there's the real leverage of money and the court of public opinion.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
My interpretation of events is the board believes that Altman's actions have worked against the interest of building an AGI that benefits all of humanity - concentrating access to the AI to businesses could be the issue, or the focus on commercialization of the existing LLMs and chatbot stuff causing conflict with assigning resources to AGI r&d, etc.
Of course no one knows for sure except the people directly involved here.
He never made the PR and was just there to ask me to implement the thing for his own benefit...
It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely that innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the other people who wanted to move faster.
Altman was on the board. He was not “just another employee.” Brockman was also on the board, and was removed. It was a 4 on 2 power play and the 2 removed were ambushed.
You also don’t seem to realize that this is happening in the nonprofit entity and there are no shareholders to fire the board. I thought OpenAI’s weird structure was famous (infamous?) in tech, how did you miss it?
The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only decision is how much continued fuck-uppery they want to continue with.
Like others have said, regardless of what is the "right" direction for OpenAI, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned about commercializing and productization of AI, while Sutskever was worried about the developing AI responsibly with more safeguards), all they've done is fucked over OpenAI.
I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.
A true believer is going to act along the axis of their beliefs even if it ultimately results in failure. That doesn't necessarily make them naive or fools; many times they will fully understand that their actions have little or no chance of success. They've just prioritized different values than you.
If I measured the "aggressiveness" of every contract based on the potential litigation of all its clauses, I'd never sign anything.
Microsoft has a lot of experience interacting with small companies, including in situations like this one where the small company implodes. The people there know how to protect Microsoft's interests in such scenarios, and they definitely are aware that such things can happen.
Why is the board reversing course? They said they lost confidence in Altman - that’s true whether lots of people quit or not. So it was bullshit
Why did the board not foresee people quitting en masse? I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired
Why did the interim CEO not warn Ilya about the above? Sure it’s a promotion but her position is now jeopardized too. Methinks she’s not ready for the big leagues
Who picked this board anyway? I was surprised at how…young they all were. Older people have more life experience and tend not to do rash shit like this. Although the Quora CEO should’ve known better as well.
>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
From Forbes [1]
Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.
[1] https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...
Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.
And in regards to OpenAI, note that (according to TFA), Microsoft has barely disbursed any of their committed "$10 billion" of investment. So they have real leverage when threatening to deploy their team of lawyers to quibble over the partnership contract. And I don't think that "undermines confidence" in Microsoft's contractual agreements, given that there are only two or three other companies that have ever partnered with Microsoft at this scale (Apple and Google come to mind).
Do you really think the board is so incompetent as to not have thought through Microsoft's likely reaction?
And, do you really think they would have done this if they thought there was a likelihood of being rebuffed and forced to resign?
The answer is, no. They are not that incompetent.
I wish Sam & co the best, and I'm sure they'll move on to do amazing things. But, the recent PR just seems like spin from Sam & co, and the press has every reason to feed into the drama. The reality is that there are very smart people on both sides of this power struggle, and there's a very low probability of such a huge misstep on the board's part - not impossible but highly unlikely imo.
The only exception I can see is if Ilya & co foresaw this but decided to act anyway because they felt so strongly that it was the moral thing to do. If that's the case, I'm sure Elon's mouth is watering, ready to recruit him to xAI.
But if Altman has a new venture that takes first mover advantage on a whole different playing field MS could easily get left in the dust.
We might get a better understanding of what actually happened here at some point in the future, but I would not currently assume anything we are seeing come out right now is the full truth of the matter.
Insisting, no matter how painful, that the organization stays true to the charter could be considered a desirable trait for the board of a non-profit.
“Staffers were ready to resign” really? Who? How many? The deadline passed hours ago, why haven’t we seen it?
Neither Microsoft nor anyone else said they deeply believed in and prioritized OpenAI's charter over their own interests. They may have agreed to it, and they must abide by agreements, but this is not a case of claiming one set of principles while acting contrary to them.
He convinced other members of the board that Sam was not the right person for their mission. The original statement implies that Ilya expected Greg to stay at OpenAI, but Ilya seems to have miscalculated his backing.
This appears to be a power struggle between the original nonprofit vision of Ilya, and Sam's strategy to accelerate productionization and attract more powerful actors and investors.
Instead of "Sam has been lying to us" it could have been "Sam had diverged too far from the original goal, when he did X."
Ultimately this is good for competition and the gen-AI ecosystem, even if it's catastrophic for OpenAI.
It's a tricky situation (and this is just with a basic/possibly-incorrect understanding of what is going on). I'm sure it's much more complicated in reality.
https://www.bloomberg.com/news/articles/2023-11-18/openai-bo...
Of course you can protest, “but in this country the constitution says that the generals can sack the president anytime they deem it necessary, so not a coup.” Yes, but it’s just a metaphor, so no one expects it to perfectly reflect reality (that’s what reality is for).
I feel we’ll know way more next week, but whatever the justifications of the board, it seems unlikely that OpenAI can succeed if the board “rules with an iron fist.” Leadership needs the support of employees and financial backers.
From my read, Ilya's goal is to not work with Sam anymore, and relatedly, to focus OpenAI on more pure AGI research without needing to answer to commercial pressures. There is every indication that he will succeed in that. It's also entirely possible that that may mean less investment from Microsoft etc, less commercial success, and a narrower reach and impact. But that's the point.
Sam's always been about having a big impact and huge commercial success, so he's probably going to form a new company that poaches some top OpenAI researchers, and aggressively go after things like commercial partnerships and AI stores. But that's also the point.
Both board members are smart enough that they will probably get what they want, they just want different things.
But I’m hopeful that AI will at least win by open source. Like Linux did. “Linux” wasn’t a 100 billion startup with a glitzy CEO, but it ate the world anyway.
I find the outputs of LLMs to be quite organic when they are given unique identities, and especially when you explore, prune or direct their responses.
ChatGPT comes across like a really boring person who memorized Wikipedia, which is just sad. Previously the Playground completions allowed using raw GPT which let me unlock some different facets, but they’ve closed that down now.
And again, I don't really need to feed my unique thoughts, opinions, or absurd chat scenarios into a global company trying to create AGI, or have them censor and filter for me. As an AI researcher, I want an uncensored model to play with, with no data leaving my network.
The uses of LLMs for information retrieval are great (Bing has improved a lot), but the much more interesting cases for me are how they are able to parse nuance, tone, and subtext - imagine a computer that can understand feelings and respond in kind. Empathetic computing, and it's already here on my PC, unplugged from the Internet.
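For what it's worth, running a capable model fully offline is straightforward these days. Here's a minimal sketch using the llama-cpp-python bindings (my choice of runner, not something the parent specified); the model path is a placeholder for whatever quantized 7B/13B-class GGUF checkpoint you have downloaded locally:

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Load locally stored, quantized weights; no data leaves the machine.
    # "model.gguf" is a placeholder for any local checkpoint.
    llm = Llama(model_path="./model.gguf", n_ctx=2048, verbose=False)

    # Give the model a unique identity, as described above.
    out = llm(
        "You are a moody ship's navigator in 1850. Stay in character.\n"
        "User: Where are we headed?\nNavigator:",
        max_tokens=128,
        stop=["User:"],
    )
    print(out["choices"][0]["text"])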
Google is publishing a lot of research and I guess many of them will be used by other companies.
Do you know now which research will be the basis of tomorrow's most talked-about tech? No. They don't either.
IMO, there are basically two justifiably rational moves here: (1) ignore the noise; accept that Sam and Greg have the soft power, but they don't have the votes so they can fuck off; (2) lean into the noise; accept that you made a mistake in firing Sam and Greg and bring them back in a show of magnanimity.
Anything in between these two options is hedging their bets and will lead to them getting eaten alive.
As for Ilya Sutskever: he is a very smart guy, but he may be blinded by something here.
For the rest of that board, yes I really do think they are that incompetent.
You’re probably right because people usually don’t have an appetite for risk, but OpenAI is still a startup, and one does not join a startup without an appetite for risk. At least before ChatGPT made the company famous, which was recent.
I’d follow Sam and Greg. But N=1 outsider isn’t too persuasive.
I do not believe it is possible for them to have thought this through. I believe they'll have read the governing documents, and even had some good lawyers read them, but no governance structure is totally unambiguous.
Something I'm immensely curious about is whether they even considered that their opposition might look for ways to make them _criminally_ liable.
Best to ask it next year when the trauma has set in
They could have meant that Sam had 'not been candid' about his alignment with commercial interests vs. the charter.
Any decision that doesn't make the 'line go up' is considered a dumb decision. So to most people on this site, kicking Sam out of the company was a bad idea because it meant the company's future earning potential had cratered.
Pushing to call it a coup is an attempt to control the narrative.
Between Bing, o365, etc. etc. etc. it's possible they could recoup all of the value of their investment and more. At the very least it is a significant minimization of the downside.
Moreover, there is an impartiality issue here in the tech press. A lot of the tech press disagree with the OpenAI Charter and think that Sam's vision of OpenAI as basically Google but providing consumer AI products is superior to the Charter, which they view in incredibly derogatory terms ("people who think Terminator is real"). That's fine, people can disagree on these important issues!
But I think as a journalist it's not engaging fairly with the topic to be on Sam's political side here and not even attempt to fairly describe the cause of the dispute, which is the non-profit Board accusing Sam Altman of violating the OpenAI charter which they are legally obligated to uphold. This is particularly important because if you actually read the OpenAI Charter, it's really clear to see why they've made that decision! The Charter clearly bans prioritising commercialisation and profit seeking, and demands the central focus be building an AGI, and I don't think a reasonable observer can look at OpenAI Dev Day and say it's not reasonable to view that as proof that OpenAI is no longer following its charter.
Basically, if you disagree with the idea of the non-profit and its Charter, think the whole thing is science-fiction bunk and the people who believe in it are idiots, I think you should argue that instead of framing all of this as "It's a coup" without even disclosing that you don't support the non-profit Charter in the first place.
The Anthropic founders left OpenAI after Altman shifted the company to be a non-profit controlling a for profit entity, right?
That being said, this is a case of biting the hand that feeds you. An equivalent would be if a nonprofit humiliated its biggest donor. The donor can always walk away, withholding her future donations, but whatever she's donated stays at the nonprofit.
Once the avalanche has stopped moving that's a free decision, right now it could be costly.
Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).
13B and 7B models run easily and much faster.
And, incidentally, if there is a criminal angle that's probably the only place you might possibly find it and it would take the SEC to bring suit: they'd have to prove that one or more of the board members profited from this move privately or that someone in their close circle profited from it. Hm. So maybe there is such an angle after all. Even threatening that might be enough to get them to fold, if any of them or their extended family sold any Microsoft stock prior to the announcement they'd be fairly easy to intimidate.
This is wishful thinking. If an employee is inclined to follow the innovation, it's clear where they'll go.
But otherwise, the point you raise is a good one: this is about the charter of the board. Many of us are presuming a financial incentive, but the structure of the company means they might actually be incentivized to stop the continued development of the technology if they think it poses a risk to humanity. Now, I personally find this to be hogwash, but it is a legitimate argument for why the board might actually be right in acting apparently irrationally.
The IRS will know soon enough if they were indeed non-profit.
The Board has the power to determine whether Sam is fulfilling his fiduciary duty and whether his conflicts of interest (WorldCoin, Humane AI, etc) compromise broad benefit.
"OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period."
"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."
"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."
If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement, and it is the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this; they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on the subject of artificial intelligence, on the planet. The time to argue credentialism was when the investors were signing checks; but they didn't. It's too late now.
My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; more accurately, Microsoft's thrall, because they'll never surpass Microsoft, they'll always be its second; if Satya maneuvers Sam back into the boardroom then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.
The hard reality that everyone needs to accept at this point is that OpenAI is probably finished, unless they made some massive breakthrough a few weeks ago (which Sam did hint at three days ago). That breakthrough should be the last hope we all hold on to that AI research as a species hasn't just been set back a decade by this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.
When you fuck up, you get punished for it. And the OpenAI board is about to be punished. This is the problem with giving power to people who don't actually understand how the world works. They use it stupidly, short-sightedly, and without considering the full ramifications of their actions.
Basically the board's choices are commit seppuku and maybe be viable somewhere else down the line, or try to play hardball and fuck your life forever.
It's not really that hard a choice, but given the people who have to make it, I guess it kinda is...
At YC he made a name for himself, built the Rolodex, and learned how to build startups, to the point that he turned OpenAI into a rocketship and now has unlimited access to capital and talent to build another one.
Here is what I understand by table stakes: https://brandmarketingblog.com/articles/branding-definitions...
If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.
As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.
Please get real.
This isn't a university department. You fuck around with $100B+ dollars of other people's money, you're gonna be in for it.
I'd guess, OpenAI without Sam Altman and YC/VC network is toothless. And Microsoft's/VC/media leverage over them is substantial.
Why would they care about that?
For specific things like new words and facts this does matter, but I think they're not in real trouble as long as Wikipedia stays up.
Not that those are necessarily bad in all ways, but they sure do contribute to unpredictability.
I'm not sure that's actually true anymore. Look at any story about "growth", and you'll see plenty of skeptical comments. I'd say the audience has skewed pretty far from all the VC stuff.
Or they'll do something hilarious like sell VCs on a world wide cryptocurrency that is uniquely joined to an individual by their biometrics and somehow involves AI. I'm sure they could wrangle a few hundred million out of the VC class with a braindead scheme like that.
Which one side or the other would declare terminated for nonperformance by the other side, perhaps while suing for breach.
> and one way or another, everyone would get a stay on everyone else
If by a stay you mean an injunction preventing a change in the arrangements, it seems unlikely that "everyone would get a stay on everyone". Key factors for an injunction are likelihood of success on the merits, and harm that could not be remedied by damages if the injunction weren't granted; that's far from certain to work in any direction, and even less likely to work in both directions.
> and nothing would happen for years except court cases
Business goes on during court cases, it is very rare that everything is frozen.
Here's something I feel higher confidence in, but still don't know: it's not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) technology always evolves forward, and (2) more money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at it and get better results.
And here's something I have even higher confidence in: "being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.
"Table stakes" simply means having enough money to sit at the table and play, nothing more. "Having a big pile of GPUs is table stakes to contest in the AI market."
Specifically, cofounder strife is one of the major issues of startups that don’t get where they could.
If I recall, it was Jessica Livingston's observation.
I've found that benchmarks are great as a hygiene test, but pointless when you need to get work done.
Working at OpenAI meant working on GPT-4 (and whatever is next in line), which is attractive because it's the best thing in the field right now by a significant margin.
Don't you think the board must have sought legal counsel before acting? It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.
I don't think OpenAI board has any responsibility to care for Microsoft's stock price. Such arguments won't hold water in a court of law. And I don't think the power of Microsoft's legal department would matter when there's no legal basis.
I’m just not sure it would be totally starting from scratch since there is more of a playbook and know how.
They are still catching up. What does this tell us?
What evidence were you expecting to find? The board said that Sam wasn't candid with his communication. I've yet to see any evidence that he was candid. Unless the communication has been recorded, and somehow leaks, there won't be any evidence that we can see.
Let's see how this pans out.
He talks on his blog about how learning ML made him feel like a beginner again (which was a way for him to attract talent willing to learn ML to OpenAI) https://blog.gregbrockman.com/its-time-to-become-an-ml-engin...
And the evidence that we've seen so far doesn't refute the idea that the board isn't seriously considering taking him back on. The statements we've seen are entirely consistent with "there was a petition to bring him back sent to the board and nothing happened after that."
The playbook, a source told Forbes, would be straightforward: make OpenAI's new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors. https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
My curiosity stems from whether the board was involved in signing the contract for Microsoft's investment in the for-profit entity, and where the state might set the bar for fraud or similar crimes. How was the vote organized? Did any of them put anything in writing suggesting they did not intend to honor all of the terms of the agreement? Did the manner in which they conducted this business rise to the level of being criminally negligent in their fiduciary duty?
I feel like there are a lot of exciting possibilities for criminality here that have little to do with the vote itself.
... and also +1 to your whole last paragraph.
Dante places Brutus in the lowest circle of hell, while Cato is placed outside of hell altogether, even if both fought for the same thing. Sometimes means matter more than ends.
If the whole process had been more regular, they could have removed Altman with little drama.
> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.
They control an entity that accepted $10B from Microsoft. Someone signed that term sheet.
If he's reinstated, then that's it, AI will be used to screw us plebs for sure (fastest path to evil domination).
If he's not reinstated, then it would appear the board acted in the nick of time. For now.
100 hours a week is two and a half full-time jobs (2 × 40 + 20 = 100). People who believe that number should consider how they would live it: going to a second full-time job after their day ends, and then working weekends on top for the half-time one.
Under ideal conditions, someone might be doing it. But people shouldn't be throwing around these numbers without any time-tracking evidence.
I'd imagine the latter, and that it can be easily yanked away.
For a nonprofit board, the closest thing is something "the members of the board agree to resign after providing for named replacements". Individual members of the board can be sacked by a quorum of the board, but the board collectively can't be sacked.
EDIT: Correction:
Actually, nonprofits can have a variety of structures defining who the members are that are ultimately in charge. There must be a board, and there may be voting members to whom the board is accountable. The members, however defined, generally can vote to replace board members, and so could sack the board.
OTOH, I can't find any information about OpenAI having voting members beyond the board to whom they are accountable.
Not necessarily. An unpopular leader can be even easier to overthrow, because the faction planning the coup has a higher chance of gaining popular support afterward. Or at least they can expect less resistance.
Of course, in reality, political and/or military leaders are often woefully bad at estimating how many people actually support them.
Can you give a few recommendations?
> Someone signed that term sheet.
Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?
One can imagine Microsoft, for example, swooping in and acquiring a larger share of the for-profit entity (and an actual seat on the board, dammit) for more billions, eliminating the need for any fundraising for the foreseeable future.
If a lot of top engineers follow sama out, now that's a real problem.
> Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?
There’s nothing binding on a term sheet.
Another way to think about these is that companies are basically small countries.
> OpenAI's chief strategy officer, Jason Kwon, told employees in a memo just now he was "optimistic" OpenAI could bring back Sam Altman, Greg Brockman and other key employees. There will likely be another update mid-morning tomorrow, Kwon said.
https://x.com/miramurati/status/1726126391626985793
Also, she left her bio as “CTO @OpenAI”.
This is ML, not software engineering. Money wins, not engineering. Same as it was with Google, which won because they invested massively in edge nodes, winning the ping race (fastest results), not the best results.
Ilya can follow Google's Bard by holding the model back until they have counter-models trained to remove conflicts ("safety"), but this will not win them any compute contracts, nor keep them the existing GPU hours. It's only mass, not smarts. Ilya lost this one.
https://www.theguardian.com/technology/2023/nov/18/earthquak...
I thought one of the reasons people incorporated companies in the US is that there is a working judiciary system that can ensure the upholding of contracts. Sure the big money can apply some pressure to the dispossessed but if you have a few million cash (and OpenAI surely has) you should be able to force them to uphold their contracts.
Also imagine the bad PR for Microsoft if they decide not to honour their contracts and stop OpenAI from using their compute power, something that OpenAI leadership could easily spin as retaliation.
Sure, this latest move from the OpenAI board will wreck the momentum that OpenAI had and its ability to continue its partnership with MS, but one of the theses here was that that's the goal in the first place, and they're legally free to pursue that goal if they believe the unfolding of events goes against the founding principles of OpenAI.
That said, they chose a risky path to begin with when they created this for-profit-controlled-by-a-non-profit model.
OpenAI handled their audit years ago and hasn't had another one since according to their filings. So that does not seem like it would have been an issue this year.
Take a look at the top of the RRF-1 for the instructions on when it's due. Also, the CA AG's website says that OpenAI's was due on May 15th. They just have been filing six months later each year.
Besides, even if you had an outstanding contract for $10bn, a judge would not pull a "well technically, you did say <X> even though that's absurd, so they get all the money and you get nothing."
If Altman takes all of the good engineers and researchers with him, OpenAI is no more.
So the board can be the boss of nothing, sure, without the ability to do anything - lead the organisation, raise funds, and so on.
Perhaps they could hire someone who could replace Sam Altman, but that would require a much larger company whose employees are indifferent to the leadership, like EA or something.
OpenAI is much smaller and more close-knit.
It's also strange that they would have a couple of nobodies on the board.
No not really, Google has a history of not delivering or launching half baked products and then killing them quickly.
The first thing OpenAI would ask a court for is a preliminary injunction to maintain the status quo while all of this works out in court. IANAL.
There is no alternative. If you're wedded to "fitting functions to frequencies of text tokens" as your "research paradigm", the only thing that can come of it is a commercialised trinket.
So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.
If Sam is the only one interested in the app, it's because he's the only sane guy in the room.
The former went on garden leave for 6 months (actually even before the Vega launch) to make a movie with his brother, and then resigned to "spend more time with his family", before popping up again a month later at Intel. That's what it looks like when they want you to go away but they don't want to make a big scene over it.
The latter fucked up so badly the board found a reason to kick him out without a golden parachute, despite the official reason (dating another employee) being something that had been widely known for years, beyond being a technical no-no/bad idea in general. He wasn't kicked out because of that; he was kicked out for the combination of endless fab woes, Spectre/Meltdown, and bad business/product decisions that let AMD get the traction to become a serious competitor again. That's what it looks like when the board is absolutely furious and pushes whatever buttons inflict the most pain on you.
Ironic that it’s a bit of an auto-antonym (auto-antoidiom?), it’s ceremonious when they want you to go away quietly and it’s unceremonious when they publicly kick your ass to the curb so hard you’ve got boot marks for a week.
Data and modeling is so much more than just coding. I wish it were like that, but it is not. The fact that it bears this much similarity to alchemy is funny, but unfortunate.
What in the world are you talking about? Internet search? I remember Inktomi. Basch's excuses otherwise, Google won because PageRank produced so much better results it wasn't even close.
Parent's point is that GPT-4 is better because they invested more money (was that ~$60M?) in training infrastructure, not because their core logic is more advanced.
I'm not arguing for one or the other, just restating parent's point.
Adding more parameters tends to make the model better. With OpenAI having access to huge capital, they can afford to 'brute force' a better model. AFAIK right now OpenAI has the most compute power, which would partially explain why GPT-4 yields better results than most of the competition.
Just having the hardware is not the whole story of course; there is absolutely a lot of innovation and expertise coming from OpenAI as well.
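To put rough numbers on "more parameters tends to help": the Chinchilla scaling fit (Hoffmann et al., 2022) models pre-training loss as L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens. A back-of-the-envelope sketch using the published constants (a simplification for illustration, not a claim about how GPT-4 was actually trained):

    # Chinchilla-style scaling fit: loss falls as a power law in both
    # parameter count N and training tokens D. Constants are the
    # published fits from Hoffmann et al. (2022).
    E, A, B = 1.69, 406.4, 410.7
    ALPHA, BETA = 0.34, 0.28

    def loss(n_params: float, n_tokens: float) -> float:
        return E + A / n_params**ALPHA + B / n_tokens**BETA

    for n in (7e9, 13e9, 70e9, 175e9):  # 7B ... 175B parameters
        print(f"{n/1e9:>4.0f}B params, 1.4T tokens -> predicted loss {loss(n, 1.4e12):.3f}")

The power law also shows the diminishing returns: past a point, data quality and training know-how matter as much as raw scale, which is the second half of the parent's point.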
I would say thousands. Even for the hobby projects, - thousands of GPU hours and thousands of research hours a year.
You have a board that wants to keep things safe and harness the power of AGI for all of humanity. This would be slower and likely restrict its freedom.
You have a commercial element whose interest aligns with the basilisk, to get things out there quickly.
The basilisk merely exploits the enthusiasm of that latter element to get itself online quicker. It doesn't care about whether OpenAI and its staff succeed. The idea that OpenAI needs to take advantage of its current lead is enough, every other AI company is also going to be less safety-aligned going forward, because they need to compete.
The thought of being at the forefront of AI and dropping the ball incentivizes the players to the basilisk's will.
That's a funny use of the word truce.
What do you mean by "full stack"? I'm sure there's a spectrum of ability, but frankly where I'm from, "Data Scientist" refers to someone who can use pandas and scikit-learn. Probably from inside a Jupyter notebook.
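For readers unfamiliar with the stereotype being invoked, the workflow in question is roughly "load a CSV, fit a model, report a score", typically from a notebook. A toy sketch (the file name and columns are hypothetical):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # The canonical notebook loop: load data, fit, eyeball one metric.
    df = pd.read_csv("data.csv")  # hypothetical dataset
    X, y = df.drop(columns="label"), df["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("holdout accuracy:", model.score(X_te, y_te))

That's a real and useful skill set, just a different one from packaging models into a larger production system.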
"It's also said that after the Khan was laid to rest in his unmarked grave, a thousand horsemen trampled over the area to obscure the grave's exact location. Afterward, those horsemen were killed. And then the soldiers who had killed the horsemen were also killed, all to keep the grave's location secret."
Local guy had all the loyalty of his employees, almost a hero to them.
Got bought out. He took all the money for himself, left the employees with nothing. Many got laid off.
Result? Still loyal. Still talk of him as a hero. Even though he obviously screwed them, cared nothing for them, betrayed them.
Loyalty is strange. Born of charisma and empty talk that's all emotion and no substance. Gathering it is more the skill of a salesman than a leader.
Steve Jobs was not a UX designer; he had good taste and used to back good design and talent when he found them.
I don't know what Sam Altman is like outside of what the media says, but he could be like Steve Jobs very easily.
I think you are equating coding with 'design'. Just because Jobs didn't code up the UX, doesn't mean he wasn't 'designing' when he told the coders what would look better.
It's hard to put into words, that do not seem contradictory: GPT-4 is barely good enough to provide tremendous value. For what I need, no other model is passing that bar, which makes them not slightly worse but entirely unusable. But again, it's not that GPT-4 is great, and I would most certainly go to whatever is better at the current price metrics in a heartbeat.
This is from the eyes of an investor. Does OpenAI really need a shareholder focused CEO more than a product focused one?
Also, having good taste doesn't make the person a creator herself; only once something is created can she evaluate whether it is good or bad. The equivalent of movie critics or art curators, etc.
Isn't this true for most of S.V.?
It would be great to see a truly open and truly human benefit focused AI effort, but OpenAI isn't, and as far as I can tell has no chance of becoming, that. Might as well at least try to be an effective company at this point.
From what I’ve read SJ had deliberately developed good taste which he used to guide designers’ creations towards his vision. He also had an absolute clarity about how different devices should work in unison.
However, he didn't create any designs himself, as he didn't possess the requisite skills.
I could be wrong of course so happy to stand corrected.
"No Priors Interview with OpenAI Co-Founder and Chief Scientist Ilya Sutskever" - >>38324546
Plotting, charting, visualization = frontend
You may be interested by the neuroscience research on the application of a time-difference like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
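For context, the temporal-difference idea being referenced is a one-line update rule: nudge the value estimate of a state toward the observed reward plus the discounted value of the next state. A minimal TD(0) sketch on a toy random-walk chain (all names here are illustrative, not from any particular paper):

    import random

    # TD(0) on a 5-state chain: start in the middle, move left/right at
    # random, reward 1 only upon exiting to the right. The bracketed term
    # below is the prediction error that the neuroscience literature
    # compares to dopamine signals.
    N_STATES, ALPHA, GAMMA = 5, 0.1, 0.95
    V = [0.0] * (N_STATES + 2)  # indices 0 and N+1 are terminal

    for _ in range(5000):
        s = (N_STATES + 1) // 2
        while 0 < s <= N_STATES:
            s_next = s + random.choice((-1, 1))
            r = 1.0 if s_next == N_STATES + 1 else 0.0
            V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])  # TD update
            s = s_next

    print([round(v, 2) for v in V[1:N_STATES + 1]])  # values rise toward the right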
If Altman's contribution had simply been signing deals for data and compute then keeping staff fantasies under control, that already makes him unique in that space and hyper valuable. But he also seems to have good product sense. If you remember, the researchers originally didn't want to do chatgpt because they thought nobody would care.
https://www.businessinsider.com/macintosh-calculator-2011-10
They probably should have, but they may have not.
> It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.
It is. But having the legal rights to do something and having it stand unopposed are two different things and when one of the affected parties is the proverbial 900 pound Gorilla you tread more than carefully and if you do not you can expect some backlash. Possibly a lot of backlash.
> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.
Not formally, no. But that isn't what matters.
> Such arguments won't hold water in a court of law.
I'll withhold comment on that until I've seen the ruling. But what does and does not hold water in a court of law isn't something to bet on unless a case is extremely clear cut. Plenty of court cases have been won because someone managed to convince a judge of something that you and I may think should not have happened.
> And I don't think the power of Microsoft's legal department would matter when there's no legal basis.
The idea here is that Microsoft's - immense - legal department has the resources to test your case to destruction if it isn't iron-clad. And it may well not be. Regardless, suing the board members individually is probably threat enough to get them to back down instantly.
We had the whole thing - including the JV - reversed in court in spite of them having the legal right to do all this. The reason: the judge was sympathetic to the argument that the JV was apparently a sham created just to gain access to our code. The counterparty was admonished, a notary public who had failed their duty to act as an independent party got the most thorough ear washing that I've ever seen in a court, and we were awarded damages + legal fees.
What is legal, what you can do, and what will stand up are not always the same thing. Intent matters. And what also really matters is what OpenAI's bylaws really say and to what extent the non-profit's board members exercised their duty to protect the interests of the parties who weren't consulted and who did not get to vote. This so-called duty of care - that's the term here in NL, not sure what the American one is - can weigh quite heavily.
The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.
I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned on, it does not model relevance, and a huge list of other severe issues.
The idea that P(A|B) is even relevant to AGI is a sign of a fundamental basic lack of curiosity beyond what is on-trend in computer science.
We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.
But who cares. The burden-of-proof on people claiming that conditional probability is a route to AGI is to explain how it models: causation, relevance, counter-factual reasoning, deduction, abduction, sensory-motor adaption, etc.
The gap between what has been provided and this burden-of-proof is laughable
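To make the (Q, A) encoding point concrete, here is a deliberately degenerate sketch: a "conditional model" that just counts pairs. It will answer any prompt it has memorized, while modelling nothing about causation, relevance, or deduction - which is exactly the gap being described:

    from collections import Counter, defaultdict

    # P(A | Q) estimated by counting (Q, A) pairs: a lookup table
    # dressed up as a conditional distribution.
    counts = defaultdict(Counter)

    def train(pairs):
        for q, a in pairs:
            counts[q][a] += 1

    def answer(q):
        return counts[q].most_common(1)[0][0]  # argmax over P(a | q)

    train([("2+2?", "4"), ("capital of France?", "Paris"), ("2+2?", "4")])
    print(answer("2+2?"), answer("capital of France?"))  # -> 4 Paris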
So very easily Sam Altman can be an AI Engineer the same way Steve Jobs was a 'UX designer'.
Leonardo da Vinci and Michelangelo move over - the Data Scientists have arrived.
You haven't actually given anything "crooked" that Altman did.
Performance is never a complete product – neither for Apple, nor for Open AI (its for-profit part).
By the way I can't agree with you on iOS from my personal experience. If you are using the phone as a phone it works very nicely. Admittedly it's not great if you want to write code or some such but there are other devices for that.
Given the complex org structure, I wouldn't be surprised if the non-profit (or at least its board) wasn't fully aware of the contract terms/implications.
Can not see ≠ easy to see
This is not to take away from the amazing things that they do - The code they produce often does highly quantitative things beyond my understanding. Nonetheless it falls to engineers to package it and fit it into a larger software architecture and the avg. Data Science career path just does not seem to confer the skills necessary for this.
It is about scientists as in "let's publish a paper" vs. engineers as in "let's ship a product".
A bad CEO can make everyone unhappy and grind a business to a halt. Surely a good one can do the opposite, even if that just means facilitating an environment in which key workers can thrive and do their best work.
Edit: None of that is to say Sam Altman is a good or bad CEO. I have no idea. I also disagree with you about iOS, it’s not perfect but it does the job fine. I don’t feel like I’m eating glass when I use it.
Any evidence he's unethical? Or just dislike him?
He actually seems to have done more practical stuff to mitigate AI risk than most people, like experimenting with UBI.
If Sam returns, those three have to go. He should offer Ilya the same deal Ilya offered Greg - you can stay with the company but you have to step down from the board.
Yes, I'd assume most investors prefer this type of approach to a more cautious one. Meaning that companies like this are more likely to attract investors and more likely to beat the ones which care about AGI safety to actually building an AGI (whatever is that supposed to mean).
Those are different things.
Hearing Altman's talks, I don't think it's that black and white. He genuinely cares about safety from X-risk, but he doesn't believe that scaling transformers will bring us to AGI or any of its risks. And therein lies the core disagreement with Ilya, who wants to stop the current progress unless they solve alignment.
You do understand that other people might have different preferences and opinions, which are not somehow inherently inferior to those you hold.
> comparable in price/performance to its market counterparts
Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.
> but neither did squat for the technical part of the business.
Right... macOS being a Unix-based OS is whose achievement exactly? I guess it was just random chance that this happened?
> That said, Altman is not vital for OpenAI anymore
Focusing on the business side might be more vital than ever now; with all the competition you mentioned, they just might be left behind in a few years if the money taps are turned off.
The board can maintain control of the legal aspects (such as the org itself), but in the end, people are much more important.
Organizations are easy to duplicate. Persons, less so.
I'm not sure that's true though? They did quite alright over the next ~5 years or so, and the way Jobs handled the Lisa or even the Mac was far from ideal. The late-90s Jobs was a very different person from the early-to-mid-80s one.
IMHO removing Jobs was probably one of the best things that happened to Apple (from a long-term perspective). Mainly because when he came back he was a much more experienced, capable person, and he would probably have achieved way less had he stayed at Apple after 1985.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.businessinsider.nl/y-combinator-basic-income-tes...
Now Google search has a lot of problems and much better competition. But seriously, you probably don't understand how it was years ago.
Also, I thought that in ML the best algorithms still win, since all the big companies have money. If someone came along and developed a "PageRank equivalent" for AI that is better than the current algorithms, customers would switch quickly since there is no loyalty.
On a side note: Microsoft is playing the game very smartly by adding AI to their products, which makes you stick with them.
But with compromises, as it was like applying lossy compression to an already compressed data set.
If any other organisation could invest the money in a high-quality data pipeline then the results should be as good, at least that's my understanding.
[1] https://crfm.stanford.edu/2023/03/13/alpaca.html [2] https://newatlas.com/technology/stanford-alpaca-cheap-gpt/
Mach kernel + BSD userland + NeXTSTEP; how does Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 was a major technical achievement...
>> Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.
For the past 15 years, whenever I needed new hardware, I thought, "Maybe I'll buy a Mac this time." Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price. With Linux on board, making your desktop environment eye-candy takes seconds; nothing from the Apple ecosystem has been irreplaceable for me for the last 20 years. Sure, there is something that only works perfectly on a Mac, though I can't name it.
>> Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years
It is always vital. OpenAI could not even dream of building their products without the finances they've received. However, do not forget that OpenAI has something technical and very obvious that others overlook, which makes their GPT models so good. They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up. So it goes both ways.
But I'd prefer my future not to be a dystopian nightmare shaped by the likes of Musk and Altman.
https://en.m.wikipedia.org/wiki/Give_me_the_man_and_I_will_g...
(I am aware that conceptually it can lead to a skynet scenario)
Is that actually a serious question? Or do you just believe that no founder/CEO of a tech company ever had any role whatsoever in designing and building the products their companies have released?
> Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price.
I'm talking about M-series Macs mainly (e.g. the MacBook Air is simply unbeatable for what it is and there are no equivalents). But even before that, you should realize that other people have different priorities and preferences (e.g. go back a few years and all the touchpads on non-Mac laptops were just objectively horrible in comparison; how much is that worth?)
> environment eye-candy takes seconds
I find it a struggle. There are other reasons why I much prefer Linux to macOS, but UI and GUI app UX is just on a different level. Of course, again, it's a personal preference, and some people find it much easier to ignore some "imperfections" and inconsistencies, which is perfectly fine.
> They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up
Maybe, maybe not. Antagonizing MS and their other investors certainly isn't going to make it easier though.
My view of the world, and how the general structure is where I work:
ML is ML. There is a slew of really complex things that aren't just model related (ML infra is a monster), but model training and inference are the focus.
Backend: building services used by other backend teams or maybe used by the frontend directly.
Data eng: building data pipelines. A lot of overlap with backend some days.
Frontend: you spend most of the day working on web or mobile technology
Others: site reliability, data scientists, infra experts
Common burdens are infrastructure, collaboration across disciplines, etc.
But ML is not backend. It’s one component. It’s very important in most cases, a kitschy bolt on in other cases.
Backend wouldn’t have good models without ML and ML wouldn’t be able to provide models to the world reliably without the other crew members.
The frontend being charts is incorrect, unless charts are the offering of the company itself.
Loyalty, appreciation, liking… is a spectrum. Loyalty doesn’t have one trumpish definition.
If Ilya could come out and clearly articulate how his utopian version of OpenAI would function - how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping, and how they will make it accessible to others - then maybe there would be more support.
> Performance is never a complete product
In the case of GPT-4, performance - in terms of the quality of generation and speed - is the vital aspect that still holds competitors back.
Google, Microsoft, Meta, and countless research teams and individual researchers are actually responsible for the success of OpenAI, and this should remain a collective effort. What OpenAI is doing now by hiding details of their models is actually wrong. They stand on the shoulders of giants but refuse to share these days, and Altman is responsible for this.
Let us not forget what OpenAI was declared to stand for.
It is incredibly shady, and has the same kind of sci-fi marketing bullshit vibe going on as Elon Musk's hyperloop and Mars missions, and, come to think of it, OpenAI's marketing.
Altman+OpenAI are a hype machine that's riding a bubble to get rich enough, through any scheme, to be able to turn around and milk users for data, just like Facebook and Google.
The only difference is, he gets to twist the focus towards this sci-fi AGI idea, which works like the distraction element of a magic trick. The media loves to talk about AGI and regulating Skynet, because it's a story that gets the imagination going --- certainly much more interesting than stories about paying people 2 dollars an hour to sift through data used to train a language model for offensive and traumatizing content to feed to the autocomplete parrot.
I think it's good that he got kicked off the position as CEO, but that does not suddenly make OpenAI a good actor. Some other jerk will take his spot.
Many would disagree.
If you want a for-profit AI enterprise whose conception of ethics is dumping resources into an endless game of whack-a-mole to ensure that your product cannot be used in any embarrassing way by racists on 4chan, then the market is already going to provide you with several options.
How will OpenAI develop further without the leader with a strong vision?
I think Apple is the example confirming that tech companies need visionary leaders -- even if they are not programmers.
Also, even with our logical brains, we engineers (and teachers) have been found to be the worst at predicting socioeconomic behavior (ref: Freakonomics). To the point where our reasoning is not logical at all.
I compared the quality phone brands and PC brands. For a 13" laptop, both Samsung and Dell XPS are $400-500 more expensive on the same spec (i7/M2 Pro, 32GB, 1TB), and I personally think that the MacBook Pro has a better screen, better touchpad and better build quality than the other two.
iOS devices are comparably priced with Samsung models.
It was this way last time I upgraded my computer, and the time before.
Yeah, you will find cheaper phones and computers, and maybe you like them, but I appreciate build quality as well as MIPS. They are tools I use from early morning to late night every day.
To explain: it's the board of the non-profit that ousted @sama.
Microsoft is not a member of the non-profit.
Microsoft is "only" a shareholder of its for-profit subsidiary - even for 10B.
Basically, what happened is a change of control in the non-profit majority shareholder of a company Microsoft invested in.
But not a change of control in the for-profit company they invested in.
To tell the truth, I am not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all - it's an internal issue only and that would be a conflict of interest.
Microsoft is not happy with that change of control, and they favoured the previous representative of their partner.
Basically, Microsoft wants its partner's non-profit majority shareholder to prioritize Microsoft's interests over its own.
And to do that, they are trying to impede on its governance, even threatening it with disorganization, lawsuits and such.
This sounds highly unethical and potentially illegal to me.
How come no one is pointing that out?
Also, how come a 90-billion-dollar company hailed as the future of computing and a major transformative force for society would now be valued at 0 dollars only because its non-technical founder is out?
What does it say about the seriousness of it all?
But of course, that's Silicon Valley baby.
Please think about this. Sam Altman is the face of OpenAI and was doing a very good job leading it. If his relationships are part of what kept OpenAI on top, and they removed that from the company, corporations may be more hesitant to do business with them in the future.
Funny how people only use words like revolting for sudden firings of famous tech celebrities like Sam with star power and fan bases. When tech companies suddenly fire ordinary people, management gets praised for being decisive, firing fast, not wasting their time on the wrong fit, cutting costs (in the case of public companies with bad numbers or in a bad economy), etc.
If it’s revolting to suddenly fire Sam*, it should be far more revolting when companies suddenly fire members of the rank and file, who have far less internal leverage, usually far less net worth, and far more difficulty with next career steps.
The tech industry (and US society generally) is quite hypocritical on this point.
* Greg wasn’t fired, just removed from the board, after which he chose to resign.
They are likely valued a lot less than 80 billion now.
OpenAI had the largest multiple - >100X their revenue for a recent startup.
That multiple is a lot smaller now without SamA.
Honestly the market needs a correction.
It's not like every successful org needs a face. Back then Google was wildly successful as an org, but unlike Steve Jobs at the time, Eric Schmidt was barely known. Even with Microsoft as it stands today, Satya is mostly a backseat driver.
Every org has its own style and character. If the board doesn't like what they are building, they can try to change it. Risky move nevertheless, but it's their call to make.
While OpenAI would have the IP, they would also need to retain the right people who understand the system.
You can be an interior designer without knowing how to make furniture.
You can also be an excellent craftsman and make really nice furniture, and have no idea where it would go.
So sure, UX coders could make really nice buttons.
But if you have UX coders all going in different directions, and buttons, text boxes, etc. are all different, then it is bad design, jarring, even if each piece is nice.
The designer, then, is the one who can give direction without knowing how to code each piece.
Like how Hinton left Google so he could speak freely.
IMO inventing AGI is more powerful than nuclear energy. It would be very stupid of humanity to release it out in the wild.
LLMs are a great tool and nowhere near AGI.
I’m of the belief that alignment of AGI is impossible. It’s like asking us to align with lions. Once we compete for the same resources, we lose.
This reasoning is the only one that makes sense: one where the action taken by the board aligns with logic, and where some private action by Sam Altman could have offended the board.
The story of the board being incompetent to the point of absurdity, firing such a key person on a whim, is just the most convenient, attractive, and least probable narrative. It's rare for an intelligent person to do something stupid, and even rarer for an entire board of intelligent people to do something stupid in unison.
But it's so easy to fall for that trope narrative.
I'm not saying the board doesn't make decisions or that the board is powerless, or that their decisions are not enforceable or binding. That's already known to be true, there's no value in arguing that.
I'm saying the _ultimate_ decision is made by the people with the money, inevitably. The board is allowed to continue making decisions until they go against the interests of the owners. The whole point of a board is so owners don't have to waste their time making decisions; instead they pay someone else (directors) to make them on their behalf.
Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough.
All that tough talk means doodly-squat.
The non-profit board acted entirely against the interest of OpenAI at large. Disclosing an intention to terminate the highest profile member of their company to the company paying for their compute, Microsoft, is not only the ethical choice, it's the responsible one.
Members of the non-profit board acted recklessly and irresponsibly. They'll be paying for that choice for decades following, as they should. They're lucky if they don't get hit with a lawsuit for defamation on their way out.
Given how poorly Mozilla's non-profit board has steered Mozilla over the last decade, and now this childish tantrum by a man raised on the fanfiction of Yudkowsky together with board larpers, I wouldn't be surprised if this snafu sees the end of this type of governance structure in tech. These board members have absolutely no business being in business.
Tech superiority might be relevant today, but I highly doubt it will stay that way for long, even if OpenAI continues to hide details (which I agree is bad). We could argue about the training data, but so much of it is publicly available that this is not an issue either.
And if that corporate structure does not suit Satya Nadella, I would say he's the one to blame for having invested 10B in the first place.
Being angry at a decision he had no right to be consulted on does not entitle him to meddle in the governance of the company he holds a stake in.
Or then we can all accept together that corruption, greed and whateverthefuckism is the reality of ethics in the tech industry.
...
You should look up some history here.
Exactly what you say has already happened and OpenAI is the dedicated research company you are referring to.
He originally left Google DeepMind, I believe.
> I’m of the belief that alignment of AGI is impossible.
I don't think most people in this space are operating based on beliefs. If there is even a 10% chance that alignment is possible, it is probably still worth pursuing.
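The "10% chance" framing is really an expected-value argument. A minimal sketch of that arithmetic, with entirely made-up placeholder numbers (the thread gives no actual figures):

```python
# Illustrative expected-value sketch: even a small chance that alignment
# research succeeds can dominate its cost, if the downside it averts is large.
# All numbers below are hypothetical placeholders, not claims about reality.

p_success = 0.10            # assumed chance alignment research works at all
value_if_success = 1e15     # assumed value of averting an AGI catastrophe ($)
research_cost = 1e10        # assumed total cost of pursuing alignment ($)

expected_value = p_success * value_if_success - research_cost
print(f"Expected value of pursuing alignment: ${expected_value:,.0f}")
# With these placeholders: 0.10 * 1e15 - 1e10, i.e. still enormously positive.
```

The point of the sketch is only that under this kind of payoff asymmetry, "only 10%" is not an argument against pursuing it.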
OpenAI might have wasted Microsoft's 10B. But whose fault is that? It's Microsoft's fault for having invested it in the first place.
I signed up for Worldcoin and have been given over $100, which I changed into real money, and I think it's rather nice of them. They never asked me for anything apart from the eye ID check; I didn't have to give my name or anything like that. Is that indistinguishable from any other cryptocurrency scam? I'm not aware of one the same. If you know of another crypto that wants to give me $100, do let me know. If anything, I think it's more like VCs paying for your Uber in the early days. It's VC money basically at the moment, with, I think, the idea that they can turn it into a global payment network or something like that. As to whether that will work, I'm a bit skeptical, but who knows.
Disregarding every other point, in my eyes this single one downgrades OSX to “we don’t use that here” for any serious endeavor.
Add in Linux’s fantastic virtualization via KVM — something OSX does not have a sane and performant default for (no, hvf is neither of these things). Even OpenBSD has vmm.
The software story for Apple is not there for complicated development tasks (for simple webdev it's completely usable).
If 99% of their employees quit and their investors pull their support because Sam was fired, they're not getting anywhere and have failed to deliver on their charter.
SamA could try to start his own new copy of OpenAI, and I have no doubt he'd raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time they reproduced it, OpenAI and its competitors would have already moved on to bigger and better things.
Enough with the hero worship for SamA and all the other salesmen.
Even the email to their own employees says it is an irreconcilable difference. Nothing about lying.
I don't think it is reasonable to go with "we don't know". It is more like: "it is crucial to back up your claim, and still, you don't".
This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]
If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.
In fact we are very much arguing that thing in the same way. But you do have to get the minutiae right because those are very important in this case. This board is about to - if they haven't already - find out where the real power is vested and it isn't with them. Which is kind of amusing because if you look at the people that make up that board some of them should have questioned their own ability to sit on this board based on qualifications (or lack thereof) alone.
OpenAI does not have an associative body, to my knowledge.
The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.
I will say that using falsehoods as an attack doesn't put the rest of the commenter's points into particularly good light.
We don't know what was said, or what was signed. Putting the blame on Microsoft is premature.
You say that like it’s nothing, but your biometric data has value.
> Is that indistinguishable from any other cryptocurrency scam?
You’re ignoring all the other people who didn’t get paid (linked articles).
Sam himself described the plan with the same words you’d describe a Ponzi scheme.
> If you know of another crypto that wants to give me $100 do let me know.
I knew of several. I don't remember names, but I do remember one that was a casino and one that was tied to open-source contributions. They gave out initial coins to get you in the door.
This is an absurd retcon. Google won because they had the best search. Ask Jeeves and AltaVista and Yahoo had poor search results.
Now Google produces garbage, but not in 2004.
I’d guess this sort of narcissist behavior is what got him canned to begin with. Good riddance.
This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan. But if Chinese ships start heading to Taipei the price is still going to drop like a rock. Before it was only potential.
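The TSMC point is just probability-weighted pricing. A toy sketch, with purely illustrative numbers (no real market data):

```python
# Toy model of a risk discount: the market price blends the "normal" value
# with the "crisis" value, weighted by the perceived probability of crisis.
# All numbers are illustrative assumptions.

normal_value = 150.0   # hypothetical share value with no invasion risk
crisis_value = 30.0    # hypothetical share value if an invasion happens

def discounted_price(p_crisis: float) -> float:
    """Probability-weighted price given a perceived crisis probability."""
    return p_crisis * crisis_value + (1.0 - p_crisis) * normal_value

print(discounted_price(0.10))  # 138.0: a modest discount while risk is only potential
print(discounted_price(0.90))  # ~42: the price still craters once ships actually sail
```

That's why "it's already priced in" doesn't mean the price can't still fall: the discount reflects the probability, and the crash happens when the probability jumps toward certainty.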
And a bunch more. A lot of you will never have heard of them, but all of them are multi billion dollar behemoths with thousands of subsidiaries, employees, significant research and investment arms. And they love the fact that barely anyone knows them outside Germany.
While I'm not privy to the contracts that were signed, what happens if Nadella sends a note to the OpenAI board that reads, roughly, "Bring back Altman or I'm gonna turn the lights off"?
Nadella is probably fairly pissed off to begin with. I can't imagine he appreciates being blindsided like this.
The history of technology is littered with the corpses of companies that concentrated solely on the "technical side of the business".
Maybe. But on their investing page it literally says to consider an OpenAI investment as a "donation" as it is very high risk and will likely not pay off. Everyone knew this going into it.
In other words, MS has the losing hand here and CEO of MS is bluffing.
Google won against AltaVista initially because they had so much money they could buy themselves into each country's Interxion to produce faster results, with servers and cheap disks.
The PageRank-plus-more-crawlers approach kept them in front afterwards, until a few years ago, when search went downhill due to SEO hacks in this monoculture.
I don't see why. As I understand it, a significant percentage of Microsoft's investment went into the hardware they're providing. It's not like that hardware and associated infrastructure are going to disappear if they kick OpenAI off it. They can rent it to someone else. Heck, given the tight GPU supply situation, they might even be able to sell it at a profit.
OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.
Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.
Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.
If an AI threat to humanity is even actionable today. That's a heavy decision for elected representatives, not corporate boards.
Clicking an ad is not the only way it is monetized.
Sole proprietors have no board at all. Although they have to deal with customers...
The bigger concern is something like the Paperclip Maximizer. Alignment is about how to ensure that a superintelligence has the right goals.
What looks quite unprofessional here (at least from the outside) is that a surprise board meeting was called without two of the board members present, to fire the CEO on the spot without first talking to him about changing course. That's not how things are done in a professional governance structure.
Then there is a lot of fallout that any half competent board member or C-level manager should have seen coming. (Who is this CTO that accepted the CEO role like that on Thursday evening and didn't expect this to become a total shit show?)
All of it reads more like a high school friends club than a multi billion dollar organization. Totally incompetent board on every dimension. Makes sense they step down ASAP and more professional directors are selected.
Steve Jobs founded NeXT
Then, bupkiss.
No, not a hero.
And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors who gave with humanity's goals in mind, whose funding made it all possible?
I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.
In the initial press release, they said Sam was a liar. Doing this without offering a hint of an example or actual specifics gave Sam the clear "win" in the court of public opinion.
If they had said "it is clear Sam and the board will never see eye to eye on alignment, etc.", they probably could have made it 50/50 or even come out favored.
Their entire alignment effort is focused on avoiding the following existential threats:
1. saying bad words
2. hurting feelings
3. giving legal or medical advice
And even there, all they're doing is censoring the interface layer, not the model itself.
Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.
I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.
This company should be led by the research team, not the product team.
My point was that the industry is hypocritical in praising sudden firings of most people while viewing it as awful only when especially privileged stars like Altman are the victim.
Cost reduction is a red herring - I mentioned it only as one example of the many reasons the industry trend setters give to justify the love of sudden firings against the rank-and-file, but I never implied it was applicable to executive firings like this one. The arguments on how the trend setters want star executives to be treated are totally different from what they want for the rank and file, and that’s part of the problem I’m pointing out.
I generally support trying to resolve issues with an employee before taking an irreversible action like this, whether they are named Sam Altman or any unknown regular tech worker, excepting only cases where taking the time for that is clearly unacceptable (like where someone is likely to cause harm to the organization or its mission if you raise the issue with them).
If this case does fall into that exception, the OpenAI board still didn’t explain that well to the public and seems not to have properly handled advance communications with stakeholders like MS, completely agreed. If no such exception applies here, they ideally shouldn’t have acted so suddenly. But again, by doing so they followed industry norms for “normal people”, and all the hypocritical outrage is only because Altman is extra privileged rather than a “normal person.”
Beyond that, any trust I might have had in their judgment that firing Altman was the correct decision evaporated when they were surprised by the consequences and worked to walk it back the very next day.
Still, even if these board members should step down due to how they handled it, that’s a separate question from whether they were right to work in some fashion toward a removal of Altman and Brockman from their positions of power at OpenAI. If Altman and Brockman truly were working against the nonprofit mission or being dishonest with their board, then maybe neither they nor the current board are the right leaders to achieve OpenAI’s mission. Different directors and officers can be found. Ideally they should have some directors with nonprofit leadership experience, which they have so far lacked.
Or if the board got fooled by a dishonest argument from Ilya without misbehavior from Altman and Brockman, then it would be better to remove Ilya and the current board and reinstall Altman and Brockman.
Either way, I agree that the current board is inadequate. But we shouldn’t use that to prematurely rush to the defense of Altman and Brockman, nor of course to prematurely trust the judgment of the board. The public sphere mostly has one side of the story, so we should reserve judgment on what the appropriate next steps are. (Conveniently, it’s not our call in any case.)
I would however be wary of too heavily prioritizing MS’s interests. Yes, they are a major stakeholder and should have been consulted, assuming they wouldn’t have given an inappropriate advance heads-up to Altman or Brockman. But OpenAI’s controlling entity is a 501(c)(3) nonprofit, and in order for that to remain the correct tax and corporate classification, they need to prioritize the general public benefit of their approved charitable mission over even MS’s interests, when and if the two conflict.
If new OpenAI leadership wants the 501(c)(3) nonprofit to stop being a 501(c)(3) nonprofit, that’s a much more complicated transition that can involve courts and state charities regulators and isn’t always possible in a way that makes sense to pursue. That permanence is sometimes part of the point of adopting 501(c)(3) nonprofit status in the first place.
As an anecdote: before Google, I was asked to show the internet to my grandmother. So I asked her what she wanted to search for. She asked me about some author, let's say William Shakespeare. Guess what the other search engine found for me and my grandma: porn...
financial crises come and go; the one 15 years ago was interesting but not, as it turned out, a second great depression
ukraine was one of the major topics of the note i linked, though not explicitly mentioned
covid i'm not sure about. you would expect 6.98 million confirmed deaths (and several times that many total deaths) to have a cultural impact like aids or the holocaust, but if it has, i am not sure exactly what that impact is. i don't even see people wearing face masks on the bus when they have a cold, which is sort of the minimum i was hoping for
For humanity as a whole: yes. For individuals who happen to live in the wrong spot: not so much.
> financial crises come and go; the one 15 years ago was interesting but not, as it turned out, a second great depression
Maybe societies are like people in that sense: they adapt to certain kinds of illnesses, and at some point they are no longer quite as susceptible as they were the first time. After all, capitalism itself was tested for the very first time then, and it did survive; what doesn't kill you makes you stronger.
> Ukraine was one of the major topics of the note i linked, though not explicitly mentioned
Ok.
> covid i'm not sure about. you would expect 6.98 million confirmed deaths (and several times that many total deaths) to have a cultural impact like aids or the holocaust, but if it has, i am not sure exactly what that impact is. i don't even see people wearing face masks on the bus when they have a cold, which is sort of the minimum i was hoping for
If there ever was a wake-up call that should have been it.
But it's as if it never happened, as if it has definitively ended and nothing like it can ever happen again. The weirdest experience in my life to date.
Not hoping for a re-match.
Certainly not when they won.
They were better. Basic PageRank was better than anything else. And once they figured out advertisement, they kept making it better to seal their dominance.
1. **Transparent Governance**: OpenAI should strive for greater transparency in its governance structure. This includes clearly outlining the roles and responsibilities of the nonprofit board versus the for-profit subsidiary, and how decisions impact each entity. This would help mitigate misunderstandings and conflicts of interest.
2. **Balanced Board Composition**: The board should be restructured to balance the interests of various stakeholders, including investors, employees, and the broader AI community. This can be achieved by having a diverse set of members with expertise in technology, business, ethics, and law.
3. **Stakeholder Engagement**: Regular engagement with key stakeholders, including investors like Microsoft and employees, is crucial. This ensures that major decisions, such as leadership changes, are made considering their potential impact on all parties involved.
4. **Leadership Stability**: To address concerns about leadership and company direction, it may be beneficial to have a stable leadership team that aligns with OpenAI's mission and values. This could involve a re-evaluation of Sam Altman's role and contributions, considering the interests of both the nonprofit and for-profit entities.
5. **Strategic Communication**: OpenAI should develop a strategic communication plan to address public concerns and market reactions. This includes clear messaging about its mission, decisions, and future plans, which can help maintain public trust and investor confidence.
6. **Ethics and Safety Focus**: Given the transformative potential of AI, OpenAI should continue to prioritize AI ethics and safety. This commitment should be evident in its operations, research directions, and partnerships.
7. **Long-Term Vision Alignment**: Finally, aligning the long-term vision of OpenAI with the interests of its stakeholders, including the global community it aims to serve, is essential. This involves balancing profitability with ethical considerations and societal impact.
By implementing these strategies, OpenAI can navigate its current challenges while staying true to its mission of developing AI in a safe and beneficial manner.
Though to be honest, in my original post I was more thinking of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the three laws and on the Frankenstein Complex.
I think in this case I would need to see a source to believe you, and if substantiated, it would make me question Nadella's fitness to lead a cloud computing business.
The confidentiality part and the 'no shop' part of a term sheet are definitely binding, and if you break those terms you'll be liable for damages.
Which I later restated as "Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough." (emphasis added) -- which hopefully you agree is a clear restatement of my original comment.
Meanwhile you said
> This is wildly incorrect. (...) you are just simply factually incorrect. (...) But until he is re-hired Sam Altman is to all intents and purposes fired.
But I never claimed he wasn't, for all intents and purposes, fired.
Yet you did claim I was "wildly" and "factually incorrect" and now you're saying "we are very much arguing that thing in the same way" but "you do have to get the minutiae right". To me, minutiae was sufficiently provided in the original comment for any minimally charitable interpretation of it. Said differently, the loss of minutiae was on the reader's part, not the writer's.
Regardless, lack of minutiae is not comparable to "wildly" or "factually" incorrect. Hence I was not either of these things. QED.
There's this [1], a NYT article saying that Microsoft is leading the pressure campaign to get Altman reinstated.
And there's this [2], a Forbes article which claims the playbook is a combination of mass internal revolt, withheld cloud computing credits from Microsoft, and a lawsuit from investors.
[1] https://archive.is/fEVTK#selection-517.0-521.120
[2] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
Nothing stopping a non-profit from owning all the shares in a for-profit. [1]
[1] https://finance.yahoo.com/news/top-10-holdings-mormon-church...
Well.. it's understandable that some people believe that things which are important and interesting to them (and presumably the ones which they work with on/with) are somehow inherently superior to what everyone else is doing.
And I understand that, to be fair I don't use MacOS that much these days besides when I need to work on my laptop. However.. most of those limitations are irrelevant/merely nuisances/outweighed by other considerations for a very high number of people who have built some very complicated and complex software (which has generated many billions in revenue) over the years. You're free to look down on those people since I don't really think they are bothered by that too much...
> for simple webdev it's completely usable
I assume you also believe that any webdev (frontend, anyway) is inherently simple and pretty much worthless compared to the more "serious" stuff?
If that happens, AMZN or GOOG will be all over it.
Well, they're all about to be out of a job, so it's a good time to catch up on sleep.
You don't just accuse someone of committing a heinous crime and then stay silent. What are the details?
Though it’s fair to say I’ve never been an executive.
The main issue I have with it is that there are no problems in webdev any more, so you get the same thing in both the frontend and backend: people building frameworks, and tools/languages/etc. to be "better" than what we had before. But it's never better, it's just mildly more streamlined for the use-case that is most en vogue. All of the novel work is being done by programming language theorists and other academic circles (distributed systems, databases, ML, etc.).
Regardless, the world runs on Linux. If you want to do something novel, Linux will let you. Fork the kernel, edit it, recompile it, run it. Mess with all of the settings. Build and download all of the tools (there are many, and almost all built with Linux in mind). Experiment, have fun, break things, mess up. The world is your oyster. In contrast, OSX is a woodchip schoolyard playground where you can only do a few things that someone else has decided for you.
Now, if you want to glue things together, OSX is a perfectly fine tool compared to a Linux distro. The choice there is one of taste and values. Even Windows will work for CRUD. The environments are almost indistinguishable nowadays.
Additionally, how do we get there, and who funds it in the long term? When you actually consider how much compute power was required to get us to this point of a "pretty decent chat bot/text generator", it doesn't really seem like we are even 20% of the way to AGI. If that's true, then no amount of crowdfunding is going to get even remotely close to providing the resources to power something truly revolutionary.
Don't get me wrong, I agree with some of the points you've made, and Microsoft is certainly in it for themselves, but I also believe they would like to avoid owning OpenAI, as they would not want to position themselves as the sole caretaker of AI given the amount of scrutiny they'd be under.
All that is to say: whether you like him or not, he has taken an interest in AI and OpenAI, and has been a leader in discussing the ethics of developing AI to stratospheric levels, which has made many industries and governments take notice.
You think accusing someone of lying in a public statement and not following up is competent?
I think you might have better luck grasping the situation if you put a little bit more effort into understanding it rather than jumping to put words in the mouths of others. Nobody said whether they support the non-profit charter or not in the first place, and as far as the phenomena of what's happening right now, the non-profit charter has nothing to do with it.
550 of 700 OpenAI employees have just told the board to resign. Altman is going to MSFT and taking his org with him. Regardless of what the board says, who do you think really has the power here -- the person who has and already had the full support of the org he built around him, or a frankly amateurish board that is completely unequipped for executing on a highly public, high stakes governance task presented in front of it?
Unfortunately, not only can you not charter public opinion, but those who try often see it backfire, making clear their air of moral superiority rather than leaning on an earned mandate to govern the rank and file they are supposed to represent. The board, and it seems you, will simply be learning that lesson the hard way.
Don't worry, Google will launch a new version of a Chat App with AI to fix all their previous failures.
Microsoft never intended or assumed OpenAI would turn out this great. It just made a small hedge of $1B on a promising technology, and it would very much like to take over OpenAI if given the chance; it can afford all the lawyers needed to keep up with government regulations.
Anthropic was able to create a product comparable to OpenAI's without all the fuss that Sam has created. I agree Sam might have made some significant contributions, but they are not as large as they seem. I am sure OpenAI will keep progressing as it does now, with or without Sam.
He won the first time and lost the second time.