Well this probably disproves the theory that it was a power grab by Microsoft. It didn’t make too much sense anyway since they already have access to tech behind GPT and Microsoft doesn’t necessarily need the clout behind the OpenAI brand.
But seriously, this muddies the water even more. I assumed the Microsoft deal being based on some false pretense was the reason this was all happening. I guess that could still be true and the board is trying to protect themselves from whatever else is about to come out.
Edit: I just read he was fired, but the point remains.
In my opinion, the shortness and lack of detail backs up the story that they had no idea. You'd see way more words if a marketing department had its hands on something like this. This was 100% a get-something-out-asap job.
They needed the Microsoft investment before GPT scaling was proven out. I imagine many entities would be willing to put money into a truly open research lab given OpenAI’s track record.
I assure you that sentience is a physical process, akin to HUMAN METABOLISM
You have nothing to be surprised about, Mr. Turing-jokester.
OpenAI’s biggest customer and investor starts buying AMD chips and simultaneously building their own chips
there are a lot of ignorable cracks in the armor that support any number of theories
let alone Altman himself, who knows
Just a rumor. Zero chance someone at MS wasn't already aware.
sama’s generic “looking forward to what’s next” response also doesn’t give me confidence it won’t be a bigger scandal
The OpenAI board letter, representing just 4 people, screams butt-hurt and personal disagreements. Microsoft, which just finished building OpenAI's models into every core product, was blindsided. Greg Brockman, the chairman of the OpenAI board and another startup exec, was pushed out at the same time. Eric Schmidt, with his own AI startup lab, started singing Sam's praises and asking "what's next?"
My guess is that Microsoft is about to get fucked and Eric Schmidt is going to pop open a bottle of expensive champagne tonight.
"I'm shocked to see gambling in this establishment! Shocked, I tell you!"
OpenAI (the board that made this decision) is still ultimately a non-profit, so it's possible that interests might not be aligned.
* Exclusive access to resell OpenAI's technology and keep nearly all of that revenue for themselves, both cloud and services
* Receive 75% of OpenAI's profits up to $1 trillion
All they had to do was not rock the boat and let the golden goose keep laying eggs. A massive disruption like this, so soon after DevDay, would not fit that strategy. My guess at this point is financial malfeasance: either failing to present a deal to the board, or OpenAI has been in financial straits and he was covering it up.
I mean, if they don't come out and explain what happened, they can't be surprised when people "hallucinate" all kinds of random, preposterous explanations.
[1] https://en.wikipedia.org/wiki/Microsoft_engineering_groups#C...
Come on. No way Microsoft’s team does a deal like that with 0 power or knowledge in a situation like this. That’s ludicrous.
Edit: the only case I can see for MSFT being truly blindsided is as follows. Elon is behind it. Sam and Elon have their breakup. Sam seems to win. They close the deal with MSFT, all is good. But Elon is intimately familiar with the corporate structure and all moves made historically, and maybe even has some evidence of wrongdoing. There is probably only 1 person in the valley who could pressure the non-profit to oust Sam (and by extension Greg) AND provide the financial/legal/power backing to see it through. It takes a lot of money and influence to do this from the outside. That is really the only scenario in which I could see MSFT truly being blindsided by getting out-maneuvered by a dinky non-profit board.
"screams butt hurt"? the board put out a blog post saying the CEO had lied so badly they'd insta-sacked him.
It's probably literally an exec on the OpenAI/Microsoft partnership.
The alternative to "a person familiar" is us as readers never get this information at all.
"Microsoft, which has invested billions in OpenAI, learned that OpenAI was ousting CEO Sam Altman just a minute before the news was shared with the world, according to a person familiar with the situation."
And then Greg being all "committed to safety" in his resignation statement makes me think this was a conflict between being an open OpenAI with global research or being closed and proprietary in the name of safety.
I would not be surprised if it turns out Microsoft has a multibillion-dollar, complex financial instrument as an axe on the necks of these people by Monday, forcing a sale or a new management structure that gives them more control.
Satya Nadella's Statement on OpenAI - >>38312355
and for completeness I suppose (though at the moment they're #1 and #2):
OpenAI's board has fired Sam Altman - >>38309611
Greg Brockman quits OpenAI - >>38312704
They 100% have leverage to exert influence on both, it’s just bad PR if word gets out.
Maybe almost had to since it was during market hours.
“we’re committed to delivering all of this to our customers ... We have ... full access to everything we need to deliver on our innovation agenda...”
I'd argue this is signaling they have the IP / source code / models / etc.
(Which, for what it's worth, is common in substantial partnership agreements.)
This is funny to me, as Twitter is the platform for "deliberately low-effort" posts, but you see it as a platform for official statements. How times change...
Perhaps it's all legal but I think it's very understandable to look at it and think it's a travesty.
(Assuming they have some plan that gives them the flexibility to trade shares directly on the market like that. I think $GME had something like this?)
Edit: And, of course, actually mean it, unlike Caroline Ellison and the $22 FTT: https://twitter.com/carolinecapital/status/15892874579753041...
I think that just signals that they have a firm business agreement with OAI regardless of what Altman might be doing.
with a product like ChatGPT, especially given how it has been presented thus far (our servers, our API, your account on those servers), it seems extraordinarily dangerous to treat it like a common partnership agreement.
> OpenAI is an American artificial intelligence (AI) organization consisting of the non-profit OpenAI, Inc.[4] registered in Delaware and its for-profit subsidiary corporation OpenAI Global, LLC.[5]
IKEA [0] and Rolex [1] are structured in a similar manner, although different since they’re not US based.
[0] https://en.m.wikipedia.org/wiki/Stichting_INGKA_Foundation
[1] https://en.m.wikipedia.org/wiki/Hans_Wilsdorf#Hans_Wilsdorf_...
Obviously thinking about it this way would cause me to miss or disbelieve a lot of true stories, but it doesn't seem right to say I should trust every outlet I see widely posted either.
those are some bold beliefs given the overall honesty in journalism and benefits to being first to publish.
The site is mercifully clean (just turn off your adblocker and see what I mean).
Their scoops are good.
The format is succinct and efficient but not "dumb" like a short twitter thread.
What's more, none of this has changed over the years, somehow avoiding enshittification.
If the deal goes sideways, the board of OpenAI (the nonprofit) could just dump everything onto the open internet. All M$ has is a substantial but minority stake in a company that the non-profit OpenAI owns all the beef of.
Honestly, I would expect more from Microsoft's attorneys, whether this was overlooked or allowed. Maybe OAI had superior leverage and MS was desperate to get into AI.
Both masterminds of ChatGPT have left the company.
Feels like Nokia 2.0
The last thing he said was: “I look forward to building AGI with you,” or the like…
I’m betting that he insulted Satya at that event or upstaged him, and that’s why he’s kicking rocks…
"Microsoft, which has invested billions in OpenAI, was blindsided by Altman’s firing, Axios reported and Semafor confirmed."
They just ended Ignite, their huge IT conference - and have revealed baking GPT into EVERYTHING they do. Everything.
The closing keynote was about the massive engineering effort put into running LLMs at scale, for MS themselves and for customers.
MS is all in on GPT, including releasing a no code and low code custom GPT builder for orgs this week.
Nonprofits can already raise funds by e.g. selling T-shirts, baked goods, AI services, etc…
There it is!!
“I look forward to building AGI with you.”
vs.
“Together, we will continue to deliver the meaningful benefits of this technology to the world.”
https://blogs.microsoft.com/blog/2023/11/17/a-statement-from...
Buying them (or getting de facto control) is clearly an easier way to achieve that, vs. replicating the technology in-house.
IMO this is the most important part of Nadella's blog post:
> Most importantly, we’re committed to delivering all of this to our customers while building for the future.
It's curious to me that they see the departure of Sam Altman as a reason to remind us that they are "building for the future" (which I take to mean: working toward independence from OpenAI). I think it actually lends credence to the theory that this was a failed power grab of some sort.
I guess they can't really migrate the users though. Maybe they will push more aggressively for people to use bing going forward.
----
A statement from Microsoft Chairman and CEO Satya Nadella Nov 17, 2023 | Microsoft Corporate Blogs
As you saw at Microsoft Ignite this week, we’re continuing to rapidly innovate for this era of AI, with over 100 announcements across the full tech stack from AI systems, models, and tools in Azure, to Copilot. Most importantly, we’re committed to delivering all of this to our customers while building for the future. We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team. Together, we will continue to deliver the meaningful benefits of this technology to the world.
I'm not saying I think it WAS that, but come on.
Through Sam Altman's public actions in places like the US Congress, it has become rather clear that his goal is to deceive and fear-monger to create an environment of regulatory capture, where misguided laws give OpenAI an unfair competitive advantage.
This might be quite in line with what Microsoft tends to like. But it also can be a risk for MS if regulation goes even a step further.
This is also in direct opposition to the goals OpenAI set for themselves, and which some of the other investors might share.
So MS being informed at the last minute, so as not to give them any chance to change that decision, is quite understandable.
At the same time, it might have been pushed under the table by people at MS who were worried it posed too much risk, but who maybe need an excuse for why they didn't stop it.
Lastly, there is the question of why Sam Altman acted the way he did. The simplest case is greed for money and power, in which case it would be worrying for business partners how bad he was at making public statements that don't make him look like a manipulative, untrustworthy **. The more complex case would be some twisted belief that an artificial pseudo-monopoly is needed "because only they [OpenAI] can do it the right way and anyone else would be a risk". In that case he would be an ideologically driven person with a seriously twisted perception of reality, i.e. the kind of person you don't want to do large-scale business with, because they are too unpredictable and generally can't be trusted. Naturally there are also a lot of other options.
But one thing I'm sure about is that many AI researchers and companies building AI products did not trust Sam Altman at all after his recent actions, so ousting him and installing a different CEO should help increase trust in OpenAI.
This comment on the OpenAI DevDay video aged really well:
> @JustinHalford: Some odd tension coming from Sam. I’m sensing some tension in the Open AI/Microsoft partnership.
And maybe to allow choosing the right people for the right job. If the non-profit has an ideological purpose, its leadership should probably reflect that. At the same time, the for-profit subsidiary probably works better under professional management.
I don’t know why so many here are struggling to accept that this guy fucked up, lied to his boss, got caught, and got fired for it, and that’s all there is to it. Boards will tolerate many things, but willfully lying to them about anything material is not one of them.
a ceo who won’t tell the board the truth is a ceo who thinks they are more important than the company. some boards don’t care, because they are already bought off with equity, but this board doesn’t get equity…
What I do know, having worked for many large organizations, is that reading the daily press (or listening to the news) is a terrible way to get accurate real-time facts about current corporate happenings.
Related: check out the Gell-Mann amnesia effect.
>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
Sam is gone, Greg is gone, this leaves: Ilya, Adam, Tasha, and Helen.
Adam: https://en.wikipedia.org/wiki/Adam_D'Angelo?useskin=vector
Tasha: https://www.usmagazine.com/celebrity-news/news/joseph-gordon... (sorry for this very low quality link, it's the best thing I could find explaining who this person is? There isn't a lot of info on her, or maybe Google results are getting polluted by this news?)
Have you read any news about Mozilla's budget in the past 10 years or so?
> Robert Bosch GmbH, including its wholly owned subsidiaries, is unusual in that it is an extremely large, privately owned corporation that is almost entirely (92%) owned by a charitable foundation. Thus, while most of the profits are invested back into the corporation to build for the future and sustain growth, nearly all of the profits distributed to shareholders are devoted to humanitarian causes.
> [...] Bosch invests 9% of its revenue on research and development, nearly double the industry average of 4.7%.
(Source: Wikipedia)
I always considered this a wonderful idea for a tech giant.
You basically never have a person in the chain actually making decisions for anything but to maximize profit.
OpenAI has pretty much the "best" model and first mover advantage. They can lose the latter, and might struggle to keep the former.
If MSFT doesn’t like this move, why wouldn’t they just … not honor those credits? Or grant more to a successor entity? Does OAI have its own warehouse of GPUs separate from Azure?
Seems like a very dangerous game for Ilya to play.
One is Joseph Gordon-Levitt's wife. You know, the actor from 500 Days of Summer.
Revenue/Expenses/Net Assets
2013: $314m/$295m/$255m
2018: $450m/$451m/$524m
2021: $600m/$340m/$1,054m
(Note: "2017 was an outlier, due in part to changes in the search revenue deal that was negotiated that year." 2019 was also much higher than both 2018 and 2020 for some reason.)
2018 to 2021 also saw their "subscription and advertising revenue" — representing their Pocket, New Tab, and VPN efforts to diversify away from dependence on Google — increase by over 900%, from $5m to $57m.
https://foundation.mozilla.org/en/who-we-are/public-records/
Seriously, Mozilla gets shat on all the time, presumably because they're one of the few sources of hope and therefore disappointment in an overall increasingly problematic Internet landscape, and I wish they would be bigger too, but they're doing fine all things considered.
Certainly I wouldn't say their problems are due to this particular aspect of their legal structure.
Microsoft has an influence over fking governments. It doesn't need to have an official board seat. It doesn't even need to ask what they want directly. It's enough for people in power to be aligned with their interests.
I'm not saying that's the case here, just pointing that having no ownership or board member in an entity doesn't rule out having power or influence.
Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...
She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.
There was literally no one at the level of Google when it came out. I still remember Infoseek and Yahoo, both were garbage compared to Goog.
I know they aren't doing their best right now, but there is no need to rewrite history. Google was always superior to their search competitors, which is why it is so sad to see their current situation.
Why? It’s hard to imagine anyone putting any significant amounts of money (in comparison to the MS deal anyway) without any exclusivity rights at least
Tough luck, considering all the obligations they have to MS now.
Which is going to be hard considering they promised MS a 49% stake in OpenAI.
Which is something AMD/Nvidia will have to take into account before agreeing to any partnership.
MS doesn't care how much money it costs; they care about the fact that it's their ticket back into the fight with Google and Apple.
Microsoft, while a large investor (one that has already reaped large rewards from that investment), explicitly has no governance role in any of the OpenAI entities, including the one at the very bottom of the stack of four that they are invested in. This was a decision about personnel matters by the board that governs the nonprofit at the top of the stack, so there is no reason to think that Microsoft would be notified in advance.
Anyway, the idea that the Chinese military or leadership will actually sacrifice a potential advantage in the name of AI safety is absurd.
First, the “OpenAI" whose profits are being discussed isn't a 501(c)3 charity, but a for-profit LLC (OpenAI Global, LLC) with three other organizations between it and the charity.
Second, charities and other nonprofits can make profits (surplus revenue), they just can't distribute them to owners (but they can have for-profit subsidiaries that return profit to them and other investors in certain circumstances).
> The whole umbrella for-profit corp they formed when they became popular should be illegal
The umbrella organization is a charity. The for profit organizations (both OpenAI Global LLC that Microsoft invests in, and its immediate holding company parent which has some other investors besides the charity) are subordinate to the charity and its goals.
> and is clearly immoral.
Not sure what moral principle and analysis you are applying to reach this conclusion.
Agreed. She’s not famous for her publications. She’s famous (and intimidating) for being a “power broker” or whatever the term is for the person who is participating in off-the-record one-on-one meetings with military generals.
> Anyway, the idea that the Chinese military or leadership will actually sacrifice a potential advantage in the name of AI safety is absurd.
The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating. There is no advantage to be gained. Any nation that does this is shooting themselves in the foot, or the heart.
But they are similar in that both involve a nonprofit controlling subordinate for-profit entities.
Closing the huge fundraising gap OpenAI had as a nonprofit by returning profits from commercial efforts instrumental to, but distinct from, the nonprofit's charitable purpose, without sacrificing any governance or control of the subordinate entity.
I think they get shat on all the time because of what you mentioned but also because they consistently fail to deliver a good browser experience for most of their still loyal users.
Most of the people I talk to who still use their product do so out of allegiance to the values of FOSS despite the dog-shit products they keep foisting upon us. You'd think we'd wise up several decades in by now.
So if he believed everything he said, it would mean he is incompetent, which just can't be true however I look at it (which means I'm 100% certain he acted dishonestly in Congress; like I said before, I'm not fully sure why, but either way it's a problem, as he lost the trust of a lot of the other people involved through that and some other actions).
I'm not the parent, but I think it's clear: if I'm a charity and I have a subordinate that is for-profit, then I'm not a charity. I'm working for profit and disguising myself for the benefits of being a charity.
Is part of the advocacy convincing nation-states that an AI arms-race is not similar to a nuclear arms-race amounting to a stalemate?
What's the best place for a non-expert to read about this?
The most obvious example is the corporate foundation, but if we believe the first result from a search, you're right in that they are controlled but not owned by the for-profit:
> A for-profit cannot own a nonprofit because a nonprofit has no owners. However, a for-profit can set up a structure in which it effectively has control over the nonprofit, subject to applicable laws, including those regarding private inurement, private benefit, and corporate self-dealing
> https://nonprofitlawblog.com/can-a-nonprofit-own-a-for-profi...
1) A long-standing disagreement between AI safety and AI profiteering, with Ilya on one side and Altman on the other. Ilya (board member) was the one who told Altman to attend a video call, then told him he was fired.
2) Some side dealing from Altman - raising new VC funds - maybe in conflict of interest with OpenAI, that was the final straw.
There also appears to be a lot of rumors about Altman's personal conduct, but even if true that doesn't seem to jibe with the official statement over the reason for his firing, or Brockman and others resigning in unison - more reflection of the internal rift.
Obviously, the for-profit subsidiary operates for profit (and where it's not a wholly owned subsidiary, it may return some profit to investors that aren't the charity), but neither the subsidiary nor the outside investors get the benefits of charity status.
OpenAI only had resources because of Microsoft, and they bit the hand that feeds them.
A more reasonable claim from your epistemic state could be something like, "There was no major crash from the news, as might be seen in a general panic."
At that time, google was one big search bar and 10 links.
It was fast, had no ads. Until IPO, they didn’t want to do ads.
Now it’s just ads on the top links.
Google is absolutely vulnerable, if someone comes up with a better search engine.
Without the top 5 tech companies, S&P500 has lackluster growth.
Microsoft has added trillions to its cap. The statement “we have all the access we need” is a powerful statement. To both OpenAI board and investors.
OpenAI is built on Azure compute. MS has invested billions of their own, they’re building their own chips now.
Essentially Microsoft is saying you can burn OpenAI to the ground, “we have everything we need” to win! - the data, the compute, the algorithms, the engineers, the capital and the market.
This is a way bigger blow to OpenAI than Microsoft.
The Girl Guides are a non-profit; they teach kids about outdoor stuff, community, whatever; they do good works, visit old folks, etc.
If for some legal reasons they had a subsidiary that sold cookies (and made a profit), with all the profits returned to the non-profit parent, I think that'd be ....fine? Right?
Thank you for your interest! :) I'd recommend skimming some of the papers cited by this working group I'm in called DISARM:SIMC4; we've tried to collect the most relevant papers here in one place:
In response to your question:
At a high level, the academic consensus is that combining AI with nuclear command & control does not increase deterrence, and yet it increases the risk of accidents, and increases the chances that terrorists can "catalyze" a great-power conflict.
So, there is no upside to be had, and there's significant downside, both in terms of increasing accidents and empowering fundamentalist terrorists (e.g. the Islamic State) which would be happy to utilize a chance to wipe the US, China, and Russia all off the map and create a "clean slate" for a new Caliphate to rule the world's ashes.
There is no reason at all to connect AI to NC3 except that AI is "the shiny new thing". Not all new things are useful in a given application.