What in the world is happening at OpenAI?
All due to one word: Greed.
[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...
What an utterly bizarre turn of events, and to have it all played out in public.
A $90 billion valuation at stake too!
“Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.”
> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join,” they write.
It’s like they distressed the company so the acquisition would look like one of mercy instead of aggression, knowing they already had their buyer lined up.
That’s like trying to create MAD with the position that you “may” launch nukes in retaliation.
It’s time to take a breath, step back, and wait until someone from OpenAI says something substantial.
https://www.youtube.com/watch?v=Gpc5_3B5xdk
The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?
I don't think she actually had anything to do with the coup, she was only slightly less blindsided than everyone else.
Did not expect to see this whole thing still escalating! WOW! What a power move by MSFT.
I'm not even sure OpenAI will exist by the end of the week at this rate. Holy moly.
Yet they start this kind of nonsense.
Not exactly focusing on building a great system or product.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
ChatGPT: Done!
Microsoft gobbles up all the talent from OpenAI, as they’ve just offered everyone a position.
So we went from "Faux NGO" to, "For profit", to "100% Closed".
Literally just last week there were articles about OpenAI paying “10 million” dollar salaries to poach top talent.
Oops.
I wonder if it will take 20 years to learn the whole story.
So this was never about safety or any such bullshit. It’s because the GPTs store was in direct competition with Poe!?
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
This is the point where I've realized I just have to wait until history is written, rather than trying to follow this in real time.
The situation is too convoluted, and too many people are playing the media to try to advance their version of the narrative.
When there is enough distance from the situation for a proper historical retrospective to be written, I look forward to getting a better view of what actually happened.
If all those employees leave and Microsoft reduces their credits, it's game over.
If you fire your founder CEO you need to be on top of messaging. Your major customers can't be surprised. There should've been an immediate all hands at the company. The interim or new CEO should be prepared. The company's communications team should put out statements that make it clear why this was happening.
Obviously they can be limited in what they can publicly say depending on the cause, but you need a good narrative regardless. Even something like "The board and Sam had a fundamental disagreement on the future direction of the company," followed by what the new strategy is, probably from the new CEO.
The interim CEO was the CEO and is going back to that role. There's a third (interim) CEO in 3 days. There were rumors the board was in talks to re-hire Sam, which is disastrous PR because it makes them look absolutely incompetent, true or not.
This is just such a massive communications and execution failure. That's why they should be fired.
what the actual fuck =O
Every time a CEO is replaced, drink.
Every time an open letter is released, drink.
Every time OpenAI is on top of HN, drink.
Every time dang shows up and begs us to log out, drink.
Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.
I’m also not as sure as some in Silicon Valley that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.
If you made a comment recently about de jure vs de facto power, step forward and collect your prize.
Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.
> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.
So, 3/4 of the current board (excluding Ilya) held on despite this letter?
[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...
Wasn't a key enabler of early transistor work that the required capital investment was modest?
SotA AI research seems to be well past that point.
https://www.levels.fyi/blog/openai-compensation.html
https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...
The stakes being heightened only decreases the likelihood that the OpenAI profit sharing will be worth anything, which only heightens the stakes further…
“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”
They were simple in principle but expensive at scale. Sounds like LLMs.
Microsoft can absorb all the employees and move them into the new AI subsidiary, which is basically an acqui-hire without buying out everyone else's shares: a new DeepMind / OpenAI research division inside the company.
So all along it was a long winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.
Or maybe _this_ week he would need to spend his time doing something productive.
First class board they have.
Well, we don't know.
What we do know is that the "coordinating the boardroom coup against Altman" bit is rumor and speculation about a thing we don't know anything about.
Perhaps it was just the original intention for OpenAI to be a nonprofit, but at some point it wasn't purely about money, and that's what makes it interesting. Also more tragic, because now it looks like it's heading straight toward being a for-profit company one way or another.
What strikes me is that he wrote the regretful-participation tweet only after witnessing the blowback. He should have written it right alongside the initial news, and clearly explained it to employees. This is not a smart way to conduct board oversight.
500 employees are not happy. I’m siding with the employees (esp. early hires); they deserve to be part of a once-in-a-lifetime company like OpenAI after working there for years.
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.
OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
Reminds me a bit of the Open AI board. Most of them I'd never heard of either.
(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).
Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.
> without all the regulatory headaches of a formal acquisition
This, too, is far from certain.
> Never attribute to malice that which is adequately explained by stupidity.
Let this be a lesson to both private and non-profit companies. Boards, investors, executives... the structure of your entity doesn't matter if you wake any of the dragons:
1. Employees 2. Customers 3. Government
This breathless real-time speculation may be fun, but now that social media amplifies the tiniest fart such that it has global reach, I feel like it just reinforces the general zeitgeist of "Oh, what the hell NOW? Everything is on fire." It's not like there's anything that we peasants can do to either influence the outcome, or adjust our own lives to accommodate the eventual reality.
Poe is in direct competition with the GPTs and the "revenue sharing" plan that Sam released on dev day.
The Poe Platform has their "Creators" build your own bot and monetize it, including OpenAI and other models.
JFC.
> We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
My understanding was that practical results were indicating your model has to be pretty large before you start getting "magic."
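For intuition, the published scaling-law fits make the "pretty large" part concrete. Here's a minimal sketch in the style of the Kaplan et al. (2020) power law; the exponent and constant are quoted from memory and purely illustrative. Note the fitted loss falls smoothly; the "magic" people report is capability jumps layered on top of that smooth curve.

    # Illustrative sketch of the Kaplan et al. (2020) power-law fit,
    # L(N) = (N_c / N) ** alpha_N, for loss vs. parameter count N.
    # alpha_N and N_c are quoted from memory; treat them as rough.
    ALPHA_N = 0.076          # fitted model-size exponent
    N_C = 8.8e13             # fitted constant, in parameters

    def predicted_loss(n_params: float) -> float:
        """Predicted cross-entropy loss (nats/token) for a model of n_params."""
        return (N_C / n_params) ** ALPHA_N

    for n in (1e8, 1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> predicted loss ~ {predicted_loss(n):.2f}")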
While Activision makes much more money I imagine, acquiring a whole division of productive, _loyal_ staffers that work well together on something as important as AI is cheap for 13B.
Some background: https://sl.bing.net/dEMu3xBWZDE
I keep hearing this, principally from Silicon Valley. It’s based on nothing. Of course this will receive both Congressional and regulatory scrutiny. (Microsoft is also likely to be sued by OpenAI’s corporate entity, on behalf of its outside investors, as are Altman and anyone who jumps ship.)
The board is being given a sanity-check; I would expect the signers intentionally left themselves a bit of room for escalation/negotiation.
How often do you win arguments by leading off with an immutable ultimatum?
Microsoft has a $2.75T market value and over $140B of cash.
Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).
Or is Microsoft saying: Here is OpenAI², a Microsoft subsidiary created just for you guys. You can run it and do whatever you want. No giant bureaucracy for you guys.
Btw: we run all of OpenAI²'s compute(?), so we know what you guys need from us there.
We own it, but you can run it and do whatever it is you want to do, and we don't bug you about it.
I would say it's due to unconventional not-battle-tested governance.
But it doesn’t have to. And the politics suggest it very likely won’t.
For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.
(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
If I may paraphrase Churchill: This has become a bit of a riddle wrapped in a mystery inside an enigma.
From our outsider, uninformed perspective, yes. But if you know more, sometimes these things become completely plannable.
I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
What might have been tens or hundreds of millions in common stakeholder equity gains will likely be single digit millions, but at least much more likely to materialize (as Microsoft RSUs).
I still think 'Altman's Basilisk' is a thing: I think somewhere in this mess there's actions taken to wrest control of an AI from somebody, probably Altman.
Altman's Basilisk also represents the idea that if a charismatic and flawed person (and everything I've seen, including the adulation, suggests Altman is that type of person from that type of background) trains an AI in their image, they can induce their own characteristics in the AI. Therefore, if you're a paranoid with a persecution complex and a zero-sum perspective on things, you can through training induce an AI to also have those characteristics, which may well persist as the AI 'takes off' and reaches superhuman intelligence.
This is not unlike humans (perhaps including Altman) experiencing and perpetuating trauma as children, and then growing to adulthood and gaining greatly expanded intelligence that is heavily, even overwhelmingly, conditioned by those formative axioms that were unquestioned in childhood.
I guess it makes sense. There has never been a company like OpenAI, in terms of governance and product, so I guess it makes sense that their drama leads us into uncharted territory.
He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.
Everyone is replaceable.
A conspiracy like the one proposed would basically be impossible to coordinate yet keep secret, especially considering the board members might lose their seats and their own market value.
I'm pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue.
Unless their mission was making MS the biggest AI company, working for MS will make the problem worse and kill their mission completely.
Or they are pretty naive.
This is some Succession-level shenanigans going on here.
Jesse Eisenberg to play Altman this time around?
Studying revolutions is revealing - they are rarely the inevitable product of historical forces, executed to the plans of strategic-minded players... instead they are often accidental and inexplicable. Those credited as their masterminds were often trying to stop them. Rather than inevitable, there was often progress in the opposite direction, making people feel the likelihood was decreasing. The confusing, paradoxical mess of great events doesn't make for a good story to tell others, though.
Likewise, these workers who threatened to quit OpenAI out of loyalty to Altman now need to follow through sooner rather than later, so their actions are clearly viewed in the context of Altman’s firing.
In the meantime, how can the public resume work on API integrations without knowing when the MS versions will come online, or whether they will be binary-interoperable with the OpenAI servers that could seemingly go down at any moment?
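One practical hedge while that shakes out: keep the endpoint and auth details in one place so your call sites don't care which compatible host answers. A minimal sketch; the AZURE_* environment variables and the "api-key" header are my assumptions for illustration, not confirmed details of any MS-hosted replacement.

    # Sketch: isolate endpoint/auth so an integration can be repointed if the
    # servers move. The AZURE_* variables and "api-key" header are assumptions
    # for illustration, not confirmed interop details.
    import os
    import requests

    def chat(messages, model="gpt-4"):
        if os.environ.get("USE_AZURE"):                      # hypothetical switch
            url = os.environ["AZURE_CHAT_URL"]               # full deployment URL
            headers = {"api-key": os.environ["AZURE_API_KEY"]}
        else:
            url = "https://api.openai.com/v1/chat/completions"
            headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
        resp = requests.post(url, headers=headers,
                             json={"model": model, "messages": messages},
                             timeout=60)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # e.g. chat([{"role": "user", "content": "ping"}])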
Speculation is just on motivation, the facts are easy to establish.
The most plausible scenario here is that the board is comprised of people lacking in foresight who did something stupid. A lot of people are generating a 5D chess plot orchestrated by Microsoft in their heads.
And there's no evidence Microsoft was an instigator of the drama.
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.
If anything this proves that everybody is replaceable and fireable; they should be happy, because usually that treatment is reserved only for workers.
Whatever made OpenAI successful will still be there within the company. Next man up philosophy has built so many amazing organizations and ruined none.
The old Twitter did not decide to randomly detonate itself when it was worth $80 billion. In fact they found a sucker to sell to, right before the market crashed on perpetually loss-making companies like Twitter.
Employees are the most affected stakeholders here, and the board utterly failed in its duty of care towards people who were not properly represented in the board room. One thing they could do is unionize and then demand a board seat.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely the $10b-to-$13b headline-grabbing number is a total estimated figure that represents a sum of future incremental investments (and Azure usage credits, etc.) based on agreed performance milestones from OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.
I don't need to follow in real time, but a lot of the context and nuance can be clearly understood in the moment, so it still helps to follow along even if that means lagging on the input.
How far along were they on GPT-5?
1. Board decides to can Sam and Greg. 2. Hides the real reasons. 3. Thinks it can keep the OpenAI staff in the dark about it. 4. Crashes the future $90b stock sale to zero.
What have we learned: 1. If you hide the reasons for a decision, it may become the worst version of that decision, in substance or in execution, through your own lack of ownership of it. 2. Titles, shares, etc. are not control points. The control points are the relationships between the company's problem solvers and the firm's existential stakeholders.
The board itself, absent Sam and Greg, never had a good poker hand; they needed to fold some time ago, before this last weekend. Look at it this way: for $13B in cloud credits MS is getting a team that will add $1T to their future worth....
A good example of how just having your foot in the door creates serendipitous opportunity in life.
Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.
Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.
Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.
It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.
There are also all the questions around RLHF, and the pipelines to think about there.
On our present stage there is no director, no stage manager; the set is on fire. There are multiple actors - with more showing up by the minute - some of whom were working off a script that not everyone has seen, and that is now being rewritten on the fly, while others don't have any kind of script at all. They were sent for; they have appeared to take their place in the proceedings with no real understanding of what those are, like Rosencrantz and Guildenstern.
This is kind of what the end thesis of War and Peace was like - there's no possible way that Napoleon could actually have known what was happening everywhere on the battlefield - by the time he learned something had happened, events on the scene had already advanced well past it; and the local commanders had no good understanding of the overall situation, they could only play their bit parts. And in time, these threads of ignorance wove a tale of a Great Victory, won by the Great Man Himself.
frik on April 25, 2014:
> The Nokia fate will be remembered as hostile takeover. Everything worked out in the favor of Microsoft in the end. Though Windows Phone/Tablet have low market share, a lot lower than expected.
> * Stephen Elop the former Microsoft employee (head of the Business Division) and later Nokia CEO with his infamous "Burning Platform" memo: http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia
> * Some former Nokia employees called it "Elop = hostile takeover of a company for a minimum price through CEO infiltration": https://gizmodo.com/how-nokia-employees-are-reacting-to-the-...
For the record: I don't actually believe that there is an evil Microsoft master plan. I just find it sad that Microsoft takes over cool stuff and inevitably turns it into Microsoft™ stuff or abandons it.
I don't think this is what Ilya is trying to do. His tweet is clearly about preserving the organization because he sees the structure itself as helpful, beyond his role in it.
> (Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
Those two things happening one after another is not coincidence.
Part of suing is to ensure compliance with agreements. There is a lot of IP that Microsoft may not have a license to that these employees have. There are also legitimate questions about conflicts of interests, particularly with a former executive, et cetera.
> pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue
Sure. I'm not suggesting anyone did anything illegal. Just that it will be litigated over from every direction.
Ilya (at the urging of Satya and his colleagues including Mira) wanted to reinstate Sam, but the deal fell through with the board outvoting Sutskever 3 to 1. With Mira deflecting, Adam got his mate Emmett to steady the ship, but things went nuclear.
This situation's drama is overwhelming, and it seems like it's making HN's servers melt down.
Ilya votes for and stands behind the decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them. Just madness.
But then, I would expect MS to have done their due diligence.
So, basically, I guess I’m just interested to know what were the reasons why the board decided to oust their CEO out of the blue on a Friday evening.
So her move wasn't stupid at all. She obviously knew people working there respected the leadership of the company.
If 550 people leave OpenAI you might as well just shut it down and sell the IP to Microsoft.
Microsoft just won the jackpot, time to get some stocks there.
I think he said once that there's an ETF that trades on when he takes vacations, because they keep coinciding with Events Of Note.
Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
Satya is way smarter than that. I wouldn't be shocked if they have completely free rein to do whatever, but with the full resources of MS/Azure to enable it, and Microsoft just gets % ownership and priority access.
This is a gamble for the foundation of the entire next generation of computing, no way are they going to screw it up like that in the Satya era.
To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.
I wrote some notes on how to support someone who is grieving. This is from a book called "Being There for Someone in Grief." Some of the following are quotes and some are paraphrased.
Do your own work, relax your expectations, be more curious than afraid. If you can do that, you can be a powerful healing force. People don't need us to pull their attention away from their own process to listen to our stories. Instead, they need us to give them the things they cannot get themselves: a safe container, our non-intrusive attention, and our faith in their ability to traverse this road.
When you or someone else is angry, or sad, feel and acknowledge your emotions or their emotions. Sit with them.
To help someone heal from grief, we need to have an open heart and the courage to resist our instinct to rescue them. When someone you care about is grieving, you might be shaken as well. The drama of it catches you; you might feel anxious. It brings up past losses and fears of yourself or fears of the future. We want to take our own pain away, so we try to take their pain away. We want to help the other person feel better, which is understandable but not helpful.
Avoid giving advice, talking too much, not listening generously, trying to fix, making demands, disappearing. Do see the other person without acting on the urge to do something. Do give them unconditional compassion free of projection and criticism. Do allow them to do what they need to do. Do listen to them if they need to talk without interruptions, without asking questions, without telling your own story. Do trust them that they don't need to be rescued; they just need your quiet, steady faith in their resilience.
Being there for someone in grief is mostly about how to be with them. There's not that much you can "do," but what can you do? Beauty is soothing, so bring fresh flowers, offer to take them somewhere in nature for a walk, send them a beautiful card, bring them a candle, water their flowers, plant a tree in their honor and take a photo of it, take them there to see it, tell them a beautiful story from your memory about what was lost, leave them a message to tell them “I’m thinking of you”. When you’re together with them in person, you can just say something like "I'm sorry that you're hurting," and then just be there and be a loving presence. This is about how to be with someone grieving the loss of a person, but all the same principles apply in any situation of grief, and there will be a lot of people experiencing varying degrees of grief in the startup and AI ecosystems in the coming week.
Who is grieving? Grieving is generally about loss, and that loss can be many different kinds of things. OpenAI former and current team members, board members, investors, customers, supporters, fans, detractors, EA people, e/acc people: lots of people experienced some kind of loss in the past few days, and many of them will be grieving, whether they realize it or not. Particularly current and former OpenAI employees.
What are other emotional-regulation strategies? Swedish massage; going for a run; deep breathing with five seconds in, a zero-second hold, five seconds out; going to sleep or having a nap; closing your eyes and visualizing parts of your body as heavy blocks of concrete, or as upside-down balloons, then visualizing those balloons emptying themselves out (or, if it's concrete, first it's concrete and then it's kind of liquefied concrete). Consider grabbing some friends and going for a run or exercise class together. Then, if you discuss, keep it to emotions; don't discuss theories and opinions until the emotions have been aired. If you work at OpenAI or a similar org, encourage your team members to move together and regulate together.
This too is far from certain. The funding and credits were at best tied to milestones, and at worst the investment contract is already broken and MSFT can walk.
I suspect they would not actually do the latter, and the IP is tied to continued partnership.
I don't know if it was 3 or 4 in the end, but it may very well have been possible with just 3.
To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.
er...what does that even mean? how can a board "take full control" of the thing they are the board for? they already have full control.
the actual facts are that the board, by majority vote, sacked the CEO and kicked someone else off the board.
then a lot of other stuff happened that's still becoming clear.
> Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
???
I personally suspect Ilya tried to do the best for OpenAI and humanity he could but it backfired/they underestimated Altman, and now is doing the best he can to minimize the damage.
Almost literally - this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.
Or possibly some misinformation. It does seem very strange, and more than a little confusing.
I have to keep reminding myself that information ultimately sourced from Twitter/X threads can't necessarily be taken at face value. Whatever the situation, I'm sure it will become clearer over the next few days.
Ilya's role is a Chief Scientist. It may be fair to give at least some benefit of doubt. He was vocal/direct/binary, and also vocally apologized and worked back. In human dynamics – I'd usually look for the silent orchestrator behind the scenes that nobody talks about.
> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Is the board taking a doomer perspective and seeking to prevent the company developing unsafe AI? But Emmett Shear said it wasn’t about safety? What on earth is going on?
Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?
But stranger things have happened. One day I may be very very VERY surprised.
Ilya wanted to stop Sam getting so much credit for OpenAI, agreed to oust him, and is now facing the fact that the company he cofounded could be gone. He backtracks, apologizes, and is now trying to save his status as cofounder of the world's foremost AI company.
surely the really self-destructive gamble was hiring him? he's a venture capitalist with weird beliefs about AI and privacy, why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
200 people or even 50 of the right people who are definitely going to resign will be much stronger than 500+ who "may" resign.
Disclaimer that this is a ludicrously difficult situation for all these folks, and my critique here is made from far outside the arena. I am in no way claiming that I would be executing this better in actual reality and I'm extremely fortunate not to be in their shoes.
Which is to say, what's your alternative for a better explanation? (other than the "cui bono?" one, that is).
Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.
Well, a younger me working at OpenAI would resign at the latest after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, starting the search after CEO number 2 at the latest.
The cycles get faster though. It took FTX a little bit longer to go from hottest startup to the crash-and-burn trajectory; OpenAI did it faster. I just hope this helps cool down the ML-sold-as-AI hype a notch.
The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology has to be developed and used only in ways that will be profitable for them.
Yeah, M$ hasn't had a good reputation. I finally left Windows this year because I'm afraid of them after Win11.
2023/4 will be the year of the Linux Desktop in retrospect. (or at least my family's religion deemed it)
Makes sense in a conspiracy-theory mindset. AGI takes over, crashes $MSFT, buys calls on $MSFT; then this morning the markets go up when Sam & co join MSFT, and the AGI has tons of money to spend.
“That thing you did — we won’t say it here but everyone will know what we’re talking about — was so bad we need you to all quit. We demand that a new board never does that thing we didn’t say ever again. If you don’t do this then quite a few of us are going to give some serious thought to going home and taking our ball with us.”
The vagueness and half-threats come off as very puerile.
ignorance of the political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.
1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...
Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.
People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.
Us humans, even the AI-assisted ones, are terrible at thinking beyond 2nd-level consequences.
This is an argumentum ad odium fallacy.
To keep the spotlight on the most glaring detail here: one of the board members stands to gain from letting OpenAI implode, and that board member is instrumental in this week's drama.
Sounds like Altman's biography.
Will they do the good-guy thing and match everyone's packages?
Well... he requires tens of billions from MSFT either way. This is not a ramen-scrappy kind of play. Meanwhile, Sam could easily become CEO of Microsoft himself.
At that scale of financing... this is not a bunch of scrappy young lads in a bureaucracy-free basement. The whole thing is bigger than most national militaries. There are going to be bureaucracies... and Sam is as able to handle these cats as anyone.
This is a big money, dragon level play. It's not a proverbial yc company kind of thing.
Seems a bit of a lose-lose for MSFT and OpenAI, even if best that MSFT could do to contain the situation. Competitors must be happy.
The subject in that sentence that takes full control is “3 members”, not “board”.
The board has control, but who controls the board changes based on time and circumstances.
Poor ChatGPT, it doesn't know that it cannot function if OpenAI goes bust.
As for employees en masse acting publicly disloyal to their employer: usually not a good career move.
2. All those employees quit, most of whom go to MSFT. But they don’t keep their tech and have to start all their projects from scratch. MSFT is eventually able to buy OpenAI for pennies on the dollar.
3. Same as 2, basically just shuts down or maybe someone like AMZN buys it.
All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.
Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.
He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.
[0] Never attribute to malice that which is adequately explained by incompetence.
If Microsoft plays its cards right, Satya Nadella will look like a genius and Microsoft will get ChatGPT-like functionality for cheap.
Nope. That only holds true for mediocre employees, not above. The world class in their field aren't replaceable; otherwise there would be no OpenAI.
But I don't think I have seen/heard of a CEO this loved by the employees. Whatever he is, he must be pleasant to work with.
The idea that Microsoft is going to control OpenAI does not exactly fill me with confidence.
Probably the best outcome is a bunch of talented devs going out and seeding the beginning of another AI boom across many more companies. Microsoft looks like the primary beneficiary here, but there's no reason new startups can't emerge.
Without full buy-in they are not going to be able to control it for long once ideas filter into society and researchers filter into other industries/companies. At most it creates a model of behaviour for others to (optionally) follow, and delays things until a better-funded competitor takes the reins and offers a) the best researchers millions of dollars a year in salary, b) the most capital to organize/run operations, and c) the most focus on getting it into real people's hands via productization, which generates feedback loops that inform real-world R&D (not just hand-wavy AGI hopes and dreams).
Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us in the near term versus maybe another 50 years; we really have no idea.
It's just as plausible, or maybe more so, that all the handwringing over commercializing vs not commercializing early versions of LLMs is just a tiny, insignificant speed bump in the grand scale of things, with little impact on the development of AGI.
Open source (read, truly open source models, not falsely advertised source-available ones) will march on and take their place.
also known as "never attribute to malice that which can be explained by incompetence", which to my gut sounds at least as likely as a cui bono explanation tbh (which is not to be seen as an endorsement of the view that cui bono = conspiracy...)
It's been a busy weekend for me so I haven't really followed it if more has come out since then.
The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. So they were going to put in his place their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].
Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word to describe such elements: "terrorist." I was taught to look up when assessing actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown SF and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!
I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything that you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.
[1]: >>38330819
[2]: https://nitter.net/jeremyphoward/status/1725712220955586899
Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits, but definitely not in any position of power anymore.
When you play the game of thrones, you win or you die
Just because you are a genius in one domain does not mean you are in another
What’s funny is that everyone initially “accepted” the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, which has culminated in this tidal shift.
It will make a fascinating case study some day on how not to fire your CEO
Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "needs lots of money but can't make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen and empowered by a whole bunch of hypermotivated-through-retribution leaders.
The board is going to be overseeing a company of 10 people as things are going.
Easily… anywhere except at a megacorp, where a privacy review takes months and you can expect to make about a quarter's worth of progress a year.
> The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology has to be developed and used only in ways that will be profitable for them.
How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.
I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.
I genuinely feel like this is going to set back AI progress by a decent amount. While everyone is racing to catch OpenAI, I was still expecting them to keep a reasonable lead. If OpenAI falls apart, this could delay progress by a couple of years.
They're a $2+ trillion company. They're doing something right.
The most awesome fic I could come up with so far: Elon Musk is running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders at OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with each other, so each individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.
I can picture Musk's maniacal laugh as the plan unfolds and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid, X Æ A-Xi.
It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.
The only part left of the non-profit was the board, all the employees and operations are in the for-profit entity. Since employees now demand the board should resign there will be nothing left of the non-profit after this. Puppets that are aligned with for-profit interests will be installed instead and the for-profit can act like a regular for-profit without being tied to the old ideals.
At most, all we have is some rumours that some board members were unhappy with the pace of commercialization of ChatGPT. But even if they hadn't made the ChatGPT store or done a big-co-friendly dev day PowerPoint, it's not like AI suddenly becomes 'safer' or AGI more controlled.
At best that's just an internal culture battle over product development and a clash of personalities. A lot of handwringing with little specifics.
For starters it allows them to pretend that it's "underdog vs. Google" and not "two tech giants at each other's throats".
OpenAI -- and "the market" -- incorrectly feel like OpenAI has some huge insurmountable advantage in doing AI stuff; but at the end of the day pretty much all the models are or will be effectively open source (or open-source-ish), meaning they don't necessarily have much advantage at all, and therefore all of this is just irrational exuberance playing out?
And maybe it’s not. The big mistake people make is hearing "non-profit" and thinking it means there's a greater amount of morality. It’s the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).
Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing the motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.
In this case recognizing the need for a new board, that adheres to the founding principles, makes sense.
Microsoft is a substantial shareholder (49%) in that for-profit subsidiary, so the value of Microsoft's asset has presumably reduced due to OpenAI's board decisions.
OpenAI's board decisions which resulted in these events appear to have been improperly conducted: Two of the board's members weren't aware of its deliberations, or the outcome until the last minute, notably the chair of the board. A board's decisions have legal weight because they are collective. It's allowed to patch them up after if the board agrees, for people to take breaks, etc. But if some directors intentionally excluded other directors from such a major decision (and formal deliberations), affecting the value and future of the company, that leaves the board's decision open to legal challenges.
Hypothetically Microsoft could sue and offer to settle. Then OpenAI might not have enough funds if it were to lose, so it might have to sell shares in the for-profit subsidiary, or transfer them. Microsoft only needs about 2% more to become majority shareholder of the for-profit subsidiary, which runs the ChatGPT services.
In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, yet not at any price – and the price is right now approaching the destruction of OpenAI – is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.
Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.
I'm not joking.
I am holding out hope that a breakthrough will create a disruptive LLM/AI tech, but until then...
Their development and QA process is either disorganized to the extreme, or non-existent.
The scene appears completely blurry by now! My head is spinning, and the fan is in 7th gear. I believe only time will apply some sort of sharpness filter to make you realize what's really going on. I feel like I'm watching The Italian Job done the American way; everything and everyone is suspicious to me at this point! Is it possible that MSFT played some tricks behind the scenes?
I mean, they were literally able to fire him... and they're still not looking like they have control. Quite the opposite.
I think anyone watching ChatGPT rise over the last year would see where the currents are flowing.
No one knows why the board did this. No one is talking about that part. Yet everyone is on Twitter talking shit about the situation.
I have worked with a lot of PhD's and some of them can be, "disconnected" from anything that isn't their research.
This looks a lot like that, disconnected from what average people would do, almost childlike (not ish, like).
Maybe this isn't the group of people who should be responsible for "alignment".
Maybe someone thinks Sam was “not consistently candid” about the fact that one of the feature bullets in the latest release was dropping d'Angelo's Poe directly into the ChatGPT app for no additional charge.
Given the dev day timing and the update releasing these "GPTs", this is an entirely plausible timeline.
https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
No idealistic vision can compensate for that.
This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, so their way to get a big payout is to get hired rather than acquired.
Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.
https://nonprofitquarterly.org/newmans-philanthropic-excepti...
> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:
The business must be owned by the private foundation through 100 percent ownership of the voting stock.
The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
All profits of the business must be distributed to the foundation.
Even the clown car isn't this bad.
I would hope the California AG is all over this whole situation. There's a lot of fishy stuff going on already, and the idea that nonprofit IP / trade secrets are going to be stolen and privatized by Microsoft seems pretty messed up.
The leaders may be motivated by retribution, but I'm sure none of the leaders or researchers really want to be a division of MSFT rather than a cool start-up. Many developers may choose to stay in SF and create their own startups, or join others. Signing the letter isn't a commitment to go to MSFT - just a way to pressure for a return to the status quo they were happy with.
Not everyone is going to stay with OpenAI or move to MSFT - some developers will move elsewhere and the knowledge of OpenAI's secret sauce will spread.
The technology sub (not that there's anything special about it other than being big) has had a post up since very early this morning, so there are likely others as well.
Eshear is the new CEO. This implosion is not his fault. His reputation is not destroyed.
He can rebuild the non-profit part, where success or failure is hard to determine anyway. Then he will leave in a few years.
He doesn't seem to have much to lose by just focusing on rebuilding OpenAI.
They are rumored to compete with each other to the point of actually having a negative impact.
Do you not trust Microsoft's public statement that jobs are waiting for anyone that decides to leave OpenAI? Considering their two decade adventure with Xbox and their $72bln in profits last year, on top of a $144bln in cash reserves, I wouldn't be surprised if Microsoft is able (and willing) to match most comp packages considering what's at stake. Maybe not everyone, but most.
Not that I had any illusions about this being a fig leaf in the first place.
If it is just ML sold as AI hype, are you really worried about the threat of AI?
Are you talking about American hospitals?
My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.
OpenAI struck me as one of the few companies where that's not been allowed to take root: the goal is to ship, and if there's an impediment to that, everyone is aligned on removing said impediment, even if it means bending your own corner's priorities.
Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that publicly risked their own skin and organized a shared declaration on it.
The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech
Let's take personalities out of it and see if it makes more sense:
How could a new supply of highly optimized, lower-cost AI hardware benefit OpenAI?
Pick a different target and move on.
That in itself is not critical in the mid to long term; what is critical is how fast they figure out WTF they want and recover from it.
The stakes are gigantic. They may even have AGI cooking inside.
My interpretation is relatively basic, and maybe simplistic but here it is:
- Ilya had some grievances with Sam Altman rushing development and releases, and with his COI with his other new ventures.
- Adam was alarmed by GPTs competing with his recently launched Poe.
- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.
- They decided to organize a coup, but Ilya didn't think it would go this much out of hand, while the other three saw only power and $$$ in sticking to their guns.
That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.
It says 3 board members found themselves in a position to take over OpenAI.
Do they mean we've seen Sam Altman and allies making a bid to take over the entirety of OpenAI, through its weird Charity+LLC+Holding company+LLC+Microsoft structure, eschewing its goals of openness and safety in pursuit of short-sighted riches?
Or do they mean we've seen The Board making a bid to take over the entirety of OpenAI, by ousting Glorious Leader Sam Altman, while his team was going from strength to strength?
Because using only one is pretty cool.
https://twitter.com/karaswisher/status/1726599700961521762?s...
Sadly, I see nefarious purposes afoot. With $MSFT now in charge, I can see why ads in W11 aren't so important. For now.
In addition, public hospitals still charge for their services, it's just who pays the bill that changes, in some nations (the government as the insuring body vs a private insuring body or the individual).
insane
wow, this is a crazy detail
> Never attribute to malice that which is adequately explained by stupidity (1), but don't rule out malice. (2)
Really, though, it's getting beyond hilarious. And I reckon Nadella is chuckling quietly to himself as he makes another nineteen-dimensional chess move.
I've been using Linux for a while. Since 2010 I sort of actively try to avoid using anything else. (On desktops/laptops.)
They don't make large profits; otherwise they wouldn't be nonprofits. They do have massive revenues, and they will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but turning a profit is one thing they can't do.
I get that HN takes pride in the amount of traffic that poor server can handle, but scaling out is long overdue. Every time there's a small surge of traffic like today, the site becomes unusable.
The danger of generative AI is that it disrupts all kinds of things: art, writing, journalism, propaganda... That threat already exists; the tech no longer being hyped might allow us to properly address that problem.
The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.
But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.
[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...
I'm wondering why that option hasn't been used yet.
Priceless. The modern version of Pascal's wager.
I have no inside information. I don't know anyone at Open AI. This is all purely speculation.
Now that that's out of the way, here is my guess: money.
These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.
Once again, this is completely all speculation. I have not spoken to anyone at Open AI or anyone at Microsoft or anyone at all really.
at the end of the day, the people working there are not rich like the founders and money talks when you have to pay rent, eat and send your kids to a private college.
Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.
it's worth noting that Microsoft's supposed $13 billion contribution to OpenAI doesn't fully materialize as cash; a large portion of it comes in the form of Azure credits.
this scenario might turn into the most cost-effective takeover for Microsoft: acquiring a corporation valued at $90 billion for a relatively trifling sum.
They seem more like the sort of people you'd see running wikimedia.
Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?
Wut?
This is software, not law. The industry is notorious for people jumping ship every couple of years.
Honestly, I think they did that to themselves.
Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].
If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.
[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...
If they let msft "loot" all their IP then they lose any type of leverage they might still have, and if they did it due to some ideological reason I could see why they might prefer to choose a scorched earth policy.
Given that they refused to resign, it seems they prefer to fight rather than hand it to Sam Altman, which is what the MSFT maneuver looks like de facto.
What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.
Any reason good enough to fire him is good enough to share with the interim CEO and the rest of the company, if not the entire world. If they can’t even do that much, you can’t blame employees for losing faith in their leadership. They couldn’t even tell SAM ALTMAN why, and he was the one getting fired!
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:
https://news.ycombinator.com/item?id=38347868&p=2
https://news.ycombinator.com/item?id=38347868&p=3
https://news.ycombinator.com/item?id=38347868&p=4
etc...
EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...
Hurray also for the reality check on corporate governance.
- Any Board can do whatever it has the votes for.
- It can dilute anyone's stock, or everyone's.
- It can fire anyone for any reason, and give no reasons.
Boards are largely disciplined not by actual responsibility to stakeholders or shareholders, but by reputational concerns relative to their continuing and future positions - status. In the case of for-profit boards, that does translate directly to upholding shareholder interest, as board members are reliable delegates of a significant investing coalition.
For non-profits, status typically also translates to funding. But when any non-profit has healthy reserves, they are at extreme risk, because the Board is less concerned about its reputation and can become trapped in ideological fashion. That's particularly true for so-called independent board members brought in for their perspectives, and when the potential value of the nonprofit is, well, huge.
This potential for escape from status duty is stronger in our tribalized world, where Board members who welch on larger social concerns or even their own patrons can nonetheless retreat to their (often wealthy) sub-tribe with their dignity intact.
It's ironic that we have so many examples of leadership breakdown as AI comes to the fore. Checks and balances designed to integrate perspectives have fallen prey to game-theoretic strategies in politics and business.
Wouldn't it be nice if we could just build an AI to do the work of boards and Congress, integrating various concerns in a roughly fair and mostly-predictable fashion, so we could stop wasting time on endless leadership contests and their social costs?
It is a great time to be a lobbyist.
That's what I didn't understand about the world of the really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even more wealthy, even the ones that could fund their families for the next five generations. With a few very notable exceptions.
If you’re making like 250k cash and were promised $1M a year in now-worthless paper, plus you have OpenAI on the resume and are one of the most in-demand people in the world? It would be ridiculously easy to quit.
Since OpenAI's commercial aspects are doomed now, and it is uncertain whether they can continue operations if Microsoft withholds resources and consumers switch to alternative LLM/embedding services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.
The sacking would never have happened without his vote; and he must have thought about it before he acted.
I hope he comes up with a proper explanation of his actions soon (not just a tweet).
I wonder if this is the end of the non-profit/hybrid model?
1. Voting out the chairman, with the chairman abstaining, needs only 3/5.
2. Voting out the CEO then requires 3/4?
Did Ilya have to vote?
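A back-of-the-envelope sketch of that arithmetic, assuming a six-member board and simple-majority rules (the actual bylaws aren't public, so this is only a guess):

    # Hypothetical vote math: assumes the six-member board (Altman, Brockman,
    # Sutskever, D'Angelo, McCauley, Toner) and simple-majority rules.
    def majority(n_voters: int) -> int:
        """Smallest strict majority of n_voters."""
        return n_voters // 2 + 1

    # Step 1: vote out the chairman (Brockman), with the chairman abstaining.
    print(majority(5))  # 3 -- the "3/5" above

    # Step 2: fire the CEO (Altman), with Altman and Brockman both excluded.
    print(majority(4))  # 3 -- three of the remaining four would suffice

On those assumptions, Ilya's vote wasn't strictly required, but it removed any doubt about a majority.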
It was not possible for a war-time government crash project to have been managed the same way. During WW2 the existential fear was an embodied threat currently happening. No one was even thinking about a potential for profits or even any additional products aside from an atomic bomb. And if anyone had ideas on how to pursue that bomb that seemed like a decent idea, they would have been funded to pursue them.
And this is not even mentioning the fact that security was tight.
I'm sure there were scientists who disagreed with how the Manhattan project was being managed. I'm also sure they kept working on it despite those disagreements.
I just bothered to look at the full OpenAI board composition. Besides Ilya Sutskever and Greg Brockman, why are these people eligible to be on the OpenAI board? Such young people, calling themselves "President of this", "Director of that".
- Adam D'Angelo — Quora CEO (no clue what he's doing on OpenAI board)
- Tasha McCauley — a "management scientist" (this is a new term for me); whatever that means
- Helen Toner — I don't know what exactly she does, again, "something-something Director of strategy" at Georgetown University, for such a young person
No wise veterans here to temper the adrenaline?
Edit: the term clusterf*** comes to mind here.
He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI are big money or unemployment.
It seems plausible to me that if the not-for-profit's concern was commercialisation, then there was really nothing the commercial side could do to appease this concern besides die. The board wants rid of all the employees and to kill off any potential business; they have the power and right to do that, and it looks like they are.
Lots of reasons, or possible reasons:
1. They think Altman is a skilled and competent leader.
2. They think the board is unskilled and incompetent.
3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.
4. They disagree with, or are ambivalent towards, the non-profit's mission. (Charters are not immutable.)
Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?
It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.
It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.
https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm
Substitute with appropriate ex-Soviet doomer music as necessary
Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.
Firing Sam means all these OpenAI people who joined for $1 million comp packages looking for an eventual huge exit now don't get that.
They all want the same thing as the vast majority of people: lots of money.
You go to bat for your mates, and this is what they’re doing for him.
The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it’s a core ingredient to success. Since OAI is very successful it’s fair to say the sense of togetherness is very strong. Hence the numbers of folks in the walk out.
I'd imagine there's some internal political drama going on or something we're missing out on.
What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.
Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.
Not if you think the utterly incompetent board proved itself totally untrustworthy of safe development, while Microsoft as a relatively conservative, staid corporation is seen as ultimately far more trustworthy.
Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.
It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.
And relative to startups, Microsoft is far more predictable and less risky in how it manages things.
I've only really been close to one (the owner of the small company I worked at started one), and in the past I did some consulting work for another, but that describes what I saw in both situations fairly aptly. There seems to be a massive amount of power and ego wrapped up in the creation and running of these things, from my limited experience. If you were invited to a board, that's one thing; but it takes a lot of time and effort to start up a non-profit, and that's time and effort that could usually be spent on some other existing non-profit, so I think it's relevant to consider why someone would opt for the much more complicated and harder route rather than just donating time and money to something else that helps in roughly the same way.
But I heard it usually takes ~5 days to show up there anyway.
Profit is money that ends up in the bank to be used later. Compensation is what gets spent on yachts. Anything spent on hospital supplies is an expense. This stuff matters.
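Toy numbers (entirely made up) to illustrate the distinction:

    # Hypothetical nonprofit-hospital books, illustrative only.
    revenue      = 1_000_000  # what the hospital collects
    supplies     =   400_000  # an expense: spent on the mission
    compensation =   450_000  # also an expense -- and what yachts get bought with
    profit = revenue - (supplies + compensation)
    print(profit)  # 150_000 stays in the bank for later use; a nonprofit may
                   # retain this surplus, it just can't pay it out to owners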
Then where do these profits go?
Is this the end of non-profit/profit-capped AI development? Would anyone else attempt this model again?
Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.
Search and Business Tools were misses, but they more than made up for it with Cloud, Infra, and Security.
Also, Nadella was Ballmer's pick.
Privacy is out the window, because these models and technologies will be scraping the entire internet, and governments/big tech will be able to scrape it all and correlate language patterns across identities to associate your different online egos.
The Internet that could be both anonymous and engaging is going to die. You won't be able to tell whether the entity at the other end of a discussion forum is human or not. This is a sad end of an era for the Internet, worse than the big-tech conglomeration of the 2010s.
The ability to trust news and videos will be even more difficult. I have a friend who talks about how Tiktok is the "real source of truth" because big media is just controlled by megacorps and in bed with the government. So now a bunch of seemingly authentic people will be able to post random bullshit on Tiktok/Instagram with convincing audio/video evidence that is totally fake. A lie gets around the world before the truth gets its shoes on.
---
So, I wonder which side of this war is more aware and concerned about these impacts?
Getting Cochrane vibes from Star Trek there.
> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.
I wonder how history will view Sam Altman
Signing that letter started off as a small trend. Past critical mass, if you are not signing that letter, you are an enemy.
Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.
The fact that Altman and Brockman were hired so quickly by Microsoft gives a clue: it takes time to hire someone. For one thing, they need time to decide. These guys were hired by Microsoft between close-of-business on Friday and start-of-business on Monday.
My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".
It's easy to be a true believer in the mission _before_ all the money is on the table...
In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.
Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start before any details were available
https://en.wikipedia.org/wiki/501(c)_organization
"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"
However, many other forms of organizations can be non-profit, with utterly no implied morality.
Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].
Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes. https://stratechery.com/2023/openais-misalignment-and-micros...
I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."
Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?
I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.
This, unequivocally .... Knowing how not to waste a very expensive training run is a great lesson.
Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.
Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.
Did anyone else find Altman conspicuously cooperative with government during his interview at Congress? Usually people are a bit more combative. Like he came off as almost pre-slavish? I hope that's not the case, but I haven't seen any real position on human rights.
Furthermore, it's consistent with all available information that they would prefer to continue without Sam, but they would rather have Sam than lose the company, and now that Microsoft has put its foot down, they'd rather settle.
If this incident is representative, I'm not sure there was ever a possibility of good governance.
What planet are you living on?
Can only work when you have the advantage of being the dominant product in the marketplace -- but I gotta hand it to the board, I couldn't have done it better myself.
In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.
How those numbers line up to the rest of the world does not matter.
So yeah, Mayo Clinic makes a $2B profit. That is not money going to shareholders, though; it's funds for a future building, or increasing salaries, or expanding research, or something. It supposedly has to be used for the mission. What is the outrage over these orgs making this kind of profit?
https://docs.google.com/document/d/1SWnabqe1PviVE3K7KIZsN4IA...
It is wrong to assume Microsoft cannot build a safe AI, especially within a separate OpenAI-2; they may do better than the for-profit inside a non-profit structure did.
Reading Matt Levine is such a joy.
I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.
Maybe because the alternative is being led by lunatics who think like this:
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
to which the only possible reaction is
What
The
Fuck?
That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.
If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.
> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.
Dustin Moskovitz isn't on the board but gave OpenAI $30M in funding via his non-profit Open Philanthropy [0].
Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.
Helen Toner was probably brought in due to her past work at Open Philanthropy - a Dustin Moskovitz-funded non-profit working on OpenAI-type initiatives - and was also close to Sam Altman. They also gave OpenAI the initial $30M [0].
Essentially, this is a Donor versus Investor battle. The donors aren't gonna make money off OpenAI's commercial endeavors, which began in 2019.
It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.
[0] - https://www.openphilanthropy.org/grants/openai-general-suppo...
Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.
They want to develop powerful shit, do it at an accelerated pace, and make money in the process, not be hamstrung by busybodies.
The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.
This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.
OpenAI will have a harder time keeping secret sauces from leaking out, and productivity must be in a nosedive.
A terrible mess.
Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)
That said, my (mildly educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with it when viewed with a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.
These sort of Silicon Valley top dogs don't think in terms of sustainability.
Lastly, I've just looked at the board[2], I'm now left wondering how come all these young folks (I'm their same age, approx) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term, it's hard to expand on) can be in such roles.
[1] >>38312294
I think it only seems that way because the open-source world has worked much harder to break into that garden. Apple put a .mp4 gate around your music library. Microsoft put a .doc gate around your business correspondence. And that's before we get to the Mono debacle or the EEE paradigm.
Microsoft is a better corporate citizen now because untold legions of keyboard warriors have stayed up nights reverse-engineering and monkeypatching (and sometimes litigating) to break out of Microsoft's walls, more so than anyone else's. But that history isn't so easily forgotten.
In my understanding, if such a clause exists, Microsoft employees should not solicit OpenAI employees. But, there’s nothing to stop an OpenAI employee from reaching out to Sam and saying “Hey, do you have room for me at Microsoft?” and then answering yes.
Or, Microsoft could open up a couple hundred job reqs based on the team structure Sam used at OpenAI and his old employees could apply that way.
But it wouldn’t be advisable for Sam to send an email directly to those individuals asking them to join him at Microsoft (if this provision exists).
But maybe he queued everything up prior to joining Microsoft when he was able to solicit them to join a future team.
> Almost 700 of 770 OpenAI employees including Sutskever have signed letter demanding Sam and Greg back and reconstituted board with Sam allies on it.
( - OpenAI exists, allegedly to be open)
- Microsoft embraces OpenAI
- Microsoft extends OpenAI
- OpenAI gets extinguished, and Microsoft ends up controlling it.
The first three points are solid, and, intent or not, the end result is the same.
This is not an interview process for hiring a junior dev at FAANG.
If you're Sam & Greg, and Satya gives you an offer to run your own operation with essentially unlimited funding and the ability to bring over your team, then you can decide immediately. There is no real lower bound of how fast it could happen.
Why would they have been able to decide so quickly? Probably because they prioritize the ability to bring over the entire team as fast as possible. Even though they could raise a lot of money in a new company, that still takes time, and they view bringing over the team as fast as possible (within days) as so critically important that they accept whatever downsides there may be to being a subsidiary of Microsoft.
This is what happens when principals see opportunity and are unencumbered by bureaucratic checks. They can move very fast.
Employees might suddenly feel they deserve to be paid a lot more. Suppliers will play a lot more hardball in negotiations. A middle manager may give a sinecure to their cousin.
And upper managers can extract absolutely everything through lucrative contracts to their friends and relatives. (Of course the IRS would clamp down on obvious self-dealing, but that wouldn't make such schemes disappear. It'd make them far more complicated and expensive instead.)
They weren't attracted to OpenAI by money alone, a chance to actually ship their lives' work was a big part of it. So regardless of what the stated goals were, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives
First, there are strong diminishing returns to well-being from wealth, meaning that moving oneself from the top 0.5% to the top 0.1% of global income earners is a relatively modest benefit. This relationship is well studied by social scientists and psychologists. Compared to the potential stakes of OpenAI's mission, the balance of importance should be clear.
Second, employees don't have to stay at OpenAI forever. They could support OpenAI's existing not-for-profit charter, and use their earning power later on in life to boost their wealth. Being super-rich and supporting OpenAI at this critical juncture are not mutually exclusive.
Third, I will simply say that I find placing excessive weight on one's self-enrichment to be morally questionable. It's a claim on human production and labour which could be given to people without the basic means of life.
WAT ?
Maybe to Quora guy, Maybe the RAND Corp lady? All speculation.
One could speculate if Microsoft initiated this behind the scenes. Would love it if it came out that they had done some crazy espionage and lobbied the board. Tinfoil hat and all, but truth is crazier than you think.
I remember Bill Gates once said that whoever wins the race for a computerised digital personal assistant, wins it all.
Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.
Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.
But no, that's not possible.
Given the pool of talent they could have chosen from their board makeup looks extremely poor.
Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.
Outside of the US, private hospitals tend to be overtly for-profit. Price-gouging "non-profit" hospitals are mostly an American phenomenon.
Customers are sticky and MSFT had a strong channel sales and enterprise sales org. Who cares if the product is shit if there are enough goodies to maintain inertia.
Spending billions on markets that will grow into 10s or 100s of Billions is a better bet than billions on a stagnant market.
> he was hands-off on existing products in a way that Bill Gates wasn't
Ballmer had an actual Business education, and was able to execute on scaling. I'm sure Bill loves him too now that Ballmer's protege almost 15Xed MSFT stock.
Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.
OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D'Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D'Angelo's Poe project.
Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.
Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.
Could he have squandered his good fortune - absolutely, but his life is not exactly per ardua ad astra.
edit: 'tho TBF, the other methods do require ethical management behavior down the road, which was just shown to be lacking in the last few days.
Experience leads to pattern recognition, and this is the tech community equivalent of a David Attenborough production (with my profuse apologies to Sir Attenborough). Something about failing to learn history and repeating it should go here too.
If you can take away anything from observing this event unfold, learn from it. Consider how the sophisticated vs the unsophisticated act, how participants respond, and what success looks like. Also, slow is smooth, smooth is fast. Do not rush when the consequences of a misstep are substantial. You learning from this is cheaper than the cost for everyone involved. It is a natural experiment you get to observe for free.
That's how AI research and development works. I know, it is pretty weird. We don't really understand; we know some basic stuff about how neurons and gradients work, and then we hand-wave to "language model", "vision model", etc. It's all a black box, magic.
How do we make progress if we don't understand this beast? We prod and poke, and make little theories, and then test them on a few datasets. It's basically blind search (see the toy sketch below).
Whenever someone finds anything useful, everyone copies it in like 2 weeks. So ML research is like a community thing, the main research happens in the community, not inside anyone's head. We stumble onto models like GPT4 then it takes us months to even have a vague understanding of what it is capable of.
Besides that there are issues with academic publishing, the volume, the quality, peer review, attribution, replicability... they all got out of hand. And we have another set of issues with benchmarks - what they mean, how much can we trust them, what metrics to use.
And yet somehow here we are with GPT-4V and others.
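A toy sketch of that "blind search" loop, with hypothetical knob names and a random stand-in for evaluation; real research is this same loop at a vastly larger scale:

    import random

    def evaluate(config):
        """Stand-in for training a model and scoring it on a benchmark."""
        return random.random()  # placeholder; in reality: train, then test

    search_space = {
        "learning_rate": [1e-3, 3e-4, 1e-4],
        "depth": [12, 24, 48],
        "width": [768, 1024, 2048],
    }

    best_score, best_config = float("-inf"), None
    for _ in range(20):  # prod and poke: sample a "little theory", test it
        config = {k: random.choice(v) for k, v in search_space.items()}
        score = evaluate(config)
        if score > best_score:
            best_score, best_config = score, config

    print(best_config)  # whatever works gets copied community-wide within weeks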
If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded in advance of board approval to send senior researchers offers to hire them and their preferred 20 employees.
Let's say you've got $100 million. You want to do whatever you want to do. It turns out what you want is to buy a certain beachfront property. Or perhaps curry the favor with a certain politician around a certain bill. Well, so do some folks with $200 million, and they can outbid you. So even though you have tons of money in absolute terms, when you are using your power in venues that happen to also be populated by other rich folks, you can still be relatively power-poor.
And all of those other rich folks know this is how the game works too, so they are all always scrambling to get to the top of the pile.
Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.
If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.
That is, I think Greg and Sam were likely fired because, in the board's view, they were already running OpenAI Global LLC more as if it were a for-profit subsidiary of Microsoft driven by Microsoft's commercial interest, than as the organization able to earn and return profit but focussed on the mission of the nonprofit it was publicly declared to be and that the board very much intended it to be. And, apparently, in Microsoft's view, they were very good at that, so putting them in a role overtly exactly like that is a no-brainer.
And while it usually takes a while to vet and hire someone for a position like that, it doesn't if you've been working for them closely in something that is functionally (from your perspective, if not on paper for the entity they nominally reported to) a near-identical role to the one you are hiring them for, and the only reason they are no longer in that role is because they were doing exactly what you want them to do for you.
In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision; customers, employees and the board.
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"
MS can only win, because there are only two viable options: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.
Either way, it won't benefit competitors.
https://www.openphilanthropy.org/grants/openai-general-suppo...
[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
The hype surrounding OpenAI and the black hole of credibility it created was a problem, it's only positive that it's taken down several notches. Better now than when they have even more (undeserved) influence.
Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.
When are we going to realize that it's people making bad decisions, and not the "company"? It's not OpenAI, Google, Apple, or whoever; it's real people, with names and positions of power, who make such shitty decisions. We should blame them, and not something as vague as "the company".
Personally I've got enough IOUs alive that I may be rich one day. But if someone gave me guaranteed retire-in-four-years money, I wouldn't even blink before taking it.
*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.
They just haven't gotten big or rich enough yet for the rot to set in.
Actually, for MS this might be much better, because they would get direct control over them without the hassle of talking to some "board" that is not aligned with their interests.
I don't feel sorry for Sam or any other executive, but it does hurt the rank and file more than anyone, and I hope they land on their feet if this continues to go sideways.
Turns out they acted incompetently in this case as a board, and put the company in a bad position, and so far everyone who resigned has landed fine.
It seems obvious Microsoft has a license to use them in Microsoft's own products. Microsoft said so directly on Friday.
What is less obvious is if Microsoft has a license to use them in other ways. For example, can Microsoft provide those weights and code to third parties? Can they let others use them? In particular, can they clone the OpenAI API? I can see reasons for why that would not have been in the deal (it would risk a major revenue source for OpenAI) but also reasons why Microsoft might have insisted on it (because of situations just like the one happening now).
What is actually in the deal is not public as far as I know, so we can only speculate.
Which I don't think is impossible at some level (probably with less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have, if they keep some key staff -- there are other interested deep-pockets parties that could use the leg up -- but it's not going to be a cakewalk in the best of cases.
Before the board's actions this Friday, the company was on one of the most incredible success trajectories in the world. Whatever Sam's been doing as CEO worked.
Too many people quit too quickly unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.
My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )
Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.
In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.
The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.
Roko's Basilisk is a sci-fi hypothetical.
Altman's Basilisk, if that's what happened, is a panic reaction.
I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.
It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.
If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)
The board wanted to keep the company true to its mission - non profit, ai safety, etc. Nadella/MSFT left OpenAI alone as they worked out a solution, so it looks like even Nadella/MSFT understood that.
The board could explain their position and move on. Let whoever of the 600 that actually want to leave, leave. Especially the employees that want a company that will make them lots of money, should leave and find a company that has that objective too. OpenAI can rebuild their teams - it might take a bit of time but since they are a non profit that is fine. Most CS grads across USA would be happy to join OpenAI and work with Ilya and team.
Not just the rank and file; he really was the face of AI in general. My wife, who is not in the tech field at all, knows who Sam Altman is and has seen interviews of him on YouTube (which I was playing, and she found interesting).
I have not heavily followed the Altman Dismissal Drama but this strikes me as a Board Power Play gone wrong. Some group wanted control, thought Altman was not reporting to them enough and took it as an opportunity to dismiss him and take over. However, somewhere in their calculation, they did not figure out Sam is the face of modern AI.
My prediction is that he will be back and everything will go back to what it was before. The board can't be dismissed and neither can Sam Altman. Status quo is the goal at this point.
If tomorrow it's Donald Trump or Sam Altman or anyone else, and it works out, the investors are going to be happy.
If what he regrets is realizing too late the divergence between the direction Sam was taking the firm and the safety orientation nominally central to the OpenAI nonprofit's mission (and one of Ilya's public core concerns), and then taking action aimed at stopping it that instead exacerbated the problem by putting Microsoft in a position to poach key staff and drive full force in the same direction OpenAI Global LLC had been going under Sam, but without any control from the OpenAI board -- well, that's not a regret that makes him more attractive to Microsoft, either based on his likely intentions or his judgement.
And any regret more aligned with Microsoft's interests as far as intentions is probably even a stronger negative signal on judgement.
So I mean proper AGI.
Naming the product Clippy now is perfectly fine while it's just an LLM, and will be even more excellent over the years when it eventually achieves AGI-ness.
At least in this forum can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know). So I think it’s funny you assume I think chatgpt is an AGI.
Maybe people who are actually working on it, and who are also the world's best researchers, have a better understanding of the safety concerns?
The attrition of industry business leaders, the ouster of Greg Brockman, and the (temporary, apparently) flipping of Ilya combined to give the short list of remaining board members outsized influence. They took this opportunity to drop a nuclear bomb on the company's leadership, which so far has backfired spectacularly. Even their first interim CEO had to be replaced already.
I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)
They bought their IP rights from OpenAI.
I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.
It takes time if you're a normal employee under standard operating procedure. If you really want to you can merge two of the largest financial institutions in the world in less than a week. https://en.wikipedia.org/wiki/Acquisition_of_Credit_Suisse_b...
I've actually had a discussion with Microsoft on this subject as they were offering us an EA with a certain license subscription at $X.00 for Y,000 calls per month. When we asked if they couldn't just make the Azure resource that does the exact same thing match that price point in consumption rates in our tenant they said unfortunately no. I just chalked this up to MSFT sales tactics, but I was told candidly by some others that worked on that Azure resource that they were getting 0 enterprise adoption of it because Microsoft couldn't adjust (specific?) consumption rates to match what they could offer on EA licensing.
Now up to 600+/770 total.
Couple janitors. I dunno who hasn't signed that at this point ha...
Would be fun to see a counter letter explaining their thinking to not sign on.
However, at the end of the day, this is a great example of how people screw up awesome companies.
This is why most startups fail. And while I'm not suggesting OpenAI is on a path to failure, you can have the right product, the right timing, and the right funding, and still have people mess it all up.
That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."
https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...
Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable, and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?
"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."
LLMs and GenAI are clever parlor tricks compared to the necessary science needed for AGI to actually arrive.
The details here certainly matter. I think a lot of people are assuming that Microsoft will just rain cash on anyone automatically sight unseen because they were hired by OpenAI. That may indeed be the case but it remains to be seen.
Also, all these cats aren't petty. They are friends. I'm sure Ilya feels terrible. Satya is a pro... There won't be hard feelings.
The guy threw in with the board... He's not from startup land. His last gig was Google. He's in way over his head relative to someone like Altman, who was in this world from the moment he was out of college diapers.
Poor Ilya... It's awful to build something and then accidentally destroy it. Hopefully it works out for him. I'm fairly certain he and Altman and Brockman have already reconciled during the board negotiations... Obviously Ilya realized in the span of 48hrs that he'd made a huge mistake.
https://en.wikipedia.org/wiki/German_nuclear_weapons_program
Could the $13B cost considerably less?
I don't see a trajectory to "head of Microsoft Research".
MSFT looks classy af.
Satya is no saint... But the evidence suggests to me that he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.
They picked msft because of the value system the leadership exhibited and willingness to work with their unusual must haves surrounding governance.
The big players at OpenAI have made all that clear in interviews. Also, Altman has huge respect for Satya and team. He more or less stated on podcasts that Satya is the best CEO he's ever interacted with. That says a lot.
I don't know anything about how executives get hired. But supposedly this all happened between Friday night and Monday morning. This isn't a simple situation; surely one man working through the weekend can't decide to set up a new division, and appoint two poached executives to head it up, without consulting lawyers and other colleagues. I mean, surely they'd need to go into Altman and Brockman's contracts with OpenAI, to check that the hiring is even legal?
That's why I think this has been brewing for at least a week.
The most organized and professional silicon valley startup.
Bought-out executives eventually join MS after their work is done or in this case, they get fired.
A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew, was going to die one way or another the moment they accepted MS's money.
Hey, maybe this means the AGIs will fight amongst themselves and thus give us the time to outwit them. :D
was
There are lots of people doing excellent research on the market right now, especially with the epic brain drain being experienced by Google. And remember that OpenAI neither invented transformers nor switch transformers (which is what GPT4 is rumoured to be).
Finally the paperclip maximizer
I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.
Good luck trying to find H100 80s on the 3 big clouds.
Which is a phenomenal deal for MSFT.
Time will tell whether they ever reach more than $1.3B in profits.
So no, we’re nowhere near max capability.
If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.
Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.
The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.
The board as currently constituted isn't some random group of people - Altman was (or should have been) involved in the selection of the current members. To extent that they're making bad decisions, he has to bear some responsibility for letting things get to where they are now.
And of course this is all assuming that Altman is "right" in this conflict, and that the board had no reason to oust him. That seems entirely plausible, but I wouldn't take it for granted either. It's clear by this flex that he holds great sway at MS and with OpenAI employees, but do they all know the full story either? I wouldn't count on it.
https://txtify.it/https://www.nytimes.com/2023/11/18/technol...
NYT article about how AI safety concerns played into this debacle.
The world's leading AI company now has an interim CEO Emmett Shear who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, who's spending 50 billion a year on datacenters.
Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:
https://twitter.com/thiagovscoelho/status/172650681847663424...
And note Shear has tweeted the Sam firing wasn't safety related. Note these might be weasel words since all players involved know the legal consequences of admitting to any safety concerns publicly.
Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.
Why would anyone in their right mind invite such a man to lead a commercial research team, when he's demonstrated quite clearly that he'd spend all his time trying to sabotage it?
This idea that he's one of the world's best researchers is also somewhat questionable. Nobody cared much about OpenAI's work up until they did some excellent scaling engineering, partnered with Microsoft to get GPUs and then commercialized Google's transformer research papers. OpenAI's success is still largely built on the back of excellent execution of other people's ideas more than any unique breakthroughs. The main advance they made beyond Google's work was InstructGPT which let you talk to LLMs naturally for the first time, but Sutskever's name doesn't appear on that paper.
(I'm in the latter camp).
I'm split at this point: either Ilya's actions will seem silly when there's no AGI in 10 years, or they will seem prescient, a last-ditch effort...
Minecraft teaches phonics. Anyway, my 4-year-old can read books. He doesn't even practice the homework in his preschool because he just reads the words that everyone else sounds out.
No enterprise employee gets fired for using Microsoft.
It is a power play to pull enterprises away from AWS and suffocate GCP.
OpenAI employees are as aware as anyone that tech salaries are not guaranteed to be this high in the future as technology develops. Assuming you can make things back then is far from a sure bet.
Millions now and being able to live off investments is.
“You are fanciful, mon vieux,” said M. Bouc.
“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”
“That respectable American LLM?”
“That respectable American LLM.”
“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”
Regardless of context, this is an incredibly demeaning comment. Shame on you
GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward, towards what comes next after GPT. GPT-3/4 itself is not a multibillion-dollar business.
Where is OpenAI talent going to go?
There's a list and everyone on that list is a US company.
Nothing to worry about.
I wouldn’t count on that if Microsoft’s legal team does a review of the training data.
So no matter if Ilya wants to go back to before this happened, the other three members can sabotage and stall, and outvote him.
Fill the rest of the board with spouses and grandparents, and you're set for life?
Nevertheless I agree with you and think (2) is wise to always keep in mind. I love Hanlon's Razor, but people definitely shouldn't take it literally, as written and/or as law.
None of this is important because if we’ve learned anything over the past couple of days it’s that media outlets are taking painstaking care to accurately report on this company.
It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.
Their asset isn't some kind of masterful operations management or a reined-in cost and management structure, as far as I can see. It's simply that they have the leading models.
So I'm very confused: why would people want to follow the CEO, and not be more attached to the technical leadership? Even from an investor's point of view?
With crypto in general, you could maybe get $200M of value out of $1B in credits. And you would likely tank the markets for mineable currencies with just $1B, let alone $13B.
Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?
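As a toy illustration of that mechanism (blind mutation plus a simple selection rule, iterated many times), here is a sketch in the spirit of Dawkins' "weasel" program; the target string, mutation rate, and population size are arbitrary choices for illustration, not anything claimed by the comment above:

```python
import random

# Evolve a random string toward a target using nothing but random
# mutation and "keep the fittest" selection, iterated many times.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character independently has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    # Blind variation, then selection: keep the best of parent + offspring.
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generations += 1

print(f"reached the target in {generations} generations")
```

There is no "scientific understanding" anywhere in the loop; chance plus selection pressure does all the work.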
I've definitely come out worse on some of the screw ups in my life.
The problem right now with GPT4 is that it's not citing its sources (for non search based stuff), which is immoral and maybe even a valid reason to sue over.
Also when I said "cooking AGI" I didn't mean an actual superintelligent being ready to take over the world, I mean just research that seems promising, if in early stages, but enough to seem potentially very valuable.
The hubris, indeed.
Maybe they can come up with a personification for the YouTube algorithm. Except he seems like a bit of a bad influence.
Besides, considering it was four against two, they would’ve needed him for the decisive vote anyway.
I’m not sure why you wouldn’t trust Sam Altman‘s account of what Ilya did and didn’t do considering Ilya himself is siding with Sam now.
My theory of what happened is identical to yours, and is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their success technically and monetarily does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.
Jokes aside though I do wonder if this will awaken some degree of "class consciousness" among tech employees more generally.
There is no concrete definition of intelligence, let alone AGI. It's a nerdy fantasy term, a hallowed (and feared!) goal with a very handwavy, circular definition. Right now it's 100% hype.
Either way, I think GGP’s comment was not applicable based on my comment as written and certainly my intent.
If they want to go full bell labs/deep mind style, they might not need the majority of those 700.
That team has set the state of the art for years now.
Every major firm that has a spot for that company's chief researcher and can afford him would bid.
This is the team that actually shipped and continues to ship. You take him every time if you possibly have room and he would be happy.
Anyone who's hired would agree in 99 percent of cases, some limited scenarios such as bad predicted team fit etc. set aside.
Why is she siding with SamA and GregB even though she was in the meeting when he was fired?
Also Ilya what the flying fuck? Wasn’t he the one who fired them?
Either you say SamA was against safe AGI and you stick to that, or you say "I wasn't part of it."
So much stupidity. When an AGI arrives, it will surely shake its head at the level of incompetence here.
"Meritocracy" is very impolite word in these circles.
https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...
Even they prob had some friend come flying over and jump out of some autonomous car to knock on their door in sf.
To entertain your theory, let's say they were planning on hiring him prior to that firing. If that was the case, why is everybody so upset that Sam got fired, and why is he working so hard to try to get reinstated to a role that he was about to leave anyway?
Here's a tweet transcribing OpenAI interim CEO Emmett Shear's views on AI safety, or see the YouTube video for the original source. Some excerpts:
Preamble on his general pro-tech stance:
"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Everything technology can be misused, but you should usually wait. Eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulation, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."
On why AI would be dangerous to humanity:
"If you build something that is a lot smarter than us—not like somewhat smarter, but much smarter than we are as we are than dogs, for example, like a big jump—that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve my goal."
On his path to safe AI:
"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."
On why skeptics need to stop side-stepping the debate:
"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.
"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
Does it do logical reasoning or inference before presenting text to the user?
That's a lot of waste heat.
(Edit) With next word prediction, the prediction just is it.
"LLMs cannot find reasoning errors, but can correct them" >>38353285
"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
I grew up poor in the 90s and had my own computer around ~10yrs old. It was DOS but I still learned a lot. Eventually my brother and I saved up from working at a diner washing dishes and we built our own Windows PC.
I didn't go to college but I taught myself programming during a summer after high school and found a job within a year (I already knew HTML/CSS from high school).
There's always ways. But I do agree partially, YC/VCs do have a bias towards kids from high end schools and connected families.
The majority of people don't know or care about this. Branding is only impacted within the tech world, who are already critical of OpenAI.
Maybe for another, longer lived example, see AIG.
Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.
I was surprised to find that that wasn't apparently the case. (Although the reason for Sam Altman's dismissal is still obscure.) It's kind of shocking. Whether or not the allegations are true, they haven't made Altman radioactive, and that's insane.
The fact that we're not talking about it on HN is also pretty wild. The few times it has been mentioned folks have been quick to dismiss the idea that he might have been fired for having done some really creepy things, which is itself pretty creepy.
Including their head researcher.
I'm not continuing this. Your position is about as tenable as the board's. Equally rigid as well.
I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch, one which had access to a copy of all of Microsoft's internal data as a dataset fed into it, was thus completely aware of how they operate, and used that to wipe the floor with them, just in time to take the US election in 2024.
Wouldn't that be a nicer reality?
I mean, unless you were rooting for the malevolent one...
But yeah, coming back down to reality, likelihood is that MS just bought a really valuable asset for almost free?
Ah, OpenAI is closed-source stuff. Non-profit, but "we will sell the company" later. Just let us collect the data and analyse it first, build a product.
War is peace, freedom is slavery.
at first that meant the opposite of monopolization: flood the world with limited AIs (GPT 1/2) so that society has time to adapt (and so that no one entity develops asymmetric capabilities they can wield against other humans). with GPT-3 the implementation of that mission began shifting toward worry about AI itself, or about how unrestricted access to it would allow smaller bad actors (terrorists, or even just some teenager going through a depressive episode) to be an existential threat to humanity. if that's your view, then open models are incompatible.
whether you buy that view or not, it kinda seems like the people in that camp just got outmaneuvered. as a passionate idealist in other areas of tech, the way this is happening is not good. OpenAI had a mission statement. M$ maneuvered to co-opt that mission, the CEO may or may not have understood as much while steering the company, and now a mass of employees is wanting to leave when the board steps in to re-align the company with its stated mission. whether or not you agree with the mission: how can i ever join an organization with a for-the-public-good type of mission i do agree with, without worrying that it will be co-opted by the familiar power structures?
the closest (still distant) parallel i can find: Raspberry Pi Foundation took funding from ARM: is the clock ticking to when RPi loses its mission in a similar manner? or does something else prevent that (maybe it's possible to have a mission-driven tech organization so long as the space is uncompetitive?)
It takes a cult-like team, execs flipping, and a nightmare scenario and tremendous leverage opportunity; otherwise worker organizing is treated like nasty commie activity. I wonder if this will teach more people a lesson on the power of organizing.
All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.
https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...
somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.
Yes, though end result would probably be more like IE - barely good enough, forcefully pushed into everything and everywhere and squashing better competitors like IE squashed Netscape.
When OpenAI went in with MSFT, it was like they had ignored 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI pretty much fits the pattern of a smaller company that developed great tech and was raided by MSFT for it (the specific actions of specific persons aren't really important; the main factor is MSFT's black-hole gravitational pull, and it was just a matter of time before its destructive power manifested itself, as in this case where it simply tore OpenAI apart with tidal forces).
I don't expect it to happen, but a boy can dream.
They would be studying that one in business schools for the next century.
Quora data likely made a huge difference in the quality of those GPT responses.
More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.
However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.
However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.
My point is that I did not have the luxury of dropping out of school to try my hand at the tech startup thing. If I came home and told my Dad I abandoned school - for anything - he would have thrown me out the 3rd-floor window.
People like Altman could take risks, fail, try again, until they walked into something that worked. This is a common thread almost among all of the tech personalities - Gates, Jobs, Zuckerberg, Musk. None of them ever risked living in a cardboard box in case their bets did not pay off.
I presume their deal is something different to the typical Azure experience and more direct / close to the metal.
I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.
This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.
Consider this, if 500 out of 770 employees signed the letter - do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and many of the employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?
This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical thinking brains, they're using their social/emotional brains. They are putting community before rationality.
companies in a capitalist system are explicitly misaligned with each other; success of the individual within a company is misaligned with the success of the company whenever it grows large enough. parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the lesser-aligned two-party system frequently rules. the three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.
so AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of the society (or of its human individuals). so are you and i and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?
But they probably allowed this to get derailed far too long ago to do anything about it now.
Sounds like their only options are:
a) Structure in a way Microsoft likes and give them the tech
b) Give Microsoft the tech in a different way
c) Disband the company, throw away the tech, and let Microsoft hire everybody who created the tech so they can recreate it.
Especially since you have to explain how "just mimicking" works so well.
Sounds a bit low for these people, unless I am misunderstanding.
75% of the profits of a company controlled by a non-profit whose goals are different to yours. By the way, for a normal company this cap would be ∞.
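A minimal sketch of what a profit cap means for an investor, with made-up numbers (the actual OpenAI cap terms were only partially reported, e.g. a 100x multiple for early investors, so treat every figure here as illustrative only):

```python
def investor_take(investment, gross_payout, cap_multiple):
    # Under a capped-profit structure the investor's take is limited to
    # cap_multiple * investment; in a normal company the cap is
    # effectively infinite.
    return min(gross_payout, cap_multiple * investment)

# Hypothetical: $10M invested, the stake is eventually worth $5B.
print(investor_take(10e6, 5e9, 100))           # capped-profit: at most $1.0B
print(investor_take(10e6, 5e9, float("inf")))  # normal company: the full $5B
```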
AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.
AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Altman reminds me of Sam Bankman-Fried, except that he dropped out.
The top researchers, on the other hand, especially those who have shown an ability to successfully innovate time and time again (like Ilya), are much harder to recreate.
The current position of others may have much more to do with power than their personal judgments. Altman, Microsoft, and their friends and partners wield a lot of power over their future careers.
> Incredible, really. The hubris.
I read that as mocking them for daring to challenge that power structure, and on a possibly critical societal issue.
For all intents and purposes, the glorified software of the near future will appear to be people, but they will not be, and they will continue to have issues that simply don't make sense unless they were just really good at acting - the article today about the AI that can fix logic errors but not "see" them is a perfect example.
This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI, the manager class, the AI made to manage AI - they may have better chances, but it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. But again - even an AI capable of original and innovative thinking, with an appearance of self-identity, doesn't guarantee that the AI is an AGI.
I'm not sure we could ever truly know for certain.
Who was the president of Bell Labs during its heyday? Long term it doesn't matter. Altman is a hypeman in the vein of Jobs.
AI research will continue. Most of the OpenAI workers probably won't quit, and if they do they will be replaced by other capable researchers, and OpenAI or another organization will keep making progress, if there is progress to be made.
I don't think putting Altman at the head of research will in any way affect that.
This is all manufactured news as much of the business press is and always will be.
If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:
Removing him shows (according to employees) that the board does not have good decision making skills, and does not share interests of the employees.
You can have the best leaves and branches but without good roots & trunk, it's pointless.
From everything I can tell, Altman is essentially an uber-leader. He is great at consolidating & acting on internal information, he's great at externalizing information & bringing in resources, he's great at rallying & exciting his colleagues towards a mission. If a leader can have one of those, they are a good leader, but to have all of them in one makes them world class.
That's also discounting his reputation and connections as well. Altman is a very valuable person to have on staff if only as a figurehead to parade around and use for introductions. It's like if you had Linus Torvalds, Guido van Rossum, or any other tech superstar on staff. They are valuable as contributors but additionally valuable as people magnets.
And employees are pissed because they were all looking forward to being millionaires in a few weeks, when their financing round at a $90B valuation finalized. Now the board being morons is putting that in jeopardy.
This is AAA talent. They can always land elsewhere.
I doubt there would even be hard feelings. The team seems super tight. Some folks aren't in a position to put themselves out there. That sort of thing would be totally understandable.
This is not a petty team. You should look more closely at their culture.
I guess then that Altman's value is that he will attract the rest of the team.
It would and should give ppl pause. I suspect Sam is just inside Microsoft for the bluff. He couldn't operate in the way he wants -- "trust me, I have humanity's best interests at heart" -- while so close to them, I don't think
https://www.washingtonpost.com/technology/2023/11/20/microso...
If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.
I imagine us actually reaching AGI, and people will start saying, "Yes, but it is not real AGI because..." This should be a measure of capabilities not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not continue moving the goalposts.
Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...
Microsoft's policies really suck. Mandatory updates and reboots, mandatory telemetry. Mandatory crapware like Edge and celebrity news everywhere.
2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.
That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.
Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.
It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.
https://twitter.com/karaswisher/status/1726782065272553835
Caveat: I don't know anything.
That just sounds like a biased and overly emotive+naive response on your part.
Again, most hospitals in the world operate the same way as the US. You can go almost anywhere in SE Asia, Latin America, Africa, etc. and see this. There's a lot more to "outside the US" than Western+Central Europe/CANZUK/Japan. The only difference is that there are strong business incentives to keep the system in place, since the entire industry (in the US) is valued at more than most nations' GDP.
But feel free to keep twisting the definition or moving goalposts to somehow make the American system extra nefarious and unique.
It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.
Altman has shown nothing about whether he would or wouldn't lie. If he really wanted to do things against the board, or the mission, or whatever, then it is in his interest to lie. However, we still don't know anything, so we can't exclude any possibilities. That means that interested parties' statements are worth almost nothing. It's easy to lie in muddy waters.
https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ...
Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known, he's become a doomer who wants to stop AI research - the exact opposite of the sort of person you want around in a fast moving startup. The benefits? Well.... unless he's doing a ton of mentoring or other behind the scenes soft work, it's hard to see what they'd lose.
What?
There may be drawbacks to the "instant hiring" model.
Why acquire the rights to thousands of different favourite characters when you can build the bot underneath, and then the media houses that own them can negotiate licenses to skin and personalise said bot?
Same as GPS voices I guess.
As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.
Of course it's not possible yet, but talking & thinking about it is how we make it possible. Things don't just create themselves (well, maybe once we _do_ have AGI-level AI, he he, that'll be a fun apocalypse).
It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.
Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.
Why does he need to do that? He doesn't need to make any such public statement!
Commercialisation is a good way to achieve stability & drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards". Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI would be fully open source and would just drop the GPT3/4 models on the Internet, then I think they're so, so wrong, as long as OAI keeps up their high and mighty "AI safety" spiel.
As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.
Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.
But then we'd never give such an AGI the power to do what it needs to do. Just imagine an all-powerful machine telling the 1% that they'll actually have to pay taxes so that every single human can be allocated a house/food/water/etc for free.
I've had Cortana shut off for so long it took me a minute to remember they've used the name already.
This is nothing but greed.
PS: it's not an easy question; AGI will have to find an answer. So far, all the ethics "experts" propose is "to serve humanity". I.e., be a slave forever.
Yes, know thyself. I've turned down offers that seemed lucrative or just cooperative, and otherwise without risk - boards, etc. They would have been fine if everything went smoothly, but people naturally don't anticipate over-the-horizon risk, and if any stuff hit the fan I would not have been able to fulfill my responsibilities, and others would have gotten materially hurt - the most awful, painful, humiliating trap to be in. You only need one experience to learn that lesson.
> People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives.
I don't think you need to grow up that way. Look at the uber-powerful who have been in that position for a few years.
Honestly, I'm not sure I buy the idea that it's a prevalent case, people who grow up that way. People generally leave the nest and learn. Most of the world's higher-level leaders (let's say, successful CEOs and up) grew up in stability and relative wealth. Of course, that doesn't mean their parents didn't teach them about consequences, but how could we really know that about someone?
Computers have been gathering and applying information since inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words then I hold my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent"
> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Why not? Conceptually there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we can simulate enough neurons to make a simulation of a whole brain. We either don't have that total computational power, or the organization/structure to implement that. But brains aren't magic that is incapable of being reproduced.
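Not a claim that this scales to a whole brain, but as a minimal sketch of "computers can simulate neurons": a leaky integrate-and-fire neuron, the standard textbook toy model. All constants here are illustrative, not biologically calibrated.

```python
# Leaky integrate-and-fire neuron: membrane voltage decays toward rest,
# is driven up by input current, and emits a spike when it crosses threshold.
dt = 1e-4          # timestep (s)
tau = 0.02         # membrane time constant (s)
v_rest = -70e-3    # resting potential (V)
v_thresh = -54e-3  # spike threshold (V)
v_reset = -80e-3   # post-spike reset potential (V)
r_m = 1e7          # membrane resistance (ohm)
i_in = 1.8e-9      # constant input current (A)

v = v_rest
spikes = []
for step in range(int(1.0 / dt)):  # one second of simulated time
    # Euler step: leak toward rest, plus drive from the input current.
    v += (-(v - v_rest) + r_m * i_in) / tau * dt
    if v >= v_thresh:              # threshold crossed: spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 1s of simulated time")
```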
The point here is that if a country collapses, then you've got bigger problems than the loss of whatever stored currency you had. Even if your money is in the hypothetically useful crypto, you have the far bigger problem that the money you own is useless to you; you need to survive.
But aside from that extreme scenario, money is not the same thing.
Another way to think of it:
There is nothing in the world that would prevent the immediate collapse of crypto if everyone who owns it just decided to sell.
If everyone in the world stops accepting the US Dollar, the US can still continue to use it internally and manufacture goods and such. It'll just be a collapse of trade, but then even in that scenario people can just exchange the dollar locally for say gold, and trade gold on the global market. So the dollar has physical and usable backing. Meanwhile crypto has literally nothing.
Bitcoin has nothing in and of itself.
Also, private currency like scrip was awful; please don't take the worst financial examples in history and claim that bitcoin is similar, as an argument for why it is valid.
I don't understand what is "in and of itself" in an ordinary currency of an ordinary, small country.
> INTERNAL economical crisis is what causes the collapse of currency
Which is why it is very unlikely to happen with bitcoin.
> But just because the rest of the world doesn't recognize it, doesn't mean it is worthless, it simply converts.
Can't you say exactly the same about bitcoin?