I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.
I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.
EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662
Why? We would have more diversity in this space if he left: we'd get another AI startup with huge funding and know-how from OpenAI, while OpenAI itself would become less Sam Altman-like.
I think him staying is bad for the field overall compared to OpenAI splitting in two.
Also, all the employees are being paid with PPUs, which are shares in future profits, and now they find out that, actually, the company doesn't care about making a profit!
A lot of top talent with internal know-how will be poached left and right, much of it probably going to the OpenAI clone Sam will raise billions for with a single call.
I doubt he returns. Now he can start a for-profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to the Saudis about raising billions for an Nvidia competitor - >>38323939
Have to wonder how much this was contrived as a win-win: either the OpenAI board does what he wants, or he gets a free out to start his own company without looking like he's purely chasing money.
Maybe we have different definitions of "the court of public opinion". Most people don't know who Sam Altman is, and most of the people who do know don't have strong opinions on his performance as OpenAI's CEO. Even on HN, the reaction to the board "skewer[ing] him" has been pretty mixed, and mostly one of confusion and waiting to see what else happens.
This quick a turnaround does make the board look bad, though.
This is why you need someone with business experience running an organization. Ilya et al. might be brilliant scientists, but these folks are not equipped to deal with the nuances of steering a ship as heavily scrutinised as OpenAI.
The news yesterday broke out of the tech/AI bubble, and there would have been much more press on it if it hadn't been done as a Friday news dump.
2 - clearly not having spent even 10 seconds thinking about the (obvious) reaction of employees on learning that the CEO of what seems like a generational company was fired out of the blue. Or the reaction to a cofounder (high likelihood) following him out the door.
3 - And they didn't even carefully think through the reaction to the press release, which hinted at some real wrongdoing by Altman.
3a - Anyone want to bet whether they even workshopped the press release with attorneys or just straight yolo'd it? No chance a thing like this could end up in court...
They've def got the A team running things... my god.
Speaking for myself, if they had framed this as a difference in vision, I would be willing to listen. But instead they implied that he had committed some kind of categorical wrongdoing. After it became clear that wasn’t the case, it just made them look incompetent.
Sure, the average person doesn't care about Sam. But among the people who matter, Sam certainly came out on top.
Even if they are making the right call, you can't really trust them after they've ruined the company's reputation and trust like this.
I want a second (first being Anthropic?) OpenAI split. Having Anthropic, OpenAI, SamGregAi, Stability and Mistral and more competing on foundation models will further increase the pressure to open source.
It seems like there is a lull in returns to model size; if that's the case, then there's even less basis for having all the resources under a single umbrella.
That would also remedy the appearance of total incompetence from this clown show, in addition to admitting that the board and Sam don't fit with each other, and restore confidence for the next investor that their money is properly managed. At the moment, no one would invest in a company that can be undermined by its non-profit, with a (probably) disparaging press release a few minutes before market close on a Friday evening, for which Satya had to personally intervene.
From Greg's tweet, it seems like the chaos was largely driven by Ilya, who has also been very outspoken against open source and sharing research, which makes me think his motivations are more aligned with those of Microsoft/Satya. I still can't tell if Sam got ousted because he was getting in the way of a Microsoft takeover, or if Sam was trying to set the stage for a Microsoft takeover. It's all very confusing.
As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.
I think this well is deeper than you're giving it credit for.
You severely overestimate his notoriety.
Source: https://arstechnica.com/information-technology/2023/11/repor...
For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.
Comical to imagine something like this happening at a mature company like FedEx, Ford, or AT&T, all of which have smaller market caps than OpenAI. You basically have impulsive children in charge of a massively valuable company.
Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.
Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.
What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?
I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].
The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.
And that's it.
It’s unclear what Ilya thinks keeps the lights on when MSFT holds their money hostage now. Which is probably why there is desperation to get Altman back…
Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.
All opinions and recommendations follow from this doomsday cult belief.
But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI would be so far beyond anything we have experience with that there is a case to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.
If there ever was a time for Microsoft to leverage LCA, it is now. There's far too much on the line for them to lose the goose that has laid the golden egg.
I agree that he doesn’t have a huge amount of name recognition, but this ousting was a front-page/top-of-website news story so people will likely have heard about it somewhat. I think it’s in the news because of the AI and company drama aspects. It felt like a little more coverage than Bob Iger’s return to Disney got (I’m trying to think of an example of a CEO I’ve heard about who is far from tech).
I think it is accurate to say that most people don't really know about the CEOs of important/public companies. They have probably heard of Elon/Zuckerberg/Bezos, and I can think of a couple of bank CEOs who might come up on business/economics news.
There's a lot more to this than who has explicit control.
I received messages from a physician and a high school teacher in the last 24 hours, asking what I thought about "OpenAI firing Sam Altman".
This is what happens when a non-profit gets taken over by greed, I guess...
Which is that any AI is not racist, misogynistic, aggressive, etc. It does not recommend to people that they act in an illegal, violent or self-harming way, or commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.
And you may think of it as a weasel word. But I assure you that companies and governments, e.g. the EU, very much don't.
That is a good point; I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of that productionization work from OpenAI in the future.
But since Microsoft invested in it, I doubt it will get shut down completely. Microsoft has by far the most to lose here, so you have to trust that their lawyers signed a contract that will keep these things available for a fee.
You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.
Once Microsoft pulls support and funding and all their customers leave, they will be decelerating all right.
Which seems like it probably is a self-fulfilling prophecy. The private sector's lottery winners seem to be awarded kingdoms at an alarming rate.
There have been lots of people asking what Sam's true value proposition to the company is, and... I haven't seen anything other than what's described above.
But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well paid management?
The companies you listed in contrast to OpenAI also have some key differences: they're all long-standing and mature companies that have been through several management and regime changes at this point, while OpenAI is still in startup territory and hasn't fully established what it will be going forward.
The other major difference is that OpenAI is split between a non-profit and a for-profit entity, with the non-profit entity owning a controlling share of the for-profit. That's an unusual corporate structure, and the only public-facing example I can think of that matches it is Mozilla (which has its own issues you wouldn't necessarily see in a pure for-profit corporation). So that means on top of the usual failure modes of a for-profit enterprise that could lead to the CEO getting fired, you also get other possible failure modes including ones grounded in pure ideology since the success or failure of a non-profit is judged on how well it accomplishes its stated mission rather than its profitability, which is uh well, it's a bit more tenuous.
My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.
Not sure that sama and gdb starting their own company in the same space will produce similar results.
He supposedly didn't care about the money. He didn't take equity.
"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."
That being said, here's my steelman argument: Sam is scared of the ramifications of AI, especially financially. He's experimenting with a lot of things, such as Basic Income (https://www.ycombinator.com/blog/basic-income), rethinking capitalism (https://moores.samaltman.com/) and Worldcoin.
He's also likely worried about what happens if you can't tell who is human and who isn't. We will certainly need a system at some point for verifying humanity.
Worldcoin doesn't store iris information; it just stores a hash for verification (a rough sketch of that idea is below). It's an attempt to make sure everyone gets one, and only one, and to keep things fair and more evenly distributed.
(Will it work? I don't think so. But to call it an eyeball identity scam and dismiss Sam out of hand is wrong)
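For what it's worth, here is a purely illustrative Python sketch of what "store only a hash" can mean: enrollment keeps a one-way digest rather than the raw data, and verification just re-derives and compares it. This is not Worldcoin's actual scheme (real iris matching is fuzzy, so a straight SHA-256 of a scan wouldn't work; their pipeline derives an iris code and uses more involved cryptography), and the helper names and in-memory registry are made up for the example.

    import hashlib

    # Illustrative only: generic hash-based enrollment/verification,
    # NOT Worldcoin's real pipeline. Only a one-way digest of a derived
    # code is kept; the underlying data never has to be stored.

    def enroll(derived_code: bytes, registry: set[str]) -> str:
        """Store a SHA-256 digest of the derived code in the registry."""
        digest = hashlib.sha256(derived_code).hexdigest()
        registry.add(digest)
        return digest

    def verify(derived_code: bytes, registry: set[str]) -> bool:
        """Re-derive the digest at check time and see if it was enrolled."""
        return hashlib.sha256(derived_code).hexdigest() in registry

    registry: set[str] = set()
    enroll(b"example-derived-iris-code", registry)
    print(verify(b"example-derived-iris-code", registry))  # True: already enrolled
    print(verify(b"someone-else-code", registry))          # False: unknown

The point is just that a verifier only needs to answer "has this code been seen before?", which a digest can do without retaining the underlying biometric.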
Per https://www.semafor.com/article/11/18/2023/openai-has-receiv...
Personally, I would expect a lot more development of GPT-4+ models once this is split up, compared to one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.
Seriously. It's stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they'd stop.
If this (very sparse and lacking in detail) article is true, is this a genuine attempt to get Altman back, or just a fillip to concerned investors such as Microsoft?
Does OpenAI's board really want Altman back so soon after deposing him so decisively?
Would Altman even want to come back under any terms that would be acceptable to the board? If "significant governance changes" means removing those who had removed him, that seems unlikely.
The Verge's report just raises so many additional questions that I find it difficult to believe at face value.
You underestimate how obsessed people are with ChatGPT and AI.
Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?
At least it will stop those godawful “are you human” proof puzzles.
I'll edit my comment to clarify!
Frankly, I've heard of worse loyalties. If I were Sam's friend, I'd definitely be better off in any world he had a hand in defining.
I agree the board did botch this up. But that, in my view, is just confirmation that they are amateurs at corporate political games, that is all.
But this also means that Sam Altman's "vision" and Microsoft's bottom line are fully aligned, and that is not a reassuring thought. Microsoft, one hears (see "5 foot pole"), even puts ads in their freaking OS.
This board should man up, and lawyer up.
However, trying to pin down the exact ways in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.
[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.
Write a thought. You're not clever enough for a drive-by gotcha.
Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?
I'm not sure what you mean by your second paragraph.
Even more broadly, this shows that the "leaders" of all this technology and money really are just making it up as they go along. It certainly supports the conclusion that, beyond meeting a somewhat high bar of education and experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.
Sortition on corporate leadership may not be a bad thing.
That said, a consistent hand at the wheel is also good, and this kind of unnecessary chaos does no one any good.
The whole open vs. closed AI thing... the fact is, Pandora's box is open now. AI has shown it can have an outsized impact on society, and two of the three founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.
SV will happily throw as much $$ as possible in their direction. The exodus from OpenAI has already begun, and other researchers of the mindset that this needs to be commercialized as fast as possible, while keeping an eye on safety, will happily come on board, especially given how much they stand to gain financially.
Compared to...
The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.
Interestingly, this is exactly what all financial advice tends to warn about rather than encourage: that past performance does not indicate future performance.
I suppose if they had entered an established market and bootstrapped their way to dominating it, that would build a lot of trust in me. But as others have pointed out, Sam went from a dotcom fortune, to... vague question marks, to Y Combinator, to OpenAI. Not enough is clear to declare him a Wozniak, or even a Jobs, as many have been saying (despite investors calling him such).
Sam Altman is seemingly becoming the new post-fame Elon Musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.
If you ever stood in the hall of YC and listened to Zuck pumping the founders, you’ll understand.
I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.
^^
I don’t think the wording of the “press release” is an issue.
This is a split over an actual matter of disagreement: a genuine fork in the road in terms of the pace and development of AI products, and a CEO who apparently did not keep the board informed as he pursued a direction they feel is contrary to the mission statement of this non-profit.
The board could have done this in the most gracious of manners, but it would not have made a bit of difference.
On one side we have the hyper-rich investor "grow grow grow" crowd, their attendant cult-of-personality wunderkind, and his or her project; on the other side, a bunch of geeky idealists who want to be thoughtful in the development of what is undeniably a world-changing technology for mankind.
Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.
Ideology (and fear, and greed) can cause well-meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world-ending technology, they wouldn't stop; they'd keep going so that the U.S. could have a monopoly on it. And then we'd need a chastened figure à la Oppenheimer to right the balance again.
And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.
We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.
> they will be aligned with us because they designed such that their motivation will be to serve us.
They won't hurt us, all we asked for is paperclips.
The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.
Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.
The only reason it's something we talk about is because of the nature of the outcome, not how seriously the physicists were in their fear.
The thing is, the cultural Ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and about seeing that as the end of the world.
AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give it. Those things are products of us being biologically evolved beings that need them to survive and pass on our genes and memes to the next generation. AI doesn't have to find shelter, food, water, or air; we provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, with no evolution involved - checksums, hashes, and error-correcting codes see to that. AI is simply the next step in the tech tree, just another tool: a powerful, useful one, but a tool, not a rampaging monster.
The fact is that most people can't do what Sam Altman has done at all, so at the very least that past success puts him among the few percent of people who have a fighting chance.
However, the way they told the public (anti-Sam blog post) and the way they told Microsoft (one minute before the press release) were both fumbles that separately could have played out differently if the board knew what they were doing.
Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.
Easy: his contacts list. He has everyone anyone could want in his contacts list (politicians, tech executives, financial backers) and a preexisting positive relationship with most of them. When would-be entrepreneurs need to make a deal with a major company like Microsoft or Google, they get upper middle management and lawyers; a committee or three will weigh in on it, present it to their bosses, etc. With Sam, he calls up the CEO, has a few drinks at the golf course, they decide to work with him, and they make it happen.
It's important to put those disclaimers in context, though. The rules that mandated them came out before the era of index funds, and those disclaimers are specifically talking about fund managers. It's true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.
Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.
If Microsoft considers this action a breach of their agreement, they could shut off access tomorrow. Every OpenAI service would go offline.
There are very few services that would be able to backfill that need for GPU compute, and after this clusterfuck not a single one would want to invest their own operating dollars supporting OpenAI. Microsoft has OpenAI by the balls.
All of whom are a year-plus behind OpenAI.
But that doesn’t mean you can’t get some useful ideas about future performance from a person’s past results compared to other humans. There is no such effect in play here.
Otherwise, time for me to go beat Steph Curry in a shooting contest.
Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.
From 2016: https://www.nytimes.com/2018/04/19/technology/artificial-int...
To 2023: https://www.businessinsider.com/openai-recruiters-luring-goo...
That's right. Worldwide DNS control, held by a non-profit in California. And that non-profit tried to do something shady and was kept in line simply because of California law enforcement.
So is “unsafe” just another word for buggy then?
JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?
Could be a rumour spread by people close to Sam though.
Maybe the board is too young to realize who they sold their souls to. Heh I think they’re quickly finding out.
Ilya is certainly world-class in his field, and it's maybe worth listening to what he has to say.
Yeah prompting ChatGPT 3.5 would have yielded a better plan than what they did.
In wartime, pandemics, and in matters of national security, the government's power is at its apex, but pretty much all of that has to withstand legal challenge. Even National Security Letters have their limits: they're an information gathering tool, the US Government can't use them to restructure a company and the structure of a company is not a factor in its ability to comply with the demands of an NSL.
Was it? The US (and initially the UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.
Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of "WWIII was begun by..."?
Maybe. But on their investing page it literally says to consider an OpenAI investment as a "donation" as it is very high risk and will likely not pay off. Everyone knew this going into it.
The bigger concern is something like Paperclip Maximizer. Alignment is about how to ensure that a super intelligence has the right goals.
Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the three laws and on the Frankenstein Complex.