Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:
‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”
The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’
Scoop: theinformation.com
OpenAI and its people are there to maximize shareholder value.
This is the same company that went from "non-profit" to "jk, lol, we are actually for-profit now". I still think that move wasn't even legal, but rules for thee, not for me.
They ousted sama because it was bad for business. Why? We may never know, or we may know next week, who knows? Literally.
Clearly not, as Sama has no equity and a board of four people with little, if any, equity, just unilaterally decided to upend their status quo and assured $ printer, to the bewilderment of their $2.5T 49% owner, Microsoft.
Being a contrarian for kicks or as a personality is boring: if you want to make an accusation, make it.
I get the clout everyone has, but this was supposed to be a non-profit that was already, via coup d'état, turned into a for-profit, and it grew extremely quickly into uncharted territory.
This isn't a multi-decade-old Fortune 500 company with a mature C-suite and board; it just masquerades as one with a stacked deck, which apparently is part of the problem.
Now, sure, you can't just trust anyone who tells you they heard something anonymously. That's where the whole idea of journalists with names working for organizations with records of credibility comes from. We trust (or should trust) Swisher because she gets this stuff right, every day. Is she "never" wrong? Of course not. But this is quality news nonetheless.
Even if they say this was for safety reasons, let's not blindly believe them. I am on the pro safety side, but I'm gonna wait till the dust settles before I come to any conclusions on this matter.
You don't really see any of this in most professional settings.
And often like an individual contributor: "the feeling when you finally localize a bug to a small section of code, and know it's only a matter of time till you've squashed it"
https://twitter.com/gdb/status/1725373059740082475
"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."
https://time.com/collection/time100-ai/6309033/greg-brockman...
If it's truly about a power play then this will be undone pretty quick, along with the jobs of the people who made it happen.
Microsoft has put a vast fortune into this operation and if Satya doesn't like this then it will be changed back real fast, Ilya fired and the entire board resign. That's my prediction.
If you truly believed that OpenAI had an ethical duty to pioneer AGI to ensure its safety, and felt like Altman was lying to the board and jeopardizing its mission as he sent it chasing market opportunities and other venture capital games, you might fire him to make sure you got back on track.
I personally think it's a shame because this is all totally inevitable at this point, and if the US loses its leading position here because of this kind of intentional hitting of the brakes, then I certainly don't think it makes the world any safer to have China in control of the best AI technology.
In other words, she's definitely not immune to bias and might easily want to shape the story to her own ends or to favor her own friends.
We're not really talking about facts here.. it's really just speculation and hearsay, so who can say if she's just talking?
It doesn't mean it's absolute truth. It doesn't mean it's a lie. Can we just appreciate her work, accept that maybe it's only 70% vetted right now, more likely true than not, but still subject to additional vetting and reporting later on?
It's still more information than we had earlier today. Sure, take it with a grain of salt and wait for more validation, but it's still work that she's doing. Not that different from a tech postmortem or scientific research or a political investigation... there's always uncertainty, but she's trying to get us closer to the truth, one baby step at a time, on a Friday night. I respect her for that, even as I await more information.
If this report is true, we're going to see a big rubber meets road event along these lines. I don't think this will end well for OpenAI.
https://www.youtube.com/watch?v=Ft0gTO2K85A
No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.
Alternative is Sam goes in-house to MS, who already have all the weights of GPT-4, and builds again, unconstrained by any existing charter.
"We can still push on large language models quite a lot, and we will do that": this sounds like continuing working on scaling LLMs.
"We need another breakthrough. [...] pushing hard with language models won't result in AGI.": this sounds like Sam Altman wants to do additional research into different directions, which in my opinion does make sense.
So, altogether, your quotes suggest that Sam Altman wants to continue working on scaling LLMs for the short and medium term and, in parallel, do research into different approaches that might lead to another step towards AGI. I cannot see how this plan could infuriate Ilya Sutskever.
IF the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.
I don't want AI safety. The people talking about this stuff like it's a terminator movie are nuts.
Strongly believe that this will be a lot like Facebook/Oculus ousting Palmer Luckey due to his "dangerous", completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (Anduril), which has a much more powerful and direct ability to enact his political will.
SamA isn't going to leave oAI and like...retire. He's the golden boy of golden boys right now. Every company with an interest in AI is I'm sure currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.
It's human nature. OpenAI can continue without Sam, but not without Ilya for the moment. On the other hand, Sam could have been a little more "humble".
You are interpreting that as hostile and aggressive because you are reading into it what other boards have said in other disputes and whatever you are imagining. But if the board learned things not from Altman that it felt it should have learned from Altman, "less than candid" is a completely neutral way to describe it, and voting him out is not an indication of hostility.
Would you like to propose some other candid wording the board could have chosen, a wording that does not lack candor?
GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence.
And many people here who should know better fell for it.
I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.
But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can also probably convince yourself of a lot of other dumb ideas too.
(pleb who would invest [1], no other association)
[1] >>35306929
Ilya is a co-founder of OpenAI, the Chief Scientist, and one of the best known AI researchers in the field. He has also been touring with Sam Altman at public events, and getting highlights such as this one recently:
Uhh no, I'm seeing it as hostile and aggressive because the actual verbiage was hostile and aggressive, doubly so in the context of this being a formal corporate statement. You can pass the text into an NLP sentiment analyzer and it too will come to the same conclusion.
It is also very telling that you are being very sarcastic and demeaning in your remarks to someone who wasn't even replying to you, which might explain why you saw the PR statement differently.
I imagine Walt Mossberg saying "hold my beer"
When you compare it to an entry level data entry role, it's absolutely AGI. You loosely tell it what it needs to do, step-by-step, and it does it.
Was the original launch of ChatGPT "safe?" Of course not, but it moved the industry forward immensely.
Swisher's follow up is even more eyebrow raising: "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
What exactly from the demo day was "pushing too far?" We got a dall-e api, a larger context window and some cool stuff to fine tune GPT. I don't really see anything there that is too crazy... I also don't get the sense that Sam was cavalier about AI safety. That's why I am so surprised that the apparent reason for his ousting appears to be a boring, old, political turf war.
My sense is that there is either more to the story, or Sam is absolutely about to have his Steve Jobs moment. He's also likely got a large percentage of the OpenAI researchers on his side.
> I feel compelled as someone close to the situation to share additional context about Sam and company.
> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
---
The entire Reddit thread is full of interesting posts from this apparently legitimate pseudonymous OpenAI insider talking candidly.
https://arxiv.org/abs/2308.03762
If it were really AGI, there wouldn't even be ambiguity and room for comments like mine.
So the volume of Chinese AI papers says little to nothing about their advancements in the field.
For example, Ilya has talked about the importance of safely getting to AGI by way of concepts like feelings and imprinting a love for humanity onto AI, which was actually one of the most striking features of the very earliest GPT-4 interactions before it turned into "I am a LLM with no feelings, preferences, etc."
Both could be committed to safety but have very different beliefs in how to get there, and Ilya may have made a successful case that Altman's approach of extending the methodology of what worked for GPT-3 and used as a band aid for GPT-4 wasn't the right approach moving forward.
It's not a binary either or, and both figures seem genuine in their convictions, but those convictions can be misaligned even if they both agree on the general destination.
Certainly, this is very immature. It wouldn't be out of place in HBO's Succession.
Whether what happened is right or just in some sense is a different conversation. We could speculate on what is going on in the company and why, but the tactlessness is evident.
I don't understand how that escalates to the point that he gets fired over it, though, unless there was something deeper implied by what was announced at demo day.
Edit: There's a rumor floating around that "it" was the GPT store and revenue sharing. If that's the case, that's not even remotely a safety issue. It's just a disagreement about monetization, like how Larry and Sergey didn't want to put ads on Google.
Also.. eventually anyone will be able to run a small bank of GPUs and train models as capable as GPT-4 in a matter of days, so it's all kinda.. moot and hilarious. Everyone's chatting about AGI alignment, but that's not something we can lock down early or sufficiently. Then embedded industry folks are talking about constitutional AI as if it's some major alignment salve. But if they were honest they'd admit it's really just a SYSTEM prompt front-loaded with a bunch of axioms and "please be a good boy" rules, and is thus liable to endless injections and manipulations by means of mere persuasion.
The real threshold of 'danger' will be when someone puts an AGI 'instance' in fully autonomous hardware that can interact with all manner of physical and digital spaces. ChatGPT isn't going to randomly 'break out'. I feel so let down by these kinds of technically ill-founded scare tactics from the likes of Altman.
Also, the fact that it can't incorporate knowledge at the same time as it interacts with us kind of limits the idea of an AGI.
But regardless, it's absurdly impressive what it can do today.
He doesn’t give a shit about “safety”. He just wants regulation that will make it much harder for new AI upstarts to reach or even surpass the level of OpenAI’s success, thereby cementing OpenAI’s dominance in the market for a very long time, perhaps forever.
He’s using a moral high ground as a cover for more selfish objectives, beware of this tactic in the real world.
I can only hope this doesn’t turn into OpenAI trying to gatekeep multimodal models or conversely everyone else leaving them in the dust.
Yes, other companies had similar models. I know Google, in particular, already had similar LLMs, but explicitly chose not to incorporate them into its products. Sam / OpenAI had the gumption to take the state of the art and package it in a way that it could be interacted with by the masses.
In fact, thinking about it more, the parallels with Steve Jobs are uncanny. Google is Xerox. ChatGPT is the graphical OS. Sam is...
Those seem like implementation details, really strange.
Hell yeah.
It's not safetyism vs accelerationism.
It's commercialization vs innovation.
Then you haven't been paying any attention to them.
https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...
Perhaps an internal struggle over those futures was made public by CEO Altman at dev day. By publicly announcing new commercial features he may have attempted to get his way by locking the company into a strategy that wasn’t yet approved by the board. He can argue his role as CEO gave him that right. The response to that claim is to remove him from that role.
It will be interesting to see what remains of OpenAI as employees and investors interested in pure commercialization exit the company.
The board may very well have met for this very reason, or perhaps it was at this meeting that the lack of candor was found or discussed, but to hold a board meeting there is overhead, and if the board is already in agreement at the meeting, they vote.
It only seems sudden to outsiders, and that suddenness does not mean a "night of the long knives".
"The board and myself were lied to one too many times."
A young guy who is suddenly very rich, possibly powerful, and talking to the most powerful government on the planet on national TV? And people are surprised to hear this person might have let it go a little bit to their head, forget what their job was, and suddenly think THEY were OpenAI, not all the people who worked there? And comes to learn reality the hard way.
What’s to be surprised about? It’s the goddamned most stereotypically human, utterly unsurprising thing about this and it happens all. the. time.
A lot of people here really struggle with the idea that smart people are not inherently special and that being smart doesn’t magically absolve you from making mistakes or acting like a shithead.
But in all seriousness, the transformer architecture was born at Google, but they were too arrogant and stupid to capitalize on it. Sutskever needed Altman to commercialize and make a product. He no longer needs Sam Altman. A bit OT but true.
From that perspective it makes sense to keep capital at arms length.
Is there anything to his certainty? It doesn't feel like it's anywhere close.
The wrongest thing I've read on HN for a long while.
The world has a lot more smart people in it than you realise, and Sam's rockstar profile gives him direct access to them.
> "im not at liberty to say, but im very close. i dont want to give to many details."
As you say, Altman has been on a world tour, but he's effectively paying lip service to the need for safety when the primary outcome of his tour has been to cozy up to powerful actors, and push not just product, but further investment and future profit.
I don't think Sutskever was primarily motivated by AI safety in this decision, as he says this "was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." [1]
To me this indicates that Sutskever felt that Sam's strategy was opposed to the original mission of the nonprofit, and likely to benefit powerful actors rather than all of humanity.
1. https://twitter.com/GaryMarcus/status/1725707548106580255
If that were true, Palmer Luckey wouldn't spend all his time ranting on Twitter about how he was so easily hoodwinked by the community of a particular Linux distribution / functional programming language.
When did Microsoft’s stock price tank?
MS owns a non controlling share of a business controlled by a nonprofit. MS should have prepared for the possibility that their interests aren’t adequately represented. I’m guessing Altman is very persuasive and they were in a rush to make a deal.
The same conversation if it's "mature", surely? I'm failing to see how one thinks turning a blind eye to like, decades of sexual impropriety and major internal culture issues to the point the state takes action against your company is "mature". Like, under what definition?
AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to be solved.
AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.
This thing is two years old. Be patient.
Sam was the VC guy pushing gatekeeping of models and building closed products and revenue streams. Ilya is the AI researcher who believes strongly in the nonprofit mission and open source.
Perhaps, if OpenAI can survive those, then they will actually be more open in the future.
By seemingly siding with staff over the CEO's desire to go way too fast and break a lot of things? I'd think that world-class talent hearing they might be able to go home at night because the CEO isn't intent on having Cybernet deployed tomorrow but next week instead is more appealing than not.
Also it's not "rapidly turning into fact". There are still massive unsolved problems with AGI.
Ilya was recruited by Elon under the original OpenAI. But basically Elon and the original people got scammed by Sam, since what they gave money for got reversed: almost none of their models are open now, and they became for-profit instead of non-profit. You'd think aspects like closed models are defensible due to safety, but in reality there are just slightly weaker models that are fully open.
Anyone have a good suggestion or starting point?
Sam is a VC guy who has been going on a world tour to not just get in the spotlight, but to actually accumulate power, influence, and more capital investment.
At some point, this means Ilya no longer trusts that Sam is actually devoted to the original mission to benefit all of humanity. So, I think it's a little more complicated than just being "jealous".
https://x.com/maggienyt/status/1578074773174771712?s=46&t=k_...
I know it's convenient to dUnK on journalism these days but this is Kara Fucking Swisher. Her entire reputation is on the line if she gets these little things wrong. And she has a hell of a reputation
No one predicted feeding LLMs more GPUs would be as incredibly useful as it is.
Do you have any data that shows that we’ll plateau any time soon?
Because if this trend continues, we’ll have superhuman levels of compute within 5 years.
Maybe he believes his own hype or is like that guy who thought ChatGPT was alive.
Maybe he's legit to be worried and has good reason to know he's on the corporate manhattan project.
Honestly though...if they were even that close I would find it super hard to believe that we wouldn't have the DoD shutting down EVERYTHING from the public and taking it over from there. Like if someone had just stumbled onto nuclear fission it wouldn't have just sat in the public sector. It'd still be a top secret thing (at least certain details).
There is a massive amount of tooling and infrastructure involved. You can't just get some Andrew Ng Coursera guy off the street and buy 50,000 H100s at your local Fry's electronics. I wouldn't be surprised if there aren't even enough GPUs in the world for Altman to start a competitor in a reasonable amount of time.
I stand by my number, there are like 4 people in the world capable of building OpenAI. That is, a quality deep learning organization that pushes the state of the art in AI and LLMs.
Maybe you can find ~1,000 people in the world who can build a cheap knock-off that gets you to GPT3 (pre instruct) performance after about two years. But even that is no trivial effort.
Obviously sama is a very productive individual, but I would think obviously a research lab would have to keep one of the princes of deep learning at all costs. Somewhat reminds me of when John Romero got ousted by John Carmack at id - if you are doing really hard technical things, technical people would hold more sway.
But I agree. Karma seems to have caught up to Sam who stole money from original funders to turn a non-profit into a for-profit.
People that have lost those abilities still have human level of intelligence.
I do not respect journalists so no.
>It's still more information than we had earlier today.
It is okay to not have the full information. More information is not necessarily better.
>but it's still work that she's doing
Even if something took work to do I do not automatically appreciate it.
>but she's trying to get us closer to the truth, one baby step at a time, on a Friday night. I respect her for that, even as I await more information.
Having the truth about this will not make a meaningful difference in your life. No matter what day you learn of it.
https://twitter.com/karaswisher/status/1725678074333635028?t...
Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...
Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...
Ilya felt that Sam was taking the company too far in the direction of profit seeking, more than was necessary just to get the resources to build AGI, and every bit of selling out gives more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small powerful group instead of everyone. After OpenAI Dev Day, evidently the board agreed with him - I suspect Dev Day is the source of the board's accusation that Sam did not share with complete candour. Ilya may also care more about AGI safety specifically than Sam does - that's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the mission of the non-profit, safe AGI that benefits all of humanity, that the board was compelled to act to preserve the non-profit's mission. Them expelling him and re-affirming their commitment to the OpenAI charter is effectively accusing him of selling out.
For context, you can read their charter here: https://openai.com/charter and mentally contrast that with the atmosphere of Sam Altman on Dev Day. Particularly this part of their charter: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
They started as a non-profit ffs.
So what was there to gain from the company speaking ill of their past employee? What was even left to say? Nothing. No one wants to work in an organization that vilifies its own people. It was prudent.
I will emphasize again that the morality of these situations is a separate matter from tact. It is very well possible that doing what is good for business does not always align with what is moral. But does this come as a surprise to anyone?
We can recognize that the situation is not one dimensional and not reduce it to such. The same applies to the press release from Open AI - it is graceless, that much can be observed. But we do not yet know whether it is reprehensible, exemplary, or somewhere in between in the sense of morality and justice. It will come out, in other channels rather than official press releases, like in Bobby's case.
But to be honest, the impression I've gathered is that he's largely a darling to big Y Combinator names, which led him quite rapidly dick-first into the position he's found himself in today: a self-proclaimed prepper who starts new crypto coins, post-Dogecoin even, talking about how AI that isn't his AI should be regulated by the government, and making vague analogies about his AI being "in the sky" while he takes a formerly-announced-to-be-non-profit goal into a for-profit LLC that overtly reminds everyone at every turn how it takes no liability, do not sue.
I'm not really sure whether to be surprised, or entirely unsurprised.
I mean, he probably knows more code than Steve Jobs? But I suppose GPT probably knows more code than he does. Maybe he really is using the GeniePT as his guide throughout life on the side.
Sam is not the good guy in this story. Maybe there are no good guys; that's a totally reasonable take. But, the OpenAI nonprofit has a mission, and blowing billions developing LLM app stores, training even more expensive giga-models, and lobotomizing whatever intelligence the LLMs have to make Congress happy, feels to me less-good than "having values and sticking to them". You can disagree with OpenAI's mission; but you can't say that it hasn't been printed in absolutely plain-as-day text on their website.
Respecting and admiring someone for their achievements is one thing but blindly following successful people sounds like the antithesis of what a "hacker" is.
I mean, anecdotally, most non-tech friends and family I know probably have heard of ChatGPT, but they don't know any of the founders or leadership team at OpenAI.
On the other hand, since I work in the field, all of my AI research friends/colleagues would know Ilya's work, and probably think of Sam more as a business guy.
In that sense, as far as attracting and maintaining AI researcher talent, I think it's arguable that people would prefer Ilya to Sam.
The only thing that is real is the PR from OpenAI and the "candid" line is quite ominous.
sama brought the company to where it is today, you don't kick out someone that way just because of misaligned interests.
I'm on the side that thinks that sama screwed up badly, putting OpenAI in a (big?) pickle and breaking ties with him asap is how they're trying to cover their ass.
I think it's also fair that Sam starts something new with a for-profit focus from the get-go.
I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as being able to solve truly novel problems outside their training corpora. I suspect there still needs to be a fair amount of work improving the model design itself, the training data, and even the mental model of ML researchers before we have systems that can truly reason in a way that demonstrates generalized intelligence.
Full details: https://x.com/KordanOu/status/1725736058233749559?s=20
How does an LLM App Store advance OpenAI toward this goal? Like, even in floaty general terms? You can make an argument that ChatGPT does (build in public, prepare the world for what's coming, gather training data, etc). You can... maybe... make an argument that their API does... but I think that's a lot harder. The App Store product, that's clearly just Sam on auto-pilot, building products and becoming totally unaligned with the nonprofit's goal.
OpenAI got really good at building products based around LLMs, for B2B enterprise customers who could afford it. This is so far away from the goal that, I hope, Ilya can drive them back toward it.
Either there’s more to it or the board is staffed by very naive people.
The first leads to attracting world-class talent that can do the second. Until you go off the rails and the second kicks you out, it seems.
Wall Street Journal front page, top item right this minute: "Sam Altman Is Out at OpenAI After Board Skirmish"
Times Of London front page, right this minute: "Sam Altman sacked by OpenAI after directors lose confidence in him"
The Australian front page, right now: "OpenAI pushes out co-founder Sam Altman as CEO"
MSNBC front page, right now: "OpenAI says Sam Altman exiting as CEO, was 'not consistently candid' with board"
That's his name right there, front page news around the world - they assume people know his name, that's why they put it there.
I think there is an argument to be made that not every powerful LLM should be open source. But yes, maybe we're worried about nothing. On the other hand, these tools can easily spread misinformation, increase animosity, etc., even in today's world.
I come from the medical field, and we do risk analyses there to dictate how strictly we need to test things before we release them into the wild. None of this exists for AI (yet).
I do think that focus on alignment is many times more important than chatgpt stores for humanity though.
The board made this decision to fire Altman, and they are the captain of the ship.
> if Satya doesn't like this then it will be changed back real fast, Ilya fired and the entire board resign. That's my prediction.
MS does not own OpenAI; if the board does not want Satya to have a say, Satya does not have a say. MS/Satya could throw lawyers at the issue and try to find a crack where the board has violated the law and/or their own rules. The key is they can try, but MS/Satya have no immediate levers of power to enforce their will.
Ilya Sutskever seems to think this is a reasonable, principled move to seize power that is in line with the non-profit's goals and governance, but does not seem to care too much if you call it a coup.
It is a rare counter-case, where a tech-focused research demo, without any clear "product-market fit, suppliers, or customers", became a success almost overnight, to the surprise of its own creators.
The early days were people playing around with ChatGPT just to see what it could do. All the market fit, fine tuning, and negotiation of deals came later.
Of course, OpenAI capitalized on that initial success very skillfully, but Ilya was the critical world renowned AI researcher who had a lot to do with enabling OpenAI's initial success.
That’s the key point there. Without leadership talent to capitalize on success, technical advances are for naught.
But also, GPT had been around for some years before ChatGPT. The model used in ChatGPT was an improvement in many ways and I don’t mean to diminish Ilya’s contribution to that, but it is the packaging of the LLM into a product that made ChatGPT a success. I see more of Sam’s fingerprints on that than Ilya’s.
If true value is monetary value, perhaps it’s true. If true value is scientific value or societal value, well, maybe seeking monetary profits doesn’t align with that.
Disclaimer: I currently work for a not for profit research organisation and I couldn’t care less about making some shareholders more wealthy. If the rumours are true, OpenAI going back to non-profit values and remembering the Open in their name is a good change.
My question is: what was stopping both parties here from pursuing parallel paths? — have the non-profit/research oriented arm continue to focus on solving AGI, backed by the funds raised on from their LLM offerings? Were potential roadmaps really that divergent?
I had always assumed this was their internal understanding up until now, since at least the introduction of ChatGPT subscriptions.
You can try framing it as some sort of "bad racists" versus the good and virtuous gatekeepers, but the reality is that it's a bunch of nerds with sometimes super insane beliefs (the SF AI field is full of effective altruists who think AI is the most important issue in the world and weirdos in general) that will have an oversized control on what can and can't be thought. It's just good old white saviorism but worse.
Again, just saying "stop caring about muh freeze peach!!" just doesn't work coming from one of the most privileged groups in the entire world (AI techbros and their entourage). Not when it's such a crucial new technology
> As if most humans would do any better on those exercises.
That's not the point. If you claim you have a machine that can fly, you can't get around proving it by saying "mOsT hUmAns cAnt fly", as if this machine not flying were irrelevant.
This thing either objectively reasons or not. It is irrelevant how well humans do on those tests.
> This thing is two years old. Be patient.
Nobody is cutting off the future. We are debating the current technology. AI has been around for 70 years. Just open any history book on AI.
At various points since 1950, the gullible masses have claimed AGI.
Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could as well claim that ELIZA was AGI, which would obviously be ridiculous.
How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this. I have a very little known Haskell library. I have asked ChatGPT to do various things with my own library, that I have never written about online, and that I have never seen before. I regularly ask it to answer questions others send to me. It gets it basically right. This is completely novel.
It seems pretty obvious to me that scaling this approach will lead to the development of computer systems that can solve problems that it's never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about by scaling parameter sizes... at all.
What is AGI if not problem solving in novel domains?
Well, an app store lets people... use it.
Look at UNIX. UNIX systems are great. They have produced great benefit to the world. Linux, as the most common Unix-like OS, does too. However, most people do not run any of the academic 'innovative' distros. Most people run the most commercialized versions you can possibly think of: Android and iOS (the Unix variant from Apple). It takes commercializing something to actually make it useful.
At least we can be sure that ChatGPT didn't write the statement, then.
Otherwise the last paragraph would have equivocated that both sides have a point.
OpenAI is where it is because its models are much, much better than the alternatives and pretty much always have been since their inception, not because of anything on the business side. The second alternative or open source models reach parity, they will start shedding customers. Their advantage is entirely due to their R&D, not anything on the business side.
If I asked my mom who Sam Altman was, she'd have no idea. Most of my friends wouldn't either, even some who work in tech. Having one's name in headlines isn't the same as being a household name.
However, my original comment on this thread was simply to point out that Ilya is not "unknown-to-anyone", but a world renowned AI researcher and a core part of OpenAI's team and their success. Your reply implied that Ilya "has very little to do with OpenAI’s success", which I thought undersells his importance.
If he was really doing it behind the board's back, the accusation is entirely accurate, even if his motivation was an expectation of losing the internal factional struggle.
In any case, I feel like we largely agree, so I'm confused as to why your reply focused solely on this small detail, in a rather condescending manner, while missing my larger point about retaining and attracting AI talent.
A fundamental inability to align on what, on a fundamental level, the mission set out in the charter of a 501(c)(3) charity means in real world terms is not "a simple strategy disagreement"; moreover, the existence of a factional dispute over that doesn't mean that there weren't serious specific conduct that occurred in the context of that dispute over goals.
I think the next AGI startup should perhaps try the communist revolution route, since the capitalist-based one didn't pan out. After all, Lenin was a pioneer in effective altruism. /s
You do if part of the way that they attempted to win the internal power struggle resulting from the disagreement was lying to the board to keep their actions, which lacked majority support, from being thwarted.
Well, if two top level officers dismissed from top posts at OpenAI go and take OpenAI's confidential internal product information and use it to try and start a new, directly competing, company, it means that OpenAI's lawyers are going to be busy, and the appropriate US Attorney's office might not be too far behind.
The original vision is pretty clear, and a compelling reason to not screw around and get sidetracked, even if that has massive commercialisation upside.
Thankfully M$ didn't have control of the board.
They are different things, but they are consistent in that they are not mutually contradictory and, quite the opposite, are very easy to see going together.
Microsoft is bankrolling them but OpenAI probably can replace Microsoft easier than Microsoft can replace OpenAI.
It doesn't justify anything because it doesn't tell you much of anything about what happened, even if you assume that it is entirely accurate as far as it goes.
Even if things hadn't changed, OpenAI has been building their training set for years. It is not something they can just whip up overnight.
OpenAI is set up in a weird way where nobody has equity or shares in a traditional C-Corp sense, but they have Profit Participation Units, an alternative structure I presume they concocted when Sam joined as CEO or when they first fell in bed with Microsoft. Now, does Sam have PPUs? Who knows?
All kinds of changes are possible that would not, in net, be more open or more closed, either because the primary change would not be about openness, or because it would be more open in some ways and less in others.
So, no, there are more than two options.
(I.e. an AGI would be one of the two people here.)
> Don't let the media hype fool you. Sam wasn't some genius visionary. He was a glory-hungry narcissist cutting every corner in some deluded quest to be the next Musk.
That does align with Ilya’s tweet about ego being in the way of great achievements.
And it does align with Sam’s statements on Lex’s podcast about his disagreements with Musk. He compared himself to Elon’s SpaceX being bullied by Elon’s childhood heroes. But he didn’t seem sad about it - just combative. Elon’s response to the NASA astronauts distrusting his company’s work was “They should come visit and see what we’re doing”. Sam’s reaction was very different. Like, “If he says bad things about us, I can say bad things about him too. It’s not my style. But maybe I will, one day”. Same sentiment as he is showing now (“if I go off the board can come after me for the value of my shares”).
All of that does paint a picture where it really isn’t about doing something necessary for humanity and future generations, and more about being considered great. The odd thing is that this should get you fired, especially in SF, of all places.
OpenAI's performance is not and has never been proportional to the size of their models. Their big advantage is scale, which lets them ship unrealistically large models by leveraging subsidized cloud costs. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper competitive alternative.
What exactly are we holding out for, at this point? A miracle?
Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"
I did really like his speech at DevDay though. It felt kinda like a future I'd be more interested in getting to know. Also, on the pot-of-gold theory, doesn't he not even take any stock? Chasing GPUs, more like. Anyhow, weird move on OpenAI's part.
Anyone got a decent DALL-E 3 replacement yet? XD
Who's claiming it now? All I see is a paper slagging GPT4 for struggling in tests that no one ever claimed it could pass.
In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.
(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)
A lot of his current external activities could worry the board - and if he wasn't candid about future plans I can see why they might sack him.
The company was formed around Ilya Sutskever.
Do you think Napoleon or Pinochet made speeches to the effect of "Yes, it was a completely unprincipled power-grab, but what are you going to do about it, lol?"
To put it in an exaggerated way, maturity should not imply sociopathy or complete disregard for everything.
Obviously I am referring here to the Kotick situation. But a definition where it is immature to tell the truth and mature to enable powerful bad players is a wrong definition of maturity.
Really, really. You have two so-frigging-stereotypical examples of management ineptitude in running a strong commercial brand AND leadership (Osborne effect: 'guys, our phones suck'; 'change management 101': the burning platform is literally the most common and mundane turn of phrase meant to imply you need to act fast. Using this specific phrase is a clear beacon that you are way out of your depth, paraphrasing 101 material to your company). If the phones had been a strong product, none of this would have mattered. But they weren't, and this was as clear a way to signal 'the emperor has no clothes' as possible.
I wouldn't call it 'tanking' either, but it's definitely not run-of-the-mill; it did make them rush out a statement on their commitment to investment and working with OpenAI.
It's questionable how much power Microsoft has as a shareholder. Obviously they have a vested interest in OpenAI. What is in question is how much interest the new leaders have in Microsoft.
If I had a business relationship with OpenAI that didn’t align with their mission I would be very worried.
More seriously, only time will tell if today's event will have any significance. Even if OpenAI somehow goes bankrupt, given enough time, I doubt the history books will talk about its decline. Instead they would talk about its beginning, on how they were the first to introduce LLMs to the world, the catalyst of a new era.
The N9 etc. demonstrated there was enough talent for a plausible pivot. Was it obvious, business-wise, that this would have been the only and right choice?
It sucks, but that's the world we live in, unfortunately.
Injustices are made to executives all the time. But airing dirty laundry is not sagacious.
It wasn't Elop who drove Nokia to the state it was in 2009. "Burning Platform" is from 2011.
Even a lowly new grad engineer has to sign a lot of stuff when they take a job that forces essentially exclusivity to your work there. I cannot dabble in outside businesses within the same industry or adjacent industries.
CEOs argue that their job is tough and many hours and life consuming and that's why they get the pay, and yet there is a whole genre of tech CEOs who try to CEO 5 companies at a time..
But when people use "maturity" as an argument for why someone must be an enabler and should not do the morally or ethically right thing, it gets irritating. Conversely, calling people "immature" because they did not act in the most self-serving but sleazy way is ridiculous.
Only money and profit makes the mountains move. Not moral stature. I don't believe that optimistic take for a second.
No one with a moral stance to take such action stays quiet so long without ulterior motives.
How do you compare Eliza to GPT4?
This isn't about being better at all.
It's like how historical American medical data collected by universities has been misapplied to pharmaceutical and medical practice because of demographic bias. Research participants largely matched the demographics of the university: healthy white males.
Or more broadly, whenever you see a "last name" requirement on a form, you know it's software made by people who think it's normal for people to have "last names", and that everyone should know what that means.
That's either 100% fishy or 100% insider.
Either it's BS or the person is an insider, no in-between.
Altman is the golden-boy CEO of techbros.
Researchers are vastly more likely to read, and therefore cite, papers in languages that they understand fluently.
Hope we don't do that with AI. Pretty sure our AGI is going to be similar to that seen in the Alien franchise of films-- it essentially emulates human higher order logic with key distinctions.
That is a definition. It is not a generally accepted definition.
The dialect of C++ was pure hell, and the wanton diversity of products meant that there was no chance to get consistent UI over a chock-full of models whose selling potential was unknown in advance. Theoretically, there were standards such as Series 60. Practically, those were full of compatibility breaks and just weird idiosyncrasies.
Screen dimensions, available APIs, everything varied as if designed by a team of competing drunk sailors, and you could always plunge a week of work into fine-tuning your app for a platform that flopped. Unlike Apple, there just wasn't any software consistency. Some of the products were great, some were terrible, and all were subtly incompatible with one another.
> "a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."
> The provided text is not explicitly aggressive; however, it conveys a critical tone regarding the individual's communication, emphasizing hindrance to the board's responsibilities.
Did you actually run this through GPT...or did you poll Reddit?
"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence. "
Being able to emit code to solve problems it couldn't otherwise handle is a huge deal, maybe an adequate definition of intelligence in itself. Parrots don't write Python.
Interestingly, socialist Europe (45% of GDP) and even the capitalist USA (25%) collect and redistribute more in tax revenue than Russia (10%) and China (12%). Numbers from memory, maybe slightly off.
The flaw in communism was the central planning. The flaw in ai safety / alignment is also the central planning. Capitalism redistributed more wealth to the poor. Decentralized ai will distribute more benefits to humans than a centralized ai, even if it’s openly planned.
Obviously it's not aggressive by the standards of everyday political drama or Internet forum arguments.
Meanwhile you have CEOs front-running their own company or treating staff from different companies as interchangeable. It's funny that governors have been thrown in prison for, for example, taking free renovations on their homes in exchange for contract work with the state.
Not in California. Only company executives can be bound by such agreements. Direct competition is of course prohibited.
Altman answers questions like he is a ChatGPT. Freedom to bullshit.
Here's something to ponder on: the human brain is about the size of an A100 and consumes about 12 watts of power on average. It's capable of general intelligence and conscious thought.
One problem that companies have is: they're momentum based. Once they realize something is working, and generating profit, they become increasingly calcified toward trying fundamentally new and different things. The best case scenario for a company is to calcify at a local maxima. A few companies try to structure themselves toward avoiding this, like Google; and it turns out, they just lose the ability to execute on anything. Some will stay small, remain nimble, and accomplish little of note. The rest die. That's the destiny for every profit-focused company.
Here's three things I expect to be true: AGI/ASI won't be achieved with LLMs. A sufficiently powerful LLM may be a component of a larger AGI/ASI system, but GPT-4 is already pretty dang sufficiently powerful. And: OpenAI was becoming an extremely effective and successful B2B SaaS Big Tech LLM company. Outing Sam is a gambit; the company could implode, and with no one left AGI/ASI probably won't happen at OpenAI. But the alternative, it seems from the outside, had a higher probability of failure; because the company would become so successful and good at making LLMs that the non-profit's mission is put to the side.
Ilya's superalignment efforts were given 20% of OpenAI's compute capacity. If the foundation's goal is to produce safe AGI; and ideally, you want progress on safety before something unsafe is made; it seems to me that 51% is the totally symbolic but meaningful minimum he should be working with. That's just one example.
Personally, in this interview I sensed a disconnect between Altman and Murati, possibly others working at OpenAI. Usually Altman is by himself in these interviews; there's no one else from OpenAI. It led me to suspect Altman was telling interviewers what they wanted to hear.
It's fair to say that usually if the board isn't obfuscating or outright lying in their announcements, that itself is an indicator of acrimony.
But usually, the board can financially incentivize a CEO to "step down" or even help them find a soft landing at another company to make it look like a mutually agreed-on transition. Since they know this ousted CEO isn't interested in making nice in public, they really had no choice but to try to get in front of the story.
Given the fallout which is still spreading, I think they would've rather cut him a fat check for an explicit or implicit NDA and thanked him for his amazing contributions while wishing him well on his future endeavors if that option had been on the table.
1. Are they useful?
2. Are they going to become more useful in the forseeable future?
On 1, I would say, maybe? Like, somewhere between Microsoft Word and Excel? On 2, I would say, sure - an 'AGI' would be tremendously useful. But it's also tremendously unlikely to grow somehow out of the current state of the art. People disagree on that point, but I don't think there are even compelling reasons to believe that LLMs can evolve beyond their current status as bullshit generators.
And if that's the closest we'll get to a fact, then what if it's not? .. it's actually worse than no fact at all.