The next AI winter may have just begun...
I’m never going back to Noe Valley for less than $500,000/yr and a netjets membership
The game is over boys. The only question is how to make these types of companies pay for the crimes committed.
time to stop playing with existential fire. humans suffice. every flaw you see in humans will be magnified X times by an intelligence X times stronger than humans. whether it is autonomous or human led.
Is it some cute attempt at saying “an AI didn’t write this”?
Wasn't he essentially demoted before quitting though? I guess this means he wasn't even aware he was demoted.
Really wonder what this is all about.
Edit: My bad for not expanding. No one knows the identity of this "Jimmy Apples", but this is the latest in a series of correct leaks he's made about OpenAI over the past few months. Suffice to say he's in the know somehow.
I wouldn't ... but you do you!
OTOH, I don't think it would surprise anyone that he would quit, and that may well have been the intent.
I was wondering the same thing. Always, on purpose, avoiding starting sentences with capital letters. Both this guy and Sam Altman. What ... why ... ?
FWIW, radio silence from Ilya on twitter https://twitter.com/ilyasut
Clearly Microsoft staked its whole product roadmap on 4 random people with no financial skin in the game.
but over time I've become accustomed to capitalizing a bit more often and it's become sort of random. I actually have auto-capitalization turned off on my phone
Honestly, this is the big problem with Big Non Profit (tm). The entire structure of non-profits is really meant for ladies clubs, Rotary groups, and your church down the street, not OpenAI and IKEA.
OpenAI's statement implies he was aware of the demotion... but his statement seems to imply he wasn't.
I guess the most likely situation is that they put out the press release, told him (or vice versa) and it took him a bit to decide to quit.
It's usually mandatory for board members to be recused from decisions about themselves. That there is an overwhelming potential for conflict between personal interest and the firm's is pretty clear in that case.
You actually think that for-profit corporate boards are significantly different, especially in the startup/early IPO phase?
https://twitter.com/karaswisher/status/1725682088639119857
nothing to do with dishonesty. That’s just the official reason.
———-
I haven't heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted a month and a half ago
https://twitter.com/ilyasut/status/1707752576077176907
The press release about candid talk with the board… It's probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn't necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye and it reached fever pitch with GPT-4 Turbo.
Ultimately, it's been surmised that Sutskever had all the leverage because of his technical ability. Sam being the consummate businessperson, they probably got into some final disagreement and Sutskever reached his tipping point and decided to use said leverage.
I've been in tech too long and have seen this play out. Don't piss off an irreplaceable engineer or they'll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
> thomas.
And there is indeed no law against not pressing shift.
Random twitter guy has thoughts on $Current_Event and a witty quip about the “vibes”. It’s crucial we post this without context to the discussion
Buy 100 prompts now with AIBucks! Loot box prompts! Get 10 AI ultramax level prompts with your purchase of 5 new PromptSkin themes to customize your AI buddy! Pre-order the NEW UltraThink AI and get an exclusive UltraPrompt skin and 25 AIBucks!
Because two executives were ousted from a company? That's dramatic.
Random? It's a Twitter account that's leaked a few things about OpenAI for months now.
That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.
EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.
Probably a similar situation.
Since he was removed as Chairman at the same time as Altman was as CEO, presumably he was excluded from that part of the meeting (which may have been the whole meeting) for the same reason as Altman would have been.
Or Altman will start a competitor.
Edit: Maybe this is a reasonable explanation: >>38312868 . The only other thing not considered is that Microsoft really enjoys having its brand on things.
Man, I'm drunk on conspiracy theories tonight. Between a huge layoff and the OpenAI fiasco, please allow me to indulge myself...
Not saying there is proof, but we just found out Ukraine blew up the Russian pipeline, so it seems weird to squash debate at the level of 'that's too crazy to ever happen'. Way crazier things have happened/are constantly happening.
Maybe my experience with corporate communications is different, but all it implies to me is that he was not removed as President and was being permitted to stay on under the new CEO.
I wouldn't move back to San Francisco anyway, and hybrid would be a midweek affair
What the hell were they thinking? Just because you are a non-profit doesn't mean you should imitate other non-profits and put crazies on the board.
https://www.economist.com/business/2006/05/11/flat-pack-acco...
They achieved AGI internally, but didn't want OpenAI to have it. All the important people will move to another company, following Sam, and OpenAI is left with nothing more than a rotting GPT.
They planned all this from the start, which is why Sam didn't care about equity or long-term finances. They spent all the money in this one-shot gamble to achieve AGI, which can be reimplemented at another company. Legally it's not IP theft, because it's just code which can be memorized and rewritten.
Sam got himself fired intentionally, which gives him and his followers a plausible cover story for moving to another company and continuing the work there. I'm expecting that all researchers from OpenAI will follow Sam.
Ignore it and focus on your life. The grapevine in your neighborhood about who is selling their car or their house is not as exciting, but it will net you way more money than this thing happening thousands of miles away from you. And most importantly, without having to fuck with leverage.
When things cool down in a few months we will learn Altman and Brockman were some of the few sane people on the board.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
It's totally fair (your interpretation) to think that. He was removed as chairman, though, which IMO is a demotion. I think it's disingenuous on the part of OpenAI _unless_ Greg originally said he was OK with the new terms. If a company says "Person X will remain as President and report to the CEO", you would think they had worked it out with person X _before_ announcing it.
Like, who is Mira Murati? We only know that she came from Albania (one of the poorest countries) and somehow got into some pretty good private schools, and then into pretty good companies. Who are her parents? What kind of connections does she pull?
But for clarity's sake, I'm doing neither personally, because I'm not a day trader and look more long term.
Betting on the World Cup Final vs. betting on a local match where you know a team has been clubbing and drinking until late into the night at your bar.
Local advantage.
Seriously, I’m asking. Like… if you were an engineer that worked on UNIX System V at AT&T/Bell Labs and contributed code to the BSDs from memory alone, would you really be liable?
just base logic.
I am not dismissing the possibility, far from it. It sounds very plausible. But are there any credible reports to back it up?
Anyway if I was in business of destabilizing governments around the world I would not bother dealing with board meetings. But maybe that's just me.
So, if there's 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?
Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.
Both seem like they were horribly rushed, and with no autocomplete?
A mere disagreement over direction would have been handled with "Sam is retiring in 3 months to spend more time with his family; we thank him for all his work", and it would surely have been decided months in advance of being announced.
Not going to say it's impossible, but she has done so well yet left so few footprints on the Internet.
Again just my personal early night conspiracy drink. Don't take it seriously.
Unless Brockman was involved, though, firing Brockman doesn't really make sense.
There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.
Here's an idea for the Hacker News crowd: make a service that acts as a proxy for phone number validation. The user validates their phone number once with that app, and any other third-party service can then ask the app for a security code that confirms phone number ownership. We do something similar by offloading phone number confirmation to a Telegram bot. This proxy service could also optionally take over management of "bad" phone numbers used by spammers and add other protections.
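For what it's worth, here's a minimal sketch of what that proxy could look like as a tiny web service. Everything here is hypothetical (the endpoint names, the in-memory stores, the Flask framing) and just illustrates the flow: the proxy pushes a one-time code to the user's already-verified app, and the third-party service confirms the code without ever seeing the phone number.

    # Hypothetical sketch of the phone-verification proxy idea above.
    # Endpoint names and storage are made up purely for illustration.
    import secrets, time
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    verified_numbers = {}   # user_id -> phone number, verified once (e.g. via SMS or a Telegram bot)
    pending_codes = {}      # (user_id, service_id) -> (code, expiry timestamp)

    @app.route("/request-code", methods=["POST"])
    def request_code():
        # A third-party service asks the proxy to confirm this user controls a phone number.
        user_id = request.json["user_id"]
        service_id = request.json["service_id"]
        if user_id not in verified_numbers:
            return jsonify({"error": "number not verified"}), 404
        code = secrets.token_hex(4)
        pending_codes[(user_id, service_id)] = (code, time.time() + 300)
        # The proxy pushes the code to the user's app; the user hands it to the service.
        return jsonify({"status": "code sent to user"})

    @app.route("/confirm-code", methods=["POST"])
    def confirm_code():
        # The service submits the code it got from the user; the proxy confirms ownership
        # without ever revealing the actual phone number.
        user_id = request.json["user_id"]
        service_id = request.json["service_id"]
        entry = pending_codes.get((user_id, service_id))
        ok = entry is not None and entry[0] == request.json["code"] and time.time() < entry[1]
        return jsonify({"verified": bool(ok)})

The "bad number" management would just be an extra check against a shared denylist before issuing a code.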
Instead, he knew enough to make his call immediately; he knew what he was going to do.
So unless any of the necessary bits are patented, I highly doubt an argument against them starting a new company will hold in the courts.
Sometimes contracts can include a cool-down period before a person can seek employment in the same industry/niche, but I don't think that will apply in Sam's case, as he was a founder.
Also - the wanting to get himself fired intentionally argument doesn’t have any substance. What will he gain from that? If anything, him leaving on his own terms sounds like a much stronger argument. I don’t buy the getting-fired-and-having-no-choice-but-to-start-an-AGI-company argument.
An interesting twist would be if he joins Elon in his pursuit. Pure speculation, sharing it just for amusement. I don’t think they’ll ever work together. Can’t have two people calling the shots at the top. Leaves employees confused and rarely ever works. Probably not very good for their own mental health either.
It's just a fun theory, which I think is plausible. It's based on my personal view of how Sam Altman operates, i.e. very smart, very calculating, makes big gambles for the "greater purpose".
No, I'm confident that it has nothing to do with that. It must have to do with the current business. Maybe there's a financial conflict of interest. Maybe he's been hiding the severity of losses from the board. Maybe something else. But you don't fire a CEO because you discover that he committed a crime at age 13.
The US has never spent less on its military than it does now, and the military industrial complex has never been less important, because the rest of the economy has grown so much larger. So it's funny to see people still using Cold War-era conspiracy theories from when it actually mattered.
Sam claims LLMs aren't sufficient for AGI (rightfully so).
Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.
In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.
There is nothing business-y about this. As a non-profit OpenAI can do whatever they want.
It's very difficult to enforce anything like this in California. They can pay him to not work, but can't just require it.
OpenAI isn't a single person, so decisions like firing the CEO have to be made somehow. I'm wondering about how that framework actually works.
I'm 100% ok with this. I have the choice of using a Visa/MC gift card I bought with cash. Same as I can do with Netflix. Better than linking a unique ID I use everywhere else.
I think what bugs me the most is that there's no direct need for the phone. It's reasonable to give my phone number to a doctor's office because I need to hear from them over the phone.
I mean, I still wonder though if they really only need 3 ppl fully on board to effectively take the entire company. Vote #1, oust Sam, 3/5 vote YES. Sam is out, now the vote is "Demote Greg", 3/4 vote YES, Greg is demoted and quits. Now, there could be one "dissenter" and it would be easy to vote them out too. Surely there's some protection against that?
That way, legitimate machine learning companies can thrive and research for ai can continue without the nuisance.
Incidentally forums are filling up with horror stories from people working at or interviewing with openai, in spite of their paid trolls spamming forums left and right and reporting reddit posts to suppress people. Perhaps the bubble has burst.
Openai has done more harm to ai than any other company.
The cat’s out of the bag.
The announcement: https://openai.com/blog/openai-announces-leadership-transiti...
The discussion: >>38309611
Are you trolling, the letter is short and all lowercase lol
OpenAI, we need clarity on your new direction.
1) a confirmation of the dates of employment
2) a confirmation of the role/title during employment
3) whether or not they would rehire that person
... and that's it. The last one is a legally-sound way of saying that their time at the company left something to be desired, up to and including the point of them being terminated. It doesn't give them exposure under defamation because it's completely true, as the company is fully in-charge of that decision and can thus set the reality surrounding it.
That's for a regular employee who is having their information confirmed by some hiring manager in a phone or email conversation. This is a press release for a company connected to several very high-profile corporations in a very well-connected business community. Arguably it's the biggest tech exec news of the year. If there's ulterior or additional motive as you suggest, there's a possibility Sam goes and hires the biggest son-of-a-bitch attorney in California to convince a jury that the ulterior or additional motive was _the only_ motive, and that calling Sam a liar in a press release was defamation. As a result, OpenAI/the foundation, would probably be paying him _at least_ several million dollars (probably a lot more) for making him hard to hire on at other companies.
Either he simply lied to the board and that's it, or OpenAI's counsel didn't do their job and put their foot down over the language used in the press release.
It's not like you can just move to another AI company if you don't like their terms.
Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.
Except for a clumsy fast press release, this doesn’t really have to end badly for anyone.
Even though I have been an OpenAI fan ever since I used their earliest public APIs, I am also very happy that there is such a rich ecosystem: other commercial players like Anthropic, open model support from Meta and Hugging Face, and the increasingly wonderful small models like Mistral that can easily be run at home.
Huh? Plenty of startups in the stage being referenced are still majority owned by the founders.
I am genuinely flabbergasted as to how she ended up on the board. How does this happen?
I can't even find anything about fellow board member Tasha McCauley...
We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.
Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.
Edit: Ok seems to be a joke account. I guess I’m getting old.
Many people in AI safety are young. She has more professional experience than many leaders in the field.
Likely it's already brought them more than the $10B they paid.
First thing tomorrow I'm kicking off another round of searching for alternatives.
Even with very public cases of company leaders who did horrible things (much worse than lying), the companies that fired them said nothing officially. The person just "resigned". There's just no reason open up even the faintest possibility of an expensive lawsuit, even if they believe they can win.
So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.
They could have waited another 30 mins for the markets to close before making the move. This isn’t the culmination of a long-standing problem.
That is not how IP law works. Even writing new code based on the IP developed at OpenAI would be IP theft.
None of this really makes sense when you consider that Ilya Sutskever, arguably the single most important person at OpenAI, appears to have been a part of removing Sam.
Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill along with serious legal liability if they don't do so adequately and in good faith.
Also, I worked in startups and my ex-gf in various nonprofits, and the amount of drama she saw was way higher than in the commercial world
I wouldn't put money on the last one, though.
If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.
I'm sure this was part of the disagreement, as Sam is "capitalism incarnate" while Ilya gives off a much different feeling.
Think back in history. For example, consider the absolutely massive issues at Uber that had to become public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical or social wrongdoing for the board to rush it like this and put a company worth tens of billions of dollars at risk.
No shifty business?
LOWERCASEWRITING is one too long for a M-S but could work on a Sunday I guess.
Did Microsoft have any other route to AI relevance?
How the hell can people be so confident about this? You describe two smart people reasonably disagreeing about a complicated topic
You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.
That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.
That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.
Ha! Tell me you don't know about markets without telling me! Stock can drop after hours too.
But sooner or later someone would have done it.
Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.
Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.
Therefore something more than an LLM is needed to reach AGI, what that is, we don't yet know!
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
[source: https://twitter.com/karaswisher/status/1725702501435941294]
Sounds like you exactly predicted it.
It is:
> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
https://openai.com/blog/openai-announces-leadership-transiti...
The portion you quoted says he will remain at the company. This post is about him quitting, and no longer remaining with the company.
Of course the press release is under scrutiny, we are all wondering What Really Happened. But careless statements create significant legal (and thus financial) risk for a big corporate entity, and board members have fiduciary responsibilities, which is why 99.99% of corporate communications are bland in tone, whatever human drama may be taking place in conference rooms.
I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.
I think it's much more likely that this was an ideological disagreement about safety in general rather than a given breakthrough or technology in specific, and Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.
You also wouldn't try to avoid a lawsuit if you believed (hypothetically) it was impossible to avoid a lawsuit.
Technically you can also pay for a burner service to get temporary phone numbers to receive SMSs for registering to services. Can’t attest if any of them are good or trustworthy. I recently looked into it but everything I found was a subscription and/or shady looking.
The military would need to be literally breeding geniuses and cultivating a secret scientific ecosystem to be ahead on AI right now.
Why would they issue a statement saying that he was going to stay on without some form of assurance from him?
I mean, you're writing a release stating that you're firing your CEO and accusing him of a lack of candor. Not exactly the best news to give. You're chasing that with "oh, by the way, the chairman of the board is stepping down too", so the news is going from bad to worse. The last thing you want is to claim that said chairman of the board is staying as an employee, only to have him quit hours later. I find it hard to believe that they'd make a mistake as dumb as announcing Greg was staying without some sort of assurance from him, knowing that Greg was Sam's ally.
Dang! He left @elonmusk on read. Now that's some ego at play.
Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.
This page provides confirmation that your request is processed: https://privacy.openai.com/policies
They're probably firing up the eyeball scanning machines on this news.
In reality, it's actually the 1000s of actual engineers that deserve most of the credit, and yet are never mentioned. Society never learns about the one engineer (or team) that solves a problem that others have been stuck on for some time. The aggregate contributions of such innovators are a far more significant driving force behind progress.
Why do we never hear of the many? It's probably because it's just easier to focus on a single personality who can be marketed as an "unconventional genius" or some such nonsense.
None of that makes sense as to why the board would randomly fire him. I don't think it's this.
There's 8 billion people on the planet nowadays, of those, about 7.9 billion would not lift a finger if there's no material benefit to them. Hence why it's strange.
https://time.com/collection/time100-ai/6309033/greg-brockman...
They're all getting paid one way or another.
Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.
Whether you can bolt something small to these architectures for persistence and do some small things and get AGI is an open question, but what we have is clearly insufficient by design.
I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.
This feels like real-life Succession playing out. Every board member is trying to figure out how to optimize their position.
I mean, can't you say the same for people? We are easily confused and manipulated, for the most part.
I can reason about something and then combine it with something I reasoned about at a different time.
I can learn new tasks.
I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.
The examples we have now of GPT LLM cannot do these things. Doing those things may be a small change, or may not be tractable for these architectures to do at all... but it's probably in-between: hard but can be "tacked on."
>I'm not patronizing you
(A)ssuming (G)ood (F)aith, referring to someone online by their name, even in an edge case where their username is their name, is considered patronizing as it is difficult to convey a tone via text medium that isn't perceived as a mockery/veiled threat.
This may be a US-internet thing; analogous to how getting within striking distance with a raised voice can be a capital offense in the US, while being completely normal in some parts of the Middle East.
How is the language “we are going our separate ways” compared with “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI” going to have a material difference in the outcome of the action of him getting fired?
How do the complainants show a judge and jury that they were materially harmed by the choice of language above?
The board is in absolute control in a not-for-profit. The loophole is that some have bylaws that make ad-hoc board meetings and management change votes very difficult to call for non-operating board members, and it can take months to get a motion to fire the CEO up for a vote.
In some not-for-profits, the board itself even recruits and seats new board members. Some not-for-profits operate as membership associations, where the organization's membership elects the board members to terms.
On the few not-for-profits where I was a board member, we started every meeting with a motion to retain the Executive Director (CEO). If the vote failed, so did the Executive Director.
When they are fired by the board, it sends a very different signal.
I most probably am anthropomorphizing completely wrong. But point is humans may not be any more creative than an LLM, just that we have better computation and inputs. Maybe creativity is akin to LLMs hallucinations.
And this time around he would have the sympathies from the crowd.
Regardless, this is very detrimental to the OpenAI brand. Ilya might be the genius behind ChatGPT, but he couldn't have done it by himself.
The war between OpenAI and Sam AI is just the beginning
The Illuminati are a front for the Jews™ (not to be confused with Jewish people).
The Jews™ are a front for the Catholic Church.
The Catholic church is a front for the Lizard People.
The Lizard People are a front for the Government.
Nobody is in control. The conspiracy is circular. There is no conspiracy. Everything in this post is false. Only an idiot cannot place his absolute certainty in paradoxes.
Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).
About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.
OpenAI's board's press release could very easily be construed as "Sam Altman is not trustworthy as a CEO", which could lead to his reputation being sullied among other possible employers. He could argue that the board defamed his reputation and kept him from what was otherwise a very promising career in an unfathomably lucrative field.
This will end up being a blip that corrects once it’s actually digested.
Although, the way this story is unfolding, it’s going to be hilarious if it ends up that the OpenAI board members had taken recent short positions in MSFT.
That's the reason nobody does stock options anymore though, it's all RSUs now.
> with enough time and copies of itself.
Alright, but that's not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.
If indeed a similar disagreement happened in OpenAI but this time Hinton (Ilya) came on top- it’s a reason to celebrate.
This has to be a joke, right?
I would also say that I believe that long-term goal oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but there's a need to map self-state to these stories to learn anything about what we should do next from them.
I feel like LLMs are much smarter than we are in thinking "per symbol", but we have facilities for iteration and metacognition and saving state that let us have an advantage. I think that we need to find clever, minimal ways to build these "looping" contexts.
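To make "looping contexts" a bit more concrete, here's a minimal sketch of what I mean: an outer loop that persists a scratchpad outside the model and feeds it back in on every call. The call_llm function is a hypothetical stand-in for whatever completion API you actually use; nothing here is a real library interface.

    # Minimal sketch of a "looping" context: persist state outside the model
    # and feed it back in on every call. call_llm is a hypothetical stand-in.
    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model API here")

    def run_agent(goal: str, steps: int = 5, state_file: str = "scratchpad.json"):
        try:
            with open(state_file) as f:
                scratchpad = json.load(f)          # memory that survives across runs
        except FileNotFoundError:
            scratchpad = {"goal": goal, "notes": []}

        for _ in range(steps):
            prompt = (
                f"Goal: {scratchpad['goal']}\n"
                f"Notes so far: {scratchpad['notes']}\n"
                "Decide the next step, then state what is worth remembering."
            )
            reply = call_llm(prompt)
            scratchpad["notes"].append(reply)      # the "metacognition" lives outside the model
            with open(state_file, "w") as f:
                json.dump(scratchpad, f)

The point is just that the persistence and the long-term goal sit in plain old state, not in the weights or the context window.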
I thought this guy was supposed to know what he's talking about? There was a paper showing that LLMs cannot generalise[0]. Anybody who's used ChatGPT can see there are imperfections.
Really they should have just said something to the effect of, "The board has voted to end Sam Altman's tenure as CEO at OpenAI. We wish him the best in his future endeavors."
No-one knows. But I sure would trust the scientist leading the endeavor more than a business person that has interest in saying the opposite to avoid immediate regulations.
Our brain actually uses many different functions for all of these things. Intelligence is incredibly complex.
But also, you don't need all of these to have real intelligence. People can problem solve without memory, since those are different things. People can intelligently problem-solve without a task.
And working towards long-term goals is something we actually take decades to learn. And many fail there as well.
I wouldn't be surprised if, just like in our brain, we'll start adding other modalities that improve memory, planning, etc etc. Seems that they started doing this with the vision update in GPT-4.
I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science– You don't really know what'll work until you do it.
I'm in the definitely ready for AGI camp. But it's not going to be a single model that's going to do the AGI magic trick, it's going to be an engineered system consisting of multiple communicating models hooked up using traditional engineering techniques.
> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.
> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.
With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible that OpenAI could possibly have internally right now, for several reasons. Mainly that it's extremely far ahead of everything else to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable accidentally by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.
Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.
https://x.com/sama/status/1725748751367852439
Though any fund containing MSFT must be correlated.
You're assuming they even consulted the lawyers...
In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.
Unless OpenAI can prove in a court of law that what they said was true, they're on the hook for that amount in compensation, perhaps plus punitive damages and legal costs.
I recognize that the above para sort of sounds like I think I have some authority to mediate between them, which is not true and not what I think. I'm just replying to this side conversation about how to be polite in public, just giving my take.
The broad pattern here is that there are norms around how and when you use someone's name when addressing them, and when you deviate from those norms, it signals that something is weird, and then the reader has to guess what is the second most likely meaning of the rest of the sentence, because the weird name use means that the most likely meaning is not appropriate.
This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
Yes-- this is pretty much what I believe. And there's considerable uncertainty in how close AGI is (and how cheap it will be once it arrives).
It could be tomorrow and cheap. I hope not, because I'm really uncertain if we can deal with it (even if the AI is relatively well aligned).
Maybe to make it clear that if he leaves, it is him quitting not him being fired. This would avoid potential legal issues.
Maybe they thought there was a chance he would stay.
All this to say that the board is probably unlike the boards of the vast majority of tech companies.
If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement with the CEO, then that would for sure be the biggest problem OpenAI has.
Who cares? Sometimes the remixing of such patterns is what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.
When you're a public person, the bar for winning a defamation case is very high.
Nope, and not all people can achieve this either. Would you call them less than human, then? I assume you wouldn't, as it is not only sentience of current events that maketh man. If you disagree, then we simply have fundamental disagreements on what maketh man, and thus there is no way we'd have agreed in the first place.
She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.
Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...
She's also co-authored several of the most famous "survey" papers which give an overview of AI safety methods: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22h...
She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.
(Copy-pasting this comment from another thread where I posted it in response to a similar question.)
The commenter above doesn't mean that any reference to someone else by name ("Sam Altman was fired") is patronizing.
> The claim that GPT-4 can’t make B to A generalizations is false. And not what the authors were claiming. They were talking about these kinds of generalizations from pre and post training.
> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at it, you’ve successfully trained a prompt completion A is B model but not one that will readily go from B is A. LLMs trained on “A is B” fail to learn “B is A” when the training date is split into prompt and completion pairs
Simple fix: put prompt and completion together, and compute gradients not just for the completion but also for the prompt. Or just make sure the model trains on data going in both directions by augmenting it before training (rough sketch below the link).
https://andrewmayne.com/2023/11/14/is-the-reversal-curse-rea...
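A toy sketch of that augmentation idea, assuming you control the training data format (the fact triples and output strings here are invented purely for illustration):

    # Toy sketch of reversal augmentation: for every "A is B" style fact,
    # also emit "B is A" so the model sees both directions during training.
    facts = [
        ("Tom Cruise's mother", "is", "Mary Lee Pfeiffer"),
        ("Valentina Tereshkova", "was", "the first woman in space"),
    ]

    def augment(facts):
        examples = []
        for a, rel, b in facts:
            examples.append(f"{a} {rel} {b}.")    # A is B
            examples.append(f"{b} {rel} {a}.")    # B is A (reversed)
        return examples

    training_text = augment(facts)
    # Then train on the full sequence (prompt and completion together),
    # so the loss covers both sides instead of masking out the prompt tokens.

Whether this fully closes the gap the paper describes is an empirical question, but it's the cheap first thing to try.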
1) The comments are meant to be read by all, not just the author. If you want to email the author directly and start the message with a greeting containing their name ("hi jrockway!"), or even just their name, that's pretty normal.
2) You don't actually know the person's first name. In this case, it's pretty obvious, since the user in question goes by what looks like <firstname><lastname>. But who knows if that's actually their name. Plenty of people name their accounts after fictional people. It would be weird to everyone if your HN comment to darthvader was "Darth, I don't think you understand how corporate law departments work." Darth is not reading the comment. (OK, actually I would find that hilarious to read.)
3) Starting a sentence with someone's name and a long pause (which the written comma heavily implies) sounds like a parent scolding a child. You rarely see this form outside of a lecture, and the original comment in question is a lecture. You add the person's name to the beginning of the comment to be extra patronizing. I know that's what was going on and the person who was being replied to knows that's what was going on. The person who used that language denies that they were trying to be patronizing, but frankly, I don't believe it. Maybe they didn't mean to consciously do it, but they typed the extra word at the beginning of the sentence for some reason. What was that reason? If to soften the lecture, why not soften it even more by simply not clicking reply? It just doesn't add up.
4) It's Simply Not Done. Open any random HN discussion, and 99.99% of the time, nobody is starting replies with someone's name and a comma. It's not just HN; the same convention applies on Reddit. When you use style that deviates from the norm, you're sending a message, and it's going to have a jarring effect on the reader. Doubly jarring if you're the person they're naming.
TL;DR: Don't start your replies with the name of the person you're replying to. If you're talking with someone in person, sure, throw their name in there. That's totally normal. In writing? Less normal.
I don’t like this whole development one bit, actually. He lost his brakes and I’m sure he doesn’t see it this way at all.
Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.
I don't claim that RAG + LLM = AGI, but I do think it takes you a long way toward goal-oriented, autonomous agents with at least a degree of intelligence.
My beef with RAG is that it doesn't match on information that is not explicit in the text, so "the fourth word of this phrase" won't embed like the word "of", or "Bruce Willis' mother's first name" won't match with "Marlene". To fix this issue we need to draw chain-of-thought inferences from the chunks we index in the RAG system.
So my conclusion is that maybe we got the model all right but the data is too messy, we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.
Everyone is over focusing on models to the detriment of thinking about the data. But models are just data gradients stacked up, we forget that. All the smarts the model has come from the data. We need data improvement more than model improvement.
Just consider the "textbook quality data" paper (Phi-1.5) and the Orca datasets: they show that diverse chain-of-thought synthetic data is 5x better than organic text.
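Here's a rough sketch of the indexing idea, i.e. drawing inferences from each chunk before embedding it rather than only embedding the raw text. The embed() and call_llm() functions are placeholders for whatever embedding model and LLM you actually use; the prompt is made up.

    # Rough sketch: index LLM-generated inferences alongside the raw chunk,
    # so implicit facts ("Bruce Willis' mother's first name is Marlene")
    # become explicit text that embeds and retrieves well.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError   # placeholder

    def embed(text: str) -> list[float]:
        raise NotImplementedError   # placeholder

    index = []  # list of (vector, indexed_text, source_chunk)

    def index_chunk(chunk: str):
        index.append((embed(chunk), chunk, chunk))            # the raw chunk itself
        inferences = call_llm(
            "List the implicit facts a reader could infer from this passage, "
            "one per line:\n" + chunk
        )
        for fact in inferences.splitlines():
            if fact.strip():
                index.append((embed(fact), fact, chunk))      # inferred fact -> original chunk

At query time you retrieve over both the raw chunks and the inferred facts, but always hand the original chunk back to the model as context.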
Someone else can probably say it better than I can, but that's how I understand it at this moment.
https://www.theverge.com/2017/10/19/16503076/oracle-vs-googl...
The only thing flawed here is this statement. Are you even familiar with the premise of the Turing test?
I think creativity is made of 2 parts - generating novel ideas, and filtering bad ideas. For the second part we need good feedback. Humans and LLMs are just as good at novel ideation, but humans have the advantage on feedback. We have a body, access to the real world, access to other humans and plenty of tools.
This is not something an android robot couldn't eventually have, and on top of that AIs got the advantage of learning from massive data. They surpass humans when they can leverage it - see AlphaFold, for example.
lol
> "that's just crazy".
why is it crazy? the purpose of OpenAI is not to make investors rich - having investors on the board trying to make money for themselves would be crazy.
I feel there are potential parallels between RAG and how human memory works. When we humans are prompted, I suspect we engage in some sort of relevant memory retrieval process and the retrieved memories are packaged up and factored in to our mental processing triggered by the prompt. This seems similar to RAG, where my understanding is that some sort of semantic search is conducted over a database of embeddings (essentially, "relevant memories") and then shoved into the prompt as additional context. Bigger context window allows for more "memories" to contextualise/inform the model's answer.
I've been wondering three things: (1) are previous user prompts and model answers also converted to embeddings and stored in the embedding database, as new "memories", essentially making the model "smarter" as it accumulates more "experiences" (2) could these "memories" be stored alongside a salience score of some kind that increases the chance of retrieval (with the salience score probably some composite of recency and perhaps degree of positive feedback from the original user?) (3) could you take these new "memories" and use them to incrementally retrain the model for, say, 8 hours every night? :)
Edit: And if you did (3), would that mean even with a temperature set at 0 the model might output one response to a prompt today, and a different response to an identical prompt tomorrow, due to the additional "experience" it has accumulated?
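On (2), a crude sketch of what salience-weighted retrieval could look like, with salience as a mix of similarity, recency, and past feedback. The weights and the decay constant are arbitrary, purely to illustrate the shape of the idea:

    # Crude sketch of salience-weighted memory retrieval for a RAG-style system.
    # Weights and decay are arbitrary illustration values.
    import math, time

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query_vec, memories, k=5, now=None):
        # memories: dicts with "vector", "text", "timestamp", "feedback" (0..1)
        now = now or time.time()
        def salience(m):
            similarity = cosine(query_vec, m["vector"])
            recency = math.exp(-(now - m["timestamp"]) / (7 * 24 * 3600))  # ~weekly decay
            return 0.6 * similarity + 0.25 * recency + 0.15 * m["feedback"]
        return sorted(memories, key=salience, reverse=True)[:k]

(1) is then just appending each exchange to the memory store, and (3) would be a periodic fine-tune over the highest-salience memories, which is exactly where your temperature-0 determinism would break from day to day.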
Also, as long as you are a public person, defamation has a very high bar in the USA. It is not enough to for the statement to be false, you have to actually prove that the person you're accusing of defamation knew it was false and intended it to hurt you.
Note that this is different from an accusation of perjury. They did not accuse Sam Altman of performing illegal acts. If they had, things would have been very different. As it stands, they simply said that he hasn't been truthful to them, which it would be very hard to prove is false.
Though I would go further than that: if that is indeed the reason, the board has proven themselves very much incompetent. It would be quite incompetent to invite this type of shadow of scandal for something that was a fundamentally reasonable disagreement.
i'm sick and tired of everyone sticking a chatbot on random crap that doesn't need it and has no reason to ever need it. it also made HN a lot less interesting to read
Surely, at some level, you can be sued for making unfounded remarks. But then IANAL so, meh.
Who knows, maybe they settled a difference of opinion and Altman went ahead with his plans anyway.
So better be the first to set the narrative.
The board reports to the shareholders and the management reports to the board.
In early stage companies it is possible and likely that all three are the same person, that doesn't change the different fiduciary responsibilities for each role they play.
You would agree that OpenAI is neither? Then the comparison doesn't hold up, does it?
Looking at the people on the current board, it doesn't seem they have a lot of experience being independent board members in large public corporations.
No non-profit has this level of public scrutiny. It could just be that they were sloppy because they are not professional board members.
An effective leader, whether it is Musk, Jobs, Altman, Gandhi, Mandela (or Hitler, for that matter), has the unique skill of being able to direct everyone in a common direction efficiently, like a superconducting material.
They are not individually contributing like, say, a Nobel laureate doing theoretical research. The accolades they get are because they were able to direct many other people to achieve a very hard objective and keep them motivated and focused on the common vision. That is rare and difficult to do.
In the case of Altman, yes, there were thousands of researchers and programmers who did all the actual heavy lifting of getting OpenAI where it is today.
However, without his ability and vision to get funding, none of them would be doing what they are doing today at OpenAI.
None of those people would work a day more if there were no pay, and they would not be able to train any model without resources. A CEO's first priority is to make that happen by selling the vision to investors. Second, he has to sell the vision to all these researchers, to get them to leave their cushy academic and big-company jobs to work at a small, unproven startup, and to create an environment where they can thrive in their roles. He has done both very well.
If you want to talk about rarer cases, there are lots of examples of people who literally sacrifice their lives and die for no personal benefit.
No, in the UK it's unambiguously the other way round. The complainant simply has to persuade the court that the statement seriously harmed or is likely to seriously harm their reputation. Truth is a defence but for that defence to prevail the burden of proof is on the defendant to prove that it was true (or to mount an "honest opinion" defence on the basis that both the statement would reasonably be understood as one of opinion rather than fact and that they did honestly hold that opinion)
From my brief dealings with SA at Loopt in 2005, SA just does not have a dishonest bone in his body. (I got a brief look at the Loopt pitch deck while interviewing for a mobile dev position at Loopt just after Sprint invested.)
If you want an angel-investing play, find the new VC fund Sam is setting up for hard research.
If they had the small majority needed to get rid of him over mere differences of future vision, they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable.
If that happened (speculation) then those resources weren't really dedicated to the research team.
Arms-length neutrality on a board in silicon valley might still work like the rest as other comments have stated. Maybe someone can shed some light on it
Woz did two magic things just in the Apple II which no one else was close to: the hack for the ntsc color, and the disk drive not needing a completely separate CPU. In the late 70s that ability is what enabled the Apple II to succeed.
The point is Woz is a hacker. Once you build a system more properly, with pieces used how their designers explicitly intended, you end up with the Mac (and things like Sun SPARCstations), which doesn't have space for Woz to use his lateral-thinking talents.
But right now, the board undoubtedly feels the most pressure in the realm of safety. This is where the political and big-money financial (Microsoft) support will be.
If all true, Altman's departure was likely inevitable as well as fortunate for his future.
This is my view!
Expert Systems went nowhere, because you have to sit a domain expert down with a knowledge engineer for months, encoding the expertise. And then you get a system that is expert in a specific domain. So if you can get an LLM to distil a corpus (library, or whatever) into a collection of "facts" attributed to specific authors, you could stream those facts into an expert system, that could make deductions, and explain its reasoning.
So I don't think these LLMs lead directly to AGI (or any kind of AI). They are text-retrieval systems, a bit like search engines but cleverer. But used as an input-filter for a reasoning engine such as an expert system, you could end up with a system that starts to approach what I'd call "intelligence".
If someone is trying to develop such a system, I'd like to know.
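To make the shape of it concrete, here's a toy sketch of that pipeline: the LLM distils passages into attributed facts, and a dead-simple forward-chaining engine draws deductions it can explain. Everything here (the fact format, the extraction prompt, the rule encoding) is invented for illustration, not a real system.

    # Toy sketch: LLM distils a corpus into attributed facts, then a trivial
    # forward-chaining engine reasons over them.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError   # placeholder for your LLM of choice

    def extract_facts(passage: str, author: str):
        raw = call_llm("Extract facts as 'subject|relation|object', one per line:\n" + passage)
        return [tuple(line.split("|", 2)) + (author,)
                for line in raw.splitlines() if line.count("|") == 2]

    # One rule: if (X, is_a, mammal) then (X, is, warm-blooded).
    rules = [
        (("is_a", "mammal"), ("is", "warm-blooded")),
    ]

    def forward_chain(facts, rules):
        known = {(s, r, o) for (s, r, o, _author) in facts}
        changed = True
        while changed:
            changed = False
            for (prem_rel, prem_obj), (concl_rel, concl_obj) in rules:
                for (subj, rel, obj) in list(known):
                    if rel == prem_rel and obj == prem_obj:
                        new = (subj, concl_rel, concl_obj)
                        if new not in known:
                            known.add(new)
                            changed = True
        return known

Because every derived fact traces back to a rule and an attributed source fact, the system can explain its reasoning, which is the part LLMs on their own don't give you.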
You're right: I haven't seen evidence of LLM novel pattern output that is basically creative.
It can find and remix patterns where there are pre-existing rules and maps that detail where they are and how to use them (ie: grammar, phonics, or an index). But it can't, whatsoever, expose new patterns. At least public facing LLM's can't. They can't abstract.
I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.
But abstraction (as perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found let alone traversed.
When they can find novel patterns across prior seemingly unconnected concepts, then they will be onto something. When "AI" begins to see the hidden mirrors so to speak.
We won't know for a while, especially since the details of the internal dispute and the soundness of the allegations against Altman are still vague. Whether investors/donors-at-large are more or less comfortable now than they were before is up in the air.
That said, startups and commercial partners that wanted to build on recent OpenAI, LLC products are right to grow skittish. Signs are strong that the remaining board won't support them the way Altman's org would have.
It's foolish for any of us to peer inside the crystal ball of "what would Jobs be without Woz", but I think it is important to acknowledge that the Apple II and IIc pretty much bankrolled Apple through their pre-Macintosh era. Without those first few gigs (which Woz is almost single-handedly responsible for), Apple Computers wouldn't have existed as early (or successfully) as it did. Maybe we still would have gotten an iPhone later down the line, but that's frankly too speculative for any of us to call.
The source of the speculation could further enhance or remove the probability of this being true. For instance, a journalist who covers OpenAI vs. a random tweeter (now X’er?) with no direct connection. It’s a loose application of Bayesian reasoning - where knowing the likelihood of one event (occupation of speculator and their connection to AI) can significantly increase the probability of the other event (the speculation).
Like it's not a bad thing, I'm not implying any kind of judgement but keeping those things in context helps you know that "K." means something totally different coming from your dad.
Yes, generalizing is how we reason, because it lets us strip away information that is not relevant in most scenarios and reduces complexity and depth without losing much in most cases. My point is, this is not a scenario that fits in the set of “most cases.” This is actually probably one of the most unique and corner-casey example of board dynamics in tech. Adherence to generalizations without considering applicability and corner cases doesn’t make sense.
It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.
An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are and that isn't "they can't think fast enough".
And Sam allowed all this under his nose, making sure OpenAI is ripe for a MSFT takeover. This is a back-channel deal for a takeover. What about the early donors, who donated toward the humanity-focused goal and whose funding made it all possible?
I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.
This doesn't have to do with beneficial ownership of the underlying asset alone. Principals sometimes do not have that relationship. Asset ownership is a common way to benefit from an entity, but not the only way.
Specifically, here Sam Altman does not own shares in the for-profit entity, and non-profit entities do not have shares.
I don't have direct knowledge of how OpenAI handles it; however, it is not uncommon to do revenue sharing, lease an underlying asset like a brand name from the principal directly (WeWork did this), pay for perks like housing or planes, or pay a lot of money in salary/cash compensation. There are myriad ways to benefit from control without share ownership.
Think about the RLHF component that trains LLMs. It's the training itself that generalises - not the final model that becomes a static component.
Most of that is encoded into weights during training, though external function call interfaces and RAG are broadening this.