In this scenario, it was a pure power struggle. The board believed they’d win by showing Altman the door, but it didn’t take long to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.
The way the board pulled this off really left them no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering whether it will even be worth paying for if the model's improvements stagnate with the departure of these key people.
If Sam were to be ousted it should have happened before ChatGPT was unleashed on the world.
If the coup fails in the end (which seems likely), it will have proved that the "nonprofit safeguard" was toothless. Depending on the board members' ideological positions, maybe that's better than nothing.
Why? I see a lot of hero-worship for Sam, but very little concrete facts about what he's done to make this a success.
And given his history, I'm inclined to believe he just got lucky.
OpenAI is far from self-sustaining, and without significant external investment they'll probably soon be overtaken by someone else.
People say this like it's some kind of truism, but I've never seen it happen, and when questioned, everyone I've known who's claimed it ends up admitting they were measuring their "success" by a different metric than the company.
If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement, and it is the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this; they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on artificial intelligence on the planet. The time to argue credentialism was when the investors were signing checks, but they didn't. It's too late now.
My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; or, more accurately, Microsoft's thrall, because they'll never surpass Microsoft, they'll always be its second. If Satya steers Sam back into the boardroom, then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. An OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the nonprofit's goals; they'll be a zombie for years.
The hard reality that everyone needs to accept at this point is that OpenAI is probably finished. Unless they made some massive breakthrough a few weeks ago (which Sam did hint at three days ago), that is the last hope we can hold on to that we as a species haven't just set AI research back a decade with this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best-case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.
Here's something I feel higher confidence in, but still don't know: it's not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) technology always evolves forward, and (2) more money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at it and get better results.
And here's something I have even higher confidence in: "being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.
Working at OpenAI meant working on GPT-4 (and whatever is next in line), which is attractive because it's the best thing in the field right now by a significant margin.
There is no alternative: if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm', the only thing that can come of it is a commercialised trinket.
So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.
If Sam is the only one interested in the app, it's because he's the only sane guy in the room.
You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
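For what it's worth, here's a minimal sketch of the kind of predictor-corrector loop I mean, a TD(0)-style value update (the toy setup and names are mine, purely illustrative, not a claim about how the brain implements it):

```python
# Minimal TD(0) sketch: the "predictor" is the value table V, and the
# "corrector" is the prediction error delta that nudges V toward outcomes.

def td0_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One temporal-difference update step on a value table V."""
    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta                          # correction
    return delta

# Toy usage: a 3-state chain where only the final transition is rewarded.
V = [0.0, 0.0, 0.0]
for _ in range(50):
    td0_update(V, 0, 1, reward=0.0)
    td0_update(V, 1, 2, reward=1.0)
print(V)  # value propagates backwards from the rewarded transition
```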
If Altman's contribution had simply been signing deals for data and compute while keeping staff fantasies under control, that would already make him unique in that space and hyper valuable. But he also seems to have good product sense. If you remember, the researchers originally didn't want to do ChatGPT because they thought nobody would care.
The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.
I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned on, it does not model relevance, and a huge list of other severe issues.
The idea that P(A|B) is even relevant to AGI is a sign of a fundamental lack of curiosity beyond what is on-trend in computer science.
We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.
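To make that concrete, here's a toy illustration (the class and example pairs are mine, not anyone's actual system): a pure conditional-frequency model of P(answer | question) will happily memorise any finite set of (Q, A) pairs.

```python
from collections import Counter, defaultdict

class ConditionalFrequencyModel:
    """Toy P(answer | question) model: pure lookup of observed frequencies."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, pairs):
        for q, a in pairs:
            self.counts[q][a] += 1

    def answer(self, q):
        # argmax over answers of P(a | q); None if the question was never seen.
        return self.counts[q].most_common(1)[0][0] if q in self.counts else None

model = ConditionalFrequencyModel()
model.train([("2+2?", "4"), ("capital of France?", "Paris")])
print(model.answer("2+2?"))  # "4" -- encoded, but nothing models *why* it's 4
```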
But who cares. The burden of proof on people claiming that conditional probability is a route to AGI is to explain how it models: causation, relevance, counterfactual reasoning, deduction, abduction, sensory-motor adaptation, etc.
The gap between what has been provided and this burden of proof is laughable.
(I am aware that conceptually it can lead to a Skynet scenario.)
If Ilya could come out and clearly articulate how his utopian version of OpenAI would function, how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping, and how they will make it accessible to others, then maybe there would be more support.
This reasoning is the only one that makes sense: one where the action taken by the board aligns with logic, prompted by some private action of Sam Altman's that could have offended the board.
The story of the board being so grossly incompetent as to fire such a key person for no reason is just the most convenient, attractive, and least probable narrative. It's rare for an intelligent person to do something stupid, and even rarer for an entire board of intelligent people to do something stupid in unison.
But it's so easy to fall for that trope narrative.