In this scenario, it was a pure power struggle. The board believed they'd win by showing Altman the door, but it didn't take long for events to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.
The way the board pulled this off left them with no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering whether it will even be worth paying for if the model's improvement stagnates with the departure of these key people.
If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement, and it's the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this; they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on artificial intelligence on the planet. The time to argue credentialism was when the investors were signing checks, but they didn't. It's too late now.
My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft -- or, more accurately, Microsoft's thrall, because they'll never surpass Microsoft; they'll always be second, and if Satya steers Sam back into the boardroom then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.
The hard reality that everyone needs to accept at this point is that OpenAI is probably finished -- unless they made some massive breakthrough a few weeks ago, which Sam did hint at three days ago. That should be the last hope we all hold on to: that AI research as a species hasn't just been set back a decade by this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best-case scenario is that Ilya and the scientists retain control and are given space to understand the breakthrough.
There is no alternative: if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm', the only thing that can ever be is a commercialised trinket.
So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.
If Sam is the only one interested in the app, it's because he's the only sane guy in the room.
You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
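(For concreteness, here's a minimal TD(0) sketch in Python -- a toy of my own, not taken from any particular paper -- showing the predictor-corrector structure: the value estimate V(s) is the prediction, and the TD error is the reinforcement-driven correction.)

    # Minimal TD(0) value learner: predict, observe, correct.
    # V(s) is the prediction; the TD error is the corrective signal,
    # analogous to the reward-prediction-error story in the dopamine literature.
    from collections import defaultdict

    def td0(episodes, alpha=0.1, gamma=0.9):
        V = defaultdict(float)                        # value estimates (the "predictions")
        for episode in episodes:                      # each episode: [(state, reward, next_state), ...]
            for s, r, s_next in episode:
                delta = r + gamma * V[s_next] - V[s]  # prediction error (the "corrector")
                V[s] += alpha * delta                 # nudge prediction toward the observed return
        return V

    # Toy usage: two-step chain with reward only on reaching the terminal state.
    episodes = [[("start", 0.0, "mid"), ("mid", 1.0, "end")] for _ in range(200)]
    print(td0(episodes))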
The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.
I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned on, it does not model relevance, and a huge list of other severe issues.
The idea that P(A|B) is even relevant to AGI is a sign of a fundamental lack of curiosity beyond what is on-trend in computer science.
We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.
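As a toy illustration (my own sketch, assuming the 'model' is nothing more than a count-based table of P(A|Q)):

    # A conditional-probability "model" as a bare lookup of P(A|Q) from counts.
    # Given enough capacity it encodes any finite set of (Q, A) pairs exactly,
    # which is the sense in which any prompt/reply task "can be modelled".
    from collections import defaultdict, Counter

    class ConditionalTable:
        def __init__(self):
            self.counts = defaultdict(Counter)   # counts[q][a] = times answer a followed prompt q

        def observe(self, q, a):
            self.counts[q][a] += 1

        def p(self, a, q):
            total = sum(self.counts[q].values())
            return self.counts[q][a] / total if total else 0.0

        def answer(self, q):
            return self.counts[q].most_common(1)[0][0] if self.counts[q] else None

    model = ConditionalTable()
    model.observe("capital of France?", "Paris")
    model.observe("2+2?", "4")
    print(model.answer("capital of France?"), model.p("Paris", "capital of France?"))  # Paris 1.0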
But who cares. The burden of proof on people claiming that conditional probability is a route to AGI is to explain how it models causation, relevance, counterfactual reasoning, deduction, abduction, sensorimotor adaptation, etc.
The gap between what has been provided and this burden of proof is laughable.