zlacker

[parent] [thread] 40 comments
1. twoodf+(OP)[view] [source] 2023-11-18 23:06:27
This suggests a plausible explanation: Altman was attempting to engineer the board’s expansion or replacement. After the events of the last 48 hours, could you blame him?

In this scenario, it was a pure power struggle. The board believed they’d win by showing Altman the door, but it didn’t take long to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.

replies(4): >>spacem+d2 >>coreth+t3 >>sfink+vk >>nemetr+9m
2. spacem+d2[view] [source] 2023-11-18 23:16:19
>>twoodf+(OP)
Any talented engineer or scientist who actually wants to ship product AND make money would head over to Sam’s startup. Any investor who cares about making money would fund Sam’s startup as well.

The way the board pulled this off left them no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering if it will even be worth paying for if the model’s improvements stagnate with the departure of these key people.

replies(4): >>TeaBra+B7 >>015a+Ve >>whokno+or >>neel89+yZ
3. coreth+t3[view] [source] 2023-11-18 23:22:05
>>twoodf+(OP)
This is the most likely explanation. Altman was going to oust them, which is why they had to make what seems like a bad strategic move. The move looks bad from our perspective, but it's actually the most logical strategy for the board in terms of self-preservation. I agree: I think this is most likely what occurred.
replies(1): >>knd775+8D
◧◩
4. TeaBra+B7[view] [source] [discussion] 2023-11-18 23:43:44
>>spacem+d2
I still haven't heard an explanation of why people who use GPT would be under the impression that Sam had anything to do with the past improvements in GPT versions.
replies(4): >>cmrdpo+d8 >>icelan+5c >>spacem+wd >>tick_t+Rj
◧◩◪
5. cmrdpo+d8[view] [source] [discussion] 2023-11-18 23:48:04
>>TeaBra+B7
Brockman maybe, though. Or at least in some sort of leadership capacity.
replies(1): >>TeaBra+Bg
◧◩◪
6. icelan+5c[view] [source] [discussion] 2023-11-19 00:04:33
>>TeaBra+B7
If technical expertise is what drove all progress, Google / DeepMind would be far ahead right now.
◧◩◪
7. spacem+wd[view] [source] [discussion] 2023-11-19 00:11:54
>>TeaBra+B7
Sam attracted money and attention, which attracted talent. If Sam departs for another venture, some - or a lot - of the talent and attention and money will leave too. This isn’t a car factory where you can replace one worker with another. If some of the top folks leave with Sam (as they already are) it’s reasonable to assume that the product will suffer.
◧◩
8. 015a+Ve[view] [source] [discussion] 2023-11-19 00:19:27
>>spacem+d2
And any talented engineer or scientist who actually wants to build safe AGI in an organization that isn't obsessed with boring B2B SaaS would align with Ilya. See, there are two sides to this. Sam isn't a god, despite what the media makes him out to be; none of them are.
replies(8): >>andy99+jl >>branda+wl >>fisf+Pm >>wwtrv+Zu >>foota+tx >>m3kw9+tQ >>rvba+fE1 >>laurel+7F1
◧◩◪◨
9. TeaBra+Bg[view] [source] [discussion] 2023-11-19 00:30:14
>>cmrdpo+d8
I'd understand the argument for Brockman, considering he had a hand in recruiting the initial team at OpenAI, was previously the CTO, from some reports still involved himself in coding, and was the only other founder on the board besides Ilya.
◧◩◪
10. tick_t+Rj[view] [source] [discussion] 2023-11-19 00:50:41
>>TeaBra+B7
Have you really never been at a place without someone with vision leading the cause? Try it sometime and you'll start to understand how and why a CEO can make or break a company.
replies(2): >>whokno+ks >>ahartm+5o1
11. sfink+vk[view] [source] 2023-11-19 00:53:58
>>twoodf+(OP)
Why are people calling this already? There was a coup. The people on the losing end, which includes some large investors, counterattacked. That's where we are now (or were when the article was published). Of course they counterattacked! But did the counterattack land? I'm not sure why you're assuming it did. Personally, I don't know enough to guess. Given that the board was inspired to do this by the very mission that the non-profit was set up to safeguard, there's some level of legal coverage, but enough to cover their asses from a $10 billion assault? I for one can't call it.
◧◩◪
12. andy99+jl[view] [source] [discussion] 2023-11-19 00:59:48
>>015a+Ve
AGI has nothing to do with transformers. It's a hypothetical toward which there has been no progress other than finding things that didn't work. It's a cool thing to work on, but it's so different from what the popular version of OpenAI is, and it has such different timescales and economics... if some vestigial OpenAI wants to work on that, cool. There is definitely also room in the market for the current OpenAI centered around GPT-x et al., even if some people consider SaaS beneath them, and I hope they (OpenAI) find a way to continue with that mission.
replies(1): >>015a+FU
◧◩◪
13. branda+wl[view] [source] [discussion] 2023-11-19 01:01:07
>>015a+Ve
The problem is it already became the other thing in a very impactful way.

If Sam were to be ousted it should have happened before ChatGPT was unleashed on the world.

replies(1): >>015a+hX
14. nemetr+9m[view] [source] 2023-11-19 01:06:21
>>twoodf+(OP)
They might not even have believed that they'd win, just that this outcome would be better than being silently outmaneuvered.

If the coup fails in the end (which seems likely), it will have proved that the "nonprofit safeguard" was toothless. Depending on the board members' ideological positions, maybe that's better than nothing.

◧◩◪
15. fisf+Pm[view] [source] [discussion] 2023-11-19 01:10:53
>>015a+Ve
Sure, but without funding and/or massive support from MS this is not going to happen.
◧◩
16. whokno+or[view] [source] [discussion] 2023-11-19 01:46:45
>>spacem+d2
>would head over to Sam’s startup

Why? I see a lot of hero-worship for Sam, but very few concrete facts about what he's done to make this a success.

And given his history, I'm inclined to believe he just got lucky.

replies(2): >>wwtrv+cv >>nvm0n2+ht1
◧◩◪◨
17. whokno+ks[view] [source] [discussion] 2023-11-19 01:53:45
>>tick_t+Rj
This happens all the time. It's far more common for teams to succeed despite (or even in spite of) executive leadership.
replies(2): >>tick_t+vB >>fooste+oC
◧◩◪
18. wwtrv+Zu[view] [source] [discussion] 2023-11-19 02:09:16
>>015a+Ve
Would those talented engineers or scientists be content with significantly lower compensation and significantly fewer resources to work with? However good their intentions might be, this probably won't make them too attractive to future investors, and antagonizing MS doesn't seem like a great idea.

OpenAI is far from self-sustaining, and without significant external investment they'll probably soon be overtaken by someone else.

replies(1): >>015a+qZ
◧◩◪
19. wwtrv+cv[view] [source] [discussion] 2023-11-19 02:10:30
>>whokno+or
He presumably can attract investors?
replies(2): >>int_19+s01 >>frabcu+o71
◧◩◪
20. foota+tx[view] [source] [discussion] 2023-11-19 02:22:14
>>015a+Ve
I don't think the people that want to move slowly and do research are necessarily working at OpenAI.
◧◩◪◨⬒
21. tick_t+vB[view] [source] [discussion] 2023-11-19 02:47:03
>>whokno+ks
> It's far more common for teams to succeed despite (or even in spite) of executive leadership.

People say this like it's some kind of truism, but I've never seen it happen, and when questioned, everyone I've known who's claimed it ends up admitting they were measuring their "success" by a different metric than the company's.

◧◩◪◨⬒
22. fooste+oC[view] [source] [discussion] 2023-11-19 02:53:12
>>whokno+ks
Of course it isn’t. Without executive sponsorship there is no staff or resources.
◧◩
23. knd775+8D[view] [source] [discussion] 2023-11-19 02:58:07
>>coreth+t3
How could he possibly oust them?
replies(1): >>coreth+gO1
◧◩◪
24. m3kw9+tQ[view] [source] [discussion] 2023-11-19 04:32:25
>>015a+Ve
It’s just an illusion that Sam is trying to be unsafe about it; it’s a scare tactic of sorts to get what they want: regulations, for example, and now, internally, power. It’s all BS, man, this "AI will end the world" stuff; it’s pushed for an agenda and you all are eating it up.
◧◩◪◨
25. 015a+FU[view] [source] [discussion] 2023-11-19 05:06:30
>>andy99+jl
It's been, like, two years, dude. This mindset is entirely why any organization which has a chance at inventing/discovering ASI can't be for-profit and needs to be run by scientists. You've got TikTok brain. Google won't be able to do it, because they're too concerned about image, and they've also got a bad case of corpo TikTok brain. Mistral and Anthropic won't be able to do it, because they have VC expectations to meet. Sam's next venture, if he chooses to walk that path, also won't, for the same reason. Maybe Meta? Do you want them being the first to ASI?

If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that IS their mission. It's literally their mission statement, and it's the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this; they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on the subject of artificial intelligence on the planet. The time to argue credentialism was when the investors were signing checks; but they didn't. It's too late now.

My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; more accurately, Microsoft's thrall, because they'll never surpass Microsoft; they'll always be its second. If Satya influences Sam back into the boardroom, then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.

The hard reality that everyone needs to accept at this point is that OpenAI is probably finished. The last hope we can all hold on to is that they made some massive breakthrough a few weeks ago (which Sam did hint at three days ago), and that AI research as a species hasn't just been set back a decade by this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best-case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.

replies(1): >>mjburg+we1
◧◩◪◨
26. 015a+hX[view] [source] [discussion] 2023-11-19 05:30:55
>>branda+wl
And if Microsoft had major concerns about OpenAI's board and governance, they should have been voiced and addressed before it invested. Yet here we are, slaves to our past decisions.
◧◩◪◨
27. 015a+qZ[view] [source] [discussion] 2023-11-19 05:53:22
>>wwtrv+Zu
I don't know, on a lot of those questions. I tend to think that there was more mission and ideology at OAI than at most companies, and that's a very powerful motivational force.

Here's something I feel higher confidence in, but still don't know: it's not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) technology always evolves forward, and (2) more money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at it and get better results.

And here's something I have even higher confidence in: "being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.

replies(1): >>tjsrjh+7A1
◧◩
28. neel89+yZ[view] [source] [discussion] 2023-11-19 05:54:24
>>spacem+d2
This is a power struggle between the Silicon Valley VC crowd and AI scientists. This conflict was bound to happen at some point at every company; I don't think the two groups' interests align past a certain point. No self-respecting AI scientist wants to work hard to build closed-model SaaS products.
◧◩◪◨
29. int_19+s01[view] [source] [discussion] 2023-11-19 06:04:39
>>wwtrv+cv
If that was the only issue, why not just go to Google, Meta, or Microsoft directly to work on their AI stuff? What do you really need Altman for?

Working at OpenAI meant working on GPT-4 (and whatever is next in line), which is attractive because it's the best thing in the field right now by a significant margin.

◧◩◪◨
30. frabcu+o71[view] [source] [discussion] 2023-11-19 07:18:56
>>wwtrv+cv
So can Dario Amodei and Mustafa Suleyman.
◧◩◪◨⬒
31. mjburg+we1[view] [source] [discussion] 2023-11-19 08:31:41
>>015a+FU
The problem is that this "AGI research group" is staffed by people who build statistical models, call them AI, and are delusional enough to think this is a route to general intelligence.

There is no alternative: if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm', the only thing that can come of it is a commercialised trinket.

So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.

If Sam is the only one interested in the app, it's because he's the only sane guy in the room.

replies(1): >>sudosy+1t1
◧◩◪◨
32. ahartm+5o1[view] [source] [discussion] 2023-11-19 10:01:16
>>tick_t+Rj
The vision of the Worldcoin dude to get rich quick? Very inspiring.
◧◩◪◨⬒⬓
33. sudosy+1t1[view] [source] [discussion] 2023-11-19 10:48:11
>>mjburg+we1
There is little evidence that conditional statistical models can never be a route to AGI. There's limited evidence they can, but far less they can't.

You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
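
To make the analogy concrete, here is a minimal sketch of a predictor-corrector update in the TD(0) style (the toy chain environment and the parameter values are hypothetical, purely for illustration):

    import random

    # Toy TD(0) value estimation on a 5-state chain (hypothetical example).
    # V is the "predictor"; the TD error (r + gamma*V[s'] - V[s]) is the
    # "corrector" -- a conditional statistical model trained by reinforcement.
    alpha, gamma = 0.1, 0.9      # learning rate, discount factor
    V = [0.0] * 5                # value estimates for states 0..4

    for _ in range(1000):
        s = 0
        while s < 4:                                # state 4 is terminal
            s_next = s + random.choice([0, 1])      # stay put or drift right
            r = 1.0 if s_next == 4 else 0.0         # reward only at the goal
            V[s] += alpha * (r + gamma * V[s_next] - V[s])  # correct prediction
            s = s_next

    print(V)  # estimates rise toward the terminal state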

replies(1): >>mjburg+Hu1
◧◩◪
34. nvm0n2+ht1[view] [source] [discussion] 2023-11-19 10:50:15
>>whokno+or
OpenAI is very conspicuously the only lab that (a) managed to keep the safety obsessives in their box, (b) generate huge financial upside for its employees and (c) isn't run by a researcher.

If Altman's contribution had simply been signing deals for data and compute and keeping staff fantasies under control, that already makes him unique in that space and hyper-valuable. But he also seems to have good product sense. If you remember, the researchers originally didn't want to do ChatGPT because they thought nobody would care.

◧◩◪◨⬒⬓⬔
35. mjburg+Hu1[view] [source] [discussion] 2023-11-19 11:04:18
>>sudosy+1t1
I am well aware of the literature in the area. 'Trained by reinforcement' in the case of animals includes direct causal contact with the environment, as well as sensory-motor adaptation and organic growth.

The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.

I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, so it is not a model of any physical process; it does not provide a mechanism for generating the propositions which are conditioned on; it does not model relevance; and a huge list of other severe issues.
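
To illustrate the first objection, a minimal sketch using the classic kidney-stone figures (illustrative textbook numbers, not specific to LLMs): the observational conditional P(recovery | treated) points one way, while the stratified, closer-to-causal comparison points the other, so P(A|B) alone cannot settle causation:

    # Classic Simpson's-paradox figures from the kidney-stone study.
    # (recoveries, patients) per severity stratum:
    treated   = {"mild": (81, 87),   "severe": (192, 263)}
    untreated = {"mild": (234, 270), "severe": (55, 80)}

    def pooled_rate(groups):
        rec = sum(r for r, n in groups.values())
        total = sum(n for r, n in groups.values())
        return rec / total

    # Pooled, treatment looks worse: ~0.78 vs ~0.83 ...
    print(pooled_rate(treated), pooled_rate(untreated))

    # ... yet within every severity stratum, treatment is better:
    for g in ("mild", "severe"):
        r_t, n_t = treated[g]
        r_u, n_u = untreated[g]
        print(g, r_t / n_t, r_u / n_u)   # 0.93 > 0.87 and 0.73 > 0.69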

The idea that P(A|B) is even relevant to AGI is a sign of a fundamental lack of curiosity beyond what is on-trend in computer science.

We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.

But who cares. The burden of proof on people claiming that conditional probability is a route to AGI is to explain how it models causation, relevance, counterfactual reasoning, deduction, abduction, sensory-motor adaptation, etc.

The gap between what has been provided and this burden of proof is laughable.

◧◩◪◨⬒
36. tjsrjh+7A1[view] [source] [discussion] 2023-11-19 11:59:26
>>015a+qZ
Citation needed on ideology being a powerful motivational force in this context. People who think they're doing groundbreaking work that'll impact the future of humanity are going to be pretty motivated regardless of whether they're also drinking the extra flavor of the mission statement's Kool-Aid.
◧◩◪
37. rvba+fE1[view] [source] [discussion] 2023-11-19 12:33:35
>>015a+Ve
Where do you go if you want to build an unsafe AGI with no morals? Military? China? Russia?

(I am aware that conceptually it can lead to a skynet scenario)

◧◩◪
38. laurel+7F1[view] [source] [discussion] 2023-11-19 12:41:13
>>015a+Ve
There are significantly fewer people that would want to work with Ilya than there are people that would want to work with Sam/Greg.

If Ilya could come out and clearly articulate how his utopian version of OpenAI would function, how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping, and how they will make it accessible to others, then maybe there would be more support.

replies(1): >>incogn+PS1
◧◩◪
39. coreth+gO1[view] [source] [discussion] 2023-11-19 14:03:03
>>knd775+8D
I'm sure there are ways that we aren't privy to, just like we don't know why Altman was fired. Why was Sam Altman being dishonest, and what was he dishonest about?

This reasoning is the only one that makes sense: one where the action taken by the board aligns with logic, and with some private action by Sam Altman that could have offended the board.

The story of the board being incompetent to the point of stupidity and firing such a key person is just the most convenient, most attractive, and least probable narrative. It's rare for an intelligent person to do something stupid, and even rarer for an entire board of intelligent people to do something stupid in unison.

But it's so easy to fall for that trope narrative.

replies(1): >>knd775+dD6
◧◩◪◨
40. incogn+PS1[view] [source] [discussion] 2023-11-19 14:36:14
>>laurel+7F1
Wrong. Ilya is the goose that laid the golden egg. Do you think other orgs don’t have engineers and data scientists?
◧◩◪◨
41. knd775+dD6[view] [source] [discussion] 2023-11-20 17:37:24
>>coreth+gO1
We know the org structure. It's not possible.