zlacker

[return to "Ilya Sutskever "at the center" of Altman firing?"]
1. Bjorkb+87[view] [source] 2023-11-18 03:32:31
>>apsec1+(OP)
I have a hard time believing this, simply because it seems so ill-conceived. Sure, maybe Sam Altman was being irresponsible and taking risks, but they had an insanely good thing going for them. I'm not saying Sam Altman was responsible for the good times they were having, but you're probably going to bring them to an end by abruptly firing one of the group's most prominent members, forcing everyone to show where their loyalties lie, and pissing off Microsoft by tanking their stock price without giving them any heads-up.

I mean, none of this would be possible without insane amounts of capital and world-class talent, and they probably just made it a lot harder to acquire both.

But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can probably convince yourself of a lot of other dumb ideas too.

2. hilux+88[view] [source] 2023-11-18 03:39:59
>>Bjorkb+87
Reading between lots of lines, one possibility is that Sam was steering this "insanely good thing" toward making lots of money, whereas the non-profit board ranked other goals higher.
3. Bjorkb+o9[view] [source] 2023-11-18 03:50:27
>>hilux+88
Sure, I get that, but handling a disagreement over money in such a consequential fashion just doesn't make sense to me. They must have understood that arriving in a position where they have to fire the CEO with little warning would have profound consequences, perhaps even existential ones.
4. 015a+vg[view] [source] 2023-11-18 04:43:51
>>Bjorkb+o9
AGI is existential. That's the whole point, I think. If they can get to AGI, then building an LLM app store is such a distraction along the path that any reasonable person would look back and laugh at how cute an idea it was, despite how big or profitable it feels today.
5. disgru+W01[view] [source] 2023-11-18 11:22:55
>>015a+vg
Sure, but you need money for compute to get to AGI, and selling stuff is a well-accepted way of getting money.
6. 015a+lP2[view] [source] 2023-11-18 22:16:23
>>disgru+W01
I find this line of thinking extremely indicative of the very problem that, I'd bet, OpenAI was trying to get rid of along with Sam.

Here's something to ponder: the human brain is about the size of an A100 and consumes roughly 12 watts of power on average, yet it's capable of general intelligence and conscious thought.
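(A rough back-of-envelope sketch of that comparison, assuming the ~12 W brain figure above and Nvidia's published ~400 W TDP for an SXM A100; these are approximations I'm supplying, not anything from OpenAI.)

    # Back-of-envelope: power draw of a human brain vs. a single A100.
    # Assumed figures: ~12 W for the brain (as claimed above),
    # ~400 W TDP for an SXM4 A100 (Nvidia spec sheet).
    BRAIN_WATTS = 12
    A100_WATTS = 400

    ratio = A100_WATTS / BRAIN_WATTS
    print(f"One A100 draws roughly {ratio:.0f}x the power of a human brain,")
    print("and large LLM training runs use thousands of them.")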

One problem companies have is that they're momentum-based. Once they realize something is working and generating profit, they become increasingly calcified against trying fundamentally new and different things. The best-case scenario for a company is to calcify at a local maximum. A few companies try to structure themselves to avoid this, like Google; and it turns out they just lose the ability to execute on anything. Some will stay small, remain nimble, and accomplish little of note. The rest die. That's the destiny of every profit-focused company.

Here are three things I expect to be true: AGI/ASI won't be achieved with LLMs alone. A sufficiently powerful LLM may be a component of a larger AGI/ASI system, but GPT-4 is arguably already sufficiently powerful. And: OpenAI was becoming an extremely effective and successful B2B SaaS Big Tech LLM company. Ousting Sam is a gambit; the company could implode, and with no one left, AGI/ASI probably won't happen at OpenAI. But the alternative, it seems from the outside, had a higher probability of failure, because the company would become so successful and good at making LLMs that the non-profit's mission would be put to the side.

Ilya's superalignment effort was given 20% of OpenAI's compute capacity. If the foundation's goal is to produce safe AGI, and ideally you want progress on safety before something unsafe is made, then it seems to me that 51% is the symbolic but meaningful minimum he should be working with. That's just one example.
