I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.
But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can probably convince yourself of a lot of other dumb ideas too.
I can only hope this doesn’t turn into OpenAI trying to gatekeep multimodal models or conversely everyone else leaving them in the dust.
A young guy who is suddenly very rich, possibly powerful, and talking to the most powerful government on the planet on national TV? And people are surprised to hear this person might have let it go to their head a little, forgotten what their job was, and suddenly thought THEY were OpenAI, not all the people who worked there? And then learned reality the hard way.
What’s to be surprised about? It’s the goddamned most stereotypically human, utterly unsurprising thing about this and it happens all. the. time.
A lot of people here really struggle with the idea that smart people are not inherently special and that being smart doesn’t magically absolve you from making mistakes or acting like a shithead.
When did Microsoft’s stock price tank?
Sam was the VC guy pushing gatekeeping of models and building closed products and revenue streams. Ilya is the AI researcher who believes strongly in the nonprofit mission and open source.
Perhaps, if OpenAI can survive those, then they will actually be more open in the future.
By seemingly siding with staff over the CEO's desire to go way too fast and break a lot of things? I'd think that world class talent hearing they might be able to go home at night because the CEO isn't intent on having Cybernet deployed tomorrow but next week instead is more appealing than not.
I think the next AGI startup should perhaps try the communist revolution route, since the capitalist-based one didn't pan out. After all, Lenin was a pioneer in effective altruism. /s
I wouldn't call it 'tanking' either, but it's definitely not run of the mill; it did make them rush out a statement on their commitment to the investment and to working with OpenAI.
Here's something to ponder: the human brain is about the size of an A100 and consumes about 12 watts of power, on average. It's capable of general intelligence and conscious thought.
One problem companies have is that they're momentum based. Once they realize something is working and generating profit, they become increasingly calcified against trying fundamentally new and different things. The best case scenario for a company is to calcify at a local maximum. A few companies, like Google, try to structure themselves to avoid this; and it turns out they just lose the ability to execute on anything. Some will stay small, remain nimble, and accomplish little of note. The rest die. That's the destiny of every profit-focused company.
Here are three things I expect to be true: AGI/ASI won't be achieved with LLMs. A sufficiently powerful LLM may be a component of a larger AGI/ASI system, but GPT-4 is already pretty dang powerful. And: OpenAI was becoming an extremely effective and successful B2B SaaS Big Tech LLM company. Ousting Sam is a gambit; the company could implode, and with no one left, AGI/ASI probably won't happen at OpenAI. But the alternative, it seems from the outside, had a higher probability of failure, because the company would become so successful and so good at making LLMs that the non-profit's mission would be put to the side.
Ilya's superalignment efforts were given 20% of OpenAI's compute capacity. If the foundation's goal is to produce safe AGI, and ideally you want progress on safety before something unsafe is made, then it seems to me that 51% is the totally symbolic but meaningful minimum he should be working with. That's just one example.