zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. tomohe+V 2023-11-22 06:08:13
>>staran+(OP)
So Ilya is off the board, but Adam is still on it. I know this will raise some eyebrows, but whatever.

Still, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Time will tell, but in the long term I doubt we will see OAI become one of the megacorps like Facebook or Uber. They have lost people's trust.

2. Terrif+D2 2023-11-22 06:19:15
>>tomohe+V
The OpenAI of the past, which dabbled in random AI stuff (remember their Dota 2 bot?), is gone.

OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT-4? You shut your mouth. Doesn't matter if society at large suffers for it.

Altman's/Microsoft’s takeover of the former non-profit is now complete.

Edit: Let this be a lesson to us all. Just because something claims to be a non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can take over almost any organization. Non-profit status, and whatever the organization's charter says, is temporary.

3. karmas+L4 2023-11-22 06:33:17
>>Terrif+D2
> now just a vehicle to commercialize their LLM

I mean, that is what they wanted, isn't it? They did some random stuff, like playing Dota 2, robot arms, even the DALL-E stuff. Now that they've finally found that one golden goose, of course they are going to keep it.

I don't think the company has changed at all. It succeeded after all.

4. hadloc+vd 2023-11-22 07:30:38
>>karmas+L4
There's no moat in giant LLMs. Anyone on a long enough timeline can scrape/digitize 99.9X% of all human knowledge and build an LLM (or LXX) from it. Monetizing that idea and staying the market leader for longer than 10 years will take a Herculean amount of effort. Facebook releasing similar models for free took at least a little wind out of their sails; right now the moat is access to A100 boards. That will change, as eventually even the Raspberry Pi 9 will have LLM capabilities.
5. cft+Ik 2023-11-22 08:26:18
>>hadloc+vd
You are forgetting about the end of Moore's law. The costs of running large-scale AI won't drop dramatically. Any optimizations will require non-trivial, expensive, Bell Labs-level PhD research. Running intelligent LLMs will be financially accessible only to a few megacorps in the US and China (and perhaps to European governments). The AI "safety" teams will control the public discourse. Traditional search engines that blacklist websites with dissenting opinions will be viewed as the benevolent free-speech dinosaurs of the past.
6. dontup+1G 2023-11-22 11:29:49
>>cft+Ik
This assumes the only way to use LLMs effectively is to have a monolithic model that does everything, from translation (from ANY language to ANY language) to creative writing to coding to what have you. And supposedly GPT-4 is a mixture of experts (maybe 8-way).

Fine-tuned models are quite a bit more efficient, at the cost of giving up the rest of the world to do specific things well, and the disk space to hold a few dozen local fine-tunes (or even hundreds for SaaS services) is peanuts compared to acquiring 80 GB of VRAM on a single device for a monolithic model.
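
To make that concrete, here is a minimal sketch (my own illustration, not anything OpenAI or anyone here ships) of the "one base model in VRAM, many cheap fine-tunes on disk" pattern, using LoRA adapters via Hugging Face's peft library. The base model name and adapter paths are hypothetical placeholders.

    # Sketch only: one shared base model in GPU memory, task-specific
    # LoRA adapters swapped in per request. Names and paths are made up.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    BASE = "mistralai/Mistral-7B-v0.1"         # single base model held in VRAM
    ADAPTERS = {                               # dozens of these fit on disk;
        "sql": "./adapters/sql-lora",          # each is only tens of MB
        "legal": "./adapters/legal-lora",
        "translate": "./adapters/translate-lora",
    }

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")

    # Attach one adapter, then register the rest under their task names.
    model = PeftModel.from_pretrained(base, ADAPTERS["sql"], adapter_name="sql")
    for name, path in ADAPTERS.items():
        if name != "sql":
            model.load_adapter(path, adapter_name=name)

    def generate(task: str, prompt: str) -> str:
        """Route a request to the fine-tune for that task."""
        model.set_adapter(task)                # swap adapters, not whole models
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=128)
        return tokenizer.decode(out[0], skip_special_tokens=True)

Each LoRA adapter is typically tens of megabytes, so keeping dozens (or hundreds, for a SaaS) on disk really is peanuts next to the 80 GB of VRAM a single monolithic model wants.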

7. cft+eM 2023-11-22 12:18:35
>>dontup+1G
Sutskever says there's a "phase transition" on the order of 9 bn parameters, after which LLMs begin to become really useful. I don't know much here, but wouldn't the monomodels become overfit, because they don't have enough data for 9+bn parameters?
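
For a rough sense of scale on the data question: the Chinchilla scaling result (Hoffmann et al., 2022) suggests a compute-optimal rule of thumb of roughly 20 training tokens per parameter, so a 9 bn-parameter model "wants" on the order of 180 bn tokens. Back-of-envelope only; this is a general rule of thumb, not something Sutskever said.

    # Back-of-envelope: Chinchilla-style ~20 tokens per parameter.
    params = 9e9                     # 9 bn parameters
    tokens_per_param = 20            # rough compute-optimal ratio
    needed_tokens = params * tokens_per_param
    print(f"~{needed_tokens / 1e9:.0f} bn training tokens")   # -> ~180 bn

Whether a fine-tune of that size overfits then mostly comes down to how many in-domain tokens you can actually gather, not the parameter count alone.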