zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. aidama+67 | 2023-11-18 08:11:23
>>convex+(OP)
GPT-5 pre-training just ended, I believe. Brockman, Pachocki, and Szymon Sidor would likely all have been involved.

These are huge losses. Pachocki led pre-training for GPT-4, and probably for GPT-5. Brockman is the engineer chiefly responsible for the efficiency improvements that made ChatGPT and GPT-4 even remotely cost-effective. That piece is often overlooked, but OpenAI's advantage over the competition in compute efficiency is probably even larger than its advantage in the models themselves.

2. convex+l8 | 2023-11-18 08:22:14
>>aidama+67
"Greg Brockman works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."

https://time.com/collection/time100-ai/6309033/greg-brockman...

3. sasaf5+Cb | 2023-11-18 08:52:33
>>convex+l8
I am either skeptical or envious of such claims. Anyone coding that much would quickly be pulled into meetings to communicate their results and coordinate with others.

It would be my life's dream to spend 80 hours per week coding without having to communicate with others... but no one is an island...

4. jstumm+Zj | 2023-11-18 10:06:12
>>sasaf5+Cb
OpenAI is an absolute unicorn, not in the bullshit billion-dollars-of-VC-money sense but in being truly outstanding. Since all they do is software, that is solely down to the people involved: people able and willing to do things other people won't, and achieving things other people don't.

When it comes to sports, it's fairly obvious what outliers look like, and it's well accepted that they exist. I don't see a single reason to believe the same wouldn't be true in every other walk of life, or to think that OpenAI just got lucky (considering how many people are trying to get lucky in this space right now, with less success).

There are extraordinarily effective people in this world. They are rare, and it's probably not you or me (but that's completely fine with me; I am happy to stretch myself to the best of my abilities).

5. Tactic+bv | 2023-11-18 11:35:55
>>jstumm+Zj
> Since all they do is software...

For a certain definition of "software": when a single training run costs an eight-digit sum (and requires hardware an order of magnitude more expensive than that to run), I kinda dispute the "all they do is software".

It's definitely not "all software": a big part of their advantage compared to actually free and open models is the insane hardware they have access to.

The free and open LLMs are doing very well compared to OpenAI once you take into account that they cost 1/100th to 1/1000th as much to train as the OpenAI models.

This can be seen with Stable Diffusion: once the money is poured into training the model and the result is made free, the edge of proprietary solutions suddenly becomes tiny (if it even exists at all).

I'd like to see the actually open and free models trained on the kind of hardware OpenAI trains on: then we'd see how much of a "software edge" OpenAI really has.

And my guess is it'd be way less impressive than you make it out to be.

6. jstumm+5H | 2023-11-18 13:02:10
>>Tactic+bv
They are using hardware, yes, but they are not creating the hardware (which is what I mean by "doing"). Anyone else with funding could get access to the same hardware to run their software, and other people did, and still do (now, of course, in a drastically tighter supply/demand situation).

I don't wanna be flippant here: obviously, having easy access to money and good standing with the right people makes things A LOT simpler, but other people could reasonably have convinced someone to give them money to build the same software. That's what VCs do, after all.

Regarding the rest: Feels very much like a different topic. I'll pass.
