zlacker

[parent] [thread] 9 comments
1. Satam+(OP)[view] [source] 2023-11-19 00:33:56
It now seems to me that it was inevitable that something like GPT would take off - but it didn't necessarily have to come from OpenAI. Someone else would have filled their place. The collective ML knowledge and research were rapidly evolving, computing was getting faster and cheaper. The pressure was building and at some point, something somewhere had to pop off. They were a great but not a singular team.

And it looks like now they might be very close to the limits of their own capability. I'm not sure how much more they can give.

On the surface, their new features always seem to be quite exciting. But when the dust settles it is again all very lackluster, often copied from open source ideas. Not something you can bet on.

Their biggest moats are their popularity, marketing, and their large bags of cash. The latter of which they are burning through extremely quickly. The thing is, it's easy to build something massive when you don't care about unit economics. But where do they end up when the competitive forces commoditize this?

When listening to interviews with Sam, I was always surprised by how little useful information I was able to get out of him. I'm sure he's very smart, but he tries to project an aura of radical honesty while simultaneously keeping all of his cards extremely close to his chest. All that without the product chops to actually back it up. That's my read.

replies(3): >>cmrdpo+t1 >>erikpu+F1 >>jimsim+R4
2. cmrdpo+t1[view] [source] 2023-11-19 00:44:16
>>Satam+(OP)
The reason something like GPT didn't come out of e.g. Google is that they had/have similar stuff but were very reticent about making it public -- because of the ethical aspects as well as the sheer resource cost of hosting it for the public -- and I also suspect they naturally have internal conflicts about whether it's the right direction to take, etc.

And so something like OpenAI came along where Ilya S etc. got bags of money to go take that approach and scale the crap out of it, and, yeah, they got results. Because they didn't have to be careful, or deal with competing interests.

That's all fine, but it's also no surprise when it all blows up, is it?

replies(3): >>tempes+M2 >>pishpa+l8 >>erosen+H8
3. erikpu+F1[view] [source] 2023-11-19 00:46:06
>>Satam+(OP)
> When listening to interviews with Sam I was always surprised by how little useful information I am able to get out of listening to him

To be fair, isn’t that kind of the bar for CEOs? Their job is to hire and fire senior people, ensure they have a mountain of cash, and put out fires.

It’s not an operational position and so I wouldn’t expect a CEO to have deep operational knowledge.

Maybe I’m misunderstanding the division of labor though?

replies(1): >>Satam+u4
4. tempes+M2[view] [source] [discussion] 2023-11-19 00:52:03
>>cmrdpo+t1
And because of the advantage of using it internally without sharing it with competitors?
5. Satam+u4[view] [source] [discussion] 2023-11-19 01:02:58
>>erikpu+F1
I think you're right but there might be a catch. It doesn't seem like he's able to steer the delivery of polished products either. I know it's the fastest growing app and all that but that's driven by their tech. I use ChatGPT daily but the tool itself has a subpar design, it lags, its streaming UI is choppy, it breaks and cuts off mid-sentence, and they are not able to meet the demand either.

I don't know how this unfolds, but when somewhat smart models become a commodity, and thus the remaining 90% of the population get access to polished chatbots distributed through dominant platforms like Google, Facebook, Instagram, etc. - where does that leave OpenAI? High-end models, probably. And maybe with superintelligence unlocked that's all that's needed to win business-wise, I don't know.

replies(1): >>buckst+mc
6. jimsim+R4[view] [source] 2023-11-19 01:06:41
>>Satam+(OP)
Insanely based take.

Sam tries to sound smart while not really having any technical insight. He does a tremendous job with it though.

One way to think about this is: at some point in the next few years we'll have a few hundred GPUs/TPUs that can provide the compute used to train GPT-3.

This discovery was always going to happen. The question is whether OpenAI made radical scaling possible in a way it wasn't before. The answer there is also no. There are clear limits on the number of colocated GPUs, Nvidia release cycles, TSMC capacity, power generation, etc.

So in the best case OpenAI nudged the timeline forward a little bit. The real credit belongs to the Deep Learning community as a whole.

replies(1): >>discor+O9
7. pishpa+l8[view] [source] [discussion] 2023-11-19 01:34:44
>>cmrdpo+t1
It is for sure not because of ethical concerns. There is a higher bar to clear to burn cash when other projects are delivering high ROI. That sort of thing hasn't come out of Google in that form since the Google Labs days.
8. erosen+H8[view] [source] [discussion] 2023-11-19 01:37:06
>>cmrdpo+t1
Being first to openly generate from billions of copyrighted documents would not have been a sane move for Google's management.
9. discor+O9[view] [source] [discussion] 2023-11-19 01:45:14
>>jimsim+R4
> This discovery was always going to happen.

It's not obvious that's the case. In retrospect things always seem obvious, but that another party would have created GPT-3/4 is not.

10. buckst+mc[view] [source] [discussion] 2023-11-19 02:01:17
>>Satam+u4
It runs on Azure.