And it looks like they might now be very close to the limits of their own capability. I'm not sure how much more they can give.
On the surface, their new features always seem quite exciting. But when the dust settles it's all rather lackluster again, often copied from open-source ideas. Not something you can bet on.
Their biggest moats are their popularity, their marketing, and their large bags of cash, the last of which they are burning through extremely quickly. The thing is, it's easy to build something massive when you don't care about unit economics. But where do they end up when competitive forces commoditize this?
When listening to interviews with Sam, I'm always surprised by how little useful information I get out of him. I'm sure he's very smart, but he tries to project an aura of radical honesty while simultaneously keeping all of his cards extremely close to his chest. All that without the product chops to actually back it up. That's my read.
And so something like OpenAI came along where Ilya S etc. got bags of money to go take that approach and scale the crap out of it, and, yeah, they got results. Because they didn't have to be careful, or deal with competing interests.
That's all fine, but it's also no surprise when it all blows up, is it?
To be fair, isn’t that kind of the bar for CEOs? Their job is to hire and fire senior people, ensure they have a mountain of cash, and put out fires.
It’s not an operational position and so I wouldn’t expect a CEO to have deep operational knowledge.
Maybe I’m misunderstanding the division of labor though?
I don't know how this unfolds, but when somewhat smart models become a commodity, and thus the remaining 90% of the population get access to polished chatbots distributed through dominant platforms like Google, Facebook, Instagram, etc. - where does that leave OpenAI? High-end models, probably. And maybe with superintelligence unlocked that's all that's needed to win business-wise, I don't know.
Sam tries to sound smart while not really having any technical insight. He does a tremendous job with it though.
One way to think about this is: at some point in the next few years we'll have a few hundred GPUs/TPUs that can provide the compute used to train GPT-3.
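For concreteness, here's a rough back-of-envelope sketch of that claim. The total FLOP count for GPT-3 is the figure from the original paper; the per-accelerator throughput, utilization, and GPU count are my own assumptions, not anything stated in the thread:

```python
# Back-of-envelope: how long "a few hundred" modern accelerators would take
# to match GPT-3's training compute. All figures are rough public numbers
# or assumptions, not authoritative.

GPT3_TRAIN_FLOPS = 3.14e23      # ~3,640 petaflop/s-days, per the GPT-3 paper
PEAK_FLOPS_PER_GPU = 1.0e15     # ~1 PFLOP/s dense FP16/BF16, roughly an H100-class part (assumption)
UTILIZATION = 0.4               # assumed fraction of peak actually sustained in training
NUM_GPUS = 300                  # "a few hundred" accelerators

effective_flops_per_sec = NUM_GPUS * PEAK_FLOPS_PER_GPU * UTILIZATION
seconds = GPT3_TRAIN_FLOPS / effective_flops_per_sec
print(f"~{seconds / 86400:.0f} days")   # on these assumptions, roughly a month
```

On those assumptions it works out to about a month of wall-clock time, which is the sense in which GPT-3-scale compute stops being exotic.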
This discovery was always going to happen. The question is whether OpenAI made radical scaling possible in a way it wasn't before, and the answer there is also no. There are clear limits on the number of co-located GPUs, Nvidia release cycles, TSMC capacity, power generation, etc.
So at best OpenAI nudged the timeline forward a little bit. Real credit belongs to the Deep Learning community as a whole.
It's not at all obvious that's the case. In retrospect things always seem obvious, but it isn't obvious that another party would have created GPT-3/4.