And it looks like they might now be very close to the limits of their own capability. I'm not sure how much more they can give.
On the surface, their new features always seem quite exciting. But when the dust settles, it all turns out to be fairly lackluster again, often copied from open-source ideas. Not something you can bet on.
Their biggest moats are their popularity, their marketing, and their large bags of cash, the last of which they are burning through extremely quickly. The thing is, it's easy to build something massive when you don't care about unit economics. But where do they end up when competitive forces commoditize this?
When listening to interviews with Sam, I was always surprised by how little useful information I could get out of them. I'm sure he's very smart, but he tries to project an aura of radical honesty while simultaneously keeping all of his cards extremely close to his chest. All that without the product chops to actually back it up. That's my read.
Sam tries to sound smart while not really having any technical insight. He does a tremendous job of it, though.
One way to think about this is: at some point in the next few years, a few hundred GPUs/TPUs will be enough to provide the compute used to train GPT-3.
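To make that concrete, here's a rough back-of-envelope sketch. The numbers are my own assumptions, not figures from the thread: GPT-3's published ~3.14e23 training FLOPs, an H100-class accelerator at roughly 1 PFLOP/s BF16 peak, ~40% sustained utilization, and a one-month run.

```python
# Back-of-envelope: how many modern accelerators to match GPT-3's training compute?
# All constants below are assumptions for illustration, not measured figures.

GPT3_TRAIN_FLOPS = 3.14e23     # GPT-3 paper's total training compute estimate
PEAK_FLOPS_PER_GPU = 1.0e15    # ~1 PFLOP/s BF16, H100-class accelerator (assumed)
UTILIZATION = 0.4              # sustained fraction of peak (assumed)
TRAIN_DAYS = 30                # target wall-clock time (assumed)

sustained_per_gpu = PEAK_FLOPS_PER_GPU * UTILIZATION
seconds = TRAIN_DAYS * 24 * 3600
gpus_needed = GPT3_TRAIN_FLOPS / (sustained_per_gpu * seconds)

print(f"GPUs for a {TRAIN_DAYS}-day GPT-3-scale run: ~{gpus_needed:.0f}")
# -> roughly 300, i.e. "a few hundred" accelerators
```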
This discovery was always going to happen. The question is whether OpenAI made radical scaling possible in a way that wasn't possible before. The answer there is also no: there are clear limits on the number of co-located GPUs, NVIDIA release cycles, TSMC capacity, power generation, and so on.
So in the best case, OpenAI nudged the timeline forward a little bit. The real credit belongs to the deep learning community as a whole.
It's not at all obvious that's the case. In retrospect things always seem obvious, but it is not obvious that another party would have created GPT-3/4.