People tend to really underestimate just how big these models are. Of course these models aren't simply "really really big" MLPs, but the clever techniques used to build them only pay off at insanely large scale.
I do find these models impressive as examples of "here's what insane amounts of data and insane amounts of compute can achieve with some matrix multiplication". But at the same time, that's all they are.
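To put rough numbers on "big", here's a back-of-the-envelope sketch in Python. The parameter count is an assumption for illustration (roughly GPT-3 scale), not a figure for any specific model:

    # Illustrative arithmetic only: 175B parameters stored as 16-bit floats.
    params = 175e9           # parameter count, assumed for illustration
    bytes_per_param = 2      # fp16
    weights_gb = params * bytes_per_param / 1e9
    print(f"Weights alone: {weights_gb:.0f} GB")   # ~350 GB
    # A top consumer GPU has ~24 GB of VRAM, so merely *holding* the
    # weights takes a rack of accelerators, before any training compute.
    print(f"24 GB GPUs just to hold the weights: {weights_gb / 24:.0f}")

And that's just the weights; training-time gradients, optimizer state, and activations multiply the footprint several times over.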
What saddens me about the rise of deep neural networks is that it really is the end of the era of true hackers. You can't reproduce this at home. You can't reproduce it in the cloud with any reasonable amount of funding. If you want to build this stuff, your best bet is to go to a top-tier school, make the right connections, and get hired by a mega-corp.
But the real tragedy here is that the output of this is honestly only interesting if it's the work of some hacker fiddling around in their spare time. A couple of friends hacking in their garage making images of a raccoon painting is pretty cool. One of the most powerful, best-funded companies, likely the owner of the most compute resources on the planet, doing this as its crowning achievement in AI... is depressing.
I think it's fair to say that this is the way it's always been. In 1990, you couldn't hack on an accurate fluid simulation at home, you needed to be at a university or research lab with access to a big cluster. But then, 10 years later, you could do it on a home PC. And then, 10 years after that, you could do it in a browser on the internet.
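For a sense of how far that curve has come: here's a toy sketch, not any particular solver, of just the diffusion (viscosity) step of a 2D grid fluid solver in the style of Stam's "stable fluids", with made-up grid size and constants. This class of computation now runs interactively on any laptop:

    import numpy as np

    # Toy sketch: explicit diffusion of one velocity component on a
    # 512x512 grid. Periodic boundaries and all constants are
    # arbitrary illustrative choices, not tuned values.
    N = 512
    visc, dt = 1e-4, 0.1
    u = np.random.rand(N, N)    # one velocity component

    for _ in range(100):        # 100 time steps in well under a second
        # 5-point Laplacian via array shifts
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * visc * lap

In 1990, grids like that at interactive rates were cluster territory; today it's a few lines of numpy.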
It's the same with this AI stuff.
I think if we weren't in the midst of this unique GPU supply crunch, the price of a used 1070 would be about $100 right now -- and such a card would have been state of the art 10 years ago!
Other funding models are possible as well; in the grand scheme of things, the price of these models is small enough.