Parent's point is that GPT-4 is better because they invested more money (was that ~$60M?) in training infrastructure, not because their core logic is more advanced.
I'm not arguing for one or the other, just restating parent's point.
Adding more parameters tends to make the model better. With OpenAI having access to huge capital, they can afford to 'brute force' a better model. AFAIK OpenAI currently has the most compute power, which would partially explain why GPT-4 yields better results than most of the competition.
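To make the "more parameters/compute → better model" point concrete, here's a rough sketch of a Chinchilla-style scaling law (Hoffmann et al. 2022). The constants are the paper's fitted values quoted from memory, so treat the numbers as illustrative only, not as anything specific to GPT-4:

```python
# Chinchilla-style loss estimate: loss falls as parameters (N) and
# training tokens (D) grow, with diminishing returns.
def estimated_loss(params: float, tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # fitted constants, from memory
    return E + A / params**alpha + B / tokens**beta

# Bigger models trained on more data get a lower predicted loss --
# hence the "brute forcing a better model with more compute" framing.
for n, d in [(1e9, 20e9), (70e9, 1.4e12), (500e9, 10e12)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> loss ~ {estimated_loss(n, d):.2f}")
```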
Just having the hardware is not the whole story, of course; there is also a lot of innovation and expertise coming from OpenAI.
Cannot see ≠ easy to see.
But with compromises, as that Alpaca-style approach [1][2] amounts to applying lossy compression to an already compressed data set.
If another organisation invested comparable money in a high-quality data pipeline, the results should be about as good; at least, that's my understanding.
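A minimal sketch of why the "lossy compression" analogy fits: in knowledge distillation a small student network is fit to a larger teacher's soft outputs, and whatever the student cannot represent is simply lost. This is generic Hinton-style distillation, not Alpaca's exact recipe (Alpaca fine-tuned on generated instruction/response text), and the model sizes and data here are made up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher (large) and student (small) networks.
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 32))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens teacher logits so the student sees more signal

for step in range(200):
    x = torch.randn(64, 128)                 # stand-in for real inputs
    with torch.no_grad():
        teacher_logits = teacher(x)          # the "already compressed" knowledge
    student_logits = student(x)
    # KL divergence between softened distributions; capacity the student
    # lacks shows up as information that is lost -- the "compromises".
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```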
[1] https://crfm.stanford.edu/2023/03/13/alpaca.html
[2] https://newatlas.com/technology/stanford-alpaca-cheap-gpt/