This is mostly true, yeah. Although in some cases OpenAI is genuinely pushing the research forward themselves (not just running existing algorithms/architectures at scale) and therefore has a first-mover advantage. The big example is fine-tuning with reinforcement learning on human preference data (Reinforcement Learning from Human Feedback, RLHF), which is basically the secret sauce that turned the original, relatively dumb GPT-3 (a pure language model, i.e. "document autocomplete": what is the next most likely token given my training data?) into ChatGPT (what token would a human prefer to come next?).
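
To make the objective shift concrete, here's a toy sketch with entirely made-up numbers (not OpenAI's actual pipeline, and real RLHF updates the model's weights with PPO against a learned reward model plus a KL penalty to the base model, rather than reranking at inference time): the base LM picks whatever is statistically likely in web text, while the preference signal pulls the choice toward what a human would actually rate as helpful.

```python
import math

prompt = "How do I boil an egg?"

# Hypothetical next-token candidates with base-LM probabilities
# (what tends to follow such a question in scraped web text).
base_lm_probs = {
    "idk": 0.30,     # plausible forum-style continuation
    "Google": 0.25,  # also common in training data
    "Place": 0.20,   # start of an actually helpful answer
    "lol": 0.25,
}

# Hypothetical reward-model scores: how much a human rater would
# prefer each token as the start of the reply.
reward = {
    "idk": -1.0,
    "Google": -0.5,
    "Place": 2.0,
    "lol": -2.0,
}

# Pure LM objective: pick the most likely continuation.
lm_choice = max(base_lm_probs, key=base_lm_probs.get)

# RLHF-flavoured objective: trade off likelihood against the
# human-preference reward (beta controls the trade-off strength).
beta = 1.0
rlhf_scores = {t: math.log(p) + beta * reward[t]
               for t, p in base_lm_probs.items()}
rlhf_choice = max(rlhf_scores, key=rlhf_scores.get)

print("base LM picks:   ", lm_choice)    # -> "idk"
print("RLHF-tuned picks:", rlhf_choice)  # -> "Place"
```

Same underlying probabilities, different winner: that's the whole trick, just baked into the weights instead of done as a rerank.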