zlacker

1. smolde (OP) 2023-11-18 08:05:53
Transformer-based LLMs are more than half a decade old at this point, and GPT-4 is the least efficient model of its kind ever produced (that I'm aware of).

OpenAI's performance is not, and never has been, proportional to the size of their models. Their big advantage is scale: subsidized cloud costs let them ship unrealistically large models. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper alternative of comparable quality.

What exactly are we holding out for, at this point? A miracle?
