>>Aperoc+(OP)
Many teams are trying to combine their ideas with LLMs, because despite their weaknesses, LLMs (and related concepts such as RLHF, transformers, self-supervised learning, and internet-scale datasets) have made some remarkable gains. These teams come from across the whole spectrum of ML and AI research, and they hope to use their ideas to overcome some of the weaknesses of current-day LLMs. Do you also think that none of these hybrid "children" can lead to AGI? Why not?