There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group related to superintelligence alignment is unnecessary.
>A language model is a mathematical construct
That is like telling someone from the Middle Ages that a gun is merely an assemblage of metal parts, not too different from the horseshoes and cast-iron nails produced by your village blacksmith, and that consequently it is safe to give a child a loaded gun.
ADDED: Actually, a better response (because it does not rely on an analogy) is to point out that none of the people who are upset over the possibility that most of the benefits of AI might accrue to a few tech titans and billionaires would be in the least bit reassured by being told that an AI model is just a mathematical construct.
You mean like PaLM-E? https://palm-e.github.io/
Embodiment is the easy part.
Do you have a better analogy? I'd like to hear more about why ML models can't be intelligent, if you don't mind.
I'm pretty skeptical of the idea that we know enough at this point to make that claim definitively.
I'm tuning my probability back to 99%; I still don't believe that just feeding more data to an LLM will do it. But I'll allow for the possibility.
Books (and writing) are a big force in cultural evolution.
Books aren't data/info-processing machines, by themselves. LLMs are.
My intuition leads me to believe that these are emergent properties/characteristics of large, complex prediction engines. A sufficiently good prediction/optimization engine can act in an agentic way without ever having had that explicit goal.
I recently read this very interesting piece that dives into this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
The next steps would be in totally different fields, like implementing actual reasoning, global outline planning, and the capacity to evolve after training is done.