Is this the "path" to AGI? Who knows! But it is a path to benefiting humanity as Sam and his camp probably see it. Does Ilya have a different plan? If he does, he has a lot of catching up to do while the current productization of ChatGPT and GPTs continues marching forward. Maybe he sees a great leap forward in accuracy in GPT-5 or later. Or maybe he feels LLMs aren't the answer and there's a completely new paradigm on the horizon. Regardless, they still need to answer to the fact that both research and product need funds to buy and power GPUs, and to satisfy the MSFT partnership. Commercialization is their only clear answer to that right now. Future investments will likely not stray from this approach; otherwise investors will fund rivals who are more commercially motivated. That's business.
Thus, I'm all in on this commercially motivated, humanity-benefiting GPT product. Let the market take OpenAI's LLMs wherever it needs or wants them to go. Exciting things may follow!
I don't know if I agree, but the argument did make me think.
Eventually you need to expand, despite some risk, to push the testing forward.
Everyone has a different opinion on what level of safety AI should reach before it's released. "Makes no mistakes" and "never says something mean" are not attainable goals, unlike "reduce the rate of hallucinations, as defined by X, to <0.5% of total responses" or "given a set of known and imagined scenarios, the new model continues to have a zero false-negative rate".
When it's an engineering problem we're trying to solve, we can make progress, but no company can avoid all forms of harm as defined by everyone.