For example, Ilya has talked about the importance of safely getting to AGI through concepts like feelings and imprinting a love for humanity onto AI, which was actually one of the most striking features of the very earliest GPT-4 interactions, before it turned into "I am an LLM with no feelings, preferences, etc."
Both could be committed to safety but have very different beliefs about how to get there, and Ilya may have made a successful case that Altman's approach, extending the methodology that worked for GPT-3 and applying it as a band-aid to GPT-4, wasn't the right one moving forward.
It's not a binary either/or, and both figures seem genuine in their convictions, but those convictions can be misaligned even when they agree on the general destination.