Is that right?
I don't know whether Altman or Sutskever is right; there seems to be a kind of arms race between the companies. OpenAI may be past the point where they can try out a radically different system, due to competition in the space. Maybe trying out new approaches could only work in a new company, who knows?
Inward safety for people means their vulnerabilities are not exposed and/or open to attack. Outward safety means they are not attacking others and are looking out for the general wellbeing of others. We have a lot of different social constructs with other people through which we attempt to keep this in balance. It doesn't work out for everyone, but in general it's somewhat stable.
What does it mean to be safe if you're not under threat of being attacked or harmed? What does attacking others mean if doing so is meaningless, just a calculation? This covers everything from telling someone to kill themselves (it's just words) to issuing a set of commands to external devices with real-world effects (print the Ebola virus or launch a nuclear weapon). The concern here is that the AI of the future will be extraordinarily powerful yet very risky when it comes to making decisions that could harm others.