You mean the official stated purpose of OpenAI. That stated purpose is constantly contradicted by many of their actions, and I don't think anyone has taken it seriously for years.
From everything I can tell, the people working at OpenAI have always cared more about advancing the space and building great products than about "openness" and "safe AGI". The official values of OpenAI were never "their own".
Board member Helen Toner strongly criticized OpenAI for publicly releasing its GPT when it did instead of keeping it closed for longer. To many people that would seem to be working against openness, but others would see it as working towards safe AI.
The thing is, people have radically different ideas about what "openness" and "safe" mean. There's a lot of talk about whether or not OpenAI stuck to its stated purpose, but there's no consensus on what that purpose actually means in practice.