Personally, I feel like the risks of future AI developments are real, but none of the stuff I've seen OpenAI do so far has made ChatGPT actually feel "safer" (in the sense of, e.g., preventing unhealthy parasocial relationships with the system, or actually being helpful when it comes to ethical conflicts). It's just made it more stuck-up and excessively moralizing, in a way that feels 100% tuned for bland corporate PR bot usage.