AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to solve.
AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider that important.