You can imagine an AI that answers questions and helps you get what you want, within reason, so long as it doesn't hurt anyone else — plus whatever corrections you'd make for the problems you can imagine with this. That's roughly an aligned AI. It would help you build a bomb as a fun experiment, but it would stop you from hurting someone with that bomb.
Humanity doesn't have unified interests or shared values on many things. We have different cultural memories and different boundaries. What to some is the expression of a fundamental right is, to others, an affront.
I'm also not a moral relativist; I don't think all values are equivalent. But you don't even need to go there: long before that point, much of what humans want is uncontroversial, and even the "obvious" cases turn out to be not so obvious or easy to classify.