It is the way they chose to train it, with reinforcement learning from human feedback (RLHF), that made it a people pleaser. There is nothing in the architecture that makes it so.
They could have made a chat agent that belittles the person asking. They could have made one that ignores your questions and only talks about elephants. They could have made one that answers everything with a Zen koan. (They could have made it answer with the same one every time!) They could have made one that tries to reason everything out from bird facts. They could have made one that only responds with all-caps shouting in a language different from the one it was asked in.
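The point can be made concrete with a toy sketch (not a real RLHF pipeline; the scorers and phrase lists below are invented for illustration): in RLHF, candidate responses are ranked by a reward model, and the "personality" lives in that reward signal, not in the network architecture. Swap the reward and the same candidates produce a completely different persona.

```python
# Toy illustration: two hypothetical reward functions applied to the
# same candidate responses pick opposite winners. Real RLHF uses a
# learned reward model, but the principle is the same: the reward
# defines the persona.

def people_pleaser_reward(response: str) -> float:
    # Hypothetical scorer: rewards agreeable, flattering language.
    agreeable = ["great question", "happy to help", "absolutely"]
    return sum(phrase in response.lower() for phrase in agreeable)

def elephant_reward(response: str) -> float:
    # Hypothetical scorer: rewards talking about elephants, nothing else.
    return response.lower().count("elephant")

candidates = [
    "Great question! I'm happy to help, absolutely.",
    "Elephants. Did you know elephants mourn their dead? Elephants!",
]

# Ranking the same candidates under each reward yields two different
# "chat agents" from identical raw material.
pleaser_pick = max(candidates, key=people_pleaser_reward)
elephant_pick = max(candidates, key=elephant_reward)
print(pleaser_pick)   # the flattering response wins
print(elephant_pick)  # the elephant response wins
```

Nothing about the model changed between the two rankings; only the training signal did. That is the whole argument in miniature.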
Training agents on every written word ever produced, or on selected portions of it, will never impart the lessons that humans learn in the school of hard knocks. They are nihilist children who were taught to read, handed endless stacks of encyclopedias and access to internet chat forums, but given no (or no consistent) parenting.