Less glibly: anyone with a horse in this race wants theirs to win. Dropping out doesn't make others stop trying, and arguably the only scalable way to prevent others from making and using unaligned AGI is to develop an aligned AGI first.
Also, having AGI tech would be incredibly, stupidly profitable. So if other people are going to try to make it anyway: why should you in particular stop? This is the shape of a prisoner's dilemma: in the one-shot game, "defect" is the dominant strategy, and cooperation only becomes viable when the game is repeated or agreements can actually be enforced.
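The game-theoretic claim can be checked directly. A minimal sketch, using standard illustrative prisoner's dilemma payoffs (the specific numbers are an assumption, not from the text): whatever the other player does, defecting pays strictly more.

```python
# Standard one-shot prisoner's dilemma payoffs (illustrative numbers):
# keyed by (my_move, their_move) -> my payoff.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff given the other's move."""
    return max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)])

# Defect beats cooperate against either opponent move,
# so it is a dominant strategy in the one-shot game.
for theirs in ("cooperate", "defect"):
    assert best_response(theirs) == "defect"
print("defect is a dominant strategy")
```

Iterated versions of the game (where players meet repeatedly and can retaliate) are exactly where this dominance argument breaks down, which is why enforcement and repeated interaction matter for cooperation.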