I'm not saying this will happen, but it seems to me like an incredibly silly move.
And Microsoft are risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.
Tbh idc if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff, because as usual, the larger problem is always going to be the human using the tool, not the tool itself.