> Broadly distributed benefits
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Hell, it's the first bullet point in it!
You can't just define AI safety concerns as 'the set of scenarios depicted in fairy tales' and then dismiss them with 'well, fairy tales aren't real...'
I just know it's hard to do much worse than putting this question in the hands of a highly optimized, profit-first enterprise.