“resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person”
I don't think it will be hard to show that they are doing something very different from what they said they were going to do.
It is safer to operate an AI as a centralized service: if you discover dangerous capabilities, you can turn the model off or mitigate them.
If you open-weight the model and dangerous capabilities are later discovered, there is no way to put the genie back in the bottle; the weights are out there, and anyone can use them.
This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).