Also.. eventually anyone will be able to run a small bank of GPUs and train models equally capable to GPT-4 in a matter of days so it's all kinda.. moot and hilarious. Everyone's chatting about AGI alignment, but that's not something we can lockdown early or sufficiently. Then embedded industry folks are talking about constitutional AI as if it's some major alignment salve. But if they were honest they'd admit it's really just a SYSTEM prompt front-loaded with a bunch of axioms and "please be a good boy" rules, and is thus liable to endless injections and manipulations by means of mere persuasion.
The real threshold of 'danger' will be when someone puts an AGI 'instance' in a fully autonomous hardware that can interact with all manner of physical and digital spaces. ChatGPT isn't going to randomly 'break out'. I feel so let down by these kinds of technically illfounded scare tactics from the likes of Altman.