I don’t get the obsession with safety. If an organisation’s stated goal is to create AGI, how can you reasonably think you can ever make it “safe”? We’re talking about an intelligence that’s orders of magnitude smarter than the smartest human. How can you possibly imagine reining it in?