>> 0xDEAF (OP)
I don’t really understand what safety work is or entails here, given that OpenAI will surely not be the only group to achieve AGI (assuming any group does). What stops other companies from offering similar models with no (or just less) regard for safety/alignment, which might even be seen as a competitive edge against other providers? Would the “safety work” being done or thought about somehow affect other eventual players in the market? Regulation faces the same challenge, just with nations instead of companies, and AFAIK that was more Sam’s domain than Ilya’s.

It almost seems like accelerating in order to establish a monopolistic presence in the market, preventing other players from becoming viable, and then working on safety afterwards would give a better chance of safety long-term… but that of course also seems very unrealistic.

More broadly, I think that if we’re concerned with the safety of humanity as a species, we can’t think about the safety problem on the timescale of individual companies or people, or even governments. I do wonder how Ilya and his team are thinking about this.