They're taking for granted that they'll create AI systems much smarter than humans.
They're taking for granted that, by default, they won't be able to control these systems.
They're saying the solution is to create a new, separate team.
That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one you'd have to bake in through and through, not spin off to the side with a separate team.
There are also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" or "we're taking the risks so seriously that we're gonna do it anyway."