They're taking for granted that they'll create AI systems much smarter than humans.
They're taking for granted that by default they wouldn't be able to control these systems.
They're saying the solution will be creating a new, separate team.
That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one that you might have to bake in through and through. Not spin off to the side with a separate team.
There are also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" Or "we're taking the risks so seriously that we're gonna do it anyway."
They're taking for granted that superintelligence is achievable within the next decade (regardless of who achieves it).
>They're taking for granted that by default they wouldn't be able to control these systems.
That's reasonable though. You wouldn't need guardrails on anything if manufacturers built everything to spec without error, and users used everything 100% perfectly.
But you can't make those presumptions in the real world. You can't just say "make a good hacksaw and people won't cut their arm off". And you can't presume the people tasked with making a mechanically desirable and marketable hacksaw are also proficient in creating a safe one.
>They're saying the solution will be creating a new, separate team.
The team isn't the solution. The solution may be born of that team.
>There are also some minor vibes of [...] "we're taking the risks so seriously that we're gonna do it anyway."
The alternative is to throw the baby out with the bathwater.
The goal here is to keep the useful bits of AGI and protect against the dangerous bits.
If it's achieved by someone else, why should we assume that the other person or group will give a damn about anything done by this team?
What influence would this team have on other organizations, especially if you put your dystopia-flavored speculation hat on and imagine a more rogue group...
This team is only relevant to OpenAI and OpenAI-affiliated work. In that case, yes, it's weird to write marketing press-release copy that treats one hard thing as a fait accompli while hyping up how hard this other particular slice of the problem is.
You can't assume that. But that doesn't mean some 3rd party wouldn't be interested in utilizing that research anyway.