I think there is an argument to be made that not every powerful LLM should be open source. But yes, maybe we're worried about nothing. On the other hand, these tools can easily spread misinformation, increase animosity, etc., even in today's world.
I come from the medical field, where we perform risk analyses to dictate how strictly we need to test things before releasing them into the wild. None of this exists for AI (yet).
I do think that, for humanity, a focus on alignment is many times more important than ChatGPT stores though.