I'm not saying I agree that being closed source is in the public good, although one could certainly argue that accelerating bad actors' efforts to catch up would not be a positive.
Not really. At best it slows them down, like security through obscurity. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone doing the same work behind closed doors has a better chance of gaining an advantage when misusing it.
Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate seems a positive testament to the strategy.
Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble about this, but it's not clear to me what the balance of benefit and danger is for current AI. (Not even considering the possibility of AGI, which is beyond the scope of my comment.)
If one could just walk into a store and buy plutonium, then society would probably take a much different approach to nuclear security.
Transparency doesn't serve us here.
Bioweapons are the thing itself; AI is a tool to make things. That's exactly the most important distinction here. Bioweapon research didn't also serendipitously make available powerful tools for generating images/sounds/text/ideas/plans -- so there isn't much reason to compare the benefits of the two.
These arguments aren't the same as "Let's ban the personal creation of terrifying weaponry"; they're the same as "Let's ban wrenches and hack-saws because years down the line they could be used to facilitate the creation of terrifying weaponry" -- the problem with this argument being that it ignores the boons such tools will bring humanity.
Wrenches and hammers would have been banned too had they been framed as weapons of bludgeoning and torture by those who first encountered them. Thankfully people saw the benefits they offered instead.
You're literally painting a perfect analogy for biotech/nuclear/AI. Catastrophe and culture-shifting benefits go hand in hand with all of them. It's about figuring out where the lines are. But claiming there is minimal or negligible risk ("so let's just run with it" as some say, maybe not you) feels very cavalier to me.
But you're not alone if you feel that way. I feel like I'm taking crazy pills with how the software dev field talks about sharing AI openly.
And I've literally been an open culture advocate for over a decade, and have helped hundreds of ppl start open community projects. If there's anyone who'd be excited about open collaboration, it's me! :)