Not saying I agree that being closed source is in the public good, although one could certainly argue that accelerating the efforts of bad actors to catch up would not be a positive.
Not really. At best it slows things down, like security through obscurity. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone doing the same work in a closed manner has a better chance of gaining an advantage by misusing it.
The whole “security through obscurity doesn’t work” line is absolute nonsense. It absolutely works, and there are countless real-world examples. What doesn’t work is relying on it as your ONLY security.
While the US briefly had unique knowledge about the manufacture of nuclear weapons, the basics could be easily worked out from first principles, especially once schoolchildren could pick up an up-to-date book on atomic physics. The engineering and testing part is difficult, of course, but for a large nation-state stealing the plans is only a shortcut. The on-paper part of the engineering is doable by any team with the right skills. So the main blocker with nuclear weapons isn't the knowledge, it's acquiring the raw fissile material and establishing the industrial base required to refine it.
This makes nuclear weapons a poor analogy for AI, because all you need to develop an LLM is a big pile of commodity GPUs, the publicly available training data, some decent software engineers, and time.
So in both cases, all security through obscurity buys you is a delay, and with AI probably not a very long one (except maybe if you can restrict the supply of GPUs, though the effectiveness of that strategy against China et al. remains to be seen).