Not really. It slows things down, like security through obscurity. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone doing the same work in a closed manner has a better chance of gaining an advantage by misusing it.
Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate seems a positive testament to the strategy.
Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble here, but it's not clear to me what the balance of benefit and danger is for current AI. (That's not even considering the possibility of AGI, which is beyond the scope of my comment.)
If OpenAI can do it, I would not say it's very unlikely for someone else to do the same, open or not. Our best chance is still to prepare with the best available information.
The whole “security through obscurity doesn’t work” is absolute nonsense. It absolutely works and there are countless real world examples. What doesn’t work is relying on that as your ONLY security.
The tech exists, and will rapidly become easy to access. There is approximately zero chance of it remaining behind lock and key.
While the US briefly had unique knowledge about the manufacture of nuclear weapons, the basics could be easily worked out from first principles, especially once schoolchildren could pick up an up-to-date book on atomic physics. The engineering and testing part is difficult, of course, but for a large nation-state stealing the plans is only a shortcut. The on-paper part of the engineering is doable by any team with the right skills. So the main blocker with nuclear weapons isn't the knowledge, it's acquiring the raw fissile material and establishing the industrial base required to refine it.
This makes nuclear weapons a poor analogy for AI, because all you need to develop an LLM is a big pile of commodity GPUs, the publicly available training data, some decent software engineers, and time.
So in both cases all security-through-obscurity will buy you is a delay, and when it comes to AI probably not a very long one (except maybe if you can restrict the supply of GPUs, but the effectiveness of that strategy against China et al remains to be seen).
And the countries that want nukes have some anyway, even if they are not as good.
It's not because of arcane secrets that so few nations have nuclear weapons; all you need is a budget, time, and brilliant physicists and engineers. The reason we don't have more is largely down to surveillance, economics, the challenge of reliable payload delivery, security assurances, agreements, and various logistical hurdles.
Most countries are open and transparent about their nuclear efforts due to the diplomatic advantages. There are also methods to trace and detect secret nuclear tests, and critical supply chains can be monitored. Countries that violate these norms can face anything from heavy economic sanctions and isolation to sabotage of research efforts. On the technical side, having safe and reliable launch capacity is arguably as much of a challenge as the bomb itself, if not more. Logistical issues include mass manufacture (merely having capacity only paints a target on your back with no real gains) and safe storage. There are a great many reasons why it is simply not worth going forward with nuclear weapons. This calculus changes, however, if a country has cause to fear for its continued existence, as is presently the case for some Eastern European countries.
If one could just walk into a store and buy plutonium, then society would probably take a much different approach to nuclear security.
Transparency doesn't serve us here.
Bioweapons do not have the kind of dual-use beneficial purpose that AI does. As a result, AI development will continue regardless; it can give a competitive advantage in any field.
Bioweapons are not exactly secret either. Most of the methods to develop such things are open science. The limiting factor is that you potentially kill your own people as well, and the use case is really just a weapon for some madman, with no other benefits.
Edit: To add, the science behind "bioweapons" (or the genetic modification of viruses/bacteria) is public precisely so that we can prevent the next pandemic.
The US (and other nations) is not too friendly toward countries developing nukes. There are significant threats against them.
Also perspective is an interesting thing. Non-nuclear countries like Iran and (in the past) North Korea that get pushed around by western governments probably wouldn't agree that restriction is for the best. They would probably explain how nukes and the threat of destruction/MAD make people a lot more understanding, respectful, and restrained. Consider how Russia has been handled the past few years, compared to say Iraq.
(To be clear I'm not saying we should YOLO with nukes and other weapon information/technology, I'm just saying I think it's a lot more complicated an issue than it at first seems, and in the end it kind of comes down to who has the power, and who does not have the power, and the people without the power probably won't like it).
Bioweapons are the thing; AI is a tool to make things. That's exactly the most powerful distinction here. Bioweapon research didn't also serendipitously make available powerful tools for the generation of images/sounds/text/ideas/plans -- so there isn't much reason to compare the benefits of the two.
These arguments aren't the same as "Let's ban the personal creation of terrifying weaponry"; they're the same as "Let's ban wrenches and hack-saws because they could be used years down the line to facilitate the creation of terrifying weaponry" -- the problem with this argument being that it ignores the boons that such tools will allow for humanity.
Wrenches and hammers would have been banned too had they been framed as weapons of bludgeoning and torture by those that first encountered them. Thankfully people saw the benefits offered otherwise.
It's more like 'security through scarcity and trade control.'
This has been the case since 1960: https://www.theguardian.com/world/2003/jun/24/usa.science
Consider nuclear nonproliferation. It doesn't only affect weapons - it also affects nuclear power generation, nuclear physics research and even medicine. There's various degrees of secrecy to research and technologies that affect "tools that people around the world right now are using towards personal/professional/capitalistic benefit". Why? Because the same knowledge makes military and terrorist applications easier, reducing barrier to entry.
Consider then, biotech, particularly synthetic biology and genetic engineering. All that knowledge is dual-use, and unlike with nuclear weapons, biotech seems to scale down well. As a result, we have both a growing industry and research field, and kids playing with those same techniques at school and at home. Biohackerspaces were already a thing over a decade ago (I would know, I tried to start one in my city circa 2013). There's a reason all those developments have been accompanied by a certain unease and fear. Today, an unlucky biohacker may give themselves diarrhea or cancer, in ten years, they may accidentally end the world. Unlike with nuclear weapons, there's no natural barrier to scaling this capability down to individual level.
And of course, between the diarrhea and the humanity-ending "hold my beer and watch this" gain-of-function research, there's whole range of smaller things like getting a community sick, or destroying a local ecosystem. And I'm only talking about accidents with peaceful/civilian work here, ignoring deliberate weaponization.
To get a taste of what I'm talking about: if you buy into the lab leak hypothesis for COVID-19, then this is what a random fuckup at a random BSL-4 lab looks like, when we are lucky and get off easy. That is why biotech is another item on the x-risks list.
Back to the point: the AI x-risk is fundamentally more similar to biotech x-risk than nuclear x-risk, because the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released. The threat dynamics are similar to a highly-virulent pathogen, and not to a nuclear exchange between nation states - hence the comparison I've made in the original comment.
The most risky parts of biotech, the ones directly related to bioweapons, are not made publicly accessible - but it's hard, as unlike with nukes, biotech is dual-use to the very end, so we have to balance prevention and defense with ease of creating deadly pathogens.
Except the GPUs are on export control, and keeping up with the arms race requires a bunch of data you don't have access to (NVidia's IP) - or direct access to the source.
Just like building a nuclear weapon requires access to either already-refined fissile material or the IP and skills to build your own refining facilities (IP most countries don't have). Literally everyone has access to uranium; being able to do something useful with it is another story.
Kind of like... AI.
But that's irrelevant anyway, because nukes are a terrible analogy. If you insist on sci-fi speculation, use an analogy that's somewhat remotely similar -- perhaps compare the development of AI vs. traditional medicine. They're both very general technologies with incredible benefits and important dangers (e.g. superbugs, etc).
Every wealthy nation & individual on Earth has abundant access to AI's "ingredients" -- compute, data, and algorithms from the '80s. The resource controls aren't really comparable to nuclear weapons. Moreover, banning nukes won't also potentially delay cures for disease, unlock fusion, throw material science innovation into overdrive, and other incredible developments. That's because you're comparing a general tool to one exclusively proliferated for mass slaughter. It's just...not a remotely appropriate comparison.
I'm not sure why you're conflating process technology with GPUs, but if you want to go there, sure. If anyone was surprised by China announcing they had the understanding of how to do 7nm, they haven't been paying attention. China has been openly and actively poaching TSMC engineers for nearly a decade now.
Announcing you can create a 7nm chip is a VERY, VERY different thing than producing those chips at scale. The most ambitious estimates put it at a 50% yield, and the reality is with China's disinformation engine, it's probably closer to 20%. They will not be catching up in process technology anytime soon.
>Every wealthy nation & individual on Earth has abundant access to AI's "ingredients" -- compute, data, and algorithms from the '80s. The resource controls aren't really comparable to nuclear weapons. Moreover, banning nukes won't also potentially delay cures for disease, unlock fusion, throw material science innovation into overdrive, and other incredible developments. That's because you're comparing a general tool to one exclusively proliferated for mass slaughter. It's just...not a remotely appropriate comparison.
Except they don't? Every nation on earth doesn't have access to the technology to scale compute to the levels needed to make meaningful advances in AI. To say otherwise shows an ignorance of the market. There are a handful of nations capable, at best. Just like there are a handful of nations that have any hope of producing a nuclear weapon.
Artificial general intelligence is not a stick you can wield and threaten other countries with. It's a process, complex beyond our understanding.
That isn't the case here. If some well meaning person discovers a way that you can create a pandemic causing superbug, they can't just "fix" the AI to make that impossible. Not if it is open source. Very different thing.
I also worry every time I drop a hammer from my waist that it could bounce and kill everyone I love. Really anyone on the planet could drop a hammer which bounces and kills everyone I love. That is why hammers are an 'x-risk'
You're literally painting a perfect analogy for biotech/nuclear/AI. Catastrophe and culture-shifting benefits go hand in hand with all of them. It's about figuring out where the lines are. But claiming there is minimal or negligible risk ("so let's just run with it" as some say, maybe not you) feels very cavalier to me.
But you're not alone, if you feel that way. I feel like I'm taking crazy pills with how the software dev field talks about sharing AI openly.
And I've literally been an open culture advocate for over a decade, and have helped hundreds of people start open community projects. If there's anyone who'd be excited for open collaboration, it's me! :)
I mean, you spend a lot of time in your own life denying the inevitable; humans spend a lot of time and effort avoiding their own personal extinction.
>The best chance is still
The best information we have now is if we create AGI/ASI at this time, we all die. The only winning move is not to play in that game.
Which is why you take your course of antibiotics to the end, because superbugs are a thing.
But if we hide these things, we have no idea what we are trying to control.
We can still unplug or turn things off. We are still very far away from a situation where AI controls factories and a full supply chain and can take physical control of the world.
>Imagine a large pond that is completely empty except for 1 lily pad. The lily pad will grow exponentially and cover the entire pond in 3 years. In other words, after 1 month there will be 2 lily pads, after 2 months there will be 4, etc. The pond is covered in 36 months
We're all going to be sitting around at 34 months saying "Look, it's been years and AI hasn't taken over that much of the market."
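To make the arithmetic behind that concrete, here's a minimal sketch (assuming the quote's framing of doubling every month with full coverage at month 36; the month values are just illustrative):

    # Pond coverage when lily pads double every month and the pond is fully covered at month 36.
    # Working backwards: coverage(t) = 2 ** (t - 36), so one month before the end it's only half covered.
    for month in (30, 33, 34, 35, 36):
        coverage = 2 ** (month - 36)
        print(f"month {month}: {coverage:.2%} of the pond covered")
    # month 30 -> ~1.6%, month 34 -> 25%, month 35 -> 50%, month 36 -> 100%

At month 34 the pond still looks mostly empty, which is exactly the point of the quote.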