Agreed. She’s not famous for her publications. She’s famous (and intimidating) for being a “power broker” — or whatever the term is for someone who participates in off-the-record, one-on-one meetings with military generals.
> Anyway, the idea that the Chinese military or leadership actually will sacrifice a potential advantage in the name of ai safety is absurd.
The point of her work (and mine, and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating. There is no advantage to be gained. Any nation that does this is shooting itself in the foot, or the heart.
Is part of the advocacy convincing nation-states that an AI arms race is not like a nuclear arms race, which amounts to a stalemate?
What's the best place for a non-expert to read about this?
Thank you for your interest! :) I'd recommend skimming some of the papers cited by DISARM:SIMC4, a working group I'm in; we've tried to collect the most relevant papers in one place here:
In response to your question:
At a high level, the academic consensus is that combining AI with nuclear command & control does not increase deterrence, yet it does increase the risk of accidents and the chances that terrorists could "catalyze" a great-power conflict.
So there is no upside to be had, and there is significant downside, both in increasing the risk of accidents and in empowering fundamentalist terrorists (e.g. the Islamic State), who would be happy to seize any chance to wipe the US, China, and Russia off the map and create a "clean slate" for a new Caliphate to rule over the world's ashes.
There is no reason at all to connect AI to NC3 except that AI is "the shiny new thing" — and not every new thing is useful in a given application.