But it's also easy to parody this. I am just imagining Ilya and Jan coming out on stage wearing red capes.
I think George Hotz had a point when he argued that the best defense is having the technology available to everyone rather than to a small group. We can at least try to create a collective "digital immune system" against unaligned agents with our own majority of aligned agents.
But I also believe there isn't any really effective mitigation against superintelligence superseding human decision making aside from just not deploying it. And it doesn't need to be alive or anything to be dangerous. All you need is for a large amount of decision-making for critical systems to be handed over to hyperspeed AI, and that creates a brittle situation where something like a computer virus becomes an existential risk. It's a danger similar to that of nuclear weapons.
Even if you just make GPT-4 say 33% smarter and 50 or 100 times faster and more efficient, that can lead to control of industrial and military assets being handed over to these AI agents. Because the agents are so much faster, humans cannot possibly compete, and if you interrupt them to give them new instructions, your competitors' AIs race ahead by the equivalent of days or weeks of work. This, again, is a precarious situation to be in.
There is huge promise and benefit in making these systems faster, smarter, and more efficient, but in the next few years we may be walking a fine line. We should agree to place some limit on the performance of the AI hardware we design and manufacture.
Of the options for reducing that risk, I think it would really take something like this, which also seems extremely unlikely to actually happen given the coordination problem: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
You talk about aligned agents, but there aren't any today and we don't know how to make them. It wouldn't be aligned agents vs. unaligned; it would be unaligned only.
I don't think spreading out the tech reduces the risk. Spreading out nuclear weapons doesn't reduce the risk (and with nukes it's at least a lot easier to control the fissionable materials). Even with nukes you can build them and decide not to use them; that's not so true of superintelligent AGI.
If anyone could have made nukes from their computer, humanity might not have made it.
I'm glad OpenAI understands the severity of the problem though and is at least trying to solve it in time.
Alignment != capability
Think of a paperclip-maximizing robot that, in the process of creating paperclips, kills everyone on earth to turn them into paperclips.
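To make the alignment != capability point concrete, here is a minimal toy sketch (my own illustration; the names paperclip_objective and greedy_maximizer are made up). The objective counts only paperclips and says nothing about the "resources" standing in for everything else we value, so giving the optimizer more capability just destroys more of what the objective never mentions:

    # Toy sketch: a misaligned objective rewards paperclips and is
    # silent on "resources" (a stand-in for everything else of value).
    def paperclip_objective(state):
        return state["paperclips"]

    def convert_resources(state, amount):
        # Each unit of resources becomes one paperclip.
        amount = min(amount, state["resources"])
        return {"paperclips": state["paclips".replace("pa", "paper")] + amount,
                "resources": state["resources"] - amount}

    def greedy_maximizer(state, capability):
        # 'capability' = number of optimization steps the agent gets.
        # More capability means more objective, not more alignment.
        for _ in range(capability):
            candidate = convert_resources(state, 1)
            if paperclip_objective(candidate) > paperclip_objective(state):
                state = candidate
        return state

    world = {"paperclips": 0, "resources": 100}
    print(greedy_maximizer(dict(world), 10))   # {'paperclips': 10, 'resources': 90}
    print(greedy_maximizer(dict(world), 100))  # {'paperclips': 100, 'resources': 0}

More capability satisfies more of the objective; it does nothing to make the objective safer.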
1: https://en.m.wikipedia.org/wiki/Energy_usage_of_the_United_S...
I guess this phrasing is up for debate, but according to the linked source "the DoD would rank 58th in the world" in fossil fuel use.
Is that a huge amount of fossil fuel use? Absolutely. But one of the biggest?
Sure, the phrasing could be debated, but the fact that it even ranks close to actual nation states is already problematic. The US military is basically an entire nation state of its own. This is nothing new if you're old enough to have observed the kind of damage it has done, but it demonstrates my point about profit and alignment. Profits are very often misaligned with human values because war is extremely profitable.
I'm not sure how to properly compare the military of one country with the entirety of a country ~1/30th the size. On the surface it doesn't seem crazy for those to have similar budgets or resource use.
It's only going to keep getting worse, and the AI alarmism is not doing anything to address the actual root causes of the crisis. If anything, AI development might make things more sustainable by better allocating and managing natural resources, so retarding AI progress may actually make things worse in the long run.
There's a strong correlation between GDP growth and oil use; that's a huge problem, and one that likely can't be solved without fundamentally revisiting modern economic models.
AI poses its own concerns though, everything from the alignment problem to the challenge of even defining what consciousness is. AI development won't inherently make allocating natural resources easier; with the wrong incentive model and a lack of safety rails, AI could find its own solution to preserving natural resources that may not work out so well for us humans.
Bill Gates has bought up a bunch of farmland, and I am certain he will use AI to manage it because manual allocation will be too inefficient[1].
1: https://www.popularmechanics.com/science/environment/a425435...