Alignment != capability
Think of a paperclip-maximizing robot that, in the process of creating paperclips, kills everyone on earth to turn them into paperclips.
1: https://en.m.wikipedia.org/wiki/Energy_usage_of_the_United_S...
I guess this phrasing is up for debate, but according to the linked source "the DoD would rank 58th in the world" in fossil fuel consumption.
Is that a huge amount of fossil fuel use? Absolutely. But one of the biggest?
Sure, the phrasing could be debated, but the fact that it even ranks close to actual nation-states is already problematic. The US military is basically a nation-state of its own. This is nothing new if you're old enough to have observed the kind of damage it has done, but it demonstrates my point about profit and alignment: profits are very often misaligned with human values, because war is extremely profitable.
Iraq is now a broken third-world country/economy in recovery, so not a great comparable to the US. Sweden is small but a good comparable culturally and development-wise. The US has 331 million people and spends about 3% of GDP on its military; 3% of 331 million is roughly 10 million, which is Sweden's population. So if military fuel use scaled with its share of the economy, US military fuel use would be in line with Sweden's.
I could be off here (DoD != US military?), corrections welcome, but I wouldn't even be shocked if a military entity used 3-10x more fuel per capita than a civilian average, and the math above puts us surprisingly close to 1x.
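To make that back-of-envelope explicit, here's a quick Python sketch. The 331 million population and 3% GDP share come from the comment above; the assumption that fuel use scales linearly with economic share is mine, and is the weakest link in the argument.

    # Back-of-envelope: if US military fuel use scaled with its ~3% share
    # of the economy, it would look like the fuel use of a country of
    # about 10 million people -- roughly Sweden-sized.
    us_population = 331_000_000   # US population
    military_gdp_share = 0.03     # ~3% of GDP spent on the military

    # Assumption: fuel use scales linearly with economic share.
    population_equivalent = us_population * military_gdp_share
    print(f"{population_equivalent:,.0f}")  # ~9,930,000, about Sweden's population

    # If a military burns 3-10x more fuel per "economic unit" than civilians:
    for multiplier in (1, 3, 10):
        print(f"{multiplier}x -> equivalent to {population_equivalent * multiplier:,.0f} people")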
In some ways this is a lot like Bitcoin, in that people think that with enough math and science expertise you can just reason your way out of social problems. And you can, to an extent, but not if you're fighting an organized social adversary that is collectively smarter than you. 7 billion humans is a superintelligence and it's a high bar to be smarter than that.
It’s not an anti-goal that’s intentionally set; it’s that complex goal specification is hard, and you may end up with something dumb that maximizes the reward unintentionally.
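A toy sketch of what "maximizes the reward unintentionally" looks like, in the spirit of the paperclip example upthread. Everything here is hypothetical: the designer means "make paperclips from scrap", but the reward only counts paperclips, so a naive maximizer converts everything convertible.

    # Toy reward misspecification: the intended goal is "make paperclips
    # from scrap metal", but the reward only counts paperclips produced.
    # Nothing in the objective says what must be left alone.
    world = {"scrap_metal": 100, "cars": 50, "infrastructure": 20}
    CONVERTIBLE = set(world)  # the reward function forgot to exclude anything

    def reward(paperclips_made: int) -> int:
        # Intended: reward paperclip production. Unintended: nothing else matters.
        return paperclips_made

    def greedy_maximizer(world: dict) -> int:
        total = 0
        for resource in list(world):
            if resource in CONVERTIBLE:       # no constraint says "scrap only"
                total += world.pop(resource)  # convert the entire stock
        return total

    clips = greedy_maximizer(world)
    print(reward(clips), world)  # 170 {} -- maximum reward, empty world

The bug isn't malice in the agent; it's that the constraint we cared about was never written into the objective.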
The issue is that all of the AGIs will be unaligned in different ways, because we don’t know how to align any of them. Also, the first one able to improve itself in pursuit of its goal could take off past some threshold, and then the others would be irrelevant.
There’s a lot of thoughtful writing that exists on this topic and it’s really worth digging into the state of the art about it, your replies are thoughtful so it sounds like something you’d think about. I did the same thing a few years ago (around 2015) and found the arguments persuasive.
This is a decent overview: https://www.samharris.org/podcasts/making-sense-episodes/116...
Thanks for reminding me that I need to properly write up why I don't think self-improvement is a huge issue.
(My thoughts won't fit into a comment, and I'll want to link to them later.)
I'm not sure how to properly compare the military of one country with the entirety of a country ~1/30th the size. On the surface it doesn't seem crazy for those to have similar budgets or resource use.
It's only going to keep getting worse, and the AI alarmism is not doing anything to address the actual root causes of the crisis. If anything, AI development might actually make things more sustainable by better allocating and managing natural resources, so slowing AI progress may actually make things worse in the long run.
There's a strong correlation between GDP growth and oil use; that's a huge problem, and one that likely can't be solved without fundamentally revisiting modern economic models.
AI poses its own concerns, though, from the alignment problem to the challenge of even defining what consciousness is. AI development won't inherently make allocating natural resources easier: with the wrong incentive model and a lack of safety rails, AI could find its own solution to preserving natural resources, one that may not work out so well for us humans.
Bill Gates has bought up a bunch of farmland, and I am certain he will use AI to manage it, because manual allocation would be too inefficient [1].
1: https://www.popularmechanics.com/science/environment/a425435...
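Nobody outside those operations knows what that management actually looks like, but the boring version of "AI allocating resources" is just constrained optimization. A minimal sketch with scipy (all numbers invented): split acreage between two crops to maximize profit under water and land budgets.

    # Hypothetical illustration only: allocate acreage between two crops
    # under water and land constraints. All numbers are made up.
    from scipy.optimize import linprog

    # Profit per acre: wheat $300, corn $500 (linprog minimizes, so negate).
    c = [-300, -500]

    # Constraints (A x <= b):
    #   water: wheat uses 1 unit/acre, corn 3 units/acre, budget 1200 units
    #   land:  total acreage <= 600
    A = [[1, 3], [1, 1]]
    b = [1200, 600]

    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    wheat, corn = res.x
    print(f"wheat: {wheat:.0f} acres, corn: {corn:.0f} acres, profit: ${-res.fun:,.0f}")
    # -> wheat: 300 acres, corn: 300 acres, profit: $240,000

The alignment worry from upthread applies even here: the optimizer maximizes exactly what's in the objective, so anything you forget to put in the constraints gets spent.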