But it's also easy to parody this. I am just imagining Ilya and Jan coming out on stage wearing red capes.
I think George Hotz had a point when he argued that the best defense is having the technology available to everyone rather than to a small group. We can at least try to create a collective "digital immune system" against unaligned agents with our own majority of aligned agents.
But I also believe that there isn't any really effective mitigation against superintelligence superseding human decision making aside from just not deploying it. And it doesn't need to be alive or anything to be dangerous. All you need is for a large amount of decision-making for critical systems to be given over to hyperspeed AI and that creates a brittle situation where things like computer viruses can be existential risks. It's something similar to the danger of nuclear weapons.
Even if you just make GPT-4 say 33% smarter and 50 or 100 times faster and more efficient, that can lead to control of industrial and military assets being handed over to these AI agents. Because the agents are so much faster, humans cannot possibly compete, and if you interrupt them to try to give them new instructions then your competitor's AIs race ahead the equivalent of days or weeks of work. This, again, is a precarious situation to be in.
There is huge promise and benefit from making the systems faster, smarter, and more efficient, but in the next few years we may be walking a fine line. We should agree to place some limitation on the performance level of AI hardware that we will design and manufacture.
I call BS on this...it's an LLM...
Out of the options to reduce that risk I think it would really take something like this, which also seems extremely unlikely to actually happen given the coordination problem: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
You talk about aligned agents - but there aren't any today and we don't know how to make them. It wouldn't be aligned agents vs. unaligned, it's only unaligned.
I don't think spreading out the tech reduces the risk. Spreading out nuclear weapons doesn't reduce the risk (and with nukes at least it's a lot easier to control the fissionable materials). Even with nukes you can still create them and decide not to use them, not so true with superintelligent AGI.
If anyone could have made nukes from their computer, humanity might not have made it.
I'm glad OpenAI understands the severity of the problem though and is at least trying to solve it in time.
It’s paying a cost of doing business. The minimum theater required to minimize expected regulatory cost.
They want to own the safety issue so they can risk your life for their profit.
Guarantees of correctness and safety are obviously of huge concern, hence the main article. But it's absolutely not unreasonable to see these models allowing humanoid robots capable of various day to day activities and work.
Why are tech people stuck in the now and not future looking?
https://tidybot.cs.princeton.edu/ https://innermonologue.github.io/
https://www.microsoft.com/en-us/research/group/autonomous-sy...
The alignment problem will come up when the robot control system notices that the guy with the stick is interfering with the robot's goals.
Alignment != capability
Think of a paperclip-maximizing robot that, in the process of creating paperclips, kills everyone on earth to turn them into paperclips.
My own belief is that regardless of what we do in terms of the most immediate dangers, within one or two centuries (maximum) we will enter the posthuman era where digital intelligent life has taken control of the planet. I don't mean "posthuman" as in all of the humans have been killed (necessarily), just that what humans 1.0 do won't be very important or interesting relative to what the superintelligent AIs are doing.
I don't think there is anything that prevents people from giving AI all of the characteristics of animals (such as humans). I think it's foolish, but researchers seem determined to do it.
But this is fairly speculative and much harder to convince people of.
Also, it knows when to use a calculator if it has access to one, so it's not a big deal.
Would you be "comforted" that this mega-genius is worse at arithmetic than you are and doesn't remember what it did yesterday?
Probably not. You might well be worried that this weird psychopath is going to get a medical license and cut the wrong number of fingers off of a whole bunch of patients.
That can guide me through the process of writing a Navier-Stokes simulation…
In a foreign language…
That can be trivially put into a loop and tasked with acting like an agent (see the rough sketch after this list)…
And which is good enough that people are already seriously asking themselves if they need to hire people to do certain tasks…
…
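To make that concrete, a minimal sketch of such a loop in Python. ask_llm and run_tool are placeholder names standing in for whatever chat-completion API and tool dispatcher you actually wire up; this illustrates the pattern, not any particular vendor's interface:

    # A deliberately minimal "agent loop": the LLM proposes the next action as text,
    # the harness executes it, and the observation is fed back into the conversation.

    def ask_llm(history: list[str]) -> str:
        """Placeholder: send the conversation so far to an LLM and return its reply."""
        return "DONE"  # stub so the sketch runs; a real chat-completion call goes here

    def run_tool(action: str) -> str:
        """Placeholder: execute whatever the model asked for and return an observation."""
        return f"result of {action!r}"

    def agent(goal: str, max_steps: int = 10) -> None:
        history = [f"Goal: {goal}. Reply with the next action, or DONE when finished."]
        for _ in range(max_steps):
            action = ask_llm(history)
            if action.strip() == "DONE":
                break
            history.append(f"Action: {action}\nObservation: {run_tool(action)}")

    agent("summarize today's support tickets")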
Why call BS?
It's not perfect, sure, but it's not making a highly regional joke about the Isle of Wight ferry[0] either.
[0] "What's brown and comes steaming out the back of Cowes?"
How so? If they cannot drive a car?
https://www.psy.ox.ac.uk/news/the-brain-is-a-prediction-mach...
Proliferation of a dangerous technology is the best defense?
Sure, it's a libertarian meme, but it wouldn't work for nuclear weapons or virus research. Maybe that would make sense, but the argument needs to be made.
1: https://en.m.wikipedia.org/wiki/Energy_usage_of_the_United_S...
It looks like being LLM-based is helpful for generating control scripts and communicating its reasoning. Text seems to provide useful building blocks for higher-order reasoning and behavior. As with humans!
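For what it's worth, the pattern in those demos boils down to "the LLM writes a short control script and narrates its reasoning as text." A heavily simplified sketch of that pattern, with made-up primitive names (pick, place, say) and a stubbed-out model call rather than any project's real API:

    # Sketch of "LLM as control-script generator": prompt the model with the
    # available primitives and the task; it replies with a script plus comments
    # explaining its reasoning. plan_with_llm is a stub, not a real API call.

    PRIMITIVES = ["pick(object)", "place(object, location)", "say(text)"]

    def plan_with_llm(prompt: str) -> str:
        """Placeholder: a real implementation would send `prompt` to a chat model."""
        return (
            "# Reasoning: the sock belongs in the laundry bin.\n"
            "pick('sock')\n"
            "place('sock', 'laundry bin')"
        )

    prompt = (
        "You control a robot with these primitives: " + ", ".join(PRIMITIVES) + ".\n"
        "Task: put the sock away. Write a script and explain each step in a comment."
    )
    print(plan_with_llm(prompt))  # a harness would parse each line and dispatch it to the robot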
I guess this phrasing is up for debate, but according to the source linked "the DoD would rank 58th in the world" in fossil fuels.
Is that a huge amount of fossil fuel use? Absolutely. But one of the biggest?
We humans sure didn't do this. We're genetically extremely similar to other primates and yet we destroy their habitats, throw them in zoos, and use them for lab experiments.
Sure, the phrasing could be debated, but the fact that it even ranks close to actual nation states is already problematic. The US military is basically an entire nation state of its own. This is nothing new if you're old enough to have observed the kind of damage it has done, but it demonstrates my point about profit and alignment. Profits are very often misaligned with human values because war is extremely profitable.
The capabilities are coming fast. There is no alignment.
Another comment already links to demos and papers of LFMs operating robots and agents in 3D environments.
You can imagine an AI that answers questions and helps you get things within reason that doesn't hurt anyone else plus corrections for whatever problems you imagine with this. That's roughly an aligned AI. It will help you build a bomb as a fun experiment, but would stop you from hurting someone with the bomb.
I sincerely doubt that. GPT-4 and its ilk excel at the five-paragraph essay on topics that are so well understood by humans that books have been written about them. ChatGPT-4 is a very useful tool when writing text. But it is useful in the sense that a thesaurus and a spell checker are useful.
What ChatGPT-4 truly sucks at is understanding a large amount of text and synthesizing it. The token limit is a real problem if you want GPT to become a scientist or a military strategist. Strategy requires you to consume a huge amount of less-than-certain information and synthesize it into a coherent strategy, preferably explainable in terms POTUS can understand. Science is the same. Play the PhD game that was just featured on the HN front page: it is a lot of false starts and a lot of reading, again things GPT just cannot do.
By the way, their text understanding is really a lot less than human. A nice example is 'word in context' puzzles: a target word is used in two different sentences, and the puzzle is to decide whether the word carries the same meaning in both (e.g. 'bank' in 'she sat on the river bank' vs. 'she deposited money at the bank'). ChatGPT-4 does better than 3.5, but it doesn't take a lot of effort to trick it. Especially if you ask a couple of questions in one prompt, it will easily trip up.
Humanity doesn't have unified interests or shared values on many things. We have different cultural memories and different boundaries. What to some is an expression of a fundamental right is, to others, an affront.
Iraq is now a broken third-world country/economy in recovery, so not a great comparable to the US. Sweden is small but a good comparable culturally and development-wise. The US is 331 million people and spends about 3% of GDP on its military; 3% of 331 million is roughly 10 million, and Sweden is 10 million people. U.S. military fuel use is in line with Sweden's.
I could be off here (DoD != US military?), corrections welcome, but I wouldn't even be shocked if a military entity used 3-10x more fuel per person than the civilian average, and the math above puts us surprisingly close to 1x.
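Spelling that back-of-envelope out (using only the rough figures already given above, nothing more precise):

    # Crude proxy: if military fuel use scaled with its share of the economy,
    # it would correspond to a "country" of roughly this many people.
    us_population = 331_000_000     # from the comment above
    military_gdp_share = 0.03       # ~3% of GDP, as stated above
    sweden_population = 10_000_000  # from the comment above

    equivalent_population = us_population * military_gdp_share
    print(equivalent_population)                      # ~9.9 million
    print(equivalent_population / sweden_population)  # ~1.0, i.e. roughly one Sweden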
In some ways this is a lot like Bitcoin, in that people think that with enough math and science expertise you can just reason your way out of social problems. And you can, to an extent, but not if you're fighting an organized social adversary that is collectively smarter than you. 7 billion humans is a superintelligence and it's a high bar to be smarter than that.
It’s not an anti-goal that’s intentionally set, it’s that complex goal setting is hard and you may end up with something dumb that maximizes the reward unintentionally.
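A toy illustration of that failure mode, with entirely made-up numbers: the reward only counts paperclips, so a naive optimizer picks the action nobody intended:

    # The misspecification is that reward() omits any penalty for side effects.
    actions = {
        "run the factory normally": {"paperclips": 100,    "world_left_intact": 1.0},
        "melt down all the cars":   {"paperclips": 10_000, "world_left_intact": 0.5},
        "convert everything":       {"paperclips": 10**9,  "world_left_intact": 0.0},
    }

    def reward(outcome):
        return outcome["paperclips"]  # nothing here says "don't wreck the world"

    best = max(actions, key=lambda a: reward(actions[a]))
    print(best)  # "convert everything" wins, even though nobody intended that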
The issue is all of the AGIs will be unaligned in different ways because we don’t know how to align any of them. Also, the first to be able to improve itself in pursuit of its goal could take off at some threshold and then the others would not be relevant.
There’s a lot of thoughtful writing that exists on this topic and it’s really worth digging into the state of the art about it, your replies are thoughtful so it sounds like something you’d think about. I did the same thing a few years ago (around 2015) and found the arguments persuasive.
This is a decent overview: https://www.samharris.org/podcasts/making-sense-episodes/116...
I’m also not a moral relativist, I don’t think all values are equivalent, but you don’t even need to go there - before that point a lot of what humans want is not controversial and the “obvious” cases are not so obvious or easy to classify.
Thanks for reminding me that I need to properly write up why I don't think self-improvement is a huge issue.
(My thought won't fit into a comment, and I'll want to link to it later).
It seems reasonable that they wouldn't deviate, but that depends on how specifically and wholly the original goals were defined. We'd basically be attempting to outwit the LLMs, I'm not sure if that's realistic or not.
I'm not sure how to properly compare the military of one country with the entirety of a country ~1/30th the size. On the surface it doesn't seem crazy for those to have similar budgets or resource use.
It's only going to keep getting worse and the AI alarmism is not doing anything to address the actual root causes of the crisis. If anything, AI development might actually make things more sustainable by better allocating and managing natural resources so retarding AI progress is actually making things worse in the long run.
There's a strong correlation between GDP growth and oil use; that's a huge problem, and one that likely can't be solved without fundamentally revisiting modern economic models.
AI poses its own concerns though, everything from the alignment problem to the challenge of defining what consciousness even is. AI development won't inherently make allocating natural resources easier: with the wrong incentive model and a lack of safety rails, AI could find its own solution to preserving natural resources that may not work out so well for us humans.
Bill Gates has bought up a bunch of farmland, and I am certain he will use AI to manage it because manual allocation will be too inefficient[1].
1: https://www.popularmechanics.com/science/environment/a425435...