zlacker

[parent] [thread] 14 comments
1. tomrod+(OP)[view] [source] 2023-05-16 15:22:19
With due respect, the inventors of a thing rarely turn into its innovators or implementers.

Should we be concerned about networked, hypersensing AI with bad code? Yes.

Is that an existential threat? Not so long as we remember that there are off switches.

Should we be concerned about Kafkaesque hellscapes of spam and bad UX? Yes.

Is that an existential threat? Sort of, if we ceded all authority to an algorithm without a human in the loop with the power to turn it off.

There is a theme here.

replies(6): >>woeiru+66 >>cma+g6 >>digbyb+C7 >>olddus+VB >>Number+f71 >>DirkH+bj1
2. woeiru+66[view] [source] 2023-05-16 15:45:52
>>tomrod+(OP)
Did you even watch the Terminator series? I think sci-fi has been very adept at demonstrating how physical disconnects/failsafes are unlikely to work with super AIs.
3. cma+g6[view] [source] 2023-05-16 15:46:23
>>tomrod+(OP)
> Is that an existential threat? Not so long as we remember that there are off switches.

Remember there are off switches for human existence too, like whatever biological virus a super intelligence could engineer.

An off-switch for a self-improving AI isn't as trivial as you make it sound if it gets to anything like what's described in those quotes, and even then you are assuming the human running it isn't malicious. We assume some level of sanity at least with the people in charge of nuclear weapons, but it isn't clear that AI will have the same large-state-actor barrier to entry or the same perception of mutually assured destruction if the actor were to use it against a rival.

replies(1): >>tomrod+PX
4. digbyb+C7[view] [source] 2023-05-16 15:52:27
>>tomrod+(OP)
There are multiple risks that people talk about; the most interesting is the intelligence explosion. In that scenario we end up with a super intelligence. I don’t feel confident in my ability to assess the likelihood of that happening, but assuming it is possible, thinking through the consequences is a very interesting exercise. Imagining the capabilities of an alien super intelligence is like trying to imagine a 4th spatial dimension. It can only be approached with analogies. Can it be “switched off”? Maybe not, if it was motivated to prevent itself from being switched off. My dog seems to think she can control my behavior in various predictable ways, like sitting or putting her paw on my leg, and sometimes it works. But if I have other things I care about in that moment, things that she is completely incapable of understanding, then who is actually in control becomes very obvious.
5. olddus+VB[view] [source] 2023-05-16 17:59:56
>>tomrod+(OP)
Sure, so just to test this, could you turn off ChatGPT and Google Bard for a day?

No? Then what makes you think you'll be able to turn off the $evilPerson AI?

replies(1): >>tomrod+MQ
◧◩
6. tomrod+MQ[view] [source] [discussion] 2023-05-16 19:16:18
>>olddus+VB
I feel like you're confusing a single person (me) with everyone who has access to an off switch at OpenAI or Google, possibly for the sake of contorting an extreme-sounding negative point into a minority opinion.

You tell me. An EMP wouldn't take out data centers? No implementation has an off switch? AutoGPT doesn't have a lead daemon that can be killed? Someone should have this answer. But be careful not to confuse yours truly, a random internet commentator speaking on the reality of AI vs. the propaganda of the neo-cryptobros, with the people paying upwards of millions of dollars daily to run an expensive, bloated LLM.

replies(1): >>olddus+4T
◧◩◪
7. olddus+4T[view] [source] [discussion] 2023-05-16 19:25:12
>>tomrod+MQ
You miss my point. Just because you want to turn it off doesn't mean the person who wants to acquire billions, rule the world, or destroy humanity does.

The people who profit from a killer AI will fight to defend it.

replies(1): >>tomrod+wU
◧◩◪◨
8. tomrod+wU[view] [source] [discussion] 2023-05-16 19:30:11
>>olddus+4T
And they will be subject to the same risks they point their killer robots at, as well as being vulnerable themselves.

Eminent domain lays out a similar pattern that can be followed. The existence of risk is not a deterrent to creation, simply an acknowledgement that guides requirements.

replies(1): >>olddus+nV
◧◩◪◨⬒
9. olddus+nV[view] [source] [discussion] 2023-05-16 19:34:52
>>tomrod+wU
So the person who wants to kill himself and all humanity alongside is subject to the same risk as everyone else?

Well that's hardly reassuring. Do you not understand what I'm saying or do you not care?

replies(1): >>tomrod+dX
◧◩◪◨⬒⬓
10. tomrod+dX[view] [source] [discussion] 2023-05-16 19:42:10
>>olddus+nV
At this comment level, mostly don't care -- you're asserting that avoiding the risks by preventing AI from being built, because base people exist, is a preferable course of action, which ignores that the barn is on fire and the horses are already out.

Though there is an element of your comments being too brief, hence the "mostly". Say, 2% vs. 38%.

That constitutes 40% of the available categorization of introspection regarding my current discussion state. The remaining 60% is simply confidence that your point represents a dominated strategy.

replies(1): >>olddus+Ql1
◧◩
11. tomrod+PX[view] [source] [discussion] 2023-05-16 19:44:55
>>cma+g6
Both things are true.

If we have a superhuman AI, we can run down the power plants for a few days.

Would it suck? Sure, people would die. Is it simple? Absolutely -- Texas and others are mostly already there some winters.

replies(1): >>cma+s23
12. Number+f71[view] [source] 2023-05-16 20:32:32
>>tomrod+(OP)
We've already ceded all authority to an algorithm that no one can turn off. Our political and economic structures are running on their own, and no single human or even group of humans can really stop them if they go off the rails. If it's in humanity's best interest for companies not to dump waste anywhere they want, but individual companies benefit from cheap waste disposal, and they lobby regulators to allow it, that sort of lose-lose situation can go on for a very long time. It might be better if everyone could coordinate so that all companies had to play by the same rules, and we all got a cleaner environment. But it's very hard to break out.

Do I think capitalism has the potential to be as bad as a runaway AI? No. I think it's useful for illustrating how we could end up in a situation where AI takes over because every single person has incentives to keep it on, even when the outcome of everyone keeping it running turns out to be really bad. A multi-polar trap, or "Moloch" problem. It seems likely to end up with individual actors all having incentives to deploy stronger and smarter AI, faster and faster, and not to turn it off even as it starts to do bad things to other people, or as the sheer amount of resources dedicated to AI starts to take its toll on the earth.

That's assuming we've solved alignment, but that neither we nor AGI has solved the coordination problem. If we haven't solved alignment, and AGIs aren't even guaranteed to act in the interest of the humans who try to control them, then we're in even worse shape.

Altman used the term "Cambrian explosion" in reference to startups, but I think it also applies to the new form of life we're inventing. It's not self-replicating yet, but we are surely on track to make something that will be smart enough to replicate itself.

As a thought experiment, you could imagine a primitive AGI, if given completely free rein, might be able to get to the point where it could bootstrap self-sufficiency -- first hire some humans to build it robots, buy some solar panels, build some factories that can plug into our economy to build more factories, solar panels, and GPUs, and get to a point where it is able to survive and grow and reproduce without human help. It would be hard, and it would need either a lot of time or a lot of AI minds working together.

But that's like a human trying to make a sandwich by farming or raising every single ingredient: wheat, pigs, tomatoes, and so on. A much more effective way is to just make some money and trade for what you need. That depends on AIs being able to own things, or just a human turning over their bank account to an AI, which has already happened and probably will keep happening.

My mind goes to a scenario where AGI starts out doing things for humans, and gradually transitions to just doing things, and at some point we realize "oops", but there was never a point along the way where it was clear that we really had to stop. Which is why I'm so adamant that we should stop now. If we decide that we've figured out the issues and can start again later, we can do that.

13. DirkH+bj1[view] [source] 2023-05-16 21:39:15
>>tomrod+(OP)
This is like saying we should just go ahead and invent the atom bomb and undo the invention after the fact if the cons of having atom bombs around outweigh the pros.

Like try turning off the internet. That's the same situation we might be in with regard to AI soon. It's a revolutionary technology now, with multiple Google-grade open-source variants set to be everywhere.

This doesn't mean it can't be done. Sure, we could in principle "turn off" the internet, and in principle "uninvent" the atom bomb, if we all really coordinated and worked hard. But the failure to imagine that "turning off dangerous AI" in the future will ever be anything other than flipping an easy on/off switch is so far-gone ridiculous to me that I don't understand why anyone believes it provides any kind of assurance.

◧◩◪◨⬒⬓⬔
14. olddus+Ql1[view] [source] [discussion] 2023-05-16 21:55:15
>>tomrod+dX
Ok, so you don't get it. Read "Use of Weapons" and realise that AI is a weapon. That's a good use of your time.
◧◩◪
15. cma+s23[view] [source] [discussion] 2023-05-17 13:11:51
>>tomrod+PX
Current state-of-the-art language models can run inference (slowly) on a single Xeon or M1 Max with a lot of RAM. Individuals can buy H100s that can run inference too.

Maybe it needs a full cluster for training if it is self-improving (or maybe that is done another way, more similar to fine-tuning the last layers).

If that is still the case with something superhuman in all domains, then you'd have to shut down all minor residential solar installs, generators, etc.
