zlacker

1. cma+(OP)[view] [source] 2023-05-16 15:46:23
> Is that an existential threat? Not so long as we remember that there are off switches.

Remember there are off switches for human existence too, like whatever biological virus a superintelligence could engineer.

An off-switch for a self-improving AI isn't as trivial as you make it sound if it gets to anything like the capabilities in those quotes, and even then you are assuming the human running it isn't malicious. We assume at least some level of sanity in the people in charge of nuclear weapons, but it isn't clear that AI will have the same large-state-actor barrier to entry, or the same perception of mutually assured destruction if an actor were to use it against a rival.

replies(1): >>tomrod+zR
2. tomrod+zR[view] [source] 2023-05-16 19:44:55
>>cma+(OP)
Both things are true.

If we have a superhuman AI, we can shut down the power plants for a few days.

Would it suck? Sure, people would die. Is it simple? Absolutely -- Texas and other grids have mostly been there already during some winters.

replies(1): >>cma+cW2
3. cma+cW2[view] [source] [discussion] 2023-05-17 13:11:51
>>tomrod+zR
Current state-of-the-art language models can run inference, slowly, on a single Xeon or M1 Max with a lot of RAM. Individuals can buy H100s that can run inference too.
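
For illustration, a minimal sketch of that kind of CPU-only inference using llama-cpp-python (the model path, thread count, and prompt are placeholders, not anything specific from this thread):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/example.Q4_K_M.gguf",  # hypothetical quantized model file
        n_ctx=2048,    # context window
        n_threads=8,   # tune to the Xeon/M1 core count
    )
    out = llm("Summarize the off-switch argument in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

The point is just that nothing here needs a datacenter: a quantized model plus enough RAM is sufficient, if slow.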

Maybe it needs a full cluster for training if it is self-improving (or maybe that is done another way, more like fine-tuning just the last layers).
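
A hedged sketch of that cheaper path, freezing everything except the last layer in PyTorch (the toy network and hyperparameters are illustrative, not a claim about how any real system trains):

    import torch
    from torch import nn

    # Toy stand-in for a pretrained network; real models have far more blocks.
    model = nn.Sequential(
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, 512),  # the "last layer" we allow to change
    )

    for p in model.parameters():       # freeze everything...
        p.requires_grad = False
    for p in model[-1].parameters():   # ...then unfreeze only the final layer
        p.requires_grad = True

    # The optimizer only sees the trainable (last-layer) parameters,
    # so each update is far cheaper than full training.
    opt = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)

    x, y = torch.randn(8, 512), torch.randn(8, 512)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()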

If that is still the case for something superhuman in all domains, then you'd have to shut down every small residential solar install, generator, etc.
