zlacker

11 comments
1. ForHac+(OP) 2025-01-22 11:18:25
...so it can think really hard all the time and come up with lots of great, devious evil ideas?

Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.

replies(3): >>Philpa+f1 >>lupire+Fi >>jprete+Im
2. Philpa+f1 2025-01-22 11:29:57
>>ForHac+(OP)
An equivalent advance in autonomous robotics would solve the force projection issue, if that's what you're getting at.

I don't know with any certainty whether this will happen, but the general idea of commoditising intelligence could very well tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.

Also, the scenario you're posing did happen: it was called the Manhattan Project.

replies(2): >>redser+0h >>ForHac+ti5
3. redser+0h 2025-01-22 13:29:57
>>Philpa+f1
And if this whole exercise turns out to be a flop and gets us absolutely nowhere closer to AGI?

“AGI” has proven to be today’s hot marketing stunt for when you need to raise another round of cash and your only viable product is optimism.

Flying cars were just around the corner in the 60s, too.

replies(2): >>anon84+8v >>arisAl+6v1
4. lupire+Fi 2025-01-22 13:40:09
>>ForHac+(OP)
Look at any corporation or government to understand how a large group of humans can be driven to do specific things none of them individually want.
5. jprete+Im 2025-01-22 14:04:08
>>ForHac+(OP)
Quite a few have succeeded in conquering large fractions of the Earth's population: Napoleon, Hitler, Genghis Khan, the Roman emperors, Alexander the Great, Mao Zedong. America and Britain as systems did so for long periods of time.

All of these entities would have been enormously more powerful with access to an AGI's immortality, sleeplessness, and ability to clone itself.

replies(2): >>Sketch+gs >>anon84+uv
6. Sketch+gs 2025-01-22 14:37:01
>>jprete+Im
I can see what you're trying to say, but I cannot for the life of me figure out how an AGI would have helped Alexander the Great.
replies(1): >>jprete+Pw
7. anon84+8v 2025-01-22 14:54:52
>>redser+0h
This thread started from a deliberately pessimistic hypothetical of what happens if AGI actually manifests, so your comment is misplaced.
8. anon84+uv 2025-01-22 14:56:54
>>jprete+Im
And of course the more society is wired up and controlled by computer systems, the more the AGI could directly manage it.
9. jprete+Pw 2025-01-22 15:05:13
>>Sketch+gs
Alexander the Great made his conquests by building a really good reputation for war, then leveraging it to get tribute agreements while leaving the local governments intact. This is a good way to do it when communication lines are slow and unreliable, because the emperor just needs to check tribute once a year to enforce the agreements, but it's weak control.

If Alexander could have left perfectly aligned copies of himself in every city he passed, he could have gotten much more control and authority, and still avoided a fight by agreeing to maintain the local power structure with himself as the new head of state.

replies(1): >>Sketch+2y
10. Sketch+2y 2025-01-22 15:11:00
>>jprete+Pw
Oh, you're assuming an entire networking infrastructure as well. That makes way more sense, but the miracle there isn't AGI - without networking they'd lose alignment over time. Honestly, I feel like it would devolve into a patchwork of different kingdoms run by an Alexander figurehead... where have I seen this before?

The problem you're proposing could be solved via a high quality cellular network.

11. arisAl+6v1 2025-01-22 20:45:04
>>redser+0h
You really haven't used any LLM seriously, eh?
12. ForHac+ti5 2025-01-24 10:18:27
>>Philpa+f1
So don't plug the smart evil computer into the strong robots? Great, AI apocalypse averted.

The Manhattan Project would be a cute example if the Los Alamos scientists had gone rogue and declared themselves emperors of mankind, but no, in fact the people in charge remained the people in charge - mostly not supergeniuses.
