zlacker

[return to "Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers"]
1. belter+X71 2025-01-22 07:56:09
>>tedsan+(OP)
This is a Military project. Have no doubts about it.

2. Gud+ie1 2025-01-22 08:55:57
>>belter+X71
This is a money making scheme.

3. arisAl+we1 2025-01-22 08:57:52
>>Gud+ie1
This has cosmological significance if it leads to superintelligence

4. Cthulh+al1 2025-01-22 09:56:46
>>arisAl+we1
It won't unless there's another (r)evolution in the underlying technology / science / algorithms. At this point, scaling up just means bigger datasets or more iterations, which is more about finetuning and improving the existing output than coming up with a next generation / superintelligence.

5. Fillig+3o1 2025-01-22 10:24:10
>>Cthulh+al1
Okay, but let’s be pessimistic for a moment. What can we do if that revolution does happen, and they’re close to AGI?

I don’t believe the control problem is solved, but I’m not sure it would matter if it is.

6. ForHac+zo1 2025-01-22 10:29:33
>>Fillig+3o1
Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?

I don't even understand what the proposed mechanism for "rogue AI enslaves humanity" is. It's scifi (and not hard scifi) as far as I can see.

7. Philpa+Zp1 2025-01-22 10:43:31
>>ForHac+zo1
Once you have one AGI, you can scale it to many AGIs as long as you have the necessary compute. An AGI never needs to take breaks, can work non-stop on a problem, has access to all of the world's information simultaneously, and can interact with any system it's connected to.

To put it simply, it could outcompete humanity on every metric that matters, especially given recent advancements in robotics.

8. ForHac+Us1 2025-01-22 11:18:25
>>Philpa+Zp1
...so it can think really hard all the time and come up with lots of great, devious evil ideas?

Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.

9. Philpa+9u1 2025-01-22 11:29:57
>>ForHac+Us1
An equivalent advance in autonomous robotics would solve the force projection issue, if that's what you're getting at.

I can't say with any certainty that this will happen, but the general idea of commoditising intelligence could very much tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.

Also, the scenario you're describing did happen: it was called the Manhattan Project.

10. ForHac+nL6 2025-01-24 10:18:27
>>Philpa+9u1
So don't plug the smart evil computer into the strong robots? Great, AI apocalypse averted.

The Manhattan Project would be a cute example if the Los Alamos scientists had gone rogue and declared themselves emperors of mankind, but no, in fact the people in charge remained the people in charge - mostly not supergeniuses.
