zlacker

[return to "Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers"]
1. deknos+6a1[view] [source] 2025-01-22 08:16:44
>>tedsan+(OP)
This is so much money that could actually be used to solve real problems in the world, maybe even stop wars that break out over scarcity.

maybe i am getting too old or too friendly to humans, but it's staggering to me what the priorities are for such things.

◧◩
2. pizzat+1l2[view] [source] 2025-01-22 16:59:35
>>deknos+6a1
I am surprised at the negativity from HN. Their clear goal is to build superintelligence. Listen to any of the interviews with Altman, Demis Hassabis, or Dario Amodei (Anthropic) on the purpose of this. They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.

Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.

◧◩◪
3. Tiktaa+oC3[view] [source] 2025-01-23 01:59:47
>>pizzat+1l2
What if the AI doesn't want to do any of that stuff?
◧◩◪◨
4. Ukv+fr4[view] [source] 2025-01-23 11:24:40
>>Tiktaa+oC3
Humans choose its loss function, then continue to guide it with finetuning/RL/etc.
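To make that concrete, here is a minimal toy sketch (hypothetical, not any real lab's training pipeline) of the two phases being described: humans pick the loss function up front, then keep steering the trained model afterwards with an extra fine-tuning objective.

```python
# Toy model y = w * x, trained by plain gradient descent.
# All names and numbers here are illustrative assumptions.

def mse_grad(w, data):
    # Gradient of the human-chosen objective: mean squared error.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def train(w, data, loss_grad, lr=0.01, steps=500):
    for _ in range(steps):
        w -= lr * loss_grad(w, data)
    return w

# Phase 1, "choose its loss function": fit w to data whose true slope is 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(0.0, data, mse_grad)

# Phase 2, "continue to guide it": a human-preference penalty pulls w
# toward 2.5, loosely analogous to finetuning/RLHF reshaping behavior
# after pretraining.
def guided_grad(w, data, target=2.5, strength=5.0):
    return mse_grad(w, data) + 2 * strength * (w - target)

w_guided = train(w, data, guided_grad)
print(round(w, 2), round(w_guided, 2))
```

The point of the sketch is just that the guided optimum lands between the data-fit value and the human-preferred target, i.e. the guidance term genuinely moves the model, which is the mechanism the comment is appealing to.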
◧◩◪◨⬒
5. jbuhbj+Rc9[view] [source] 2025-01-25 12:38:59
>>Ukv+fr4
Once AGI is many times smarter than humans, the 'guiding' evaporates as foolish, irrational thinking. There is no way around the fact that once AGI acquires 10, 100, or 1000 times human intelligence, we are suddenly completely powerless to change anything anymore.

AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited by our 1 times human intelligence.

The liftoff conditions literally have to be near perfect.

So the question is: can humanity trust the power-hungry billionaire CEOs to understand the danger and choose the path of maximum safety? Looking at how it is going so far, I would say absolutely not.
