zlacker

[return to "Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers"]
1. deknos+6a1[view] [source] 2025-01-22 08:16:44
>>tedsan+(OP)
This is so much money, with which we could actually solve problems in the world, maybe even stop wars that break out over scarcity.

Maybe I am getting too old or too friendly to humans, but it's staggering to me where the priorities lie for such things.

◧◩
2. pizzat+1l2[view] [source] 2025-01-22 16:59:35
>>deknos+6a1
I am surprised at the negativity from HN. Their clear goal is to build superintelligence. Listen to any of the interviews with Altman, Demis Hassabis, or Dario Amodei (Anthropic) on the purpose of this. They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.

Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.

◧◩◪
3. Tiktaa+oC3[view] [source] 2025-01-23 01:59:47
>>pizzat+1l2
What if the AI doesn't want to do any of that stuff?
◧◩◪◨
4. Ukv+fr4[view] [source] 2025-01-23 11:24:40
>>Tiktaa+oC3
Humans choose its loss function, then continue to guide it with finetuning/RL/etc.
◧◩◪◨⬒
5. jbuhbj+Rc9[view] [source] 2025-01-25 12:38:59
>>Ukv+fr4
Once AGI is many times smarter than humans, the 'guiding' evaporates as foolish, irrational thinking. There is no way around the fact that once AGI acquires 10, 100, or 1000 times human intelligence, we are suddenly completely powerless to change anything anymore.

AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited by our 1 times human intelligence.

The liftoff conditions literally have to be near perfect.

So the question is: can humanity trust the power-hungry billionaire CEOs to understand the danger and choose a path for maximum safety? Looking at how it is going so far, I would say absolutely not.

◧◩◪◨⬒⬓
6. Ukv+YTa[view] [source] 2025-01-26 04:56:17
>>jbuhbj+Rc9
> [...] 1000 times human intelligence, we are suddenly completely powerless [...] The liftoff conditions literally have to be near perfect.

I don't consider models suddenly lifting off and acquiring 1000 times human intelligence to be a realistic outcome. To my understanding, that belief is usually based around the idea that if you have a model that can refine its own architecture, say by 20%, then the next iteration can use that increased capacity to refine even further, say an additional 20%, leading to exponential growth. But that ignores diminishing returns; after obvious inefficiencies and low-hanging fruit are taken care of, squeezing out even an extra 10% is likely beyond what the slightly-better model is capable of.

I do think it's possible to fight against diminishing returns and chip away towards/past human-level intelligence, but it'll be through concerted effort (longer training runs of improved architectures with more data on larger clusters of better GPUs) and not an overnight explosion just from one researcher somewhere letting an LLM modify its own code.
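
To make that concrete, here is a back-of-the-envelope sketch (my own toy numbers, not anyone's model) of the two growth stories: one where every self-improvement cycle yields the same 20%, and one where each cycle's gain is harder to find than the last.

    # Toy comparison: naive compounding vs. diminishing returns.
    # "capability" is an abstract scalar; gain/decay are made-up numbers.

    def naive_compounding(cycles, gain=0.20):
        capability = 1.0
        for _ in range(cycles):
            capability *= 1.0 + gain   # every cycle yields the same 20%
        return capability

    def diminishing_returns(cycles, gain=0.20, decay=0.5):
        capability = 1.0
        for _ in range(cycles):
            capability *= 1.0 + gain
            gain *= decay              # each further gain is harder to find
        return capability

    for n in (5, 10, 20):
        print(n, round(naive_compounding(n), 2), round(diminishing_returns(n), 2))

The first series keeps exploding (1.2**20 is roughly 38x), while the second flattens out around 1.46x no matter how many cycles you run; the argument above is essentially that the real world looks like the second curve.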

> can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety

Those power-hungry billionaire CEOs who shall remain nameless, such as Altman and Musk, are the ones fear-mongering about such a doomsday. The goal seems to be regulatory capture and diverting attention away from more realistic issues like use for employee surveillance[0].

[0]: https://www.bbc.co.uk/news/technology-55938494

[go to top]