zlacker

[return to "Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers"]
1. TheAce+9g[view] [source] 2025-01-22 00:03:02
>>tedsan+(OP)
I'm confused and a bit disturbed; honestly having a very difficult time internalizing and processing this information. This announcement is making me wonder if I'm poorly calibrated on the current progress of AI development and the potential path forward. Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.

◧◩
2. HarHar+9l[view] [source] 2025-01-22 00:37:32
>>TheAce+9g
The largest GPU cluster at the moment is X.ai's 100K H100s, which is ~$2.5B worth of GPUs (~$25K each). So something 10x bigger (1M GPUs) is ~$25B, plus add ~$10B for a 1GW nuclear reactor.

This sort of $100-500B budget doesn't sound like training-cluster money; it sounds more like anticipating massive industry uptake, with multiple datacenters running inference (and all of corporate America's data sitting in the cloud).
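The back-of-envelope math above can be sketched as follows (all figures are the rough estimates from this comment, not official pricing; the per-GPU price is implied by 100K GPUs ≈ $2.5B):

```python
# Rough cluster capital cost, using the figures from the comment above.
# ~$25K per H100 is implied by 100K GPUs ~= $2.5B (commenter's estimate).

PRICE_PER_H100 = 25_000      # USD per GPU (implied estimate)
REACTOR_COST_PER_GW = 10e9   # USD per GW of power, commenter's ~$10B figure

def cluster_cost(num_gpus: int, power_gw: float) -> float:
    """Capital cost estimate: GPUs plus dedicated power generation."""
    return num_gpus * PRICE_PER_H100 + power_gw * REACTOR_COST_PER_GW

# 1M GPUs + 1 GW of power: $25B + $10B = $35B
print(f"${cluster_cost(1_000_000, 1.0) / 1e9:.0f}B")  # -> $35B
```

Even at that scale, $35B is well under a tenth of the quoted $500B, which is what makes the budget look like more than a single training cluster.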

◧◩◪
3. intern+0O[view] [source] 2025-01-22 04:25:07
>>HarHar+9l
Shouldn't there be a fear of obsolescence?
◧◩◪◨
4. HarHar+4Q[view] [source] 2025-01-22 04:46:54
>>intern+0O
It seems you'd need to figure periodic updates into the operating cost of a large cluster, as well as replacing failed GPUs - they only last a few years if run continuously.

I've read that some datacenters run mixed-generation GPUs - updating only a portion at a time - but I'm not sure if they all do that.

It'd be interesting to read something about how updates are typically managed/scheduled.
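One way to put numbers on the update cost described above is straight-line amortization over the GPUs' useful life (a minimal sketch with illustrative figures - the $25B fleet cost and ~3-year lifespan are assumptions, not data from any real datacenter):

```python
# Illustrative amortization of GPU replacement cost (hypothetical figures).

def annual_replacement_cost(fleet_cost: float, lifespan_years: float) -> float:
    """Straight-line amortization: replace 1/lifespan of the fleet per year."""
    return fleet_cost / lifespan_years

# A $25B fleet lasting ~3 years of continuous use implies roughly
# $8.3B/year of replacement spend just to keep the cluster at parity.
print(f"${annual_replacement_cost(25e9, 3) / 1e9:.1f}B per year")
```

On that assumption, replacement alone would be a large recurring line item, which is consistent with budgeting for operations rather than a one-off build.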
