YC company announces a giant data center in space
Startup Twitter: Makes so much sense, it will be so easy to cool in space!
Space Twitter: How the hell are they proposing to keep it cool??
Data centers in space use 24/7 solar power
Maybe I’m just missing something obvious, but how exactly do you get 24/7 solar coverage in space? Surely they don’t propose launching all this stuff beyond LEO…? EDIT: they cover it later on; if you think about it, you can orbit on the line that is perpendicular to both the equator and the direction of the sun (i.e. ride the terminator), and have the orbital plane precess in sync with the Earth’s orbit (`360/365.25` ≈ 0.99° of drift per day) so that it always stays on this special alignment. So it’s like GEO but (presumably…?) way harder to maintain, and slightly more exclusive (how many could you safely pack into a constellation?). Not to mention needing all the other LEO satellites to dodge this 4 km wide square, and the (presumably??) much higher risk of debris/dust collisions.
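For what it's worth, the "progressing in sync" part comes for free from Earth's oblateness (J2): for any given altitude there is one inclination whose nodal precession is exactly the required ~0.99°/day. A quick sketch of that relationship (my own constants and formula, nothing from the article):

```python
import math

# Back-of-envelope (my numbers, not the article's): Earth's oblateness (J2)
# precesses an orbit's plane; pick the right inclination for your altitude and
# the plane drifts ~0.9856 deg/day, staying sun-synchronous.
MU = 398_600.4418        # km^3/s^2, Earth's gravitational parameter
R_E = 6378.137           # km, Earth's equatorial radius
J2 = 1.08263e-3          # Earth oblateness coefficient

def sso_inclination_deg(altitude_km: float) -> float:
    """Inclination of a circular sun-synchronous orbit at the given altitude."""
    a = R_E + altitude_km                      # semi-major axis, km
    n = math.sqrt(MU / a**3)                   # mean motion, rad/s
    target = 2 * math.pi / (365.2422 * 86400)  # required nodal drift, rad/s
    cos_i = -target / (1.5 * n * J2 * (R_E / a) ** 2)
    return math.degrees(math.acos(cos_i))

for h in (500, 600, 800):
    print(f"{h} km: i ≈ {sso_inclination_deg(h):.1f}°")
# ≈ 97.4°, 97.8°, 98.6° -- slightly retrograde near-polar orbits, which is why
# dawn/dusk sun-synchronous satellites all crowd into a narrow family of planes.
```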
This seems like a hat on top of a hat on top of a thousand hats, but Godspeed you mad geniuses. I would love to see this some day, perhaps if/when AGI trivializes some of the problems involved!
PS: redoing my napkin math (`atan(4/800)` ≈ 0.29°, about 17 arc minutes), it would be easily visible to the naked eye at typical LEO altitudes; Wikipedia puts the naked-eye threshold around 0.5 arc minutes, and 17 arc minutes is roughly half the Moon’s apparent diameter. It would certainly be a sight to behold every day at dawn and dusk, that’s for sure…
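Redoing that napkin math as a sketch (800 km is just a representative altitude, not a number from the article):

```python
import math

# Angular size of a 4 km wide structure seen from directly underneath at an
# 800 km altitude (a representative LEO number, not one from the article).
width_km, range_km = 4.0, 800.0
arcmin = math.degrees(math.atan(width_km / range_km)) * 60
print(f"≈ {arcmin:.0f} arc minutes")   # ≈ 17 arc minutes

# For comparison: the naked-eye resolution limit is ≈ 0.5 arc minutes and the
# full Moon spans ≈ 31 arc minutes, so this would be hard to miss overhead.
```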
Why not build data centers for general use in space as well? The companies that build data centers certainly have enough resources to pull off something like this. What's stopping them?
> Cooling (water usage): Land: 1.7m tons vs. Space: None
They completely ignore the cost of 40MW of radiative cooling in the “Cost comparison” table. Then they have a section saying “we’re inventing something to dissipate gigawatts of energy into space.”
So the cost comparison is entirely misleading, and that critical lie by omission makes it very hard to take them at face value about literally anything else.
A "sun-synchronous orbit" is typically a polar orbit that precesses so that it's sunside pass is always "over" the same local time (say, noon(ish)).
That puts it over the antipodean midnight time once it crosses the pole.
If you're thinking of a different orbit, how does that work?
> As launch costs fall, orbital data centers will leverage 24/7 solar energy and _passive cooling_, rapidly deploying to gigawatt-scale, avoiding permitting constraints on Earth.
It sounds like they want to use copper heat spreaders and radiators to dump gigawatts into space, from those little pods connected to the spine... without any active cooling components.
Oh dear.
> the dawn/dusk SSO orbit, where the local mean solar time of passage for equatorial latitudes is around sunrise or sunset, so that the satellite rides the terminator between day and night. Riding the terminator is useful for active radar satellites, as the satellites' solar panels can always see the Sun, without being shadowed by the Earth.
For most regular datacenters, you want to run servers, and so you want low latency (proximity), sometimes high bandwidth, and reliability. None of those are really a factor for AI training.
Say you have a datacenter which is offline for 15 seconds every 30 minutes and only does gigabits per second with 100ms+ latencies. What else can you do with that? For training AI, though, those constraints are just fine.
So, training AI is a type of compute that would benefit from space, especially for that legal constraint. (Not sure how they will cool the datacenter though)
There are SSOs for all local times of day; most SSOs are not permanently in sunlight.
The comment I replied to implied that any old SSO would have the property of being always in sunlight, which isn't true.
The particular issue with a terminator SSO is that region will get crowded (sure, space is large) and one collision will seed debris to spoil it for everyone for some time.
> Riding the terminator is useful for active radar satellites ..
All sats need some level of power or another, but not all want to ride the terminator; see (for one example)
https://modis.gsfc.nasa.gov/about/
in which different sats in the constellation travel at different times of day (why?).
I only skimmed the linked PDF but did they write anything about the risks or challenges of deploying consumer hardware into space? Is space-hardened hardware not required?
Either they are stupid or lying. Neither makes them look good.
Also omitting costs like:
- R&D. You are unlikely to use off-the-shelf parts for cooling (radiators?), communications, fire safety, thrusters; you are basically building a satellite for something nobody has done before
- maintenance mechanisms. You can't just send a guy up to replace the radiation-hardened SSD, and if you do, congrats, you have to build a space station. You could just let it die, but then the math works out very differently from a terrestrial data center
- operational costs of satellite control personnel and ground stations. Crunching AI in space is nice, but last time I checked, AI needed tons of training data. Have fun paying for that connection into an orbit that has 24/7 solar while the Earth spins below: ground stations across the globe.
And these are just the unmentioned costs a non-space, self-taught tech person came up with in 5 minutes. Don't invest in people this incompetent or malicious.
It seems both ridiculous and obvious.
———————-
There’s really no such thing as active cooling in space. You can move the heat from the component to the radiators with pumped water but ultimately the only way to get rid of heat in space is passively radiating it away.
And it’s very inefficient at removing heat unless there’s a large temperature differential. If the radiator heats up to 70C it can dissipate 785 watts per square meter of area facing into space. I guess assuming you have a front and also a back of a panel both radiating equally that could be 1570W per square meter of panel material. You can check it yourself with this equation: https://courses.lumenlearning.com/suny-physics/chapter/14-7-....
So for this “1 gigawatt” project you’d need 0.6 million square meters of double-sided radiators. Which is about the area of the Pentagon, or a quarter the area of Monaco. It would weigh around 6000 metric tons, which is the weight of half the trash produced in NYC in a day. This would require up to 300 Falcon Heavy launches for a total launch cost of $30 billion.
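The same arithmetic in code, using the Stefan-Boltzmann law from that link (my sketch: ideal black-body panels, and the ~10 kg/m² areal density is simply what the ~6000 t figure implies):

```python
# Rough radiator sizing for 1 GW of waste heat, mirroring the parent comment.
# Assumes ideal black-body panels (emissivity = 1) at 70 °C radiating to deep
# space from both faces, and ignores absorbed sunlight and Earthshine.
SIGMA = 5.67e-8                 # W/(m^2 K^4), Stefan-Boltzmann constant

T = 70 + 273                    # radiator temperature, K
per_face = SIGMA * T**4         # ≈ 785 W per m^2 per face
per_panel = 2 * per_face        # ≈ 1570 W per m^2 of double-sided panel

area_m2 = 1e9 / per_panel
print(f"panel area ≈ {area_m2 / 1e6:.2f} million m^2")   # ≈ 0.64 million m^2

mass_t = area_m2 * 10 / 1000    # ~10 kg/m^2, roughly what the ~6000 t implies
launches = mass_t / 20          # ~20 t per Falcon Heavy, the implicit figure above
print(f"≈ {mass_t:,.0f} t, ≈ {launches:.0f} launches,"
      f" ≈ ${launches * 100e6 / 1e9:.0f}B at $100M per launch")
```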
They say the launch cost for one 40MW unit (including radiators! and solar panels! and radiation shielding!) will be $5 million. That’s pretty laughable as just 25 of these 40MW units = 1GW. And 1/25 of $30 billion is … well over $1 billion.
Somehow, I estimate that JUST THE COOLING RADIATORS will cost >$1 billion to put into space. But they estimate they can put the whole everything in space for less than 0.5% of a very reasonable estimate.
EDIT: Actually, just launching enough H100s to consume 40MW would cost at least $200 million. One H100 uses 700W, so you'd need 57,000 of them to consume 40MW. Each H100 weighs 1.7kg, so that's a total of 97 metric tons of H100s. A Falcon Heavy can launch between 20-64 metric tons, so you'll need two launches at $100 million per launch.
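Same napkin math as a sketch (datasheet-ish H100 numbers, nothing from the proposal):

```python
import math

# How many H100s does it take to draw 40 MW, and what do they weigh?
POWER_PER_GPU_W = 700        # H100 SXM board power
MASS_PER_GPU_KG = 1.7        # bare module mass: no servers, racks, or cabling

gpus = 40e6 / POWER_PER_GPU_W
mass_t = gpus * MASS_PER_GPU_KG / 1000
print(f"{gpus:,.0f} GPUs, ≈ {mass_t:.0f} t")      # ≈ 57,143 GPUs, ≈ 97 t

# At the ~64 t expendable maximum per Falcon Heavy (more flights if reused)
# and ~$100M per launch, the GPUs alone take two flights, ≈ $200M,
# before any radiators, solar arrays, or structure.
launches = math.ceil(mass_t / 64)
print(f"{launches} launches, ≈ ${launches * 100}M just to lift the GPUs")
```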
I imagine if you run the multiple ground stations needed to maintain a data connection to an orbit that has 24/7 solar, those ground stations will be required to follow the jurisdiction of the (different) countries they are located in. Congrats, you just multiplied your legal costs.
Why do so many ideas nowadays radiate the energy of edgy weed smoking teenagers trying to game the system with half-baked ideas? Did Elon make this popular?
There's one cool way of "passively cooling energy into space" from Earth
Build your datacenters in a cold place with clear skies and train at night.
Then you read this thread, where you will likely find more that you didn't think of. That is a good lesson, if you need one.
So if they just assume things run till they die, that space data center is a totally different kind of calculation compared to a terrestrial one.
And that just scratches the first inches of the massive underbelly of this iceberg of hidden costs.
But...
It's pretty cool, and I wouldn't want to discourage anyone from thinking about the possibility of actualizing their wildest imaginations. Just, you know, think bigger.
A common one is that you can have a license to use a piece of data, including for training, but that license is only valid in a certain location. That is the kind of legal constraint that currently determines where large training runs happen, and indirectly, where datacenters are built.
> Did Elon make this popular?
I'm assuming good faith on the writers of the original article, based on my experience in the area. That is not something that was popularised by Elon Musk, as far as I am aware.
This kind of pdf is what you get when your genius startup doesn't have a single engineer with half a brain and everybody does enough coke to believe you can beat physics with sheer enthusiasm.
That, or it is calculated fraud.
Again, this is con-think. If what you plan to do is deemed wrong in all jurisdictions, and the only solutions you come up with are elaborate plans about gaming that system by abusing loopholes, maybe it is time to stop, pause and consider if this is really all you can do with your creative energy.
Sure, earning money is hard, creating a successful company is hard, but some people seem to think it is only possible by tricking their way into it. A bit like that kid at school who spent many days building a system to cheat on an exam, when he could just have learned the material in a few hours with less effort and the added benefit of the knowledge.
Most likely they'll just have a separate system to disconnect troublesome racks; once some threshold is reached, the cluster will be replaced.
But also, the proposal is to build a 4km x 4km array in space, and the main reasons to do this now rather than, say, a data center in the Sahara are permitting, skipping storage, and solar utilization. The last two are rather silly reasons when the cost of solar panels is taking a nosedive and cheap energy enables dumb storage solutions. And permitting, well, let's just take as given that they're right and permitting really is that hard. That still only gives you an advantage after you scale to the point where your permits would be denied, and whereas the Falcon 1 didn't really have competition and so could get away with cost inefficiency, a mini compute cluster in space is competing against a terrestrial server rack.
None of the arguments seem physically implausible. Launch can get to <$5m/Starship. You can build 20km² of solar and radiators. You can hook up 5 GW of compute with remote space vehicles. It's just really shockingly hard in an Apollo sort of way. Somehow I doubt the permitting will be easy either.
But the launch cost would remain the same as my original estimate because it was based on weight per “effective radiator area” of current best-practice space materials.
Neither would make me want to invest money in their idea.
I think you could literally give this to a bunch of teenage nerds for an afternoon and get a pdf with more substance.
The first question isn’t whether it’s “possible” with enough money, it’s whether it could ever be within even an order of magnitude of “profitable”. The second question is why would you trust a company who provides these kinds of estimates with no reasonable explanation for these obvious massive discrepancies?
What are you basing this on? Just the GPUs to utilize 40MW will weigh nearly 100 metric tons (40MW / 700W * 1.7kg). That's an entire Starship payload just for the GPUs and absolutely nothing else.
Starship launches currently are $100 million and after adding in solar panels and radiator panels and radiation shielding you’ll need several of those Starship launches per single “40MW compute unit”.
How do you get this down to “<$5 million”?
If SpaceX can really pull that off, my math above would look a lot different.
The stated long-run goal was $2-3m, per Elon Musk, and if you put aside practicalities like limits on how big the market is, and the fact that there isn't a robust heat shield design yet, it's fundamentally coherent, because of full reuse.
If you're looking for more practical medium term numbers, Gwynne Shotwell suggested around the $50m mark would be an early price goal.
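For scale, here is roughly what those prices imply per kilogram, assuming a round ~100 t of usable payload to LEO per Starship flight (my assumption, not a quoted spec):

```python
# Implied launch cost per kg to LEO under the prices mentioned in this thread,
# assuming ~100 t of usable payload per Starship flight (a round number, not a spec).
PAYLOAD_KG = 100_000

for label, price in [("long-run goal ($2.5M)", 2.5e6),
                     ("early price goal (~$50M)", 50e6),
                     ("current Starship launch (~$100M)", 100e6)]:
    print(f"{label}: ~${price / PAYLOAD_KG:,.0f}/kg")
# ~$25/kg vs ~$500/kg vs ~$1,000/kg: the whole business case swings on which
# of these you believe.
```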
You just need balloons (5m) and a city (20m). Easy, right? Now imagine me phrasing that without sarcasm and really trying to sell you on it, while still never doing the actual math.
That is the problem here. The idea is one that is designed to appeal to a specific audience. It is not an idea that makes sense to anybody who runs data centers or builds satellites.
Let's talk physics: in space there is no air, which means fans are useless. It also means the extremely cold outside temperature is absolutely useless for cooling, because space is essentially a vacuum, the same kind of vacuum the thermos company uses to keep your tea warm. Only the tea is now a data center emitting megawatts of heat. If you don't want the data center to melt you need to get the heat out, and in space you can only do that by passive radiation, which is essentially copper radiating heat away. We have the math to calculate how big that radiator needs to be to move the energy out, and to nobody's surprise it is going to have to be significantly bigger than any active cooling solution used in a data center (just like passive coolers in PCs have to be bigger to allow the same power dissipation).

Now mind, they estimated the cost of cooling at 0 USD. I don't know how much copper they are going to buy with 0 USD, but either they haven't the slightest clue what they are doing or they are trying to trick you. Keeping the temperature of satellites stable is a major challenge, and if I have heard that, they know it too. Keep in mind this is just cooling; there are other significant parts they skip entirely, like communications, maintenance, R&D costs, massive amounts of radiators…
A data center in space is more than taking a satellite and a data center and mashing them together. There are specific challenges to operating a data center in space, some of which may make it so uneconomic that it doesn't make sense to do it, just like a floating city. It looks cool in the drawings tho.
A kg on a Falcon Heavy costs ~1500 USD. So we land at 66 billion USD in launch cost for the copper alone.
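The copper tonnage behind that $66 billion isn't quoted above, but it falls straight out of the two stated numbers (a back-of-envelope sketch, nothing more):

```python
# The copper tonnage isn't stated in this excerpt, but it falls out of the two
# numbers above: $66B of launch cost at ~$1,500/kg on Falcon Heavy
# (roughly $100M / 64 t flown expendable).
COST_PER_KG = 1_500          # USD per kg to LEO
TOTAL_LAUNCH_COST = 66e9     # USD, from the estimate above

implied_mass_t = TOTAL_LAUNCH_COST / COST_PER_KG / 1000
print(f"implied radiator mass ≈ {implied_mass_t:,.0f} t")    # ≈ 44,000 t
```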
I probably don't have to explain that building a radiator of that size in space isn't free either. And the stuff that gets the heat to those radiators is neither free nor lightweight.
Yet cooling somehow costs them zero dollars.
These kinds of calculations are ballpark stuff. Even if the real numbers are an order of magnitude better, they are still uneconomical.
I should really get an AI to remake Guesstimate as a self-hostable site...
Um, isn't that a secondary benefit? They'd also act as solar shades.
I am, by the way, not even sure thermal conductivity is the limiting factor here; they still need to radiate the heat out, and that is a function of surface area if I remember correctly.
Edit: higher thermal conductivity helps with heat dissipation within the radiator; it does not affect the radiator area needed. Although it could affect weight if you roll the material out thin enough. Still, this is quickly becoming the opposite of "simply putting a data center into space" and more like "decades of research on other topics". And AI is the vehicle to sell that.
True. But this relies a lot on software and probably some specialized details in hardware as well.
Your Dragon docking maneuvers require way fewer calculations than a typical AI training run, so it is OK to have the hardware do the same calculation multiple times and check the values.
This is different from AI training, where you want the most reliability 24h/day.
Another aspect that is entirely missing from that PDF, which doesn't exactly convince me these people have a plan.
Don't get me wrong, maybe space datacenters can be more efficient after decades of R&D, and maybe that R&D is even worth it, but if you are the company that wants to convince me to give you money to get there, you should not go "cooling in space is free", because there is no such thing as a free lunch in space engineering.
The low temperature of space is mentioned to trick uncritical readers into thinking "wow, smart, basically free cooling", when it is anything but. I am all for putting money into researching the topics needed to get these things going, but misleading investors like that is plainly wrong.
> As conduction and convection to the environment are not available in space, this means the data center will require radiators capable of radiatively dissipating gigawatts of thermal load. [...] This component represents the most significant technical challenge required to realize hyperscale space data centers.
and
> A 5 GW data center would require a solar array with dimensions of approximately 4 km by 4 km
> [...]
> A 1m x 1m black plate kept at 20°C can radiate about 850 watts to deep space, which is roughly three times the electricity generated per square meter by solar panels. As a result, these radiators need to be about one-third the size of the solar arrays, depending on the radiator configuration.
Seriously, what more of an acknowledgement do you want? The paper covers everything you are complaining about in pretty plain and frank language.
It baffles me how much I'm defending this proposal, because I don't actually think it's a sensible company, but they really, really do not ignore the cooling problem.
The rest of the paper is a bit more sensible.
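For what it's worth, the quoted figures roughly check out against textbook constants (my arithmetic; the ~20% cell efficiency is an assumption, not a number from the paper):

```python
# Sanity check of the quoted figures: a black 1 m x 1 m plate at 20 °C
# radiating from both faces to deep space, vs. electricity from 1 m^2 of solar.
SIGMA = 5.67e-8              # W/(m^2 K^4), Stefan-Boltzmann constant
SOLAR_CONSTANT = 1361        # W/m^2 above the atmosphere
CELL_EFFICIENCY = 0.20       # assumed, not a number from the paper

radiated = 2 * SIGMA * (20 + 273.15) ** 4       # ≈ 837 W, i.e. "about 850 W"
generated = SOLAR_CONSTANT * CELL_EFFICIENCY    # ≈ 272 W of electricity per m^2

print(f"radiated ≈ {radiated:.0f} W, generated ≈ {generated:.0f} W,"
      f" ratio ≈ {radiated / generated:.1f}x")
# ≈ 3x, which is where "radiators about one-third the size of the solar
# arrays" comes from.
```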
- Much easier power supply/handling
- Way fewer problems with cooling
- Much simpler hardware requirements
- Sub-million dollar launches
And still failed
Firstly, let's note that the founding team is certainly not inexperienced. One of them has worked at both SpaceX and Microsoft on the datacenter side; another claims 10 years of experience designing satellites at Airbus and has a PhD in materials science. And the CEO has mostly a business background, but also worked on US national security satellite projects (albeit at McKinsey).
They make a big deal of it being about AI training, but it would seem like inference is a much better target to aim for. Training clusters are much harder to build than inference clusters: the hardware becomes obsolete more quickly, they need much higher inter-node connectivity, you need ultra-high-bandwidth access to massive datasets, and you therefore need a large cluster before anything is useful at all. Inference is a much easier problem. Nodes can work independently, the bandwidth needs are minimal, and latency hardly matters either, as there is an ever-increasing number of customers who use inference in batch jobs. Offloading inference to the sky means the GPUs and power loads that do remain on earth can be fully dedicated to training instead of serving, which dodges power constraints on land for a while longer (potentially for as long as needed, since we don't know whether training will keep scaling up its compute needs, whereas we can be fairly confident inference will).
If you target inference then instead of needing square kilometers of solar and radiator you can get away with a constellation of much smaller craft that scale up horizontally instead of vertically. Component failures are also handled easily, just like for any other satellite. Just stop sending requests to those units and deorbit them when enough internal components have failed. Most GPU failures in the datacenter are silicon or transceiver failures due to high levels of thermal cycling anyway, and if you focus on batch inference you can keep the thermal load very steady by using buffering on the ground to smooth out submission spikes.
Ionising radiation isn't necessarily a problem. As they note, AI is non-deterministic anyway and software architectures are designed to be resilient to transient computation errors. Only the deterministic parts, like regular CPUs, need to be rad-hardened. Because you aren't going to maintain the components anyway, you get possibilities that wouldn't make sense on Earth.
A focus on inference has yet another advantage w.r.t. heat management. For training, right now the only game in town is either TPUs (not available to be put on a satellite) or Nvidia GPUs (designed for unlimited power/cooling availability). For inference, though, there is a wider range of chips available, some of which are designed for mobile use cases where energy and thermal efficiency are paramount. You could even design your own ASICs that trade off latency against energy usage to reduce your energy/cooling needs in space.
Finally, although heat management in space is classically dealt with using radiators, if launch costs get really low you could potentially consider alternative approaches like droplet radiators or by concentrating heat into physical materials that are then ejected over the oceans. Because the satellites are unmanned it opens up the possibility of using dangerous materials for cooling that wouldn't normally be reasonable, like hydrogen or liquid sodium. This would mean regular "recooling runs" but if launch costs get low enough, maybe that's actually feasible to imagine.
I'm fairly confident that for the next decade, if you want to launch ~10 tons into orbit, it'll cost you $70M whether you launch on Falcon 9 or Starship or Neutron or New Glenn. If you want to launch 100 tons, it'll cost you more even though it's the same Starship, just because nobody else can do that.
If you want to launch 1000 rockets you will be able to call up SpaceX and negotiate a much better price, but only if you have good negotiating power -- if you can convince them you won't launch at all if you don't get a good price.