You might need space for radiators, but there is plenty of space in space.
This is one of those things that's not obvious till you think about it.
Radiators in space are extremely inefficient because there's no conduction or convection, only radiation.
Also, you have huge heat inputs from the sun, so you need substantial radiator capacity just to reject that before you get around to actually cooling the GPUs.
It sounds obvious once stated, but the inefficiency at this scale is so extreme that people find it hard to believe.
Radiators can be made as long as desired within the shade of the solar panels, so the designer can practically set arbitrarily low temperatures above the background temperature of the universe.
EDIT: people keep downvoting and replying with irrelevant retorts, so I'll add some calculations.
Let's assume:
1. cheap 18% efficient solar panels (though much better can be achieved with multijunction and quantum-cutting phosphors)
2. simplistic 1360 W/m^2 sunlight, with the panels orthogonal to the sun's rays
3. an abstract input area Ain of solar panels (pretend it's a square: Ain = L ^ 2)
4. The heat generated on the solar panels is (100% - 18%) * Ain * 1360 W / m ^ 2, and the electrical energy is 18% * Ain * 1360 W / m ^ 2. The electrical energy will ultimately be converted to computational results and heat by the satellite's compute, so the radiative cooling (the only option in space) must dissipate 100% of the incoming solar energy: 1360 W / m ^ 2 * Ain.
5. Let's make a pyramid with the square solar panel as its base and the apex pointing away from the sun, and make sure the surface has high emissivity (roughly 1) in the thermal infrared. Such a pyramid has all four sides in the shade. But this is low Earth orbit, so the warm Earth occupies one hemisphere; we therefore put thermal-IR reflectors on the 2 pyramid sides facing Earth, leaving the other 2 pyramid sides facing actual cold space.
6. The areas for a square-based symmetric pyramid of height h: we have
6.a. The area of the base Ain = L * L.
6.b. The area of the 4 sides is 2 * L * sqrt( L ^ 2 / 4 + h ^ 2 ).
6.c. The area of just the 2 radiating sides, the output area: Aout = L * sqrt( L ^ 2 / 4 + h ^ 2 ).
7. The 2 radiative sides not seeing the sun and not seeing the earth together have the area in 6.c and must dissipate L ^ 2 * 1360 W / m ^ 2 .
8. Hello Stefan-Boltzmann Law: for emissivity 1 we have the radiant exitance M = sigma * T ^ 4 (units W / m ^ 2 )
9. The total power radiated through the 2 thermal radiating sides of the pyramid is then Aout * M.
10. Select a desired temperature and solve for h / L (to stay dimensionless and get the ratio of the pyramid's height to its base side length); let's run the satellite at 300 K = ~27 deg C just as an example.
11. If you solve this for h / L we get: h / L = sqrt( ( 1360 W / m ^ 2 / (sigma * T ^ 4 ) ) ^ 2 - 1/4 )
12. Numerically, for a 300 K target temperature: h/L = sqrt((1360 / (5.67 * 10^-8 * 300 ^ 4)) ^ 2 - 1/4) = ~2.92
13. So the pyramid height needed despite space's "horribly poor cooling capability" would be a shocking 3 times the side length of the square solar panel array (a quick numeric check follows below).
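For anyone who prefers code to algebra, here is a minimal Python sketch of the same back-of-envelope calculation, using exactly the assumptions listed above (1360 W/m^2 solar constant, emissivity 1, only 2 of the 4 pyramid sides radiating); treat it as a sanity check, not a design tool:

    # Stefan-Boltzmann sizing of the pyramid radiator (assumptions from the list above)
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / m^2 / K^4
    S = 1360.0        # solar constant, W / m^2

    def height_to_base_ratio(T):
        """h / L such that 2 radiating pyramid sides shed the full solar input."""
        M = SIGMA * T**4                  # radiant exitance at temperature T, W / m^2
        return ((S / M)**2 - 0.25) ** 0.5

    print(height_to_base_ratio(300.0))    # ~2.92, i.e. h is about 3x the base side length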
As a child I was obsessed with computer technology, and this will resonate with many of you: computer science is the poor man's science; as soon as a computer becomes available in the household, some children teach themselves programming. This is HN, full of programmers who followed the poor man's science path out of necessity. I had the opportunity to choose something else, and I chose physics. No amount of programming and acquiring titles of software "engineer" is a good substitute for physicists and engineers who actually took courses in the physical sciences, with the mathematics to follow the important historical deductions... It's very hard to explain this to the people who followed the path I had almost taken. And they downvote me because they didn't have the opportunity, courage or stamina to take the path I took, so they blindly copy-paste each other's doomscrolled arguments.
Look, I'm not an Elon fanboy... but when I read people arguing that cooling considerations exclude this future, while I know the temperature can be set arbitrarily low (though not below the 4 K background temperature of the universe), I simply explain that the area can obviously be made arbitrarily large, so the temperature can be chosen by the system designer. But the HN crowd prefers its layers of libraries and abstractions, has made itself an emulation of an emulation of an emulation of a pre-agreed reality as documented in datasheets and manuals, and is ultimately so removed from reality-based communities like physics and physics engineering that the "democracy" of programmers' opinions dominates...
So go ahead and give me some more downvotes ;)
If you like mnemonics for important constants, here's one for the Stefan-Boltzmann constant: 5.67 * 10^-8 W / m^2 / K^4.
That's 4 consecutive digits 5, 6, 7, 8; the decimal point goes after the first significant digit, and the exponent 8 gets a minus sign.
The ISS power/heat budget is about 240,000 BTU/hr (~70 kW). That's equivalent to half of an Nvidia GB200 NVL72 rack. So two International Space Stations per rack, or about 160,000 International Space Stations to cool the 10 GW "Stargate" datacenter that OpenAI is building in Abilene. For scale, there are 10,000 Starlink satellites.
Starship could probably carry 250-300 of the new V2 Mini satellites, which are supposed to have a power/heat budget of about 8 kW. That's how I got 5,000 Starship launches to match OpenAI's datacenter.
Weight seems less of an issue than size. 83,000 NVL72s would weigh 270 million lbs, or 20% of the lift capacity of 5,000 Starship launches, leaving 80% for the rest of the satellite mass, which seems perhaps reasonable.
Elon's napkin math is definitely off, though, by over an order of magnitude: "a million tons per year of satellites generating 100 kW of compute power per ton". The NVL72s deliver about 74 kW per ton, and that's just the compute, without the rest of the fucking satellite (solar panels and radiators). So that estimate is complete garbage.
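A quick sanity check of the chain of figures above in Python (all inputs are this thread's round numbers, not vendor specs):

    # Sanity-checking the quoted figures (thread's assumptions, not vendor specs)
    BTU_HR_TO_W = 0.293
    iss_w = 240_000 * BTU_HR_TO_W   # ISS heat budget: ~70 kW
    print(10e9 / iss_w)             # ~142,000 ISS-equivalents for 10 GW (thread says ~160,000)
    sat_w = 8_000                   # assumed V2 Mini power/heat budget, W
    n_sats = 10e9 / sat_w           # ~1.25 million satellites to match 10 GW
    print(n_sats / 250)             # ~5,000 Starship launches at 250 sats each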
One note: If you could afford to send up one of your own personal satellites, it would be extremely difficult for the FBI to raid.
That's equivalent to a couple datacenter GPUs.
> You might need space for radiators, but there is plenty of space in space.
Finding space in space is the least difficult problem. Getting it up there is not easy.
I'm not that smart, but if I were, I would be thinking this is an elaborate way to move the losses from the Twitter purchase onto the public markets.
[1] https://www.axios.com/2023/12/31/elon-musks-x-fidelity-valua...
[2] https://www.reuters.com/markets/deals/musks-xai-buys-social-...
[3] https://www.cnbc.com/amp/2026/02/02/elon-musk-spacex-xai-ipo...
Yes, you can overcome this with enough radiator area. Which costs money, and adds weight and space, which costs more money.
Nobody is saying the idea of data centers in space is impossible. It's obviously very possible. But it doesn't make even the slightest bit of economic sense. Everything gets way, way harder and there's no upside.
Obviously advertisers have not been fans. And it is a dying business. But rather than it dying, Elon has found a clever (and probably illegal) way to make it so that SpaceX, which has national security importance, is going to prop up Twitter/X. Now our taxpayer dollars are paying for this outrageous social network to exist.
Space is not empty. Satellites have to be boosted all the time because of drag, and massive panels would only worsen that. Once your boosters are empty, the satellite is toast.
Building this is definitely not trivial and not easy to make arbitrarily large.
1) new technology improves vacuum heat radiation efficiency
2) new technology reduces waste heat generation from compute
All the takes I've seen have been focused on #1, but I'm starting to wonder about #2... Specifically spintronics and photonic chips.
I don't think dissipating heat would be an issue at all. The cost of launch, I think, is the main bottleneck; cooling would just be a small overhead on the cost of energy, not a fundamental problem.
In all the conversations I've seen play out on hacker news about compute in space, what comes up every time is "it's unviable because cooling is so inefficient".
Which got me thinking, what if cooling needs dropped by orders of magnitude? Then I learned about photonic chips and spintronics.
I am highly skeptical about data centers in space, but radiators don't need to be unshaded. In fact, they benefit from the shade. This is also being done on the ISS.
It was easy to support SpaceX, despite the racist/sexist/authoritarian views of its owner, because he kept that nonsense out of the conversation.
X is not the same. Elon is actively spewing his ultraconservative views on that site.
Now that these are the same company, there's no separation. SpaceX is part of Musk's political mission now. No matter how cool the tech, I cannot morally support this company, and I hope, for the sake of society, it fails.
This announcement, right after the reveal that Elon Musk reached out to Jeffrey Epstein and tried to book a trip to Little St. James so that he could party with "girls", really doesn't bode well.
It's a shame you can't vote these people out, because I loved places like Twitter, and businesses like SpaceX and Tesla, but Elon Musk is a fascist who uses his power and influence to attack some of the most important pillars of our society.
The JWST operates on at most 2 kW. That's not enough to feed even a single H200 server.
AI datacenters in space are a non-starter. Anyone arguing otherwise doesn't understand basic thermodynamics.
There are commercial systems that use open-loop cooling (i.e., spraying water) to improve panel efficiency by keeping the panel at an optimal ~25 C, and more expensive closed-loop systems with active cooling that recover additional energy from the heat by circulating water through the panel's backing, like a solar water heater.
In low Earth orbit (LEO), sure, but the traces of atmosphere that cause the drag disappear quite fast with increasing altitude. At 1000 km, you will stay up for decades.
This is an extremely stupid idea, but because of our shared delusion of capitalism and the idea that wealth accumulation at the top should be effectively limitless, this guy gets to screw around and divert actual human labor towards insane and useless projects like this rather than solving real world problems.
On Earth, you can vent the heat into the atmosphere no problem, but in space, there's no atmosphere to vent to, so dissipating heat becomes a very, very difficult problem to solve. You can use radiators to an extent, but again, because no atmosphere, they're orders of magnitude less effective in space. So any kind of cooling array would have to be huge, and you'd also have to find some way to shade them, because you still have to deal with heat and other kinds of radiation coming from the Sun.
It's easier to just keep them on Earth.
https://www.spectrolab.com/company.html
Twenty-five years after the ISS began operations in low Earth orbit, a new generation of advanced solar cells from Spectrolab, twice as efficient as their predecessors, is supplementing the existing arrays to allow the ISS to continue to operate to 2030 and beyond. Eight new arrays, known as iROSAs (ISS Roll-Out Solar Arrays), are being installed on the ISS in orbit.
The new arrays use multi-junction compound semiconductor solar cells from Spectrolab. These cells cost something like 500 times as much per watt as modern silicon solar cells, and they only produce about 50% more power per unit area. On top of that, the materials that Spectrolab cells are made of are inherently rare. Anyone talking about scaling solar to terawatts has to rely on silicon or maybe perovskite materials (but those are still experimental).
The whole concept is still insane though, fwiw.
Isn't this fixed by the blackbody radiation equations?
People heavily underestimate radiative cooling, probably precisely because our atmosphere hinders its effective utilization!
Lesson: it's not because radiative cooling is hard to exploit on Earth at sea level that it's similarly ineffective in space!
This is precisely why my didactic example above uses a convex shape, a pyramid. Convexity guarantees each surface absorbs or radiates energy without having to account for self-obscuration by the satellite's own shape.
A very high-end desktop pulls more electricity than the whole JWST... which is about the same as a hair dryer.
Now you need about 50x more for a rack, and hundreds or thousands of racks for a meaningful cluster. Shaded or not, that's a shitload of radiators.
https://azure.microsoft.com/en-us/blog/microsoft-azure-deliv...
China has a land area greater than the USA. (Continental or otherwise.)
For a reasonable temperature (check my comment for the updated calculations), the height of a square-based pyramidal satellite would be about 3 times the side length of its base; quite reasonable indeed. That's with the square base of the pyramid as the solar panel facing the sun and the apex facing away, so all sides are in the shade of the base. I even halved my theoretical cooling power to keep the calculations simple: to avoid a long, confusing calculation of the heat emitted by Earth, I handicapped my design so that the 2 pyramid side surfaces facing Earth are reflective, and the remaining 2 side triangles are the only thermal radiative cooling surfaces used. Less pessimistic approaches are possible, but would make the calculation less didactic for the HN crowd.
For a 4 m x 4 m solar panel, the height of the pyramid would have to be 12 m to attain ~300 K on the radiator panels. That's also the cold side for your compute.
For a 4 km x 4 km solar panel, the height of the pyramid would be 12 km.
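A quick check of these two sizes, reusing the h/L ~ 2.92 ratio derived in the calculation above (300 K target, same assumptions):

    # h = (h/L) * L, with h/L ~ 2.92 at a 300 K target temperature
    for L in (4.0, 4000.0):      # 4 m panel vs 4 km panel, in meters
        print(L, 2.92 * L)       # ~11.7 m and ~11,700 m, i.e. roughly 12 m / 12 km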
Either that or you're talking out of your ass.
FYI, a single modern rack consumes twice the energy of the entire ISS, in a much, much smaller package, and you'll need thousands of them. You'd need 500-1000 m^2 of radiator per rack, and that alone would weigh several tonnes...
You'll also have to actively cool your gigantic solar panel array.
After that frankly society-destabilizing miracle of inventing competitive photonic processing, your goal of operating data centers in space becomes a tractable economic problem:
Pros:
- You get a continuous 1.37 kW/m^2 instead of an intermittent 1.0 kW/m^2
- Any reasonable spatial volume is essentially zero-cost
Cons:
- Small latency disadvantage
- You have to launch all of your hardware into polar orbit
- On-site servicing becomes another economic problem
So it's totally reasonable to expect the conversation to revolve around cooling, because we know SpaceX can probably direct around $1T into converting methane into delta-V to make the economics work, but the cooling issue is the difference between maybe getting one DC up for that kind of money, or 100 DCs.
The larger you make the area, the more solar energy you are collecting. More shade = more heat to radiate. You are not actually making the problem easier.
Imagine the capillary/friction losses, the force required, and the energy use(!) needed to pump ammonia through a football-field-sized radiator panel.
I wonder if Musk would be willing to let a journalist do a deep dive on all internal communications in the same way he did when he took over twitter.
For a target temperature of 300 K, the pyramid height would be a bit less than 3 times the square base's side length (h ≈ 3L).
I even handicapped my example by only counting heat radiation from 2 of the 4 panels, assuming the 2 others are simply reflective (to make the calculation of a nearby warm Earth irrelevant).
"Radiators can shadow each other," this is precisely why I chose a convex shape, that was not an accident, I chose a pyramid just because its obvious that the 4 triangular sides can be kept in the shade with respect to the sun, and their area can be made arbitrarily large by increasing the height of the pyramid for a constant base. A convex shape guarantees that no part of the surface can appear in the hemispherical view of any other part of the surface.
The only size limit is technological / economical.
In practice h = 3L, where L is the square base's side length, suffices to keep the temperature below 300 K.
If heat conduction can't be managed with thermosiphons / heat pipes / cooling loops on the satellite, why would it be possible on Earth? Think of a small-scale pyramidal sat with roughly h = 3L but a much smaller L: do you actually see any issue with heat conduction there? Scaling up just means placing more of the small pyramidal sats.
If we suddenly lose 2 orders of magnitude of heat produced by our chips, that means we can fit 2 orders of magnitude more compute in the same volume. That is going to be destabilizing in some way, at the very least because you will get the same amount of compute in 1% the data center square footage of today; alternatively, you will get 100-900x the compute in today's data center footprint. That's like going from dial-up to fiber.
But you'd rarely ever need it: the satellite just needs to rotate at the low angular velocity of 1 rotation per year to keep facing the sun.
2. That would also presumably work on earth, unless it somehow relied on low-gravity, and would also be cheaper to benefit from on earth.
No need to apply at NASA; on the contrary, if you don't believe in the Stefan-Boltzmann law, feel free to apply for a Nobel prize with your favorite crank theory of physics.
Without eventually moving compute to space, we are going to have compute infringe on the space, energy, and heat-dissipation rights of meatbags. Why welcome that?!?
Sure, it occurs, but what does the Stefan–Boltzmann law tell us about GPU clusters in space?
In space or vacuum, radiation is the best way to dissipate heat, since it's the only way.
I believe the common person assumes thermal radiation is a very poor way of shedding heat because of 2 commonly known factoids:
1. People think they know how a vacuum flask / dewar works.
2. People understand that in earthly conditions (inside a building, or under our atmosphere) thermal radiation is insignificant compared to conduction and convection.
But they don't take into account that:
1) Vacuum flasks / dewars use the vacuum for thermal insulation, and they mirror the glass (emissivity near ~0) precisely because thermal radiation would otherwise still transfer heat. They try their best to eliminate thermal radiation; a system optimized to suppress thermal radiation is not a great example of how to effectively use thermal radiation to move heat. Thermal radiation panels would be optimized for emissivity 1, the opposite of what's inside the vacuum flask.
2) In a building or under an atmosphere, a room-temperature object is in fact shedding heat very quickly by thermal radiation, but so are the walls and other room-temperature objects around you, and they are reheating you with their thermal radiation. The net effect is small in these earthly conditions, but in a satellite the environment faced by the radiating surfaces is at 4 K, not at a temperature similar to the object you are trying to keep cool.
People take the small net effect of thermal radiation in rooms, and the slow heat conduction through a vacuum flask's walls, as representative of thermal radiation panels facing cold empty space. That is the mistake.
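A small Python illustration of that net-effect point (emissivity 1 assumed; the net flux is q = sigma * (T_obj^4 - T_env^4)):

    # Net radiative flux from a surface at T_obj into surroundings at T_env
    SIGMA = 5.67e-8                            # Stefan-Boltzmann constant, W / m^2 / K^4

    def net_flux(t_obj, t_env):
        return SIGMA * (t_obj**4 - t_env**4)   # W / m^2

    print(net_flux(300.0, 295.0))   # ~30 W/m^2 in a room: barely noticeable
    print(net_flux(300.0, 4.0))     # ~459 W/m^2 facing 4 K space: essentially the full exitance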
I provided the calculation for the pyramidal shape: if the base of the pyramid is a square solar panel with side length L, then for a target temperature of 300 K (a typical back-of-envelope substitute for "room temperature") the height of the pyramid has to be about 3 times the side length of the square base. Quite reasonable.
> Sure, it occurs, but what does the Stefan–Boltzmann law tell us about GPU clusters in space?
The Stefan-Boltzmann law tells us that whatever prevents us from putting GPU clusters in space, it's not the difficulty in shedding heat by thermal radiation that is supposedly stopping us.
If the base were a solar panel aimed perpendicular to the sun, then the apex faces away and all triangular side faces of the pyramid are in the shade.
I voluntarily give up heat-dissipation area on 2 of the 4 triangular sides (just to make the calculations easier: if we make them thermally reflective, emissivity ~0, they can't shed heat, but they also don't absorb heat coming from the lukewarm Earth).
The remaining 2 triangular sides will be large enough that the temperature of the triangular panels is kept below 300 K.
The panels also serve as the cold heat baths, i.e. the thermal sinks for the compute on board.
Not sure what you mean by wings; I intentionally chose a convex shape like a pyramid so that no part of the surface can see another part of the surface: no self-obstruction for shedding heat, etc.
If this doesn't answer your question, feel free to ask a new question so I understand what your actual question is.
The electrical power available for compute will be approximately 20% (the efficiency of the solar panels) times the area of the square base L ^ 2 times 1360 W / m ^ 2.
The electrical power thus scales quadratically with the chosen side length, and thus linearly with the area of the square base.
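As a throwaway snippet with the same assumptions (~20% efficiency, 1360 W/m^2):

    # Electrical power as a function of the square base side length L (meters)
    def electrical_power_w(L):
        return 0.20 * 1360 * L**2   # quadratic in L, i.e. linear in the area L^2

    print(electrical_power_w(30))   # ~245 kW for a 30 m x 30 m base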
> By directly harnessing near-constant solar power
Implies they would not spend half of their time in the dark.
Just look at a car. Maybe half a square meter of “radiator” is enough to dissipate hundreds of kW of heat, because it can dump it into a convenient mass of fluid. That’s way more heat than the ISS’s radiators handle, and three orders of magnitude less area.
Or do a simple experiment at home. Light a match. Hold your finger near it. Then put your finger in the flame. How much faster did the heat transfer when you made contact? Enough to go from feeling mildly warm to causing injury.
Also this assumes a flat surface on both sides. Another commenter in this thread brought up a pyramid shape which could work.
Finally, these GPUs are designed for Earth data centers, where power is limited and heat sinks are abundant. For space data centers you can imagine we get better radiators or silicon that runs hotter. Crypto miners often run ASICs very hot.
I just don't understand why, every time this topic comes up, everyone on HN wants to die on the hill that cooling is not possible. It is! If you do the math, the primary issue is clearly the cost of launch.
These people are all smoking crack.
Twitter also has more (not total, but more) free speech than any other social networking site. For example, you are allowed to discuss empirical research on race, crime and IQ. That would get you rate limited or banned quickly on other websites, including HN.
Here's a big one: you can't put radiators in shadow, because the coolant would freeze. The ISS has a system dedicated to making sure the radiators get just enough sunlight at any given time.
[0] https://developer.nvidia.com/deep-learning-performance-train...
Click the "Large Language Model" tab next to the default "MLPerf Training" tab.
That takes 16.8 days on 128 B200 GPUs:
> Llama3 405B 16.8 days on 128x B200
A DGX B200 contains 8 B200 GPUs, so that's 16.8 days on 16 DGX B200s.
A single DGX (8x)B200 node draws about 14.3 kW under full load.
> System Power Usage ~14.3 kW max
source [1] https://www.nvidia.com/en-gb/data-center/dgx-b200
16 x 14.3 kW = ~230 kW
At ~20% solar panel efficiency, we need 1.15 MW of optical power incident on the solar panels.
The required solar panel area becomes 1.15 * 10^6 W / ( 1.360 * 10^3 W / m ^ 2 ) = 846 m ^ 2.
That's about 30 m x 30 m.
From the center of the square solar panel array to the tip of the pyramid it would be 3x30m = 90 m.
An unprecedented feat? Yes. But no physics is being violated here. The parts could be launched serially and then assembled in space. That's a device that can pretrain LLaMA 3.1 405B from scratch in 16.8 days. It would have way more memory than LLaMA 3.1 needs: 16 x 8 x 192 GB = ~25 TB of GPU RAM. So this thing could pretrain much larger models, though it would also train them slower than a LLaMA 3.1.
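The same arithmetic in one Python sketch (this thread's assumptions: 20% panels, 1360 W/m^2, and the h ≈ 3L pyramid from the earlier calculation):

    # Sizing a 16-node DGX B200 cluster in orbit (back-of-envelope)
    n_nodes = 16
    node_kw = 14.3                    # DGX B200 max system power, kW
    p_elec = n_nodes * node_kw * 1e3  # ~230 kW electrical
    p_solar = p_elec / 0.20           # ~1.15 MW of sunlight needed
    area = p_solar / 1360             # ~840 m^2 of panel
    L = area ** 0.5                   # ~29 m, i.e. "about 30 m x 30 m"
    print(area, L, 3 * L)             # pyramid apex at ~3L, i.e. ~90 m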
Once up there, it enjoys free energy for as long as it survives: no competing on the electrical grid with normal industry or domestic energy users, no slow cooking of the rivers and the air around you, ...
This system would not be given such an orbit. Also, it's trivial to decrease the cooling capacity of the radiators: just have an emissivity-~0 shade (say, an aluminum-foil curtain) obscure part of the radiator so that it locally sees itself instead of cold empty space. This would only be needed during 2 short periods of the year.
The design issues of the ISS are totally different from this system.
It's not exactly a good advertisement for conductive or convective heat transfer if it's really employing thermal radiation under the hood!
But do you want big tech to shit where you eat? Or do you want them to go to the bathroom upstairs?
At some point I start to think the large resistance to the idea in a forum populated with programmers comes from a salivation-inducing prospect: that all this datacenter hardware will eventually be sold off cheaper and cheaper, whereas if we launch it into space there won't be any cheap devalued datacenter hardware to put in their man-caves.
I am saddened too by the fact that the system is designed so that people like him can waste a large amount of economic and human capital.
they could go near a Lagrange point
there are so many options
Heavier boats are also slower to accelerate or decelerate than smaller boats; does this mean we should ban container ships? Having special orbits as megastructure lanes would seem a reasonable approach.
My example is optimized not for minimal radiator surface area, but for minimal mathematical and physical knowledge required to understand feasibility.
Your numbers are different because you chose 82 C (355 K) instead of my ~27 C (300 K).
Near normal operating temperatures, hardware lifetime roughly doubles for every 10 C decrease in temperature (this does not hold indefinitely, of course).
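That rule of thumb in a couple of lines of Python (a heuristic only, valid near normal operating temperatures):

    # ~2x lifetime per 10 C cooler, relative to a reference temperature
    def lifetime_factor(t_ref_c, t_c):
        return 2 ** ((t_ref_c - t_c) / 10.0)

    print(lifetime_factor(82, 27))   # running at 27 C instead of 82 C: ~45x the lifetime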
You still need to move the heat from the GPU to the radiator, so my example of ~27 C at the radiator just leaves a lot of room against criticism ;)
The problem for #1 is how you dissipate heat without being in contact with a lower-temperature mass.
Creating a vacuum on Earth would solve nothing, as the heat would still have to escape the vacuum.
However, what do you reckon the energy balance is for launching the 1 GW datacenter components into space and assembling it?
You can prove that the lower efficiency can be managed, and they will still say the only thing they know: "Thermal radiation is not efficient".
The economics and energy balance is where I too am very skeptical, at least near term.
Quick back-of-envelope calculations gave me a payback time of about 10 years, which is only a single order of magnitude off; that much error can easily accumulate from lack of access to detailed plans.
I cannot exclude that they see something (or can charge themselves lower launch costs, etc.) that makes it entirely feasible, but I also can't confirm it's economically infeasible. For example, I have no insight into what fraction of terrestrial datacenter establishment cost goes into various "frictions": paying governments and lawyers to gloss over all the details, paying permission taxes, etc. I can see how space could become attractive in other ways.
Then again, if you look at the energy cost of a training run, it seems MW-scale facilities would suffice. So why do we read all this noise about restarting nuclear power plants or securing new power plants strictly for AI? It would certainly make sense if governments are willing to throw top dollar at searching for algorithmic / mathematical breakthroughs in cryptography. Even if the compute is overpriced, you could have a lot of LLMs reasoning in space to find the breakthroughs before strategic competitors do. It's a math and logic race unfolding before our eyes, and it's getting next to no coverage.
Nobody said sending a single rack and cooling it is technically impossible. We're saying sending datacenters' worth of racks is insanely complex and most likely neither financially viable nor currently possible.
Microsoft just built a datacenter with 4,600 racks of GB300. That's 4,600 * 1.5 t; it alone weighs more than everything we sent into orbit in 2025, and that's without power or cooling. And we're still far from a single gigawatt.
A different question is the expected payback time. Unless someone can demonstrate a reasonable calculation showing a sufficiently short payback period, we still can't exclude that big tech sees something we don't have access to (the launch costs charged to third parties may be different from the launch costs they charge themselves, for example).
Suppose the payback time is in fact sufficiently short, or the commercial life sufficiently long, to make sense; then the scale doesn't really matter, it just means sending up the system described above repeatedly.
As an example, my comment almost instantly fell 15 points, but over the last 11 hours it has recovered to just a 1-point drop.
Just because they don't want to write an apology (which I don't ask for) doesn't mean they aren't secretly happy they learned something new in physics, and in the end that's what matters to me :)
Yeah doesn't sound particularly feasible, sorry. Glad you know all the math though!
For a 230 kW cluster of 16 DGX B200 nodes (8 B200s each), we arrived at a 30 m x 30 m solar PV area and a 90 m distance from the center of the solar array to the tip of the pyramid.
1 GW = 4348 x 230 kW
sqrt(4348)= ~66
So launch 4,348 of the systems described in the calculation I linked, or, if you insist on housing them next to each other:
The base length becomes 30 m x 66 = 1980 m = ~2 km, and the distance from the center of the square solar array to the tip of the pyramid becomes 6 km...
Any of these systems would need to be shipped up, collected in orbit, and then assembled together.
A very megalomaniac endeavor indeed.
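For completeness, the same scaling in a few lines of Python (same 230 kW module as above):

    # Tiling the 230 kW module up to 1 GW
    module_kw = 230
    n_modules = 1e6 / module_kw       # ~4,348 modules for 1 GW
    k = n_modules ** 0.5              # ~66 modules per side in a square tiling
    print(n_modules, 30 * k, 90 * k)  # as one merged pyramid: ~2 km base, ~6 km to the apex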
> Obviously advertisers have not been fans. And it is a dying business. But rather than it dying, Elon has found a clever (and probably illegal) way to make it so that SpaceX, which has national security importance, is going to prop up Twitter/X. Now our taxpayer dollars are paying for this outrageous social network to exist.
There is a difference between a dying business and an influential one, though. Twitter is dying, but it is still influential.
Either it does or it doesn't make financial sense, and if it does the scale isn't the issue (well until we run into material shortages building Elon's Dyson sphere, hah).
The resistance to the idea is because it doesn’t make any sense. It makes everything more difficult and more expensive and there’s no benefit.
Low satellites are still cooler in the Earth's shadow than they would be in unshadowed orbits, but higher orbits are cooler than either. Not where you'd want to put millions of datacenters though.
Other than some libertarian fantasy of escaping the will of the non-billionaire people, the question remains: what is the advantage of putting information systems in space? The only rational answer: to host things that are both globally illegal and profitable.
To run just one cluster (which would be generally a useless endeavor given it is just a few dozen GPUs) would be equivalent to the best we've ever done, and you wonder why you're being downvoted? Your calculations, which are correct from a scientific (but not engineering) standpoint, don't support the argument that it is possible, but rather show how hard it is. I can put the same cluster in my living room and dissipate the heat just fine, but you require a billion dollar system to do it in space.
The main benefit is that solar panels go from a complicated mess of batteries + permitting to a very stable, highly efficient energy source.
Obligatory disclaimer: I'm not conservative, and I don't particularly care for Elon or X or this merger. I just despise intellectual dishonesty and selective outrage.
The only intellectual dishonesty here is the "blame it on the libs" argument. Ignoring the partisan arguments, the platform was quite literally being used by users to undress women and produce CSAM. [1] That's just one of many examples where you can argue the platform is toxic.
[1] https://www.reuters.com/legal/litigation/grok-says-safeguard...
It's you who didn't answer my question :)
Would you prefer big tech to shit where we eat, or go to the bathroom upstairs?
This is false, as I pointed out in the neighbor comment.
The reason I'm talking about computers on the ground using the atmosphere for cooling is because that's how things are done right now and that's the obvious alternative to space-based computing.
Why does it matter what I prefer? I'd love to see all industry in space and Earth turned into a garden. I'm not talking about what I want. I'm talking about the economics. I'm asking why so many people are talking about putting data centers in space when doing so would be so much more difficult than putting data centers on Earth. If your argument is that it's more difficult but it's worth the extra effort so we don't "shit where we eat," great, but that's the first time I've ever seen that argument put forth. None of the actual players are making that case.
In reality, probably radiator designs would rely on fluid cooling to move heat all the way along the radiator, rather than thermal conduction. This prevents the above problem. The issue there is that we now need to design this system with its pipes and pumps in such a way that it can run reliably for years with zero maintenance. Doable? Yes. Easy or cheap? No. The reason cooling on Earth is easier is that we can transfer heat to air / water instead of having to radiate it away ourselves. Doing this basically allows us to use the entire surface of the planet as our radiator. But this is not an option in space, where we need to supply the radiator ourselves.
In terms of scaling by instead making many very small sats, I agree that this will scale well from a cooling perspective as long as you keep them far enough apart from each other. This is not as great from the perspective of many things we actually want to use a compute cluster for, which require high-bandwidth communication between GPUs.
In any case, another very big problem is the fact that space has a lot of ionizing radiation in it, which means we also have to add a lot of radiation shielding too.
Keep in mind that the on-the-ground alternative that all this extra fooling around has to compete with is just using more solar panels and making some batteries.
Radiation hardening:
While there is some state information on the GPU, the occasional bit flip isn't that critical for ML applications, so most of the GPU area can be used as efficiently as before, and only the critical state information on the GPU die or host CPU needs radiation hardening.
Scaling: the didactic, unoptimized 30 m x 30 m x 90 m pyramid would train a 405B model in 17 days, and it would have ~25 TB of GPU RAM (so it could keep training larger and larger state-of-the-art models at comparatively slower rates). Not sure what's ridiculous about that? At some point people piss on didactic examples because they want somebody to hold their hand and calculate everything for them.