As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need in order to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
My feeling is that, a bit like Starlink, you would just deprecate failed hardware rather than bother with all the moving parts needed to replace faulty RAM.
It does mean your comms and OOB tools need to be better than the average American colo provider's, but I would hope that would be a given.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old folks' home for parted-out supercomputers).
Consider that for quite some time now we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100°C.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
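Back-of-the-envelope on that comparison (a sketch; the $/kg ballparks below are my own assumptions, not figures from the thread):

```python
# Rough $/kg comparison under a 100-1000x launch-cost reduction.
# Assumed ballparks (mine): Falcon 9 ~$2,800/kg to LEO, air freight ~$2-8/kg.
current_launch_usd_per_kg = 2_800
air_freight_usd_per_kg = (2, 8)

for reduction in (100, 1_000):
    future = current_launch_usd_per_kg / reduction
    print(f"{reduction:>4}x cheaper: ~${future:.0f}/kg to orbit "
          f"vs ~${air_freight_usd_per_kg[0]}-{air_freight_usd_per_kg[1]}/kg by air freight")
# 100x lands around $28/kg, 1000x around $3/kg -- i.e. air-cargo territory.
```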
(Especially irritating is the continued assertion that thermal radiation is really hard, as if it weren't something every satellite already deals with just fine, with a radiator surface much smaller than the solar array.)
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
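A quick sketch of what "plan for 10% to fail" buys you against the quoted failure rate (the 10,000-server target is a made-up example):

```python
# Overprovisioning sketch using the failure rate quoted above (6 of 855 over ~6 years).
p_fail_6y = 6 / 855                 # ~0.7% per server over the mission life
target_servers = 10_000             # hypothetical capacity you actually need
margin = 0.10                       # "plan for 10% to fail"

launched = int(round(target_servers / (1 - margin)))   # ~11,111 servers launched
expected_dead = launched * p_fail_6y                    # ~78 expected failures
print(f"Launch {launched}, expect ~{expected_dead:.0f} dead in 6 years; "
      f"margin absorbs up to {launched - target_servers}.")
```

At the sea-trial rate a 10% margin is overkill by an order of magnitude, which is rather the point: routing around dead hardware is cheap.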
That being said, it's a completely bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Second: you still need radiators; the heat that ends up in the oil has to be dumped somehow.
Also, making something suitable for humans means having lots of empty space where the human can walk around (or float around, rather, since we're talking about space).
- Repair robots.
- Enough air between servers to allow robots to access and replace componentry.
- Spare componentry.
- An eject/return system.
- Heat pipes from every server to the radiators.
It is really fucking hard when you have 40MW of heat being generated that you somehow have to get rid of.
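For scale, here's a radiator-sizing sketch using the Stefan-Boltzmann law; the emissivity, panel temperature, and sink temperature are values I picked for illustration, not anyone's design numbers:

```python
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 40e6               # heat to reject, W (the 40 MW above)
eps = 0.85             # assumed panel emissivity
T_panel = 330.0        # assumed radiator temperature, K (~57 C coolant)
T_sink = 255.0         # assumed effective sky/Earth sink temperature, K

flux = eps * SIGMA * (T_panel**4 - T_sink**4)   # W/m^2 radiated per face
area = Q / (2 * flux)                           # panels radiate from both faces
print(f"~{flux:.0f} W/m^2 per face -> ~{area:,.0f} m^2 of panel for 40 MW")
# With these assumptions: ~368 W/m^2 per face, i.e. roughly 54,000 m^2 of radiator.
```

Not physically impossible, but it's a deployable structure measured in hectares, which is where the difficulty lives.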
Failure rates tend to follow a bathtub curve, so if you burn in the hardware before launch you'd expect low failure rates for a long period. It's quite likely it'd be cheaper not to replace components at all: ensure enough redundancy for the key systems (power, cooling, networking) that you can shut down and disable any dead servers, then replace the whole unit once enough parts have failed.
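A toy illustration of the bathtub shape and why burn-in helps (the rates below are made up for illustration, not real hardware data):

```python
import math

# Hazard rate as the sum of a fading infant-mortality term, a constant
# random-failure term, and a growing wear-out term (all values illustrative).
def hazard_per_year(t_years: float) -> float:
    infant = 0.30 * math.exp(-t_years / 0.25)   # early failures, mostly gone after a few months
    random_background = 0.01                    # flat bottom of the tub
    wear_out = 0.0002 * t_years ** 3            # dominates late in life
    return infant + random_background + wear_out

for t in (0.05, 0.25, 1.0, 3.0, 6.0, 9.0):
    print(f"t = {t:4.2f} y   hazard ~ {hazard_per_year(t):.3f} / yr")
# A few months of ground burn-in eats the steep left wall of the tub, so what
# flies spends most of the mission on the flat bottom -- exactly the regime
# where "route around it and add redundancy" is cheapest.
```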
Side note: thanks for sharing the "bathtub curve" -- TIL, and I'm surprised I hadn't heard of it before, especially since it's related to reliability engineering (searching HN via Algolia, no post about the bathtub curve has crossed 9 points).
Are there any unique use-cases waiting to be unleashed?
Keep in mind economics is all about allocation of scarce resources with alternative uses.
It just seems funny; I recall that when servers started getting more energy dense, it was a revelation to many computer folks that safe operating temps in a datacenter could be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
I agree that it may be best to avoid needing the space and facilities for a human being in the satellite. Fire and forget. Launch it further into space instead of back to earth for a decommission. People can salvage the materials later.
On one hand, I imagine you'd rack things up so the whole rack moves into space as one unit; OTOH there's still movement and things "shaking loose", plus the vibration and acceleration of the flight and the loss of gravity...
This effect can be somewhat mitigated by exercising while in space, but it's not perfect even with the insane amount of medical monitoring the guys up there receive.
(And of course, the mostly reusable Falcon 9 is launching far more mass to orbit than the rest of the world combined, launching about 150 times per year. No one yet has managed to field a similarly highly reusable orbital rocket booster since Falcon 9 was first recovered about 10 years ago in 2015).
It's theoretically possible for sure, but we've never done that in practice and it's far from trivial.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
Perhaps the server would be immersed in a thermally conductive resin to avoid parts shaking loose? If the thermals are taken care of by fixed heat pipes and external radiators, non-thermally-conductive resins could be used.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
Treat each maintenance trip like an EVA (extra vehicular activity) and bring your life support with you.
And at sufficient scale, once you plan for that, it means you can massively simplify the servers. The amount of waste a server case suitable for hot-swapping drives adds, if you're not actually going to use that capability, is massive.
And then, there is of course radiation trouble.
So those two kinds of burn-in require a launch to space anyway.
Programming and CS people somehow rarely look at that.
The more satellites you put up there, the more it happens, and the greater the risk that the immediate orbital zone around Earth devolves into an impenetrable whirlwind of space trash, aka Kessler Syndrome.
The analysis is a third party analysis that among other things presumes they'll launch unmodified Nvidia racks, which would make no sense. It might be this means Starcloud are bonkers, but it might also mean the analysis is based on flawed assumptions about what they're planning to do. Or a bit of both.
> IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
This would get you significantly less redundancy (except against physical strikes) than building the same redundancy into a single unit where you control what feeds what, the same way we have smart, redundant power supplies and cooling in every data center (and in the racks they're talking about using as the basis).
If power and cooling die faster than the servers, you'd either need to overprovision or shut down servers to compensate, but it's certainly not all or nothing.
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further shield the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
(Also, much easier to cool.)
They would just keep the failed drives in the chassis. Maybe swap out the entire chassis if enough drives died.
But they didn't say just "gravity", they said "gravity well".
> "First, let us simply define what a gravity well is. A gravity well is a term used metaphorically to describe the gravitational pull that a large body exerts in space."
- https://medium.com/intuition/what-are-gravity-wells-3c1fb6d6...
So they weren't suggesting that it will be big enough to get past some boundary below which things don't have gravity, just that smaller things don't have enough gravity to matter.
"Large" is almost meaningless in this context. Douglas Adams put it best
> Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.
From an education site:
> Everything with mass is able to bend space and the more massive an object is, the more it bends
They start with an explanation of a marble compared to a bowling ball. Both have a gravity well, but one exerts far more influence.
https://www.howitworksdaily.com/the-solar-system-what-is-a-g...
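To put rough numbers on the marble-vs-bowling-ball comparison (masses and radii assumed by me), the depth of a body's gravity well at its surface is on the order of GM/r:

```python
# Specific gravitational potential at the surface, |phi| = G*M/r, as a crude
# "well depth" in J/kg. Masses and radii below are rough assumed values.
G = 6.674e-11  # m^3 kg^-1 s^-2

bodies = {
    "marble (5 g, 8 mm radius)":  (0.005, 0.008),
    "bowling ball (7 kg, 11 cm)": (7.0, 0.11),
    "Earth":                      (5.97e24, 6.371e6),
}
for name, (mass_kg, radius_m) in bodies.items():
    print(f"{name:30s} well depth ~ {G * mass_kg / radius_m:.2e} J/kg")
# marble ~4e-11, bowling ball ~4e-9, Earth ~6e7 J/kg: every mass has a well,
# they just differ by many orders of magnitude in how much it matters.
```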
The ISS makes almost 250 kW in full light, so you would need approximately 160 times the solar footprint of the ISS for that datacenter.
The ISS dissipates that heat using pumps to move ammonia through pipes out to a radiator that is a bit over 42 m^2. Assuming the same level of efficiency, that's over 6,700 m^2 of radiator that needs empty space to dissipate into.
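Making the scaling explicit, taking those ISS figures at face value (the 40 MW target is the one discussed upthread; real radiator sizing depends heavily on panel temperature, so treat this as the optimistic end):

```python
# Ratio scaling of the quoted ISS numbers to a 40 MW datacenter.
iss_solar_kw = 250        # "almost 250 kW in full light"
iss_radiator_m2 = 42      # "a bit over 42 m^2"
datacenter_kw = 40_000    # 40 MW, as discussed upthread

scale = datacenter_kw / iss_solar_kw
print(f"~{scale:.0f}x the ISS solar footprint")              # -> ~160x
print(f"~{scale * iss_radiator_m2:,.0f} m^2 of radiator")    # -> ~6,720 m^2
```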
That's a lot.
Every child on a merry-go-round experiences it. Every car driving on a curve. And Gemini tested it once as well. It's a basic feature of physics. Now why NASA hasn't decided to implement it in decades is actually kind of a mystery.
https://www.yorkspacesystems.com/
Short version: make a giant pressure vessel and keep things at 1 atm. Circulate air like you would on Earth. Yes, there is still plenty of excess heat you need to radiate, but it dramatically simplifies things.
> And Gemini tested it once as well.
From Wikipedia:
> They were able to generate a small amount of artificial gravity, about 0.00015 g
So yes, you need an effect roughly 6,700 times stronger than this.
And you want that to be relatively uniform over the height of an astronaut, so you need a very big merry-go-round.
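Putting numbers on "very big" (a sketch; the ~2 rpm comfort limit is a commonly cited rule of thumb, not something from the thread):

```python
import math

g = 9.81            # m/s^2, target of 1 g at the rim
gemini_g = 0.00015  # what the Gemini tether experiment produced (from the quote)
print(f"~{1 / gemini_g:,.0f}x stronger than the Gemini result needed for 1 g")

# Centripetal acceleration a = omega^2 * r, so the rim radius for 1 g is r = g / omega^2.
for rpm in (1, 2, 4, 6):
    omega = rpm * 2 * math.pi / 60          # spin rate in rad/s
    r = g / omega**2                        # radius needed for 1 g at the rim
    gradient = 2.0 / r                      # fractional gravity change over a ~2 m astronaut
    print(f"{rpm} rpm: radius ~ {r:5.0f} m, head-to-feet difference ~ {gradient:.1%}")
# At ~2 rpm (often cited as the comfort limit) the merry-go-round needs a ~220 m
# radius; shrink it and the head-to-feet gradient and Coriolis effects grow,
# which is part of why spinning small things makes people sick.
```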
Nuclear fission is also a basic feature of physics, that doesn't mean engineering a nuclear power plant is straightforward.
The Russians are the only ones who package their unmanned platform electronics in pressure vessels. Everyone else operates in vacuum, so no fans.
DCs aren't quite there yet, but the hot spots that do occur are enough to cause arc flashes, which claim hundreds of lives a year.
When was the last time you spun yourself around in a desk chair?
Relevant Tom Scott video: https://youtu.be/bJ_seXo-Enc?si=m_QjHpLaL8d8Cp8b
There is a lot of research, but it's not as simple as operating under real gravity. It makes many movements harder and can result in getting sick.
That is, as hardware fails, the system loses capacity.
That seems easier than replacing things on orbit, especially if Starship becomes the cheapest way to launch to orbit, since Starship launches huge payloads, not a few rack-mounted servers.
Then you'd need vanes, agitators, and pumps to keep the oil moving so you don't end up with stagnant hot spots. These would need to be fairly bulky compared to fans and fan motors.
I'd have to see what an engineering team came up with, but at first glance the liquid solution would be much heavier and likely more maintenance intensive.
Don’t even get me started on the costs of maintenance. I am sweating bricks just thinking of the mission architecture for assembly and how the robotic system might actually look. Unless there’s a single 4 km long deployable array (of what width?), which would be ridiculous to imagine.