1. Voyager 2 has been pointing 2 degrees off from Earth
2. Been that way for a while and nobody noticed because very old computers.
3. Meaning that the probe has gone dark (ingress and egress comms are not possible)
4. However, both Voyager probes have software that tells them to routinely calibrate themselves every few months
5. Meaning that it should point at Earth in the next few months (most likely).
I wonder how it's temporary. Does the probe have a re-targeting function? The answer is in the original statement:
> "Voyager 2 is programmed to reset its orientation multiple times each year to keep its antenna pointing at Earth; the next reset will occur on Oct. 15, which should enable communication to resume. The mission team expects Voyager 2 to remain on its planned trajectory during the quiet period."
quote: “In the past, engineers have compared keeping the probes operational to keeping an old car running. The tech is severely outdated, yet it keeps ticking over – a trend often seen in the spacecraft of past decades.”
At some point we humans will probably simply have forgotten how to maintain them.
Could anybody rephrase this understandably, using units something like football fields? And is it possible to launch a little cheap rocket with a transmitter just to correct Voyager's position?
For Voyager 2, 45 years of uptime in the hazardous space environment, billions of miles away, is simply incredible.
*G-forces estimated
Using current technology we could probably make an object go faster than that so yes, it would be able to catch up.
However, we'd probably just put better instruments on this new object and make that the priority.
I'm not sure if that's what you wanted, but Australias per day is my new favourite unit.
OK, OK, if the Klingons find it now, then it'd be a shame not to get some measurements. (The cameras, however, have been off for decades.)
EDIT: This is about 7.4 AU. If Jupiter and Saturn were in a line from Earth right now, this distance from Earth would be about the halfway point between the two gas giants. So no, we also won't be launching a rocket to go catch the beam just to re-establish comms.
We are talking about distances that are so big, there is no comparison that makes sense. Nothing else IS that big. The numbers are literally "astronomical". If you're struggling to wrap your head around it, you're doing it right.
"Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space." -- Douglas Adams
But luckily it automatically readjusts itself toward Earth every half year, exactly for events like this. So on Oct. 15 we will know if it is really lost. In either case, the end of its mission is near anyway, because the nuclear batteries are near the end of their life.
edit: Nasa has a blog post on this https://blogs.nasa.gov/sunspot/2023/07/28/mission-update-voy...
Well, technically, 15km/s IS "more than 10x the speed of sound". An average car, is, TECHNICALLY, more than twice the size of a bicycle.
It's the 24th century version of jacking with Wikipedia.
We are truly lost in a “The Deep” … as in absolute nothingness
Australians call it "soccer" too. Disambiguates it from Australian Rules, Rugby League and Rugby Union
Nope, not true. Not every second; it depends on the time of year. For a few months each year, the Voyagers actually get closer to Earth.
Instant messaging, video calls, the internet…
Part of this is just necessary for ability to learn for future missions. If something fails in space, you want to be able to figure out what happened so that you don't make the same mistake the next time. And you don't have a chance to send a second mission just to "replicate" the problem.
So you do things like build your test equipment into the probe so you can measure stuff while in operation. Or maybe make sure you have a switch for everything so that you can turn something on or off to see if the problem persists.
Propellant is finite, so you want to use it as rarely as possible.
A reaction wheel is by itself infinite (assuming it doesn't break), but eventually it saturates and you need to desaturate it, which basically means spinning the wheel the other way while spending propellant to maintain position.
All of this is to say, reorientation is an expensive process especially if refueling isn't an option.
I wonder, is there a technical publication elsewhere that has more substantial coverage for interested people?
please tell me you are being sarcastic!
“Only”.
It’s probably not worth it.
So to catch up, you would have to be faster. Let's say you were able to travel around the equator in 15 minutes, so you're gaining 30 minutes per equator. If my napkin math is right, it would take you roughly 45 years to catch up to Voyager.
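Plugging in concrete round numbers (Voyager 2 at roughly 15.4 km/s and roughly 20 billion km out — both assumed figures, not from the napkin above) gives a couple of decades, the same order of magnitude as the napkin estimate:

```python
EQUATOR_KM = 40_075                    # Earth's circumference
voyager_kms = 15.4                     # Voyager 2's speed, km/s (assumed)
chaser_kms = EQUATOR_KM / (15 * 60)    # "around the equator in 15 minutes"
gap_km = 20e9                          # rough current distance to Voyager 2

closing_kms = chaser_kms - voyager_kms          # how fast the gap shrinks
years = gap_km / closing_kms / (3600 * 24 * 365.25)   # ≈ 22 years
```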
We could achieve slightly greater speed immediately after launch but we wouldn't be able to exploit the planetary gravity assists that accelerated the Voyager spacecraft.
But I also don't think most back yards can fit an antenna that big... search "NASA deep space network" on google images to get a scale of the antennas that are used to talk to voyager
Curious to know how this recalibration actually works. Any explainer that anyone can point to would be appreciated. Thanks!
Or that includes navigation / propulsion / course corrections / reorientation also?
(There is not enough "solar power" that can be harvested at that distance I presume)
If we spread out the communications to be less frequent, say bring it down to essentially a heartbeat signal once a month... would it prolong the service life? Mostly for emotional reasons at that point :)
(1) https://www.theregister.com/Design/page/reg-standards-conver...
It's also near the end of its usable life so it wouldn't be worth it anyway.
And actually, according to https://voyager.jpl.nasa.gov/mission/status/ it's 19,936,472,690 km from Earth, which works out to about 18.5 light-hours.
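The conversion is quick to check:

```python
d_km = 19_936_472_690          # Voyager 2's distance per the JPL status page
c_km_s = 299_792.458           # speed of light, km/s
light_hours = d_km / c_km_s / 3600   # ≈ 18.5
```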
If there is such an archive, or some approximation thereof, it would surely be fascinating to pore over it.
Any similarities with the real world are surely coincidental.
Also, I don't particularly like the speed of sound for this comparison. Most people think of the speed of sound as the speed of sound at about sea-level pressure, in gas composed of around 78% nitrogen and 21% oxygen, at roughly 25C. But the speed of sound is highly dependent on the medium and its temperature and pressure. There actually can be sound waves in space (pressure waves in interstellar gas resulting from various astronomical phenomena) and they propagate at a very wide range of speeds, typically somewhere between 10 and 100 km/s.
The main reason to use "speed of sound" is because important things change when objects travel at little below or above speed of sound in the medium they are in. But this is only useful in relation to the actual medium the object travels through.
One place where it trips people up is high-altitude airplanes or rocketry. People talk about something traveling at "Mach X", or "X times the speed of sound", and then I have to figure out whether they mean X relative to the speed of sound up there or at sea level. It's a nightmare trying to use it to convey speeds even within the confines of our atmosphere.
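The medium dependence is easy to quantify: for an ideal gas the speed of sound is c = sqrt(gamma * R * T / M), so even within our atmosphere the reference speed shifts noticeably with altitude:

```python
import math

def speed_of_sound(T_kelvin, gamma=1.4, M=0.028964):
    """Ideal-gas speed of sound; defaults model dry air (molar mass in kg/mol)."""
    R = 8.314462618  # gas constant, J/(mol*K)
    return math.sqrt(gamma * R * T_kelvin / M)

sea_level = speed_of_sound(288.15)   # ~15 C at sea level: about 340 m/s
altitude = speed_of_sound(216.65)    # ~11 km cruise altitude: about 295 m/s
```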
https://voyager.jpl.nasa.gov/news/details.php?article_id=118
https://www.youtube.com/watch?v=586Zn1ct-QA
https://www.youtube.com/watch?v=vUvzgZt1Vug
There's a part 3 with a tour of the complex.
Earth is approximately 150 million km from the Sun
Sunlight intensity falls off with the square of distance (ignoring any additional small losses from space dust / scattering from gases etc), so twice the distance = a quarter the solar flux. At the Earth it's ~1361 watts per square meter.
Voyager 2 is approximately 133 times further from the Sun than Earth is, which means it receives optimistically 1361 / (133^2) = 0.07694 watts per square meter.
I found a JPL article [1] that says the RTG onboard Voyager produces 40% less power than it did at launch, and the Wikipedia article [2] says it produced 470W at launch, which means it makes ~280W now.
Wikipedia [3] suggests the solar panels available at the time of Voyager's launch in the late 1970s could convert ~10% of incoming solar power to electricity. Modern panels bring that up to 30% but the designers of Voyager did not have access to time travel.
So at present distance Voyager would need approximately 36000 square meters of solar panel to produce the same amount of power.
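Putting those numbers together (1361 W/m² at 1 AU, 133 AU out, ~10% cell efficiency, and an RTG now at 60% of its 470 W launch output):

```python
flux_earth = 1361.0                  # W/m^2, solar flux at 1 AU
flux = flux_earth / 133**2           # ~0.077 W/m^2 at Voyager 2's distance
power_needed = 470 * 0.60            # W, RTG output after 40% decay
efficiency = 0.10                    # late-1970s solar cell efficiency
area_m2 = power_needed / (flux * efficiency)   # ≈ 36,600 m^2
```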
[1] https://www.jpl.nasa.gov/news/a-new-plan-for-keeping-nasas-o...
[2] https://en.wikipedia.org/wiki/Voyager_2#Power
[3] https://en.wikipedia.org/wiki/Solar_panels_on_spacecraft#His...
Also, it is possible that 2 degrees of misalignment still allows some fraction of the signal to be detected, but it not being strong enough to be decoded. The received signal strength and the beamwidth of the antenna could then be used to estimate how far off the mark the Voyager 2 dish is.
The term soccer comes from Oxford "-er" slang, which was prevalent at the University of Oxford in England from about 1875, and is thought to have been borrowed from the slang of Rugby School. Initially spelled assoccer, it was later reduced to the modern spelling.
“Football” almost always means soccer (association football) in the UK, but there are also things like rugby football and Gaelic football.
Edit to add: you need to disambiguate where other forms of football are popular (e.g. at Oxford University), but these days soccer is the most popular sport by a huge margin.
> there's no reason for the public at large to need it
As a member of said public, I would be curious to know. There's no need for taxpayer-funded agencies to operate in a cloak of darkness.
Most everything done by government should by default be open to the public, with an exceedingly high bar that must be met to be otherwise. Otherwise, you run into nonsensical things like how some details around the assassination of a president 60 years ago are still classified on "national security" grounds.
EDIT: Reply here with a link to your Muckrock.com FOIA request and I'll send you the $5.
or something, dunno.
I'm also the sort of person who thinks that all code written with public money should be open source.
If it's 15 billion miles away (sorry for my Freedom Units), it is 22 light-hours away, or 0.0026 light years away (unless my Google-fu is way off). If we could move at the speed of light, which we can't, it would still take nearly a day to get there. So if we were on the Enterprise moving at Warp 1, it would take a day to get there and reorient it back towards Earth. If we could move at Warp 10, we'd have already been there and fixed it.
OTOH, dB's are effectively a log scale, and NASA's "not good enough now" transmitter & antenna cost quite a few $million. What's your budget?
(Yeah - if the Arecibo radio telescope were still in operation, it might well have been capable of doing this.)
Also, empty space is huge and matter-things like Earth and the Sun are tiny compared to it.
The distance from Earth (1,0) to the new location (0.9994, 0.0349) is about 0.0349. We need to scale that back up to "real" units so multiplying it by 15 billion miles. And we get about 520 million miles. The earth is about 93 million miles from the Sun, so its max positional shift (under extremely improbable absolutely perfect conditions) would be ~180 million miles.
So there's no way we could regain contact with just yearly movement, even before we account for the fact that it's getting further and further away. 2 degrees intuitively sounds small, but on an astronomical scale it's huge and this sounds like a pretty major flub by NASA.
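The rotation arithmetic above checks out:

```python
import math

theta = math.radians(2)
new_point = (math.cos(theta), math.sin(theta))   # ≈ (0.9994, 0.0349)
unit_miss = math.dist((1.0, 0.0), new_point)     # ≈ 0.0349 on the unit circle
miss_miles = unit_miss * 15e9                    # ≈ 520 million miles
earth_max_shift = 2 * 93e6                       # Earth's orbital diameter, miles
```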
Simply turn the heaters off before transmitting and keep any transmission periods short enough that the electronics don't get too cold.
JPL probably have a bunch of tricks like this ready for when power levels drop. That 2025 estimate is 10 years old and I'd be surprised if it's final.
So I presume it uses its thrusters to impart a very small spin on one axis, and then on an orthogonal axis.
A mechanism records the signal strength as it sweeps all angles, and once the optimum direction is determined, the thrusters are fired in just the right way to counteract the spin and bring the craft to a halt at the optimum orientation.
Given this was programmed decades ago - the electromechanical system that does all this jugglery and runs reliably for so long would be a great case study for systems design.
Even the programming that ensures that this routine is triggered without fail every few months must also have gone through intense reliability testing.
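A toy sketch of the presumed sweep-and-lock logic (purely illustrative: the signal model, the 137-degree target, and the function names are invented, not NASA's actual routine):

```python
def signal_db(angle, target=137.0):
    # toy antenna pattern: carrier strength peaks when pointed at the target bearing
    return -abs((angle - target + 180.0) % 360.0 - 180.0)

def sweep_and_lock(measure, step=0.5):
    # slowly roll through a full revolution, recording signal strength at each
    # bearing, then return the strongest one (where the spin would be halted)
    angles = [i * step for i in range(int(360 / step))]
    return max(angles, key=measure)

best_bearing = sweep_and_lock(signal_db)   # settles at 137.0
```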
It's pretty obvious that the people who managed to extend the lifetime of Voyager are very smart, based on all the tricks they had to do.
They are remotely configuring an old-tech device that is billions of kilometers away, with insane lag, and uncertainty that the underlying hardware is even responding properly.
Absolutely anything could have gone wrong at this stage.
They'll investigate internally what happened anyway, in order to hopefully find a solution.
There is no need to spend resources to make the material public, if the goal is mostly to satisfy curiosity (though it's interesting).
Or only make sense when looking into source code that is long gone
Lots of formal processes capture what would otherwise be informal design decisions elsewhere. In this case, they probably have reams of pages detailing a failure mode effects analysis (FMEA). One mode is “oops, we sent the wrong command” and the document would define the specific design mitigation(s) for that outcome until it reaches an accepted risk threshold.
I feel like that line is somewhere between 5 and 15 for Americans, and not "thousands". And probably at around "oh, the handegg one; no, I have no idea how big one is in the first place" for the rest of the world.
But it's not an aircraft? It's trivial for spacecraft to achieve that.
I skimmed through the Voyager document and it seems to have very good coverage of the overall telecommunications system.
For the topic of the periodic calibration, the following is all I could spot:
>> Four 7-hour and two 0.5-hour attitude control calibration maneuvers are performed per spacecraft every year, each requiring 70-m station downlink coverage to ensure uninterrupted downlink telemetry.
While this is interesting in itself, it merely states the schedule but doesn't satisfy my curiosity about the exact mechanism used to do the recalibration.
Thanks nevertheless. Interesting reads here.
https://www.jpl.nasa.gov/news/nasa-mission-update-voyager-2-...
But yeah, it is not comparable as the challenges for spacecraft and planes are completely different.
To reach the point 2 degrees from Earth would take 1.64 years at that speed.
To reach that point before October 15th, it would need to travel about 9x faster than a Falcon 9 second stage, or almost twice as fast as the fastest spacecraft in history.
But it would need significant additional time and fuel to slow down so that it didn't immediately blow past that point and become useless, so it would need an even higher speed still.
[1] Joking aside, they obviously cannot; Voyager is missing the Earth by 4.5 AU. How wide is the beam? How precisely do they have to aim the antenna to maintain communication?
I remember when bricking something meant it was totally unrecoverable. Now it means "temporarily not working but will automatically heal".
Possible, maybe. Little or cheap, definitely not. Both Voyager probes relied on a unique alignment of the planets in the outer solar system that allowed them to get a series of speed boosts from gravity assists off the gas giants. If we wanted to launch a rocket anytime in the near future that could catch up with Voyager 2, we'd probably have to rely on good old-fashioned brute force (rocket power). But then if you want the rocket to catch up in the next thousand years, it's going to need REALLY big-ass rockets... and if you want it to rendezvous with Voyager instead of just zipping past, it will need to haul more rockets all the way out there so it can slow down and match speeds (which means even bigger rockets to launch from Earth, etc.).
tl;dr - space is big and the rocket equation is brutal.
"Unbricking" will hopefully work automatically, because there is no other option. But that can also fail and there is no way to know, or influence it.
I use "bricking" in the sense mobile-phone tinkerers use it. There are many results for "unbricking", by the way, but I just checked, and per the first result it seems Apple now uses "unbricking" for activating a new device, because technically, before activation, it is also just a brick. Here I would agree that it is not an appropriate term; it should rather be reserved for devices that are somehow broken.
>>Could anybody rephrase this understandably, using units something like football fields?
More like it can go from Earth to the Moon in 8 hours or so.
32 billion kilometers is about 100 times the distance a satellite travels from earth to Mars. [1]
That Earth-Mars trip is estimated in the same article to take 4 months, so figure 400 months or 30+ years to shoot another satellite out to reach Voyager 2.
This is ignoring planetary slingshot math, the extra speed to 'catch' voyager 2, and surely lots of other details. Personally I find years and "mars" to be more intuitive in this case than trillions of football fields.
[1]https://mars.nasa.gov/mars2020/timeline/cruise/#:~:text=The%....
It is billions of km away... is the Earth that noisy compared to the solar wind and cosmic rays?
Of course there are operational details that we don't need to be made aware of, but for an incident as big as this there's no reason not to at least know how it happened and what could be changed to prevent it from happening again.
The JPL doco (>>36941433 >>36942321 ) calls it "Command Loss".
The carrier signal from Earth is also powerful on a particular frequency and polarization. While there's definitely noise at the receiver it's looking for a very specific signal so can filter out everything it's not expecting. We do the same thing on the Earth side, filtering out noise to recover the very weak signal received from Voyager.
Do you want the military to “work efficiently” if that means little to no oversight? How about Congress?
Oversight and accountability to the citizenry is a foundational principle in a functioning democracy.
It may have seemed that way to you, but actually no. "Bricked" has generally referred to devices that are likely straightforwardly recoverable, but for a lack of documentation from the manufacturer.
But that value is not zero, and replacing it costs quite a bit - both money and time. Asking how and why this happened is a valid inquiry.
Saying that and then countering with your own.
Yeah, the pen-dropping is a bit over the top, but as of now the claim is that this situation was planned for and will resolve itself. A report now won't tell us anything of significance. It will get interesting if the realignment fails.
> Oversight and accountability to the citizenry is a foundational principle in a functioning democracy.
I don't see micromanagement in that list.
That’s true, but a failsafe automatic reorientation mode after two weeks with no communication from Earth might be a useful feature
That's some impressive science there, not like there is a deep-space GPS.
Does it look for the sun and figure out from there?
Precisely. 'Bricking' something means it is unrecoverable and is irreversible.
No idea at what point in time the definition was changed to mean 'temporarily not working'.
- NASA informs the public immediately, and then makes the details available later after they've had time to compile the news and information into a format useful for the public
- NASA waits to inform the public until said report is finished
or perhaps you're after option c:
- NASA's network drives are open to the www in read-only mode, because, you know, 'open by default' entails realtime information (even though one doesn't actually care 99.9999% of the time; yet someone should deliver this functionality, without it costing the taxpayer extra).
NASA routinely makes a LOT of data open to the public. Like, you can get very detailed JWST data directly from NASA. Probably far more detailed than you'd ever care to, because NASA does care about exactly your concern.
Actually, many agencies publish very detailed data if you care to look.
"... and celestial referencing instruments (Sun sensor/Canopus Star Tracker) to maintain pointing of the high-gain antenna toward Earth"
The hardware is the same, but they've updated, patched, and rewritten the software that's running in it throughout the years.
I'm not suggesting that the failsafe mode wasn't originally considered, and implemented, but simply that it doesn't have to be the case. They could have made changes to it over time.
For science, I would want to do an enquiry anyway - I'm just commenting on the financial/accountability aspect.
In other words, it was effectively a brick to me.
But since it was not a surface pro (I considered buying instead of that one), I could open it and disconnect the battery.
And in effect, unbricking it. A quite trivial fix, sure, but nearly impossible with many modern devices, where the battery is glued in.
My point is, not every mode of operation is desired, especially if you cannot change it. Then you might as well have a brick in terms of usefulness.
This is what the Freedom of Information Act is for:
https://www.nasa.gov/centers/armstrong/FOIA/request.html
The report may not exist yet, so you may need to wait.
It was not making its own decisions, to achieve some goal.
I got to visit most everything there, including the 70m telescope. It was just a cool space tech nerd day of tours, presentations, and sunshine.
An interesting dichotomy of the 70m antenna is that it broadcasts 450 kilowatts of power out into space, but has to receive and decode signals "as small as 1 billionth of 1 billionth of 1 watt" from the spacecraft.
One of the reasons it's on a military base is to restrict the airspace above it, so that they don't accidentally cook some aircraft that happens to overfly the antenna when it's transmitting.
It's truly astonishing they're able to pull that off, frankly.
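That transmit/receive asymmetry spans an almost absurd dynamic range when expressed in decibels:

```python
import math

tx_watts = 450e3     # 450 kW uplink
rx_watts = 1e-18     # "1 billionth of 1 billionth of 1 watt" received
span_db = 10 * math.log10(tx_watts / rx_watts)   # ≈ 236 dB between the two
```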
This type of thinking also follows from decades of experience.
For some reason the software engineering world largely abandoned esteem and respect for all of the above.
A favorite anecdote about Voyager:
Paraphrasing, "You carry around more computing power in your pocket than what is on Voyager. I'm not talking about your phone, I'm talking about your key fob".
The data is Golay encoded, but not encrypted. That's exhausting enough for the half-dozen NAND gates up there that make up its computer.
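For the curious, the [23,12] binary Golay code alluded to here is small enough to sketch in a few lines (Voyager's imaging downlink used the extended (24,12) variant; this is an illustrative systematic encoder, not flight code):

```python
G = 0b110001110101   # generator polynomial x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1

def golay23_encode(data12):
    # systematic cyclic encoding over GF(2): shift the 12 data bits up by 11,
    # then append the remainder of polynomial division by G as parity bits
    shifted = data12 << 11
    rem = shifted
    for i in range(22, 10, -1):          # clear bits 22 down to 11
        if rem & (1 << i):
            rem ^= G << (i - 11)
    return shifted | rem                 # 23-bit codeword

# the perfect [23,12,7] code: every nonzero codeword has Hamming weight >= 7
min_weight = min(bin(golay23_encode(d)).count("1") for d in range(1, 4096))
```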
This has a good breakdown: https://www.reddit.com/r/Asimov/wiki/seriesguide/
edit: Somehow I got Foundation mixed up with Banks' Culture series. I think I have gotten through most of Foundation, if not all, but I've had a hard time with the Culture series; there I usually start with Player of Games...
What is bricked vs recoverable has always greatly depended on time and effort, individual skill level, available hardware/software tools, documentation, crypto keys, physical access, willingness to replace individual parts etc.
Sometimes, even within an org, some teams e-waste expensive devices that aren't bricked deeper than what other teams recover from as part of everyday workflow.
Taking a typical network device as an example, where do you draw the line? Driving to a remote location to plug the cable into another port, pressing a reset button, booting from USB, flashing a new firmware with TFTP, plugging in an external or internal console cable, opening the case and soldering a header to get access to the console, doing the same with no documentation, or an unknown (but maybe Google-able or reverse engineerable) password, flashing firmware with JTAG, shipping the device back to the engineers (or shipping an engineer to the device)...? It's always been arbitrary.
Also, it's not a technical term with a rigid definition, hence "soft-bricking"
Doesn't even matter if Voyager is heading towards it or not; it's still crazy far away. Voyager is still on our doorstep as far as interstellar distances go.
Frankly before your comment I wasn't going to complain because I saw the tantrum you threw when people corrected you on the usage of "bricked" but maybe next time spell the month to avoid ambiguity.
If you are able to fix it then it is not bricked.
One device may be bricked to one person but not to another. But that must still be the definition, right? Otherwise the word has no meaning.
[0] https://www.goodreads.com/book/show/35921.On_Basilisk_Statio...
Is micromanaging what you're claiming is a strawman in my position? I'm not claiming you are saying the military doesn't need oversight, I'm probing with a concrete example where you draw the line on what constitutes a reasonable threshold of accountability. Note my statements were framed as questions to get clarification; that's not a strawman.
Your micromanaging claim is, however, another strawman. I guess I could use clarification on your point: your equating this to micromanaging is misapplied IMO. "Micromanaging" would be a direct democratic vote on most or all issues. That's not what's being asked for here. What seems to be asked for is transparency. Access to information is not the same as having authority to make all decisions, but it is paramount in a government where people elect representatives who make decisions (or appoint those who do). The big question I'm asking is: where is the reasonable 'trust, no need to verify' line when it comes to public/govt work? Can we just trust tens of millions of dollars on construction projects, but not hundreds of millions? What about aerospace? Do we say it's fine to go ahead with limited accountability for billion-dollar robotic missions, but not when there's a safety-critical application?
>A report now wont tell us anything of significance.
What makes you so confident? A report can tell us if processes were followed appropriately and, if not, if anyone was held accountable for not following them. I'd say that is pretty significant if you care about governmental fraud, waste, and abuse.
An embedded device with a failed bootloader update is considered “bricked”, even if you can open it up and reflash it with a valid bootloader.
I don’t know why folks are so insistent on gatekeeping the word, as if doing so demonstrates some superior personal knowledge.
In optimal orientation, Voyager's signal peaks at -160 dBm when received on the 70m dishes. Now it's shooting 2 degrees off, which means the signal misses Earth by hundreds of millions of kilometres. What kind of magical high-gain antenna do you envision that could still receive it, assuming money isn't a problem?
Provided that V2 still has enough propellant to make this adjustment.
https://en.wikipedia.org/wiki/Voyager_2#Communications
https://en.wikipedia.org/wiki/Parabolic_antenna#Beamwidth
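Plugging Voyager's numbers into the standard parabolic-dish rule of thumb (assuming the ~8.4 GHz X-band downlink and the 3.7 m high-gain dish; both figures are assumptions here, and the off-axis formula is only exact near boresight):

```python
c = 299_792_458.0        # m/s, speed of light
f = 8.4e9                # Hz, X-band downlink frequency (assumed)
D = 3.7                  # m, high-gain antenna diameter (assumed)

wavelength = c / f
hpbw_deg = 70 * wavelength / D      # half-power beamwidth ≈ 0.7 degrees

# Gaussian main-lobe approximation for off-axis loss in dB; 2 degrees is
# roughly 3 beamwidths out, well past where this formula is accurate, but
# it makes clear the received signal is down by many tens of dB at least
loss_db = 12 * (2.0 / hpbw_deg) ** 2
```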
How about this antenna? https://en.wikipedia.org/wiki/Very_Large_Array
Or this one? https://en.wikipedia.org/wiki/Five-hundred-meter_Aperture_Sp...
Normally I'd have marked the entire subthread offtopic, but hutzlibu's comment deserves to be at the top, even if it does use the word "bricked" wrong.
Here is a photo from Voyager 1 at a distance of 4Bn miles:
If that's not enough for human scale understanding, it's gone the same distance Earth goes in its orbit in 34 years.
[1] https://en.wikipedia.org/wiki/International_Cometary_Explore...
I mean, we're all human, I've made some really shitty fatal errors hacking untested code onto production servers.
https://voyager.jpl.nasa.gov/news/details.php?article_id=129
More realistically, even if we did have a light-speed-capable craft, we'd be limited by the forces of acceleration and deceleration, starting at a speed of 0 relative to Earth and ending at 0 relative to Voyager. The limits would be set by what the bodies of the pilots could withstand, or by mechanical strength if it were another drone.
I don't feel like doing the math properly, so I asked GPT4 this question:
> let's say we wanted to move a human astronaut in a spacecraft to the current position of Voyager 2, which is 22 light-hours away. We want to get there as fast as possible, and arrive in a co-moving frame with Voyager, which has a relative velocity of 32000 miles per hour moving away from the Earth. Assuming the astronaut can withstand a maximum g-force of 6G, and using that as the maximum acceleration and deceleration for the voyage, how long does it take to reach the Voyager 2 co-moving frame?
Response:
Firstly, let's convert everything to consistent units. Here we'll use hours for time, miles for distance, and miles per hour^2 for acceleration.
1 G is approximately 79,000 miles/hour^2 (about 22 mph gained per second). Therefore, 6G is approximately 474,000 miles/hour^2. The distance to Voyager 2 in miles is 22 light-hours * 670616629 miles/light-hour = approximately 14753561838 miles.
Assuming we accelerate at 6G until halfway, then decelerate at 6G for the second half, we can use the kinematic equation:
d = 0.5 * a * t^2
Solving for t when d equals half the total distance gives:
t = sqrt((2 * d) / a)
Substituting in the values:
t = sqrt((2 * (14753561838 miles / 2)) / 474000 miles/hour^2) ≈ 176.4 hours
So, the total time to reach Voyager 2 while maintaining a maximum acceleration/deceleration of 6G is approximately 176.4 hours * 2 ≈ 353 hours, or about 15 days.
This is a rough estimation and doesn't take into account the relative velocity of Voyager 2 or relativistic effects at high speeds.
It also assumes constant acceleration and deceleration, which wouldn't be feasible with today's technology due to fuel constraints.
-------------
So our 22-hour lightspeed trip would still take about two weeks with a human onboard.
The biggest limitation with sci-fi travel isn't breaking relativity, it's finding a way to accelerate squishy meat-bags. This person would be living at an effective weight of nearly 1000 lbs for two weeks, which would really mess them up.
For the sake of the longevity of our pilot, I re-ran the calculation limiting the on-board acceleration to 1G. The trip stretches to roughly 36 days, and Voyager only recedes a few tens of millions of miles in that time, so catching it is kinematically feasible. The real showstopper is that no propulsion we have can sustain anywhere near 1G of thrust for weeks on end.
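The same constant-acceleration ("flip and burn") kinematics in consistent SI units, ignoring relativistic corrections (peak speed stays around 0.12c at 6G):

```python
import math

d = 22 * 3600 * 299_792_458.0    # 22 light-hours, in metres
g = 9.80665                      # standard gravity, m/s^2

def flip_and_burn_days(distance, accel):
    # accelerate to the midpoint, flip, decelerate: total time t = 2*sqrt(d/a)
    return 2 * math.sqrt(distance / accel) / 86400

at_6g = flip_and_burn_days(d, 6 * g)   # ≈ 15 days
at_1g = flip_and_burn_days(d, 1 * g)   # ≈ 36 days
```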
https://voyager.jpl.nasa.gov/news/details.php?article_id=129
https://skyriddles.wordpress.com/2023/07/03/stereo-a-comes-h...
"OOPS, NASA gave V2 the wrong number to phone home. Engineers of old have last laugh and reassure 'it's OK, V2 will sort it out'"
To imply this was a software bug is a pretty silly representation - the system was poorly engineered and didn't have proper contingencies for sensor disagreement. This is pretty clearly a design/engineering error with a software component.
Besides, the guy said "rarely ever matter" for a reason, not "explicitly never impact things"... Bit of a silly comment from you IMO
This probably wasn't your intention, but putting it in terms like this, for me anyway, actually drives home just how short a distance the Voyager probes have travelled.
On Juniper, it's "commit confirmed".
Many moons ago when I was hands-on and stressed about migrations & config, my team lead at the time would say exactly the same thing - his wife is a doctor and her job is way more stressful - People die. And I bought into it as a relief for a while.
But... I work on a payroll system. My team does impact people. Mistakes can have important negative consequences to real live individuals - from stress invoked in trying to call help centre and fix their paycheques, to disconnected utilities if they don't get paid correctly/timely, to other downstream consequences.
Any number of other IT systems have significant consequences - e.g. airline ticket systems, airbnb bookings, etc. I feel the "nobody died" is a double-edged sword: it can help relieve people of the daily sense of artificial stress, urgency and grind that management may impose; but also builds a false dichotomy / unreasonably binary threshold on when our job matters / impacts ...
In the case of the 737MAX, the software was a design around a physical constraint; that doesn't mean the software doesn't matter. Most software is designed as a workaround of a certain physical or mental constraint.
As the amount of coordination increases, the number of failure modes tends to grow quite fast. That's why software failures in physical, safety-critical systems are not trivially corrected. There are a lot of second order effects that need to be considered.
Most folks don't really know how to use say Android fastboot or recovery modes either, yet we wouldn't call a device with a wiped system partition "bricked".
Most "bricks" are things like a bootloader getting erased. Reflashing that through the standard process of JTAG or another debug protocol is a straightforward action (after all, the manufacturer has to get the first bootloader on there to begin with). The port pinout and config info just hasn't been publicly documented by the manufacturer, which is what pushes it into the domain of "experts".
I would love to see a day when redundancy like this is just a standardized, accepted practice rather than a stand-up debate. Easier said than done of course.
Add civil engineering to that nowadays - both buildings and roads.
Sure, there are regulations and licensing, but quite often the entity financing the whole thing cares little about such things.
"- African dung beetles orient to the starry sky to move along straight paths
- The beetles do not orientate to the individual stars, but to the Milky Way"
https://www.cell.com/current-biology/fulltext/S0960-9822(12)...
https://www.science.org/content/article/dung-beetles-navigat...
https://www.nasa.gov/sites/default/files/ivv_grubb_nasa_ivv_...
On the other hand, at one time there was a physical "proof test model" of the Voyagers.
https://www.jpl.nasa.gov/images/pia21734-voyager-test-model-...
Clients who want NASA quality can have it if they bring NASA budgets and timelines.
https://space.stackexchange.com/questions/43803/how-did-the-...
In cases where fault tolerance isn't as robust, it's for the same reasons as other disciplines you mentioned: budget and importance.
The pilot couldn't even turn MCAS off originally. That's not a software thing, that's a "who the F designed this" thing.
The main contingency with most software is that you fix it.
It’s the cost when something fucks up.
If I’m holding my phone near a cliff, and I rely on it for navigation and I’m hours from civilization, I’m a little more careful, not because I’m normally super careful. It’s because — in that specific scenario — losing my phone would cost me so much and the chance of it happening is much more likely.
Space companies spend a little extra because the cost is years of development and billions of dollar evaporating in a few seconds.
And there are software teams in certain industries that dot their I’s and cross their T’s as well.
Even on some dumb CRUD app, if it’s a critical piece of code that the rest of the software hinges upon, you spend a little extra time because the cost of fuck up is so major.
Or you’re launching a product and you have a sign up that will seed your user base, you damn well make sure it works.
"The high-gain antenna has a beamwidth of 0.5° for X-band, and 2.3° for S-band."
At 130-150 AU, the earth is always within about 0.4° of the sun. Since commands are sent on S-band, pointing directly at the sun gets a pretty good signal.
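A quick sanity check of that 0.4° figure: from d AU away, the Earth never strays more than asin(1 AU / d) from the Sun, since Earth orbits at 1 AU. A minimal sketch:

```python
import math

def max_earth_sun_separation_deg(d_au: float) -> float:
    """Max angle between Earth and the Sun as seen from d AU away (Earth at 1 AU)."""
    return math.degrees(math.asin(1.0 / d_au))

for d in (130, 150):
    print(f"{d} AU: {max_earth_sun_separation_deg(d):.2f} deg")
# 130 AU -> 0.44 deg, 150 AU -> 0.38 deg
```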
It fails like buildings near fault lines, because the ground moves under them. Think broken dependencies, operating system obsolescence, et cetera.
You can reorient as much as you want with a reaction wheel, and the only cost is electricity.
Then "totally unrecoverable" is rare and the term bricked has always been relative. Your bricked device may be as good as new to someone who has a JTAG adapter and knows how to use it.
Jokes aside I think it's mostly a value/cost thing. NASA's software has different requirements and failure scenarios than most software developers (in this context I will not call them software engineers) have to care about. Verifiable correctness is hard to achieve, and in most devs' roles it's easier to just try something and see what happens, rather than know what'll happen up front.
- Schedule the reboot
- do my changes
- Make sure everything was working properly
- Go get lunch
- Notice a bunch of pages and alarms about a firewall going offline
- Rush back to my office
- Login to the firewall
- Schedule the reboot
- Re apply the changes
- Test it again
- CANCEL THE FING REBOOT THIS TIME
- Eat my now cold lunch
But do you have reason to believe they're working on a detailed public report?
Because if they're not, then you missed option "NASA informs the public immediately, but never makes the details available" which would be unfortunate.
Also they probably already answered a lot of these questions internally during the last week, so it wouldn't hurt to put some of that information out.
Nah, these systems are simple and incredibly well documented. A ton of people have operated them, too. They'll be fine.
I'd expect something like that to happen to a university cubesat lol.
I would love to see a picture of your computer pulling itself up by the straps on its physical boots the next time you press reset. Bleeding when a process is "killed".
Even something as superficially similar to real-world behaviour as "queueing" is implemented in a very different way in software, for the most part.
It's got to be possible to launch something now that goes faster than Voyager 2, so that the region beyond can be explored faster?
My impression from friends working in other engineering disciplines is that software engineering works similarly to other fields: the more risk to human lives is involved, the more testing, redundancy, etc is involved.
Because that part is a side benefit not worth launching for, and the main motivation (grand tour of the outer planets) for the Voyagers relied on a once-in-175-years alignment of the planets.
But maybe we’ll have nice probes ready to launch in the 2150s next time the alignment happens.
And come on, you can sanity check those numbers.
"1 G is approximately equal to 22 miles/hour^2."
Think about this. That means it would take you an hour to reach 22mph.
The real speed you'd be going after that hour at 1G is about 22 miles per second.
And sqrt(14753561838 / 132) is 10572, not 84331. That's not even close.
Also it used the Voyager 1 distance which is significantly different.
Once we fix all the numbers, each half of the trip is sqrt(20 billion km / 60m/s^2) which is a week. So two weeks total.
At 1G, each half is 16.4 days, so it takes a month total.
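The corrected figures are easy to verify with a non-relativistic "accelerate halfway, flip, decelerate halfway" calculation, assuming a total distance of roughly 20 billion km:

```python
import math

G = 9.81          # m/s^2, ~1 g
D_TOTAL = 20e12   # metres: roughly Voyager 2's ~20 billion km

def brachistochrone_days(distance_m: float, accel_ms2: float) -> float:
    """Accelerate to the midpoint, flip, then decelerate (non-relativistic).
    Each half takes t = sqrt(distance / accel), from d/2 = a*t^2/2."""
    return 2 * math.sqrt(distance_m / accel_ms2) / 86400

print(f"at 1 g:      {brachistochrone_days(D_TOTAL, G):.1f} days")   # ~33 days
print(f"at 60 m/s^2: {brachistochrone_days(D_TOTAL, 60):.1f} days")  # ~13 days
```

This matches the half-trip figures above: ~16.5 days per half at 1G, ~a week per half at 60 m/s².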
sudo wg-quick down wg0
Nice one, mate. Had to drive back to log in and bring up that interface. I still do this, FWIW, but now I use `at` to schedule "up" 1 min in the future haha. So far so good, though it's not smart :)

> Are you both misunderstanding that? Your link says:
> Launched in 1977, the Voyager 2 spacecraft is more than 12 billion miles (20 billion kilometers) from Earth, using five science instruments to study interstellar space. To help keep those instruments operating despite a diminishing power supply, the aging spacecraft has begun using a small reservoir of backup power set aside as part of an onboard safety mechanism. The move will enable the mission to postpone shutting down a science instrument until 2026, rather than this year.
> Switching off a science instrument will not end the mission. After shutting off the one instrument in 2026, the probe will continue to operate four science instruments until the declining power supply requires another to be turned off. If Voyager 2 remains healthy, the engineering team anticipates the mission could potentially continue for years to come.
Going from 5 science instruments to 4 in 2026 is hardly "dead."
Technically, the speed of sound depends on the medium, and 15km/s is much slower than the speed of sound in interstellar space. (Which the sources I can find give at ~100km/s.)
and we are charging our phones daily....
“Lucky”, only in the sense that (1) completing a large government project on time, and (2) not having some kind of disaster (particularly at launch) screw up the mission require a certain degree of luck on top of planning and execution (though not relying completely on that luck is also why there were two Voyagers): we got all the gravity boosts because the mission was planned around an alignment that enabled it to do that and visit each of the outer planets (which was really the main goal; the beyond-the-solar-system part was gravy).
An apropos and famous example is the Ariane 5 rocket mishap. The same validated software from the Ariane 4 was reused, but the hardware design changed. Specifically, the horizontal velocity of the Ariane 5 exceeded that of its predecessor and overflowed the range of the 16-bit variable used to store it.
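For illustration only — the real Ariane code was Ada, and the out-of-range conversion raised an unhandled operand-error exception rather than wrapping silently — but the underlying range problem looks like this (values hypothetical):

```python
import struct

def to_int16_unchecked(x: float) -> int:
    """Convert a float to a signed 16-bit integer with no range check:
    the value is silently reduced mod 2^16 and reinterpreted as signed."""
    return struct.unpack("<h", struct.pack("<H", int(x) & 0xFFFF))[0]

print(to_int16_unchecked(30000.0))  # fits in int16: 30000
print(to_int16_unchecked(40000.0))  # out of range: wraps to -25536
```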
Worked on me, I guess :-)
More seriously, Mikrotik routers have a nice feature where they will roll back your config change if the connection you’re configuring one over stops responding to keepalives. Like a lot of Mikrotik features, it’s probably copied from some Serious Business network OS, but I wouldn’t know.
Standardisation - in the big 'E' Engineering world, there would be a recognised international standard for Web Apps that ensured/enforced that all Web Apps supported this functionality, or they would not be approved for use.
Another factor is Accountability. A senior Software 'Engineer' would have to take personal responsibility (liability, accountability) that the software product they are producing and/or overseeing met all these requirements and personally sign off that these standards have been met. If the product were to fail at any point and it was determined that the cause was negligence in following the standard, any damages sought (not common, but not unheard of) would ultimately find their way to the accountable individual and their insurance.
In cases where budgets/importance don't allow for this level of scrutiny, there would still be paperwork signed by the producer of the software and the client acknowledging deviation from the standard and waiving any recourse for doing so.
[0] https://www.baen.com/on-basilisk-station.html
[1] https://www.baen.com/readonline/index/read/sku/0743435710
In this instance, I was adding iptables rules to a host. I wrote a script that added all the rules to enable expected network traffic, then set the default policy to DROP. Before running this script, I scheduled another script to run which would delete all the rules I'd added. I did not remember to have it set the default policy back to ACCEPT.
The script runs, everything looks good. Five minutes later, pagers start going off.
Thankfully we were able to remotely power-cycle the host and didn't have to drive down to the datacenter in order to fix the issue.
- Chapter 17, A Deepness in the Sky
It makes no sense to use that term in this article; not to mention it's usually spelled "klick".
I wouldn't know who came first, but it's a feature of JunOS (Juniper) as well: every config apply first applies the config, then waits for confirmation on the terminal where it was run. If confirmation isn't given within X seconds, it reverts the config change.
# ifconfig eth0 down; ifconfig eth0 up
that said, I have done this: # reboot
...on the wrong system

Voyager 2 is 160.7 AU.
Light falls off in brightness with the square of distance. So the sun will be 160.7^2 ≈ 25,824.5 times fainter from Voyager 2 than it is from Earth. (Since Earth is at 1 AU.)
The apparent magnitude of the sun from Earth is -26.72. Each step in magnitude is multiplying by 2.512. (2.512^5 = 100, so 5 steps of magnitude is a factor of 100).
log base 2.512 of 25,824.5 = 11.0295.
11.0295 + -26.72 = -15.6905.
The apparent magnitude of the full moon is only −12.74 (lower is brighter). So from Voyager 2 the sun still looks about 15 times brighter than the full moon does from Earth. The sun is still many many times brighter than the next brightest star in the sky, Sirius, which has an apparent magnitude of −1.46.
Sources: Voyager 2 distance is https://voyager.jpl.nasa.gov/mission/status/ all else is Wikipedia.
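For anyone who wants to re-run the arithmetic above (using the fact that one magnitude step is a factor of 2.512 ≈ 100^(1/5), so log base 2.512 equals 2.5 × log10):

```python
import math

SUN_APPARENT_MAG_EARTH = -26.72
FULL_MOON_MAG = -12.74
DIST_AU = 160.7  # distance used in the comment above

dimming_factor = DIST_AU ** 2                  # inverse-square law
mag_change = 2.5 * math.log10(dimming_factor)  # = log base 2.512
sun_mag_at_probe = SUN_APPARENT_MAG_EARTH + mag_change

# Brightness ratio vs. the full moon as seen from Earth
ratio = 10 ** (0.4 * (FULL_MOON_MAG - sun_mag_at_probe))

print(f"dimming factor: {dimming_factor:.1f}")       # ~25824.5
print(f"apparent magnitude: {sun_mag_at_probe:.2f}") # ~-15.69
print(f"{ratio:.0f}x brighter than the full moon")   # ~15x
```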
ISO 8601 works for everyone.
It avoids confusing the Americans who otherwise put the month in the wrong place.
It avoids being ambiguous for everyone who may otherwise be worried that it was written by an American with the month in the wrong place, when the day is less than 13.
2023-10-09 is the 9th of October and it's clear to everyone regardless.
It also has the benefit of sorting chronologically if sorted "by name" when used in a filename as it's largest unit on the left, smallest on the right.
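That sorting property is easy to demonstrate — zero-padded ISO 8601 strings sort chronologically even when sorted as plain text:

```python
# Lexicographic sort equals chronological sort for zero-padded ISO 8601.
dates = ["2023-10-09", "2022-12-31", "2023-01-15"]
assert sorted(dates) == ["2022-12-31", "2023-01-15", "2023-10-09"]

# US-style strings don't have this property: sorting "by name" scrambles the years.
us_dates = ["10/09/2023", "12/31/2022", "01/15/2023"]
print(sorted(us_dates))  # ['01/15/2023', '10/09/2023', '12/31/2022']
```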
No it isn't. It's 20 billion kilometers. It's closer than Voyager 1.
In a way this sounds like a way to keep that from happening.
it's a lower bar for entry, any kid can run a compiler -- it's harder to acquire a bulldozer and planning permits.
similarly if you look at 'diy' or 'garage' engineering you can find all sorts of hazardous/poorly-built/never-should-have-existed death traps. How many people have died in recent years from fractal burning?
it's still engineering -- they're building their own tools -- but it's within a realm (DIY/maker) that historically has undersold the dangers inherent with the things.
Why? Mostly because they're self-taught, mentorless, and without the direction within their education to be taught the importance of engineering rigor, similar to the kid given the compiler who starts making forkbombs and goofy dialogs to prank their friends.
Second order calculations use careful analysis of the signal pattern in telemetry data- IIRC you can see a slow stretch of the phase which can be used to estimate distance and velocity with high accuracy.
Voyager, along with Apollo, stands among the finest examples of human engineering done yet- we got a bunch of people to the moon and back, and built a probe that still operates 50 years later... farther than anything else humans have launched... I'd be lucky if I can deploy my web app once a week.
Any time I’m ever doing a pfctl to change my router’s firewall rules I schedule a “revert to the n-1 rule set” job two minutes from now to avoid the trip to the basement & hunt for the serial cable.
I do +2 because once I was too close to the next minute, fat-fingered the update command & instinctively corrected it. But the change had reverted before I hit enter on the new update, so of course I blocked too much traffic and wedged my SSH connection, triggering the cursing and trip to the basement with the serial cable.
Imagine deploying a billion dollar piece of hardware and hoping that it has enough intelligence to keep itself from burning up before you can reestablish contact!
there is totally standardization. At the building block level. TCP/IP, Protocols on top of that, language standards etc.
Web Apps are complex, why would there be a standard? Just like there's no standard for cars, other than for some of their components like wheels or headlights.
Not sure who came first, but OpenWRT does this too: if you make a breaking change in the web interface and connectivity is lost for 60 seconds, it will roll back the changes.
See the Australian Design Rules (which happens to form the basis of most UNECE and Canadian transport regulations) if you want to see how detailed they are.
He'd set the resolution to 1024x768 on the 800x600 panel; once I fixed that I was good to go!
Many industrial sites use 450kW or more (see any bauxite processing plant) so are coupled with a power plant or have agreements for dedicated generation at nearby plants.
That's what happens when there's no liability. Even if you're a billion dollar corporation, you can just slap some standard legal boilerplate disclaimer on the license agreement and absolve yourself of all responsibility even when hackers pwn the private information of your customers. They can't complain since technically the contract says there were no guarantees.
Some version if this legal boilerplate can be found in virtually all software licenses, including open source ones:
THE SOFTWARE IS PROVIDED "AS IS",
WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT.
IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING
THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY,
WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
Start holding developers and especially corporations directly accountable and liable for any problems and I guarantee they'll start respecting it the very next second.

To continue the analogy from earlier - standards wouldn't mean all web applications would have to be designed, programmed and work exactly the same way, but it would mean that they would need to be formally tested (to an approved test plan), and to use your example, would need to demonstrate that each of those layers of fallbacks (as dictated by the standard and covered in the test plan) operate correctly in order to be certified.
If anything, I think software has a huge advantage over physical world engineering in that testing can be replicated at virtually no cost whenever a change is made to the design. I shudder to think how many cars get trashed in order to meet vehicle safety testing requirements.
Some failures are fairly common, and individual failures might be fairly inert but have more serious consequences if they are cascaded with another specific failure.. for example, cruise control enable + failure of steering wheel control pad _and_ previously undetected failure of brake sensor/brake light circuit = cruise control stuck ON. Actually, this failure is inert if the cruise control is OFF when it happens. Contrived example but you get the idea ...
I have seen a lot of FMEDA (and other tool) use lately to combat concerns with cascading failure, but not sure what's currently standard at NASA or how they deal with this. I would think cascading failure would be their expected scenario on a 10+ year unmanned mission.
I am paraphrasing what he said as a non-technical person: Voyager has both a dish receiver, and a pole antenna. The dish is the usual mechanism for comms but in an emergency such as this they would send commands to the other antenna. To do this they would turn the main tracking station dish up to max, and send a "TURN AROUND!" signal out.
But prior to that they had to alert the local electricity grid, and the local air traffic control to not have any planes flying over at the time!
I guess the Voyagers are too far away for this manoeuvre now.
Here is the Australian Standard for Caravan and light trailer towing components, Part 1: Towbars and towing brackets
https://store.standards.org.au/product/as-4177-1-2004
There are thousands of these documents covering everything to do with transport from the vehicles to the reflectivity of street signs.
The regulation (at least in my state) is that only engineers who are registered as Registered Engineers are permitted to carry out professional engineering services in this state.
- begin change control at 4:55pm on Friday before Christmas
- schedule reboot
- paste changes
- make sure everything is working properly
- leave security key on desk
- go to christmas party
- firewall goes offline, pages go off
- remotely log into firewall with phone
- rush back to office to get security key
- accidentally type init 1 hanging server
- discount datacenter remote hands not picking up the phone
- rush to datacenter to power cycle server
- :(

I agree. It probably doesn’t make much difference now. There are very unlikely to be any important data that will be lost. The mitigation process they already implemented seems prudent and sufficient.
During earlier parts of the mission, like planetary encounters, it would make a difference. Even a two week wait could be much too long in those circumstances.
In vanilla, all antennas are omnidirectional and as long as there is a working solar panel on a probe, it can be used as a relay sat.
But it does give you a similar feeling when you schedule a maneuver at a point where the planet/moon blocks your signal, and you only realize after it's too late.
Here's how I would describe the possible analysis approaches in broad terms, avoiding terminology that NASA does not officially use.
- Start from the hazard of being pointed in the wrong direction and work backwards to identify the causes, forming a tree.
- Start from the event of commanding the wrong direction and work forwards to identify mitigations or the lack thereof, also forming a tree.
- Start from looking at a component or subsystem, list all the ways it can fail without regard for the application. Then consider the application and work up towards the causes/events.
- Close any gaps between the top-down and bottom-up approaches.
"Software Fault Tree Analysis (SFTA) is a top-down approach to failure analysis which begins with thinking about potential failures or malfunctions (What could go wrong?) and then thinking through all the possible ways that such a failure or malfunction could occur. Fault Tree Analysis (FTA), is often used by the hardware teams to identify potential hazards that might be caused by failures in hardware components or systems, but with the SFTA, the software isn’t considered the hazard, but it can be a cause or contributor when considered in the context of the system."
"The Software Failure Modes and Effects Analysis (SFMEA) is a bottom up approach where each component is examined and all the possible ways it can fail are listed. Each possible failure is traced through the system to see what effect it might have on the system and to determine if it results in a hazardous state. Then the likelihood of the failure and the severity of the system failure can be considered."
But, to the earlier post, these are driven by hard requirements; specifically adherence to NASA STD 7150.2 and NPR 7150.2. Developers/contractors can tailor/waive them with pre-approval but, in general, they tend to go in the direction of fewer requirements, not more. This may all be moot because I think Voyager pre-dates any of those requirement documents and I'm not sure what existed in the late 1970s.
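As a toy illustration of the top-down (fault tree) idea described above — all event names here are hypothetical, not drawn from any actual NASA analysis — a hazard can be modeled as AND/OR gates over basic events:

```python
# Basic events are just names; gates combine child conditions.
def event(name):
    return lambda faults: name in faults

def AND(*children):
    return lambda faults: all(c(faults) for c in children)

def OR(*children):
    return lambda faults: any(c(faults) for c in children)

# Hypothetical tree for the hazard "antenna points the wrong way":
# either a bad command was uplinked, or the sensors disagreed AND
# no cross-check caught the disagreement.
wrong_pointing = OR(
    event("bad_command_uplinked"),
    AND(event("sensor_disagreement"), event("no_cross_check")),
)

print(wrong_pointing({"sensor_disagreement"}))                    # False: mitigated
print(wrong_pointing({"sensor_disagreement", "no_cross_check"}))  # True: hazard
```

Walking such a tree bottom-up (listing each component's failure modes and tracing their effects) is the SFMEA direction described in the quote above.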
Supposedly you could literally slap them out of the air if you were at the muzzle, when they had just begun accelerating.
They were in the James Bond movie You Only Live Twice.
I did this exact thing in the small last night - wanted to work on fixing a faulty switch, so my wife and I get on the intercom system on our landline phone so she can tell me when the correct breaker is off.
And of course, breaker #1 is the one that controls the intercom, severing our connection.
Weird discussion since we're talking about a piece of hardware that is working fine and doing exactly what it was told to do. They just pointed it in the wrong direction and need to wait for it to recover, which it is set up to do.
We wouldn't say a server with the router IP address misconfigured was "bricked." (or maybe we would... I guess the jargon changes, but that would seem pretty crazy to me)
- my quick calculation shows just over 7 hours 6 minutes, based off Google saying the moon is 384,400 km away.
- It took 3 days for Apollo 11 to reach the moon.
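A sketch of where a figure like that comes from, assuming a probe moving at roughly Voyager 2's ~15 km/s (the exact minutes depend on the speed assumed):

```python
MOON_DISTANCE_KM = 384_400
PROBE_SPEED_KMS = 15  # roughly Voyager 2's speed relative to the Sun

seconds = MOON_DISTANCE_KM / PROBE_SPEED_KMS
hours, rem = divmod(seconds, 3600)
print(f"{int(hours)} h {int(rem // 60)} min")  # 7 h 7 min
```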
Edit: The comment you deleted right before I could reply was good! I think people would enjoy and benefit from your description of how the process works if you're willing to repost it.
As you noted the reliability requirement did in fact flow down from an engineering requirement which is why they exceeded the minimum FMEA standards. There's no official guidance on where and how exactly to track that information so they put it in the usual place but in an unusual way. The lack of a standard during Voyager's time probably impacted the visibility of the work more than the substance.
Being drowned out is harder than you might think. The maximum data rate of a weak signal is about 1.44 x [bandwidth] x [signal-to-noise ratio]. If you transmit across a 200MHz band, and your signal is a million times weaker than the noise, you can still do hundreds of bits per second.
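That rule of thumb is the low-SNR limit of the Shannon capacity, C = B·log2(1 + SNR) ≈ 1.44·B·SNR. A quick check of the numbers:

```python
import math

def capacity_bps(bandwidth_hz: float, snr: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR).
    For SNR << 1, log2(1 + snr) ~ snr / ln(2) ~ 1.44 * snr."""
    return bandwidth_hz * math.log2(1 + snr)

# 200 MHz band, signal a million times weaker than the noise:
print(f"{capacity_bps(200e6, 1e-6):.0f} bits/s")  # 289 bits/s
```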
LOL. Outside of computer dorks and whatnot, it's not very common for people to have a large selection of cables to go digging through. Maybe a drawer full of decrepit USB cables that came as chargers with a phone, but most people I know lose those cables and are constantly buying replacements. So expecting the average person to have a box full of random cables suggests to me you might be living in a bit of a bubble.
The New Horizons probe was launched at a much faster speed than the Voyagers, actually setting the record for the fastest launch in history, but because it didn't get those gravity assists it will never overtake them.
hell, even homeless people such as myself tend to have giant collections of random cables, even when they don’t actually have anything that requires charging.