People always underestimate the level NASA operates at. Everyone bitches and moans, especially Musk simps, about how long SLS took and how much it cost, but it worked right the first time. In the case of the Voyager spacecraft, they're working with tech so old that all the original engineers are retired or dead. NASA rocks.
I understand your point and completely agree that NASA has produced some amazing technological feats, but we could probably use a different example than the SLS to highlight their accomplishments. Even with supposedly repurposed rocket engines and technology from the Shuttle era, that project is billions of dollars over budget and years behind schedule. If you want to highlight how amazing it is that SLS has actually flown despite all the political manipulation associated with it, then I'd probably agree with you in that sense. This is no criticism of the engineers, but completely ignoring the issues with the project as a whole, not just the financial ones, seems a bit disingenuous.
Here’s a good article from Berger talking about what the Government Accountability Office thinks of the project: https://arstechnica.com/space/2023/09/nasa-finally-admits-what-everyone-already-knows-sls-is-unaffordable
The budget wasn’t really relevant to my point. And it did work correctly the first time.
All I'm saying is you could choose a better example, and NASA is full of them.
But let's say I built you a car that already came with an engine and some other important parts, just to make it quicker and cheaper to get that car into your hands. Unfortunately, you want me to complete work on the car in five different states and use components from those areas. Guess what: the car is now about $5 million over budget and 5 years behind schedule. Not only that, but we encountered issues during the first test that are going to require more fixes ($$$) and more delays for the second test.
In this situation, you’re saying it’s great, it ran correctly the first time because it went down the road and back, and budgets and timelines don’t matter. I’m saying ehhhh, not really - we’re over budget by millions, delayed by years, and there were issues, even though we repurposed stuff that was in a car that actually ran a few years back. It’s great we built the car, but the project itself isn’t something that I would showcase as my best work.
I just have to imagine how interesting of a challenge that is. Kinda like when old games only had 300kb to store all their data, so you had to program cool tricks to get it all to work.
No yeah, it's like that, plus the thing is a light day away, and on top of that it's malfunctioning at the hardware level. Incredible
To me, the physics of the situation makes this all the more impressive.
Voyager has a 23 watt radio. That's about 10x as much power as a cell phone's radio, but it's still small. Voyager is so far away that it takes 22.5 hours for the signal to get to Earth traveling at light speed. This is a radio beam, not a laser, and it's an extraordinarily tight beam for a radio, only about 0.5 degrees wide, but that still means the beam is thousands of times wider than the Earth by the time it arrives. It's being received by some of the biggest antennas ever made, but they're still only 70 m wide, so each one only receives a tiny fraction of the power transmitted. So, they're decoding a signal that's around 10^-18 watts.
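For anyone who wants to sanity-check that last figure, here's a rough link-budget sketch. The transmit power, one-way light time, and 70 m receiver are the numbers above; the frequency, the 3.7 m spacecraft dish, and the efficiencies are assumed round values, so treat the output as an order-of-magnitude estimate only:

```python
import math

# Assumed round numbers, not official mission or DSN figures.
TX_POWER_W = 23.0        # transmitter power (from the comment above)
FREQ_HZ = 8.4e9          # roughly X-band downlink (assumed)
TX_DISH_M = 3.7          # spacecraft high-gain antenna diameter (assumed)
RX_DISH_M = 70.0         # DSN dish diameter (from the comment above)
EFFICIENCY = 0.55        # assumed aperture efficiency for both dishes
LIGHT_HOURS = 22.5       # one-way light time (from the comment above)

c = 3.0e8                                    # speed of light, m/s
wavelength = c / FREQ_HZ                     # ~3.6 cm
distance_m = LIGHT_HOURS * 3600 * c          # ~2.4e13 m

def dish_gain(diameter_m):
    """Gain of a parabolic dish: efficiency * (pi * D / lambda)^2."""
    return EFFICIENCY * (math.pi * diameter_m / wavelength) ** 2

eirp = TX_POWER_W * dish_gain(TX_DISH_M)         # power effectively aimed at Earth
flux = eirp / (4 * math.pi * distance_m ** 2)    # W per m^2 arriving at Earth
rx_area = EFFICIENCY * math.pi * (RX_DISH_M / 2) ** 2
received_w = flux * rx_area                      # watts collected by one dish

print(f"received power ~ {received_w:.1e} W")    # ~4e-19 W with these assumptions
```

That lands within an order of magnitude of the 10^-18 W figure above, which is really the point: the exact number depends on the antenna gains and frequency, but the received signal is unimaginably faint either way.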
So, not only are you debugging a system created half a century ago without being able to see or touch it, you’re doing it with a 2-day delay to see what your changes do, and using the most absurdly powerful radios just to send signals.
The computer side of things is even more impressive than this makes it sound. A memory chip failed. On Earth, you'd probably try to figure that out by physically looking at the hardware, then probing it with a multimeter or an oscilloscope or something. They couldn't do that. They had to debug it by watching the program as it ran, as it tried to use the faulty memory chip and failed in interesting ways. They could interact with it, but only on a 2-day delay. They also knew that one wrong move could cost them what little control they had, and the spacecraft would be fully dead.
So, a malfunctioning computer that you can only interact with at 40 bits per second, that takes 2 full days between every send and receive, that has flaky hardware and was designed more than 50 years ago.
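Just to put numbers on that (a rough sketch: the 40 bit/s rate and 22.5-hour light time are the figures quoted above, and the payload size is purely illustrative):

```python
ONE_WAY_LIGHT_HOURS = 22.5   # one-way light time quoted above
DOWNLINK_BPS = 40            # bits per second, as quoted above
PAYLOAD_BYTES = 1024         # purely illustrative size for a dump or patch

transmit_minutes = PAYLOAD_BYTES * 8 / DOWNLINK_BPS / 60
round_trip_hours = 2 * ONE_WAY_LIGHT_HOURS

print(f"moving {PAYLOAD_BYTES} bytes takes ~{transmit_minutes:.0f} min on the link")
print(f"but every command/response cycle adds ~{round_trip_hours:.0f} h of light time")
```

So every debugging iteration is dominated by the roughly 45-hour round trip, not by the data itself.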
Is there a Voyager 1, uh…emulator or something? Like something NASA would use to test the new programming on before hitting send?
And you explained all of that WITHOUT THE OBNOXIOUS GODDAMNS and FUCKIN SCIENCE AMIRITEs
Finally I can add my take to this. I've worked in memory testing for years, and I'll tell you that it's actually pretty expected for a memory cell to fail after some time. So much so that we typically build redundancy into the memory: we add more memory cells than we activate at any given time. When shit goes awry, we can reprogram the memory controller to remap the memory so that the bad cells are mapped out and unused spares are mapped in. We don't typically probe memory cells unless we're doing some type of in-depth failure analysis. Usually we just run a series of algorithms that test each cell, identify which ones aren't responding correctly, and map those out.
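As a toy illustration of that test-and-remap idea (a generic sketch, not how Voyager's computers or any particular memory controller actually implement it; the class and the stuck-at-zero fault model are made up for illustration):

```python
class RemappableMemory:
    """Toy memory with spare cells; failing addresses get remapped to spares."""

    def __init__(self, main_size, spare_size, bad_addresses=()):
        self.main = [0] * main_size
        self.spares = [0] * spare_size
        self.stuck = set(bad_addresses)   # simulated stuck-at-zero cells
        self.remap = {}                   # bad main address -> spare index
        self.next_spare = 0

    # --- raw access, including the simulated defects ---
    def _raw_write(self, addr, value):
        if addr not in self.stuck:        # a stuck cell silently ignores writes
            self.main[addr] = value

    def _raw_read(self, addr):
        return 0 if addr in self.stuck else self.main[addr]

    # --- normal access goes through the remap table ---
    def write(self, addr, value):
        if addr in self.remap:
            self.spares[self.remap[addr]] = value
        else:
            self._raw_write(addr, value)

    def read(self, addr):
        if addr in self.remap:
            return self.spares[self.remap[addr]]
        return self._raw_read(addr)

    # --- very simplified march-style test: write patterns, read them back ---
    def test_and_remap(self):
        for pattern in (0x55, 0xAA):
            for addr in range(len(self.main)):
                if addr in self.remap:
                    continue              # already served by a spare
                self._raw_write(addr, pattern)
                if self._raw_read(addr) != pattern and self.next_spare < len(self.spares):
                    self.remap[addr] = self.next_spare
                    self.next_spare += 1
        return sorted(self.remap)


mem = RemappableMemory(main_size=64, spare_size=4, bad_addresses={3, 17})
print("mapped out:", mem.test_and_remap())   # -> [3, 17]
mem.write(3, 123)
print(mem.read(3))                           # 123, now served from a spare cell
```

Real controllers do this in hardware with far more sophisticated test patterns, but the principle is the same: detect the bad cells, then route around them.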
None of this is to diminish the engineering challenges that they faced, just to help give an appreciation for the technical mechanisms we’ve improved over the last few decades
"pretty expected for a memory cell to fail after some time"
50 years is plenty of time for the first memory chip to fail. Most systems would face total failure from multiple defects in half that time, even WITH physical maintenance.
Also remember it was built with tools from the 70s, which is probably an advantage, given everything else is still going.
SWEs have new standards now, and I think we should hold them to those standards. Considering how shit most modern websites are these days, I think it would only be beneficial.
Say that to corporate. I’m perfectly willing (eager, even) to write actually good software, but I’m forced to work within a budget and on top of the pile of despair we call “tech stack”. Everything is about 20 orders of magnitude more complex than it needs to be, nobody has time to do anything properly and everything is always kind of burning.
Keep in mind too these guys are writing and reading in like assembly or some precursor to it.
I can only imagine the number of checks and rechecks they probably go through before they press the “send” button. Especially now.
This is nothing like my loosey goosey programming where I just hit compile or download and just wait to see if my change works the way I expect…