The Baikonur Cosmodrome (located in Kazakhstan) is the primary launch site of Roscosmos (Russia).
The Proton is a Soviet-designed heavy-lift rocket still in use today, not related to Rocket Lab’s Electron and Neutron families (which are also not American).
GLONASS is the Soviet/Russian equivalent of GPS.
Let’s be clear, this isn’t the single programmer’s fault. Everybody will eventually make a mistake. The fact that it wasn’t caught by mitigating measures such as reviews, tests, and audits is the real error we can learn from here.
A Proton-M booster carrying a GLONASS satellite crashed shortly after takeoff at Baikonur in 2013. The failure was caused by a gyroscope package that had been installed upside down. The receptacle had a metal indexing pin that should’ve prevented the incorrect installation. The worker simply pushed so hard that it bent out of the way.
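The software-side analogue of that indexing pin is a redundancy cross-check: with several rate sensors on board, a unit installed upside down shows up as a sign-flipped outlier during a known maneuver. A minimal hypothetical sketch (the sensor names, values, and tolerance are invented for illustration, not taken from the actual Proton-M avionics):

```python
from statistics import median


def cross_check(readings, tolerance=0.5):
    """Flag sensors that disagree with the majority by more than `tolerance`.

    `readings` maps a sensor name to its measured rate. The median of the
    redundant set serves as the reference, so a single inverted unit cannot
    drag the reference toward itself.
    """
    ref = median(readings.values())
    return [name for name, value in readings.items()
            if abs(value - ref) > tolerance]


# During a commanded pitch-over, all rate sensors should agree in sign.
suspect = cross_check({"gyro_a": 1.02, "gyro_b": 0.98, "gyro_c": -1.01})
print(suspect)  # → ['gyro_c']
```

Using the median rather than the mean matters here: with three sensors, one inverted reading would pull a mean-based reference far enough off that healthy sensors could be flagged too.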
When you make a foolproof design, God makes a better fool.
How did someone like this land a job at NASA?
Ah yes, it’s on the internet, so it must be American.
I think it’s safe to say that the guy did not land a job at NASA.
Didn’t NASA make the same mistake? I remember they put arrows on the slots after someone installed a sensor upside down.
I can’t recall anything like that. The only other crash I remember that was caused by a sensor was the Schiaparelli lander, and it was an ESA mission.
I remember it from a YouTube video from one of those engineering channels (might have been “Real Engineering”), probably a year ago. I only remember it because I thought “wow, they have to have so many safeties,” and that it’s good to draw markings on the parts themselves instead of just relying on technical drawings.
I don’t remember, but it might not have crashed (multiple sensors), and it might not have had a latch/notch. But it was a long time ago.
Edit: I still remember the big yellow arrow.
I know a story about a certain fighter jet built in the United States. The programmers for the radar had everything set, ran the tests over and over, and the radar kept acting up. I don’t want to put in too many details, but the end result was about $100M in research losses before they found out that the mechanic who installed the antenna on the front of the fighter had turned it a quarter turn too far, which must have stripped the threads and bent the antenna slightly. It took over a month for them to catch it. They just kept assuming the programming was wrong, because the antenna looked right to the eye from as close as anyone normally got.
“Baikonur”
Probably not NASA…
https://www.space.com/21811-russian-rocket-crash-details-revealed.html
Probably by being qualified, and also by being a human being who sometimes makes mistakes and had a bad day.
I think it was a different era, to borrow an awful phrase. In 1962 they were still figuring out best practices for reviews, tests, and audits. Even today, lone-hero output can get pretty far when processes aren’t followed.
Which they did learn from!
I guarantee every mistake like this at any good company leads to a leap forward in tooling for simulation, testing, code building, review, merging, local dev environments etc.
The good companies share their work, by open-sourcing their solutions and blogging their learnings, or by contributing to existing ones.
NASA’s ROI can hardly be measured. The number of industries their R&D has touched is massive.
The code for the US Space Shuttle program, written in the late ’70s and early ’80s (outsourced to IBM’s Federal Systems Division, which later became part of Lockheed Martin), is supposed to be some of the most flawless code ever produced, and it was written by hundreds of people, too.
https://archive.ph/HX7n4 really good article on it
But did leadership recognize that, or did the programmer catch the blame?