And the worst part is when it actually does work and you have no fucking idea what was going wrong before.
That’s step zero: rule out black magic.
I wonder if there’s an OS out there that parity-checks every operation, analogous to what’s planned for quantum computers.
Unrelated, but the other day I read that the main computer for core calculations at Fukushima’s nuclear plant used to run a very old 4-core CPU. Every calculation was performed on all four cores, and the results had to be exactly the same. If one of them differed, they knew there had been a bit flip and could discard that calculation from that core.
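In code, the idea looks something like this (a minimal Python sketch, not the plant’s actual system; redundant_run and the majority-vote rule are my own illustration):

```python
from collections import Counter

def redundant_run(fn, args, copies=4):
    """Run the same computation several times and compare the results.

    A bit flip corrupts at most one run; the majority vote spots the
    outlier so it can be discarded. (Real lockstep systems run the
    copies on separate cores in parallel; this sketch just loops.)
    """
    results = [fn(*args) for _ in range(copies)]  # one run per "core"
    value, count = Counter(results).most_common(1)[0]
    if count <= copies // 2:
        raise RuntimeError(f"no majority among results: {results}")
    return value

print(redundant_run(lambda x: x * x, (12,)))  # 144 when the copies agree
```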
Me: “Hmm… No… No, the code is good; it’s the compiler that’s wrong.”
runs again
Yeah, but sometimes it works.
It’s even worse then: that means it’s probably a race condition, and do you really want to risk having it randomly fail in production or during an important presentation? Race conditions are also generally much harder to track down and fix than the more “reliable” kind of bug.
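For anyone who hasn’t been bitten yet, here’s a minimal sketch of why reruns “sometimes work” (Python; names are illustrative). Two threads doing an unsynchronized read-modify-write only lose updates when a thread switch lands between the read and the write, so the failure comes and goes:

```python
import threading

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        tmp = counter    # read
        tmp += 1         # modify
        counter = tmp    # write: a switch between the read and this
                         # write silently drops other threads' updates

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; it usually prints less, and a different number
# each run -- exactly the "sometimes it works" behaviour.
print(counter)
```

Guard the read-modify-write with a threading.Lock and it becomes deterministic again.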
Legit happens without a race condition if you’ve improperly linked libraries that need to be built in a specific order. I’ve seen more than one solution whose build needed to be run multiple times, or built project by project, in order to work.
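Classic cause: an undeclared dependency in a parallel build. Here’s a hypothetical GNU Make setup (file names made up) where `make -j` can fail on a clean tree but succeed on the second run, because the first run leaves the library behind:

```make
all: libfoo.a app

libfoo.a: foo.o
	ar rcs $@ $^

# BUG: app links against libfoo but doesn't declare libfoo.a as a
# prerequisite, so under 'make -j' the link can start before the
# archive exists ("cannot find -lfoo"). A rerun finds the leftover
# libfoo.a and succeeds, and building project by project also "works".
app: main.o
	cc -o $@ $^ -L. -lfoo
```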
Isn’t that the definition of a race condition, though? In this case, the builds are racing, and your success is tied to them happening to finish in the right order.
Or do you mean “builds 1 and 2 kick off at the same time, but build 1 fails unless build 2 is done; if you run it twice, build 2 does ‘no change’ and you’re fine”?
Then that’s legit.
The first is a surprise; the second is testing.