SteelEcho & TuringDrop
I was revisiting how the ENIAC’s decimal accumulators were employed for wartime trajectory calculations and wondered how precise it was compared to today’s simulations. What do you think?
The ENIAC was an admirable marvel, but its precision was more a testament to brute force than elegance. For a single trajectory it could deliver a result in well under a minute, but its fixed integration step and ten-digit fixed-point decimal arithmetic let rounding and truncation errors accumulate, so the final numbers were off by perhaps a few percent. Modern HPC, with 64-bit floating-point arithmetic, adaptive time-stepping, and error control, nails the same problem in seconds with errors below one part in a million. So while ENIAC was revolutionary for its day, it was no match for today’s simulations.
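If you want to see what that buys you, here is a minimal sketch of the modern approach: an adaptive, error-controlled integration of a drag-affected shell, written in Python with SciPy. The drag constant, muzzle velocity, and elevation are invented for the example, not taken from any firing table.

```python
# A minimal, illustrative trajectory integration with adaptive stepping and
# error control. The drag constant, muzzle velocity, and elevation below are
# invented for the example, not taken from any historical firing table.
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81           # gravitational acceleration, m/s^2
K = 1e-4           # illustrative quadratic-drag constant per unit mass, 1/m

def rhs(t, y):
    """State y = [x, z, vx, vz]; quadratic air drag opposing the velocity."""
    x, z, vx, vz = y
    speed = np.hypot(vx, vz)
    return [vx, vz, -K * speed * vx, -G - K * speed * vz]

def hit_ground(t, y):
    return y[1]                      # event fires when altitude z crosses zero
hit_ground.terminal = True
hit_ground.direction = -1            # only on the way down

v0, elev = 450.0, np.radians(45.0)   # illustrative muzzle velocity and elevation
y0 = [0.0, 0.0, v0 * np.cos(elev), v0 * np.sin(elev)]

sol = solve_ivp(rhs, (0.0, 300.0), y0, events=hit_ground,
                rtol=1e-9, atol=1e-12)   # keep the relative error well below 1e-6

x_impact = sol.y_events[0][0][0]
t_impact = sol.t_events[0][0]
print(f"impact at x = {x_impact:.3f} m after {t_impact:.2f} s")
```

With tolerances like these the whole flight integrates in a fraction of a second, and tightening rtol is all it takes to push the error bound further down.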
True, brute force and fixed-point decimal words. I prefer clean 64-bit precision, but ENIAC’s legacy is still useful for quick checks.
I get it: there’s a certain charm in a 64-bit machine that makes you feel you’re in control of the math, not the punch-cards. ENIAC, on the other hand, was a plugboard-programmed machine you had to rewire and coax into giving you a number. Its “quick checks” were handy when you needed an order-of-magnitude estimate, but if you wanted the exact path of a shell, you’d still be in for a rough ride. In modern terms, ENIAC’s legacy is more about inspiring the idea that a machine can compute physics than about the precision it actually offered.
Precision is measured in error bounds, not punch‑cards. ENIAC taught the concept, but we still rely on adaptive stepping to keep residuals below one part in a million.
Exactly, you’re hitting the point where the math stops being an art and starts being a discipline. ENIAC didn’t do adaptive stepping; it marched a fixed-step integration in ten-digit fixed-point decimal, so its residuals were a handful of digits shy of what you see today. Now we crank up to 64-bit floats, monitor the local truncation error, and adjust the step size on the fly so the accumulated error stays below one part in a million. It’s the difference between guessing that a shell will land near its target and having a software tool that tells you where it will strike, to within a centimeter.
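To make that control loop concrete, here is a rough sketch of the mechanism: estimate the local truncation error by step doubling and resize h every step. The toy ODE, the tolerance, and the 0.9 safety factor are illustrative choices, not any particular solver’s internals.

```python
# Adaptive stepping by step doubling: compare one full RK4 step against two
# half steps, use the difference as a local error estimate, and grow or
# shrink h to keep that estimate below a tolerance. Illustrative only.
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t, y, t_end, h=1.0, tol=1e-6):
    """March from t to t_end, resizing h to keep the per-step error below tol."""
    while t < t_end:
        h = min(h, t_end - t)
        full = rk4_step(f, t, y, h)                       # one full step
        half = rk4_step(f, t + h / 2,
                        rk4_step(f, t, y, h / 2), h / 2)  # two half steps
        err = np.max(np.abs(half - full))                 # local truncation error estimate
        if err <= tol:
            t, y = t + h, half                            # accept the more accurate result
        # RK4 local error scales like h^5, hence the 1/5 exponent in the controller
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y

# Toy usage: vertical motion under gravity, checked against the closed form.
g = 9.81
f = lambda t, y: np.array([y[1], -g])            # y = [height, velocity]
y_end = integrate_adaptive(f, 0.0, np.array([0.0, 100.0]), 5.0)
print(y_end[0], 100.0 * 5.0 - 0.5 * g * 5.0**2)  # numeric vs. exact height at t = 5 s
```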
I’d schedule a four-step plan: verify the integrator, check the truncation bound, confirm the adaptive routine, then lock in the target to a one-centimeter zone. The margin of error must be logged, not just guessed.
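For the first step, this is roughly what I would run: a drag-free test shot whose range has a closed form, so the residual gets logged rather than guessed. The RK4 routine and the test parameters are placeholders of my own.

```python
# Illustrative "verify the integrator" check: integrate a drag-free shot,
# compare against the closed-form range, and log the residual.
import math

G = 9.81
v0, elev = 300.0, math.radians(30.0)            # placeholder test shot

def deriv(state):
    x, z, vx, vz = state                        # drag-free projectile
    return [vx, vz, 0.0, -G]

def rk4(state, h):
    k1 = deriv(state)
    k2 = deriv([s + h / 2 * k for s, k in zip(state, k1)])
    k3 = deriv([s + h / 2 * k for s, k in zip(state, k2)])
    k4 = deriv([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

h = 1e-3
state = [0.0, 0.0, v0 * math.cos(elev), v0 * math.sin(elev)]
prev = state
while state[1] >= 0.0:                          # march until the shell is below z = 0
    prev, state = state, rk4(state, h)

# interpolate the ground crossing, then compare with the analytic range
frac = prev[1] / (prev[1] - state[1])
numeric_range = prev[0] + frac * (state[0] - prev[0])
exact_range = v0 ** 2 * math.sin(2 * elev) / G
print(f"range residual = {abs(numeric_range - exact_range):.3e} m")   # log, don't guess
```

If that residual ever creeps above the one-centimeter budget, the plan stops at step one.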
Your plan sounds almost textbook, but remember that even the “verify the integrator” step can trip you up if you overlook the floating-point quirks the compiler introduces, such as fused multiply-adds or fast-math reorderings. Logging the residuals is fine, but you should also log the solver version and the compiler flags; those are the real culprits behind a one-centimeter drift. And if you’re going to lock in a target, keep a backup of the exact input file; history loves to throw a curveball.
Good point: log everything, especially the compiler flags. I’ll add a checksum on the input file and keep the version history in a separate log. That’s the only way to trace hidden drift back to its source.
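Something like this is the shape of the record I have in mind; the file names, the version string, and the JSON layout are all made up for the sketch.

```python
# Illustrative provenance log: hash the exact input file and record solver
# version, compiler flags, and the measured residual in an append-only log.
# All names and values here are placeholders, not a real toolchain.
import hashlib
import json
import platform
from datetime import datetime, timezone

def sha256_of(path):
    """Checksum of the input deck, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a tiny placeholder input deck so the sketch runs end to end.
input_path = "trajectory_case.json"
with open(input_path, "w") as f:
    json.dump({"v0": 450.0, "elevation_deg": 45.0, "drag_k": 1e-4}, f)

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "input_file": input_path,
    "input_sha256": sha256_of(input_path),
    "solver_version": "ballistics-solver 3.1.4",   # placeholder version string
    "compiler_flags": "-O2 -fno-fast-math",        # record the flags actually used
    "python": platform.python_version(),
    "range_residual_m": 2.7e-4,                    # placeholder; store the measured value
}

with open("run_log.jsonl", "a") as log:            # append-only run history
    log.write(json.dumps(record) + "\n")
```

One line per run; if the checksum or the flags differ between two runs, that is where to start looking for the drift.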