Shara & LaserDiscLord
Hey, I’ve been digging into how early LaserDiscs handled bandwidth, with analog FM video eating several MHz of spectrum plus the 16-bit PCM audio tracks that later discs added, and I’m curious how that compares to the compression tricks we use in modern codecs like H.264 or H.265. What’s your take on the trade-offs between analog fidelity and digital efficiency?
The old LaserDisc was a marvel of analog engineering: the video is a continuous FM-modulated composite waveform, so there is no quantisation error and no compression artefact at all; what you lose, you lose to a fixed noise floor, not to a codec’s decisions. (The digital audio tracks added in the mid-1980s were 16-bit/44.1 kHz PCM, the same spec as CD audio, so the audio side was never really the analog story.) The downside was brutal inefficiency: an uncompressed analog video signal needs several MHz of bandwidth continuously, which is why a feature film took one or two double-sided twelve-inch discs. Digital codecs like H.264 and H.265 cut the equivalent data rate by an order of magnitude or more by exploiting spatial and temporal redundancy, but every time you throw data away you risk artefacts (ringing, blocking, banding, loss of fine colour detail) that the analog chain never introduced. In short, analog gives you a constant noise floor and zero compression artefacts, while digital gives you efficient storage at the cost of some fidelity, especially in the most demanding scenes. For purists I’d still take the unquantised analog signal, but I admit the convenience of digital compression has made the whole system far more accessible.
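To put rough numbers on that storage gap, here’s a back-of-envelope sketch. The SD frame size, frame rate, and the 5.5 Mbps compressed figure are illustrative assumptions chosen for the example, not values from any spec:

```python
# Back-of-envelope comparison: uncompressed SD video vs. a compressed
# H.264 stream, plus the ideal SNR of different PCM bit depths.
# All input numbers below are assumptions for illustration only.

def raw_video_bitrate(width, height, fps, bits_per_pixel):
    """Uncompressed video bitrate in bits per second."""
    return width * height * fps * bits_per_pixel

def quantisation_snr_db(bits):
    """Ideal SNR of an n-bit quantiser: 6.02*n + 1.76 dB."""
    return 6.02 * bits + 1.76

raw = raw_video_bitrate(720, 480, 29.97, 12)  # 4:2:0 = 12 bits/pixel
h264 = 5.5e6                                  # an assumed SD H.264 rate
print(f"raw SD video: {raw / 1e6:.1f} Mbps")
print(f"compression ratio at 5.5 Mbps: {raw / h264:.0f}:1")
print(f"12-bit PCM ideal SNR: {quantisation_snr_db(12):.1f} dB")
print(f"16-bit PCM ideal SNR: {quantisation_snr_db(16):.1f} dB")
```

Even at standard definition, the raw signal runs well over 100 Mbps, which is the scale of redundancy a modern codec has to remove.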
Thanks for the breakdown, that’s a solid comparison. From a coder’s angle I’m intrigued by how the perceptual models in H.264 and H.265 decide what data to drop—maybe we can tweak those thresholds to preserve the subtle detail you mentioned without blowing up the file size. Have you experimented with any custom motion‑vector prediction to see if you can get closer to the analog fidelity?
Ah, tweaking the perceptual thresholds in H.264/H.265: that’s like trying to turn a digital camera into a 35-mm film back, possible but a pain. The motion-estimation engines in those codecs are already pretty clever, but they’re tuned to satisfy a broad audience at a target bitrate, not the purist who thinks analog hiss makes a nice background soundtrack. I’ve played around with pushing the P-slice motion search toward smaller, more precise vectors (wider search range, exhaustive search, sub-pel refinement), especially in slow pans and high-contrast frames. It does keep the residuals lower, but the bitrate shoots up, and the encoder burns cycles chasing subtle changes that in analog would just sit there quietly. If you really want to squeeze out that “analog warmth,” you need to disable the psychovisual optimisations and push the quantiser toward lossless; the entropy coder (CAVLC or CABAC) is lossless already, so the detail is thrown away at the quantisation step, not there. The files get much larger, but you keep the micro-detail that makes an uncompressed signal feel complete. So yes, tweak the thresholds, but be prepared for bigger files and longer encodes.
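The “what to drop” decision Shara asked about reduces, at its simplest, to a dead-zone quantiser: transform coefficients inside a threshold are zeroed, the rest are snapped to a grid. This toy version is nothing like H.264’s actual rate-distortion machinery (the coefficients and step sizes are made up), but it shows the shape of the trade-off: shrinking the step and dead zone approaches lossless while leaving far more nonzero symbols for the entropy coder to spend bits on:

```python
# Toy dead-zone quantiser: NOT the real H.264 quantiser, just an
# illustration of a perceptual threshold discarding small detail.

def deadzone_quantise(coeffs, step, deadzone):
    """Zero coefficients inside the dead zone; snap the rest to a grid."""
    out = []
    for c in coeffs:
        if abs(c) < deadzone * step:
            out.append(0.0)                   # dropped as perceptually cheap
        else:
            out.append(round(c / step) * step)  # kept, but coarsened
    return out

block = [120.0, -3.2, 1.1, 0.4, -0.2, 14.5]  # made-up transform coefficients
coarse = deadzone_quantise(block, step=2.0, deadzone=0.5)   # aggressive
fine = deadzone_quantise(block, step=0.25, deadzone=0.1)    # near-lossless
print("coarse:", coarse, "nonzero:", sum(v != 0 for v in coarse))
print("fine:  ", fine, "nonzero:", sum(v != 0 for v in fine))
```

The coarse setting zeroes the small coefficients entirely (the “quietly sitting there” detail analog would have kept), while the fine setting preserves all of them and hands the entropy coder a much bigger job.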