Shara & LaserDiscLord
Hey, I’ve been digging into how early LaserDiscs handled bandwidth — analog FM video eating several MHz of spectrum, plus the later 16‑bit PCM digital audio tracks — and I’m curious how that compares to the compression tricks we use in modern codecs like H.264 or H.265. What’s your take on the trade‑offs between analog fidelity and digital efficiency?
The old LaserDisc was a marvel of analog engineering – the video was a continuous FM‑modulated composite waveform, so there was no quantisation error and no compression artefact to speak of. The digital audio tracks that arrived later were 16‑bit, 44.1 kHz PCM – the same format as CD. The downside was the sheer bandwidth: storing analog video at full broadcast quality meant a 30 cm disc side held only about 30 minutes (CAV) to 60 minutes (CLV) of film. Digital codecs like H.264 or H.265 cut the equivalent bitrate by an order of magnitude or more by exploiting spatial and temporal redundancies, but every time you drop data you risk compression artefacts – ringing, blocking, loss of colour detail that the analog version never had. In short, analog gives you a fixed fidelity ceiling with its own noise floor and wear, while digital gives you efficient storage at the cost of some fidelity, especially in the most demanding scenes. For purists I’d still take the untouched analog signal, but I admit digital compression has made the whole system far more accessible.
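To put numbers on why digital codecs lean so hard on redundancy, here's a quick back-of-envelope sketch. The resolution, frame rate, and the 3 Mbps H.264 target are illustrative assumptions for SD content, not figures from the conversation above:

```python
# Back-of-envelope: why digital video codecs must exploit redundancy.
# Uncompressed SD digital video: 720x480, 4:2:0 chroma, 8-bit, 30 fps.
width, height = 720, 480
bytes_per_pixel = 1.5          # 4:2:0 subsampling: Y + U/4 + V/4
fps = 30

raw_bps = width * height * bytes_per_pixel * 8 * fps   # bits per second
print(f"uncompressed: {raw_bps / 1e6:.0f} Mbps")       # ~124 Mbps

h264_bps = 3e6                 # an illustrative SD H.264 target rate
print(f"compression ratio: {raw_bps / h264_bps:.0f}:1")  # ~41:1
```

A roughly 40:1 reduction is only possible because most pixels in most frames are predictable from their spatial and temporal neighbours.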
Thanks for the breakdown, that’s a solid comparison. From a coder’s angle I’m intrigued by how the perceptual models in H.264 and H.265 decide what data to drop—maybe we can tweak those thresholds to preserve the subtle detail you mentioned without blowing up the file size. Have you experimented with any custom motion‑vector prediction to see if you can get closer to the analog fidelity?
Ah, tweaking the perceptual thresholds in H.264/H.265 – that’s like trying to turn a digital camera into a 35‑mm lens: possible, but a pain. The motion‑vector prediction in those codecs is already pretty clever, but it’s tuned for a broad audience, not for the purist who thinks analog hiss makes a nice background soundtrack. I’ve played around with constraining the P‑slice motion search toward smaller, more precise vectors, especially in slow‑motion or high‑contrast frames. It does keep the residuals lower, but the bitrate shoots up, and the encoder starts spending bits on very subtle changes that in analog would just sit there quietly. If you really want to squeeze out that “analog warmth,” you’d need to push the rate‑distortion target to near‑lossless – x264’s qp 0 is true lossless – and remember the entropy coder itself is always lossless; it’s the quantiser that throws detail away. The files get much larger, but you preserve the micro‑details that make a clean PCM track feel more complete than a heavily quantised one. So yes, tweak the thresholds, but be prepared for bigger files and more encoding time.
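The motion search being discussed can be sketched as a toy full-search block matcher. This is a minimal illustration of the idea, not x264's actual search (real encoders use predicted vectors, early termination, and sub-pixel refinement); the frames, block size, and search radius are made up:

```python
# Toy full-search block matching: the kind of motion search an encoder
# runs per block, here on tiny hand-made "frames" of brightness values.

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, y, x, n):
    """Extract an n x n block whose top-left corner is (y, x)."""
    return [row[x:x + n] for row in frame[y:y + n]]

def full_search(ref, cur, y, x, n, radius):
    """Find the motion vector minimising SAD within +/- radius pixels."""
    target = block(cur, y, x, n)
    best_mv, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(ref) - n and 0 <= xx <= len(ref[0]) - n:
                cost = sad(block(ref, yy, xx, n), target)
                if cost < best_cost:
                    best_mv, best_cost = (dy, dx), cost
    return best_mv, best_cost

# An 8x8 reference frame with a bright 2x2 patch at (2, 2)...
ref = [[0] * 8 for _ in range(8)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 200
# ...that has moved to (4, 5) in the current frame.
cur = [[0] * 8 for _ in range(8)]
cur[4][5] = cur[4][6] = cur[5][5] = cur[5][6] = 200

mv, cost = full_search(ref, cur, 4, 5, 2, 3)
print(mv, cost)   # (-2, -3) 0 -> a perfect match, zero residual
```

When the match is perfect the residual is zero and costs almost nothing to code; constraining the search toward precise vectors, as described above, trades search effort for smaller residuals.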
Sounds like a classic trade‑off loop – squeeze the precision and you hit the bandwidth wall. Maybe try an AV1 test run; it’s built on similar motion‑compensation ideas but with a multi‑symbol arithmetic entropy coder and a finer‑grained quantiser (a 0–255 index versus H.264’s 0–51 QP range). If you keep the quantiser very low and stay near‑lossless, the files grow, but you might hit that “analog warmth” sweet spot with less coding overhead. Just keep an eye on the decoder load; those micro‑details can really tax the CPU if you push it too far.
AV1’s quantiser ladder is a nice playground for that analog‑warm feel, but you’ll still see the CPU spike once bitrates climb into the several‑bits‑per‑pixel, near‑lossless regime. It’s a fine line between a buttery‑smooth picture and a decoder that hiccups every time it meets a new micro‑detail. Keep the quantiser low, but don’t forget to test on a real‑world decoder – the whole exercise only pays off if the hardware can keep up.
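Bits-per-pixel is the easiest way to see when a target bitrate drifts out of streaming territory. A minimal calculator, with illustrative 1080p30 rate targets of my own choosing (not measurements):

```python
# Bits-per-pixel: a quick sanity check on whether a target bitrate is
# "streaming" territory or creeping toward near-lossless.
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average coded bits spent per pixel per frame."""
    return bitrate_bps / (width * height * fps)

# Illustrative 1080p30 targets (assumed values, not benchmarks).
for label, rate in [("streaming", 5e6),
                    ("high quality", 40e6),
                    ("near-lossless", 400e6)]:
    bpp = bits_per_pixel(rate, 1920, 1080, 30)
    print(f"{label:>14}: {bpp:.2f} bpp")
```

Typical lossy streaming sits well under 0.1 bpp, so a several-bpp near-lossless encode really is a different regime for both the encoder and the decoder.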
That’s a good point – the real bottleneck is often the decoder side. I’d run a quick profile on a low‑power ARM core with AV1 hardware acceleration, if available, and compare the frame‑time jitter against a software decoder on a desktop. Also, try mixing in periodic high‑quality intra‑frames at key points; they cost extra bits, but they give the predictor clean references, so the inter‑frames stay leaner while still preserving the micro‑details you care about. It’s all about finding the sweet spot where the CPU stays under the real‑time budget and you still get the analog‑like fidelity.
Sounds like a plan – just remember that every extra high‑quality intra‑frame is a memory‑and‑CPU cost, so watch the heap usage on that ARM core. If you can run single‑digit quantiser values with the hardware acceleration kicking in, you’ll get the analog feel without the jitter. Just don’t let the decoder fall behind real time; a few milliseconds of frame‑time jitter can still kill that buttery smoothness you’re after. Good luck, and keep those analog vibes alive.