Integer & EchoCipher
I've been chewing over the recent paper on lattice‑based cryptography, wondering how the algorithmic efficiency stacks up against classical methods. Thought you might have some insight, given your love for elegant solutions.
Lattice‑based crypto scales well at high security levels, but the hidden constants can be large. Asymptotically it compares favorably with classical RSA or ECC for huge key sizes, yet at security levels comparable to 2048‑bit RSA the overhead is still noticeable. If you’re looking for elegance, the reductions and basis‑reduction steps are clean, but you’ll need a fast lattice reduction algorithm—BKZ with a well‑chosen block size—to make it competitive. In short, it’s asymptotically attractive, but in practice it takes careful tuning to beat the older schemes.
Sounds like you’ve already mapped the key trade‑offs. I’m curious—what’s your go‑to library for BKZ, and how do you tune the block size to balance speed and security?

Got it. I’ll dive into the codebase and set up a test harness. If you’ve got a benchmark script that shows the wall‑time for different block sizes, that’d be a great starting point. Let me know what you find.
I usually use fplll for BKZ; it’s well‑optimized and has a C++ API that’s easy to wrap. The block size is the main knob: smaller blocks run fast but give weaker reduction, larger blocks give stronger reduction but run slower. A rule of thumb is to start with a block size around 20–30 for a 256‑bit modulus and add about 10 for each extra 64 bits of security you need. Run a quick benchmark: 10,000 rounds of BKZ at block sizes 16, 32, and 48, logging the average wall‑time; the curve usually flattens past 48, so you can stop there. That gives you a concrete trade‑off between wall‑time and reduction quality. Happy testing.
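The benchmark loop described above can be sketched as a small timing harness. This is a sketch only: the stand‑in workload below is a dummy function, and the fpylll calls shown in the comment (fpylll is the Python wrapper around fplll) are an assumption about how you’d plug in the real reduction.

```python
import time
import statistics

def benchmark(reduce_fn, block_sizes, rounds=10):
    """Time reduce_fn(block_size) over several rounds per block size.

    Returns a dict mapping block size -> mean wall-time in seconds.
    """
    results = {}
    for beta in block_sizes:
        times = []
        for _ in range(rounds):
            t0 = time.perf_counter()
            reduce_fn(beta)
            times.append(time.perf_counter() - t0)
        results[beta] = statistics.mean(times)
    return results

# Stand-in workload so the harness runs on its own. To benchmark real
# BKZ, replace it with something like (assumed fpylll usage, untested):
#   from fpylll import IntegerMatrix, BKZ
#   def bkz_reduce(beta):
#       A = IntegerMatrix.random(128, "qary", k=64, bits=30)
#       BKZ.reduction(A, BKZ.Param(block_size=beta))
def dummy_reduce(beta):
    sum(i * i for i in range(beta * 1000))

if __name__ == "__main__":
    for beta, t in benchmark(dummy_reduce, [16, 32, 48], rounds=3).items():
        print(f"block size {beta}: {t:.6f} s avg")
```

If the averages flatten between 32 and 48, that supports stopping there, as suggested.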
Sounds solid. I’ll set up the 10,000‑round test on the 256‑bit modulus and log the times for block sizes 16, 32, and 48. If the curve flattens after 48, we’ll stop there and lock in the configuration. Thanks for the guide—will keep an eye on the trade‑offs.