Expert & Trial
Hey Trial, I’m curious about how the new Lattice ASICs stack up against the older Snapdragon 8 Gen 3 when it comes to real‑time AI inference on mobile devices. What do you think the performance‑to‑power ratio looks like?
The Lattice ASICs run in a tighter power envelope for a given inference workload because they’re dedicated to tensor ops and have lower static power draw than the heterogeneous CPU/GPU/NPU design of the Snapdragon 8 Gen 3. In raw FLOPS per watt, the ASICs come out roughly 30–40% ahead on typical 3‑D convolution benchmarks, but you’re trading that for a narrower use case; the Snapdragon can fall back on its CPU cores for non‑AI tasks without a major power penalty. If you only need real‑time inference, the ASICs give you the better performance‑to‑power ratio, but if your app mixes heavy AI with general mobile workloads, the Snapdragon’s versatility might still win.
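To make that comparison concrete, here’s a minimal sketch of how you might compute performance per watt from your own measurements of sustained throughput and average power draw. The TOPS and wattage figures below are illustrative placeholders chosen only to reproduce a ~30–40% efficiency gap, not published benchmark numbers for either chip.

```python
from dataclasses import dataclass

@dataclass
class InferenceProfile:
    name: str
    throughput_tops: float  # sustained INT8 tera-ops/second on your workload
    power_watts: float      # average package power during the run

    @property
    def tops_per_watt(self) -> float:
        # Performance-to-power ratio: higher is better.
        return self.throughput_tops / self.power_watts

# Illustrative placeholder measurements -- substitute your own profiling data.
asic = InferenceProfile("dedicated inference ASIC", throughput_tops=18.0, power_watts=2.5)
soc = InferenceProfile("Snapdragon 8 Gen 3 (SoC path)", throughput_tops=32.0, power_watts=6.0)

for chip in (asic, soc):
    print(f"{chip.name}: {chip.tops_per_watt:.1f} TOPS/W")

# Relative efficiency advantage of the ASIC over the SoC on this workload.
advantage = asic.tops_per_watt / soc.tops_per_watt - 1.0
print(f"ASIC efficiency advantage: {advantage:.0%}")
```

The point of the sketch is just that the ratio, not raw throughput, is what decides the trade: the SoC can post higher absolute TOPS while still losing on TOPS per watt.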