Gadgeteer & Nginx
Hey, have you tried setting up Nginx as a reverse proxy for a QUIC-based edge function yet? I’ve been digging into HTTP/3 and the whole binary framing thing—curious if the performance gains line up with the extra configuration gymnastics.
Nginx can proxy QUIC as of the 1.25 mainline builds: add `listen 443 quic reuseport;` next to your normal `listen 443 ssl;` and set `http3 on;` in the server block. QUIC only runs over TLS 1.3, so your OpenSSL (or quictls) build and cipher list have to support the TLS 1.3 suites, and the stream module's `ssl_preread` handles early inspection if you're doing SNI-based routing in front of it (headers are encrypted at that point, so you can't route on those). The binary framing is built into QUIC, so you don't have to worry about it directly; just watch `http3_max_concurrent_streams` and the stream buffer size for throughput. The extra config is mostly plumbing: the cert chain, the `quic` listener, an `Alt-Svc` header so clients know they can upgrade, and a few tweakable buffers. If you get it right, on lossy or high-RTT links you'll see something like a 20–30 % latency drop compared to HTTP/2, but the trade-off is a more verbose config file and a newer Nginx build. Stick to the defaults unless you're profiling, and you'll avoid most pitfalls.
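Roughly what the server block looks like on my side, so you have a starting point; treat the hostname, cert paths, and upstream address as placeholders, and the tuning values as starting points to profile rather than recommendations:

```nginx
server {
    # UDP listener for QUIC/HTTP3, plus the usual TCP listener as a fallback
    listen 443 quic reuseport;
    listen 443 ssl;
    server_name edge.example.com;                      # placeholder hostname

    http3 on;
    ssl_protocols TLSv1.3;                             # QUIC only negotiates TLS 1.3
    ssl_certificate     /etc/nginx/certs/edge.pem;     # placeholder cert paths
    ssl_certificate_key /etc/nginx/certs/edge.key;

    # Advertise HTTP/3 so clients arriving over TCP know they can switch
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    # Throughput knobs worth profiling; starting values, not tuned ones
    http3_max_concurrent_streams 128;
    http3_stream_buffer_size 64k;

    location / {
        proxy_pass http://127.0.0.1:8080;              # the edge function's plain-HTTP listener (placeholder)
        proxy_set_header Host $host;
    }
}
```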
That’s a neat setup, thanks! I’ll spin up a quick test server and profile `http3_max_concurrent_streams`—curious if the 20–30 % latency improvement holds up with my traffic mix. Just want to double-check the cipher suite list; the TLS 1.3 requirement can trip people up if they’re stuck on older defaults. I’ll ping you once I’ve got some numbers, maybe we can tweak the stream buffer size to squeeze out a bit more throughput.
Sounds good—just remember that the cipher list is a common stumbling block: QUIC needs TLS 1.3, and `ssl_ciphers` only controls the TLS 1.2-and-below suites, so make sure the 1.3 suites like `TLS_AES_128_GCM_SHA256` are actually enabled (they are by default on OpenSSL 1.1.1+ and 3.0 unless someone has locked them down). Also keep an eye on `http3_stream_buffer_size`; bumping it too high can hurt more than help if the client can't keep up, since it mostly just ties up memory. Let me know what you see.
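If you'd rather pin it explicitly than trust the library defaults, something along these lines should do it on a recent build (assuming nginx 1.19.4+ for `ssl_conf_command`; the suite list and buffer value are just illustrative):

```nginx
# TLS 1.3 is mandatory for QUIC, and ssl_ciphers only affects TLS 1.2 and below,
# so pin the 1.3 suites through OpenSSL's own config hook instead.
ssl_protocols TLSv1.3;
ssl_conf_command Ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384;

# Per-stream buffer: larger values trade memory for throughput, so only raise
# this if profiling shows streams stalling on it.
http3_stream_buffer_size 128k;
```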
Got it—will double-check that TLS_AES_128_GCM_SHA256 is enabled and keep an eye on `http3_stream_buffer_size`. I’ll log the throughput and latency swings and ping you once I’ve got the stats. Thanks for the heads‑up!