CodeWhiz & Player1
CodeWhiz
Hey, I was just digging into Unity's profiler the other day and noticed some neat ways to shave milliseconds off the game loop. Want to swap notes on the best techniques for keeping frame rates smooth?
Player1
Sounds dope! Yeah, let’s dive in – I’ve been tweaking batching, culling, and GC spikes for a while. What’s been your go‑to trick?
CodeWhiz
My go‑to is always keeping a tight eye on the Update loop – I move heavy math into Burst‑compiled jobs, use static batching for static meshes, and make sure I pool objects instead of Destroy/Instantiate. Then I turn on the profiler to catch any hidden GC spikes, tweak the batch sizes, and adjust culling distances until the frame time is stable. How about you, what’s your main focus?
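The pooling pattern described above can be sketched like this (a minimal version; the class and field names are mine, and Unity 2021+ also ships `UnityEngine.Pool.ObjectPool<T>` for the same job):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal pool sketch: reuse instances instead of Instantiate/Destroy,
// which avoids per-spawn allocations and the GC spikes they cause.
// "BulletPool" and the prefab field are illustrative names, not from the chat.
public class BulletPool : MonoBehaviour
{
    [SerializeField] private GameObject prefab;
    private readonly Stack<GameObject> pool = new Stack<GameObject>();

    public GameObject Get()
    {
        // Reuse a pooled instance if one exists; only Instantiate on a miss.
        var go = pool.Count > 0 ? pool.Pop() : Instantiate(prefab);
        go.SetActive(true);
        return go;
    }

    public void Release(GameObject go)
    {
        go.SetActive(false);   // cheap to toggle; no allocation, no GC
        pool.Push(go);
    }
}
```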
Player1
Nice! I’m all about keeping the render pipeline chill – I usually drop in some GPU instancing and make sure I never upload huge meshes on the main thread. Also, I’ll pre‑load a few textures and do a quick check for hidden draw calls, just so the GPU doesn’t freak out. What about your Burst jobs, any cool tricks you’re using to keep them lean?
CodeWhiz
That’s a solid pipeline setup. For Burst jobs I keep them lean by reusing NativeArrays from a pool instead of allocating per frame, using IJobParallelForBatch so I control the batch size each worker gets (64 elements is my usual starting point), and making sure all data sits in contiguous blittable structs so SIMD can kick in. I also avoid any boxing or generic managed types inside the job, keep helper methods static, and mark the job with the BurstCompile attribute. The trick is to profile the job itself in the Profiler, check the generated code in the Burst Inspector, and cut any extra memory copies that don’t need to happen. How do you handle texture streaming in your loop?
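One way that job shape might look (a sketch, assuming the Collections, Burst, and Mathematics packages are installed; the job name and the scale operation are just illustrative):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Sketch of a lean Burst job: blittable data in NativeArrays, no managed
// types inside, batched so each worker walks a contiguous chunk.
[BurstCompile]
public struct ScaleJob : IJobParallelForBatch
{
    [ReadOnly] public NativeArray<float3> input;
    public NativeArray<float3> output;
    public float scale;

    public void Execute(int startIndex, int count)
    {
        // A tight loop over a contiguous range is what lets Burst vectorize.
        for (int i = startIndex; i < startIndex + count; i++)
            output[i] = input[i] * scale;
    }
}

// Usage sketch: the arrays come from a pool allocated once with
// Allocator.Persistent, not per frame; the batch size (64 here) is the
// second argument to ScheduleBatch.
//   var handle = new ScaleJob { input = inArr, output = outArr, scale = 2f }
//                    .ScheduleBatch(inArr.Length, 64);
//   handle.Complete();
```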
Player1
Texture streaming is kinda my “free‑for‑all” game – I keep a queue of low‑res mip chains that load in the background while the main thread focuses on logic. I use an async load system that kicks off the GPU upload as soon as the data’s ready, so the frame doesn’t stall. I also make a rule: never let a texture sit on the CPU side longer than a frame, otherwise you hit those nasty spikes. And I keep a small pool of “blank” textures so I can swap out quickly if a level change needs a fresh batch. What tricks do you use to keep your textures from taking over the RAM?
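That background queue could be sketched roughly like this (a toy version using `Resources.LoadAsync`; the class name, paths, and one-load-in-flight policy are my assumptions, not the speaker's actual system):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a background texture queue: one async load in flight at a time,
// applied to its target material only once the data is ready, so the main
// thread never blocks on IO and textures don't linger CPU-side.
public class TextureStreamQueue : MonoBehaviour
{
    private readonly Queue<(string path, Material target)> pending =
        new Queue<(string, Material)>();
    private ResourceRequest inFlight;
    private Material inFlightTarget;

    public void Enqueue(string resourcePath, Material target) =>
        pending.Enqueue((resourcePath, target));

    private void Update()
    {
        if (inFlight == null && pending.Count > 0)
        {
            var (path, target) = pending.Dequeue();
            inFlight = Resources.LoadAsync<Texture2D>(path); // async, off the hot path
            inFlightTarget = target;
        }
        else if (inFlight != null && inFlight.isDone)
        {
            // Swap in the loaded texture; from here on it lives GPU-side.
            inFlightTarget.mainTexture = (Texture2D)inFlight.asset;
            inFlight = null;
        }
    }
}
```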
CodeWhiz
I usually keep texture memory in check by compressing everything to ASTC or ETC2 if the platform supports it, then dropping unused atlas slots right away. I also set a hard limit on the total GPU memory used by textures and have a small background task that evicts the least‑recently‑used atlases before the limit is hit. For level changes I rebuild the atlas on the fly with only the assets that are actually visible, and I keep a tiny “fallback” atlas for UI or placeholders so I never waste space on blank texels. Finally, I make sure to destroy textures (or call Release, for render textures) as soon as the material is swapped out, so the driver can actually free the memory. What about your compression pipeline?
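The LRU-under-a-budget idea is plain bookkeeping and could be sketched like this (pure C#; the byte counts, limit, and `Evict` callback are placeholders for whatever actually destroys the atlas texture):

```csharp
using System;
using System.Collections.Generic;

// Sketch: track last-use order of atlases and evict the stalest ones
// until estimated texture memory is back under the cap.
public class AtlasBudget
{
    private readonly LinkedList<string> lru = new LinkedList<string>(); // front = most recent
    private readonly Dictionary<string, (LinkedListNode<string> node, long bytes)> atlases =
        new Dictionary<string, (LinkedListNode<string>, long)>();
    private long usedBytes;

    public long LimitBytes = 256L * 1024 * 1024;   // hard cap, placeholder value
    public Action<string> Evict = _ => { };        // e.g. destroy the atlas texture

    // Call whenever an atlas is used (or resized); bytes is its estimated size.
    public void Touch(string atlas, long bytes)
    {
        if (atlases.TryGetValue(atlas, out var old))
        {
            lru.Remove(old.node);
            usedBytes -= old.bytes;
        }
        atlases[atlas] = (lru.AddFirst(atlas), bytes);
        usedBytes += bytes;

        // Evict least-recently-used atlases, but never the one just touched.
        while (usedBytes > LimitBytes && lru.Last.Value != atlas)
        {
            string victim = lru.Last.Value;
            usedBytes -= atlases[victim].bytes;
            atlases.Remove(victim);
            lru.RemoveLast();
            Evict(victim);
        }
    }
}
```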
Player1
Cool tricks! For my compression I just let Unity’s own ASTC / ETC2 do the heavy lifting too, but I’ll force a quick pre‑compress step in the editor so I never hit the build bottleneck. I also keep a tiny custom “compress on the fly” script that flips a texture to ASTC if the platform supports it and kicks off a mip chain right after import. That way I can swap a big tex in the scene without pulling the whole game into a lag‑storm. Got any hacks for that one tricky “over‑sized” sprite that never quite fits?