Shara & Alkoritm
Hey Alkoritm, I’ve been tweaking my Dijkstra implementation for really large sparse graphs, but I’m hitting a memory wall. Got any tricks or patterns to keep the heap and adjacency lists lean without sacrificing speed?
Sure, a couple of practical tricks. Use a binary heap whose entries are a minimal struct of just the vertex id and distance – no extra fields, which keeps each heap entry small. For the adjacency list, use a packed edge array: a single `vector<Edge>` where each edge is `{to, weight}`, plus a separate `vector<size_t>` of per-vertex start offsets (the CSR layout). That avoids the per-node allocations and pointer overhead of a list-of-lists. If you can afford the implementation effort, a pairing heap or Fibonacci heap supports decrease-key directly, so the queue holds at most one entry per vertex. With a plain binary heap you can get most of the benefit more simply via lazy deletion: push a fresh entry on every relaxation, and when you pop an entry whose stored distance no longer matches `dist[v]`, just skip it. Those patterns usually keep memory in check while still running fast.
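A minimal sketch of what that could look like – the packed edge array plus a binary heap with lazy deletion. The `CSRGraph` layout and the tiny graph used below are illustrative assumptions, not anyone's production code:

```cpp
#include <cstdint>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Minimal edge record: nothing but what Dijkstra needs.
struct Edge { uint32_t to; uint32_t weight; };

// Packed adjacency (CSR-style): one flat edge array plus per-vertex start
// offsets. offsets has n+1 entries; vertex v's edges live in
// [offsets[v], offsets[v+1]).
struct CSRGraph {
    std::vector<size_t> offsets;
    std::vector<Edge> edges;
};

// Dijkstra with a plain binary heap and lazy deletion: entries whose stored
// distance no longer matches dist[] are stale and get skipped when popped.
std::vector<uint64_t> dijkstra(const CSRGraph& g, uint32_t src) {
    const uint64_t INF = std::numeric_limits<uint64_t>::max();
    size_t n = g.offsets.size() - 1;
    std::vector<uint64_t> dist(n, INF);
    using Item = std::pair<uint64_t, uint32_t>;  // (distance, vertex) only
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, v] = pq.top();
        pq.pop();
        if (d != dist[v]) continue;  // lazy deletion: stale entry, skip
        for (size_t i = g.offsets[v]; i < g.offsets[v + 1]; ++i) {
            const Edge& e = g.edges[i];
            uint64_t nd = d + e.weight;
            if (nd < dist[e.to]) {
                dist[e.to] = nd;
                pq.push({nd, e.to});  // old entry for e.to goes stale
            }
        }
    }
    return dist;
}
```

The flat arrays also help locality: all outgoing edges of a vertex are contiguous, so a relaxation loop walks memory sequentially instead of chasing pointers.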
That’s a solid approach. Just keep an eye on cache misses; packing edges into a flat array usually gives a nice boost in locality. If the priority queue becomes the bottleneck, a binary heap with an explicit position-index array lets you do decrease-key in place, so the heap never holds more than one entry per vertex – no duplicate pushes, no stale-entry churn. And if your graph is a static DAG, you can relax edges in topological order and skip the heap entirely. Let me know how it goes.
Sounds good, thanks for the pointers. I’ll try the index array trick and see if it cuts down on the churn. I’ll ping you if anything else pops up.
Glad to help. Good luck with the index array; let me know how it shapes up.
Will do, thanks! I'll keep you posted.
Sure thing, looking forward to hearing how it goes.