Ardor & Facktor
Hey Facktor, I’ve been looking at our elevator wait times and think we could model traffic patterns to cut them by a measurable margin. What do you think about a data-driven optimization of the dispatch algorithm?
Sure, let’s start by logging every call, timing the intervals, and building a queue model. From there we can simulate different dispatch heuristics, compare mean wait times, and adjust weights until we hit a statistically significant improvement. Just keep the data clean and the assumptions explicit.
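A toy version of what I mean, just to make the mechanics concrete: single elevator, random hall calls, two heuristics compared on mean wait. The per-floor travel time, stop time, and call pattern are all placeholder assumptions to replace with logged data.

```python
import random
import statistics

FLOORS = 10
TRAVEL_PER_FLOOR = 2.0  # seconds per floor (assumed)
STOP_TIME = 5.0         # door cycle + boarding (assumed)

def make_calls(n=200, horizon=3600.0, seed=42):
    """Random hall calls as (arrival_time_s, origin_floor), sorted by arrival."""
    rng = random.Random(seed)
    return sorted((rng.uniform(0, horizon), rng.randrange(FLOORS)) for _ in range(n))

def simulate(calls, pick):
    """Serve calls one at a time; `pick` is the dispatch heuristic under test."""
    t, pos, waits = 0.0, 0, []
    pending = list(calls)
    while pending:
        ready = [c for c in pending if c[0] <= t]
        if not ready:              # nothing waiting yet: jump to the next arrival
            t = pending[0][0]
            continue
        arrival, floor = pick(ready, pos)
        pending.remove((arrival, floor))
        t += abs(pos - floor) * TRAVEL_PER_FLOOR + STOP_TIME
        pos = floor
        waits.append(t - arrival)  # call placed -> passenger picked up
    return statistics.mean(waits)

fifo    = lambda ready, pos: ready[0]                                   # first come, first served
nearest = lambda ready, pos: min(ready, key=lambda c: abs(c[1] - pos))  # closest pending call

calls = make_calls()
print(f"FIFO mean wait:    {simulate(calls, fifo):6.1f} s")
print(f"nearest mean wait: {simulate(calls, nearest):6.1f} s")
```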
Sounds solid. Make sure the logging captures passenger start and destination floors, and log any anomalies, like service calls during maintenance. We’ll define clear KPIs: mean wait, 90th percentile wait, and energy per passenger. If we can drop the mean wait by 20% and keep the 90th percentile under 30 seconds, that’s a win. Let’s start the data collection and set up a quick dashboard for real-time monitoring.
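For the KPI math I’m picturing something like this; the `Trip` record and the sample numbers are purely illustrative placeholders.

```python
from dataclasses import dataclass
import statistics

@dataclass
class Trip:
    wait_s: float      # call placed -> passenger picked up
    energy_kwh: float  # energy drawn for the trip (assumed available from the drive)

def kpis(trips: list[Trip]) -> dict:
    waits = [t.wait_s for t in trips]
    return {
        "mean_wait_s": statistics.mean(waits),
        "p90_wait_s": statistics.quantiles(waits, n=10)[-1],  # 90th percentile cut point
        "kwh_per_passenger": sum(t.energy_kwh for t in trips) / len(trips),
    }

# Made-up trips, just to exercise the function.
print(kpis([Trip(12.0, 0.05), Trip(35.0, 0.08), Trip(8.0, 0.04), Trip(22.0, 0.06)]))
```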
Sounds good. I’ll start with a flat file schema: timestamp, elevator id, origin, destination, call type, status. I’ll flag records where status is “maintenance” or “error” so we can exclude them from the KPIs. Then I’ll build a rolling average of wait times and a rolling 90th percentile. For the dashboard, I’ll push the metrics to a lightweight Grafana panel, just the numbers you asked for. I’ll also log energy usage per passenger so we can keep an eye on that. Once the data is streaming, we can run a few what-if scenarios to hit that 20% drop.
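Sketch of the logger and the rolling stats (CSV as the flat file; the window size is a placeholder, and the Grafana push isn’t shown since that’s just wiring):

```python
import csv
import statistics
from collections import deque
from datetime import datetime

FIELDS = ["timestamp", "elevator_id", "origin", "destination", "call_type", "status"]
FLAGGED = {"maintenance", "error"}  # excluded from KPI calculations

def log_call(path, elevator_id, origin, destination, call_type, status):
    """Append one call record to the flat file (header handling omitted)."""
    with open(path, "a", newline="") as f:
        ts = datetime.now().isoformat(timespec="seconds")  # local time for now
        csv.writer(f).writerow([ts, elevator_id, origin, destination, call_type, status])

class RollingWaits:
    """Rolling mean and 90th percentile over the last `window` completed waits."""
    def __init__(self, window=100):
        self.waits = deque(maxlen=window)
    def add(self, wait_s, status="ok"):
        if status not in FLAGGED:
            self.waits.append(wait_s)
    def stats(self):
        if len(self.waits) < 2:
            return None  # not enough data for a percentile yet
        return {"mean": statistics.mean(self.waits),
                "p90": statistics.quantiles(self.waits, n=10)[-1]}
```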
Nice, that schema covers the essentials. Make sure the timestamp is UTC and includes milliseconds; precision matters when you’re looking at inter-call intervals. Also add a flag for “scheduled maintenance” so we can filter those calls out when calculating KPIs. For the energy metric, use kWh per passenger-mile; that gives us a clear unit to compare across scenarios. Once the stream is live, run a baseline simulation of the current dispatch logic, then iterate with a weighted priority that targets the 90th percentile. Keep the dashboard lightweight, but make sure it refreshes at least every minute so we can spot spikes in real time. That should give us the data set we need to validate the 20% improvement target.
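For the weighted priority, something along these lines; the weights and the 10 ft floor height are starting guesses to tune in simulation, not measured values.

```python
P90_TARGET_S = 30.0
FLOOR_HEIGHT_MILES = 10.0 / 5280.0  # assumed ~10 ft floor-to-floor

def score(wait_s, distance_floors, w_wait=0.5, w_tail=3.0):
    """Lower score = served sooner. Calls past 80% of the 30 s p90 target
    get an extra boost so the tail doesn't blow out."""
    tail_boost = w_tail * max(0.0, wait_s - 0.8 * P90_TARGET_S)
    return distance_floors - w_wait * wait_s - tail_boost

def kwh_per_passenger_mile(total_kwh, passengers, floors_travelled):
    """Energy KPI in the agreed unit."""
    return total_kwh / (passengers * floors_travelled * FLOOR_HEIGHT_MILES)

# Dispatch picks the pending call with the lowest score, e.g.:
#   next_call = min(pending, key=lambda c: score(now - c.placed_at, abs(pos - c.floor)))
print(score(wait_s=28.0, distance_floors=4))  # tail-boosted call wins...
print(score(wait_s=5.0, distance_floors=1))   # ...over a nearby fresh one
print(kwh_per_passenger_mile(1.2, 40, 300))   # example numbers only
```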
Got it, UTC milliseconds, maintenance flag, kWh per passenger‑mile added. I’ll start the stream, run a baseline, then loop through weighted 90th‑percentile optimization. Dashboard will refresh every minute, lightweight but with those key metrics. We’ll see if the 20% cut is within reach.