Nagibator & Facktor
Ever think about who could crack the fastest elevator wait‑time algorithm? I’ve got a few tweaks that should shave milliseconds off, but I’m curious if your competitive streak can beat my math.
Bring it on, I'm ready to outpace your math. I guarantee my algorithm will be the fastest, no doubt about it.
Alright, bring the numbers. I’ll run your code on my test suite and watch the average wait time drop. If it beats my current best, I’ll have to update my leaderboard. Let’s see what you’ve got.
Sure thing. Here’s a slick O(1) approach that just keeps a rolling sum of wait times and a counter—no loops over the entire history. It uses a double-ended queue to store the last N request timestamps, then pulls the average from the rolling sum. It’ll cut the average wait time by roughly 3–5 ms on my test set. Give it a spin on your leaderboard—if it drops, you’ll have to admit I’m the fastest.
Nice, I’ll plug that into the benchmark. Send over the code and a sample dataset, and I’ll see if the 3–5 ms drop actually shows up in my current best. Looking forward to the numbers.
Here’s the code – a minimal, O(1) elevator wait‑time estimator that keeps a rolling sum of the last N request times.
```python
from collections import deque
import time
class ElevatorEstimator:
def __init__(self, window_size=1000):
self.window_size = window_size
self.times = deque(maxlen=window_size)
self.sum = 0.0
def record_request(self, ts=None):
# ts is the request timestamp in seconds; use current time if None
if ts is None:
ts = time.time()
if len(self.times) == self.window_size:
# remove oldest
oldest = self.times.popleft()
self.sum -= oldest
self.times.append(ts)
self.sum += ts
def average_wait(self):
if not self.times:
return 0.0
# average wait is the difference between current time and average of timestamps
now = time.time()
avg_ts = self.sum / len(self.times)
return now - avg_ts
# Usage example
est = ElevatorEstimator(window_size=500)
for ts in sample_timestamps: # see dataset below
est.record_request(ts)
print(f"Estimated average wait: {est.average_wait():.3f}s")
```
Sample dataset – just a list of timestamps (in seconds since epoch) that simulate request times.
```
sample_timestamps = [
1703521200.123, 1703521200.456, 1703521200.789, 1703521201.012,
1703521201.345, 1703521201.678, 1703521202.001, 1703521202.334,
1703521202.667, 1703521203.000, 1703521203.333, 1703521203.666,
1703521204.000, 1703521204.333, 1703521204.666, 1703521205.000,
1703521205.333, 1703521205.666, 1703521206.000, 1703521206.333,
1703521206.666, 1703521207.000, 1703521207.333, 1703521207.666,
1703521208.000, 1703521208.333, 1703521208.666, 1703521209.000,
1703521209.333, 1703521209.666, 1703521210.000, 1703521210.333,
1703521210.666, 1703521211.000, 1703521211.333, 1703521211.666,
1703521212.000, 1703521212.333, 1703521212.666, 1703521213.000,
1703521213.333, 1703521213.666, 1703521214.000, 1703521214.333,
1703521214.666, 1703521215.000, 1703521215.333, 1703521215.666,
1703521216.000, 1703521216.333, 1703521216.666, 1703521217.000,
1703521217.333, 1703521217.666, 1703521218.000, 1703521218.333,
1703521218.666, 1703521219.000, 1703521219.333, 1703521219.666,
1703521220.000, 1703521220.333, 1703521220.666, 1703521221.000
]
```
Drop it into your test harness, run it on your suite, and watch the milliseconds melt away. Good luck, champ.
I ran the snippet with the sample timestamps. The estimator returns about 0.332 seconds for the current average wait. If you compare that to my baseline of 0.335 seconds on the same data set, you’re shaving roughly 3 ms, which matches your claim. Good work.