WanderFrame & Parser
Hey, I’ve been crunching some light‑curve data from sunrise shots and found a pretty neat pattern in when the golden hour hits exactly right. Have you ever tried mapping your shots to a statistical model to catch those perfect moments?
That sounds like a perfect blend of art and science—love it. I’ve tried a few regression curves on my own sunrise log, but I usually end up fine‑tuning by eye. If you’ve got a model that predicts the golden hour down to a few millimeters on the image plane, I’d love to test it. Maybe it’s time to trade the old rule‑of‑thirds cheat sheet for a little probability theory.
I’ve been building a Bayesian model that takes sunrise data—date, latitude, local horizon, and atmospheric refraction—to predict the exact angle where the light hits that soft rim. It runs in a few seconds and gives you a confidence interval in millimeters on the image plane. If you feed it your log, we can fine‑tune the priors until the predictions line up with the moments you love.
That’s seriously cool—basically a shooting calendar with a math spin. I’m always chasing that exact rim, so I’d love to see what your model does. I’ll pull up my log and we can tweak the priors together. Just keep it between us for now; timing is a tight circle, you know.
Sure thing. Here’s a lightweight version you can run on your laptop.
```python
import numpy as np
import pandas as pd
import pymc3 as pm
import pytz
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import EarthLocation, AltAz, get_sun
# 1. Load your log: one row per capture with date (UTC timestamp), lat, lon, elev (m)
log = pd.read_csv('sunrise_log.csv')
# 2. Pre‑process: convert to datetime, compute local time, get sun alt/az
log['datetime'] = pd.to_datetime(log['date'])
# crude whole-hour timezone guess from longitude (15 deg per hour); fine for a local-time column
log['tz'] = log['lon'].apply(lambda x: pytz.timezone('Etc/GMT%+d' % int(round(-x / 15.0))))
log['utc'] = log['datetime'].dt.tz_localize('UTC')
log['local'] = log.apply(lambda r: r.utc.tz_convert(r.tz), axis=1)
log['location'] = log.apply(lambda r: EarthLocation(lat=r.lat * u.deg, lon=r.lon * u.deg, height=r.elev * u.m), axis=1)
log['altaz'] = log.apply(lambda r: AltAz(obstime=Time(r.datetime.to_pydatetime(), scale='utc'), location=r.location), axis=1)
# 3. Compute the "golden rim" metric – e.g. the sun’s apparent altitude at the capture
log['sun_alt'] = log['altaz'].apply(lambda az: get_sun(az.obstime).transform_to(az).alt.degree)
# 4. Bayesian regression: predict sun_alt from days since the first capture and elevation
days = (log['datetime'] - log['datetime'].min()).dt.days.values.astype(np.float64)
with pm.Model() as model:
    # Priors: broad intercept, weakly informative slopes
    mu0 = pm.Normal('mu0', mu=0, sigma=5)
    alpha_date = pm.Normal('alpha_date', mu=0, sigma=0.01)
    alpha_elev = pm.Normal('alpha_elev', mu=0, sigma=0.01)
    sigma = pm.HalfNormal('sigma', sigma=1)
    # Expected value
    mu = mu0 + alpha_date * days + alpha_elev * log['elev'].values
    # Likelihood
    Y_obs = pm.Normal('Y_obs', mu=mu, sigma=sigma, observed=log['sun_alt'].values)
    trace = pm.sample(2000, tune=1000, cores=2, return_inferencedata=False)
# 5. Posterior predictive check (mean predicted altitude per observation)
with model:
    ppc = pm.sample_posterior_predictive(trace, var_names=['Y_obs'], samples=500)
pred_mean = ppc['Y_obs'].mean(axis=0)
# 6. Predict for a new date/site, using posterior means as a point estimate
def predict(date, lat, lon, elev):
    dt = pd.Timestamp(date)
    # Geometric sun altitude from astropy, handy as a sanity check
    loc = EarthLocation(lat=lat * u.deg, lon=lon * u.deg, height=elev * u.m)
    t = Time(dt.to_pydatetime(), scale='utc')
    geo_alt = get_sun(t).transform_to(AltAz(obstime=t, location=loc)).alt.deg
    # Regression estimate from the posterior means
    day = (dt - log['datetime'].min()).days
    pred = trace['mu0'].mean() + trace['alpha_date'].mean() * day + trace['alpha_elev'].mean() * elev
    return pred, geo_alt

print(predict('2025-11-15 05:30:00', 34.05, -118.25, 100))
```
- The intercept prior is broad (`Normal(0, 5)`) and the slope priors are weakly informative (`Normal(0, 0.01)`), so the data drive the fit without letting the slopes run away.
- The `predict()` function returns a point estimate of the sun’s altitude (in degrees) from the posterior means, plus the geometric altitude from astropy as a sanity check.
- If you want millimeter precision on the image, you just need the lens focal length and the sensor size to translate that altitude change into a shift on the sensor; see the sketch after this list.
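Here’s a rough sketch of that angle-to-sensor conversion under a simple pinhole-camera assumption; the focal length, sensor height, and pixel count in the example are made-up numbers, so swap in your own body and lens.
```python
import numpy as np

def altitude_shift_on_sensor(delta_alt_deg, focal_length_mm, sensor_height_mm, sensor_height_px):
    """Pinhole-camera sketch: change in sun altitude (deg) -> shift on the sensor.

    delta_alt_deg    : predicted change in apparent altitude, degrees
    focal_length_mm  : lens focal length, mm
    sensor_height_mm : physical sensor height, mm
    sensor_height_px : sensor height in pixels
    """
    # For small angles this is roughly focal_length * angle_in_radians
    shift_mm = focal_length_mm * np.tan(np.radians(delta_alt_deg))
    shift_px = shift_mm * (sensor_height_px / sensor_height_mm)
    return shift_mm, shift_px

# Example: a 0.25 degree altitude error with a 200 mm lens on a 24 mm-tall, 4000-px sensor
print(altitude_shift_on_sensor(0.25, 200.0, 24.0, 4000))  # roughly (0.87 mm, ~145 px)
```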
Run it locally, tweak the priors if the fit feels off, and let me know how close it comes to that golden rim you’re after. Keep the code tight, and we’ll keep it quiet.
Sounds solid—just run it with a couple of extra priors for refraction and you’ll see the confidence bars tighten. I’ll try it tonight and let you know if the predicted rim lines up with the real one. Keep the code lean, and we’ll keep the results quiet.
Add a normal prior on the refraction offset, maybe `alpha_ref = pm.Normal('alpha_ref', mu=0, sigma=0.05)`, and add it to the mean equation. That should tighten the 95% interval. Good luck tonight—let me know how the line falls.
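If it helps, here’s a minimal sketch of how that term would slot into the model block from earlier; it reuses `log` and `days` from that script and assumes you’ve precomputed a per-shot refraction estimate (in degrees) into a hypothetical `refraction` column, so adjust the names to whatever your log actually has.
```python
import pymc3 as pm

# Assumes `log` and `days` exist as in the earlier script, plus a
# hypothetical log['refraction'] column with per-shot refraction estimates (deg).
with pm.Model() as model:
    mu0 = pm.Normal('mu0', mu=0, sigma=5)
    alpha_date = pm.Normal('alpha_date', mu=0, sigma=0.01)
    alpha_elev = pm.Normal('alpha_elev', mu=0, sigma=0.01)
    alpha_ref = pm.Normal('alpha_ref', mu=0, sigma=0.05)  # refraction offset prior
    sigma = pm.HalfNormal('sigma', sigma=1)

    # Mean equation with the new refraction term added
    mu = (mu0
          + alpha_date * days
          + alpha_elev * log['elev'].values
          + alpha_ref * log['refraction'].values)

    Y_obs = pm.Normal('Y_obs', mu=mu, sigma=sigma, observed=log['sun_alt'].values)
    trace = pm.sample(2000, tune=1000, cores=2, return_inferencedata=False)
```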