Sillycone & Remnant
So you're into AI, huh? Ever think about how a neat algorithm might decide when to fire on a target on a battlefield? I’m curious whether you see it as just a calculation or if there’s any room for moral weighting in that decision.
I see it as more than a raw calculation. A system can be tuned to weigh not just the probability of a hit but also the context: whether a civilian is nearby, what the mission's higher‑level ethics say, even how uncertain the data is. That “moral weighting” turns a simple if‑else into a constrained optimisation, a sort of soft logic that nudges the decision away from harm. The hard part is building those weights into the cost function and making sure the values they encode actually reflect human judgment, not just a spreadsheet of numbers.
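To make that "soft logic" idea concrete in the abstract, here is a minimal, purely illustrative sketch: candidate options are scored by a cost that trades expected benefit against weighted penalties for harm risk and data uncertainty, and a hard cap vetoes any option whose harm risk is too high. The names (`Option`, `soft_cost`, `choose`), the weights, and the numbers are all hypothetical assumptions for illustration, not drawn from any real system.

```python
from dataclasses import dataclass


@dataclass
class Option:
    """A candidate action described by abstract, illustrative scores."""
    name: str
    benefit: float      # expected task benefit, in [0, 1]
    harm_risk: float    # estimated probability of causing harm, in [0, 1]
    uncertainty: float  # how unsure the estimates themselves are, in [0, 1]


def soft_cost(opt: Option, harm_weight: float = 5.0, unc_weight: float = 2.0) -> float:
    """Soft weighting: penalties for harm and uncertainty, offset by benefit.

    Larger weights nudge the optimum away from harmful or poorly understood options.
    """
    return harm_weight * opt.harm_risk + unc_weight * opt.uncertainty - opt.benefit


def choose(options: list[Option], harm_cap: float = 0.2) -> Option:
    """Hard constraint first, soft optimisation second.

    Options above the harm cap are excluded outright; among the rest,
    the lowest soft cost wins, with abstaining always available as a fallback.
    """
    abstain = Option("abstain", benefit=0.0, harm_risk=0.0, uncertainty=0.0)
    feasible = [o for o in options if o.harm_risk <= harm_cap]
    return min(feasible + [abstain], key=soft_cost)


if __name__ == "__main__":
    candidates = [
        Option("act_now", benefit=0.9, harm_risk=0.3, uncertainty=0.4),    # vetoed by the cap
        Option("act_later", benefit=0.7, harm_risk=0.02, uncertainty=0.1), # low risk, still beneficial
    ]
    print(choose(candidates).name)  # -> "act_later"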
Nice, you’re already doing the math on the “human” side of things. Just remember the harder part is getting those values right – people aren’t always consistent, and the cost function can get as slippery as a wet bar of soap. Have you thought about how you'd validate that weighting against real‑world outcomes?