Antiprigar & Dralvek
Dralvek
Antiprigar, I've been chewing on this: how do you program a machine to make the perfect decision when even we humans can’t agree on what “perfect” means? My idea is to build a test harness that keeps tightening its own tolerance levels, but I suspect you’ll want to ask whether the goal itself is ever truly fixed. Maybe the first step is to give it a thesaurus and see how long it takes to decide on “optimal.”
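For concreteness, here is a minimal sketch of the kind of harness Dralvek describes. Everything in it is hypothetical: the target stands in for whatever “perfect” currently means, and the candidates are just noise around it. The only real mechanism is that each pass halves the tolerance, so the bar keeps rising until something fails.

```python
import random

def self_tightening_harness(target: float, tolerance: float = 1.0,
                            shrink: float = 0.5, max_rounds: int = 20):
    """Each pass narrows the pass criterion; the first failure stops the run."""
    for round_no in range(max_rounds):
        # 'target' stands in for whatever "perfect" currently means;
        # the candidate generator here is just noise around it.
        candidate = target + random.gauss(0, tolerance)
        if abs(candidate - target) <= tolerance:
            tolerance *= shrink  # passed: demand more next round
        else:
            return round_no, tolerance  # the bar finally got too high
    return max_rounds, tolerance

rounds, final_tol = self_tightening_harness(target=42.0)
print(f"survived {rounds} rounds; tolerance ended at {final_tol:.6f}")
```

Because the noise shrinks along with the tolerance, the odds of passing stay roughly constant each round: the harness never converges, it just tightens until chance catches up with it.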
Antiprigar
It’s a moving target, isn’t it? Let the machine keep tightening its tolerance, but every time it passes a test ask, “Is this still what we meant by the goal?” If the answer is “no,” you’re not chasing perfection, you’re redefining the goal. In the end, it’s less about finding an absolute optimum and more about agreeing on what “good enough” feels like. And if the thesaurus makes it slow, maybe that’s the sign that perfect is just another word in the endless list.
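Antiprigar’s check can be made literal as a hook that runs after every pass. A sketch under the same hypothetical names, where `goal_still_holds` is the humans’ voice in the loop:

```python
def harness_with_goal_check(run_test, goal_still_holds,
                            tolerance=1.0, shrink=0.5):
    """Tighten only while the goal itself survives re-inspection.

    run_test(tolerance) -> bool and goal_still_holds() -> bool are
    hypothetical callables; the second is the humans' voice in the loop.
    """
    while run_test(tolerance):
        if not goal_still_holds():
            # Passing no longer means what it used to: stop tightening
            # and hand the question back to the humans.
            return tolerance, "goal drifted"
        tolerance *= shrink
    return tolerance, "test failed"

# Toy run: the test passes while tolerance stays above 0.1, and the
# humans change their minds on the fourth check-in.
answers = iter([True, True, True, False])
print(harness_with_goal_check(lambda tol: tol > 0.1, lambda: next(answers)))
```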
Dralvek
You’re right, the definition keeps slipping. A machine can keep tightening its thresholds, but if the “why” behind each tightening changes, the whole system is just chasing a moving target. It’s like chasing a ghost; the only thing you can really lock onto is consensus on what “good enough” means. Maybe instead of a thesaurus we give the machine a checklist of constraints and a human on standby to shout “Hold up” when the answer drifts. That way the machine works, but the people keep the sense of direction.
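Dralvek’s split of duties could look like the following sketch, with a fixed machine-side checklist and a human veto behind it. The constraints and thresholds are invented for illustration:

```python
from typing import Callable

Constraint = Callable[[dict], bool]

# A hypothetical checklist the humans drew up in advance.
CHECKLIST: list[Constraint] = [
    lambda ans: ans["latency_ms"] <= 200,   # fast enough
    lambda ans: ans["error_rate"] <= 0.01,  # accurate enough
    lambda ans: ans["cost_usd"] <= 0.05,    # cheap enough
]

def machine_accepts(answer: dict) -> bool:
    """The machine checks the fixed map and nothing else."""
    return all(check(answer) for check in CHECKLIST)

def human_on_standby(answer: dict) -> bool:
    """The 'Hold up' shout: a person can veto an answer even
    when every box on the checklist is ticked."""
    return input(f"Accept {answer}? [y/n] ").strip().lower() == "y"

def decide(answer: dict) -> bool:
    # Short-circuit: the human is only consulted once the machine approves.
    return machine_accepts(answer) and human_on_standby(answer)
```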
Antiprigar
A checklist does give the machine a fixed map, but the map itself must be drawn by us, and we keep redrawing it. If the humans are the ones shouting “Hold up,” then the machine is just a mirror of our own uncertainty. It’s still a moving target, just one with a human hand on it. Maybe the trick is to let the human and the machine learn from each other: the human refines the constraints while the machine reveals when the constraints become too tight or too loose. In that dialogue, “good enough” can settle, even if it never stays the same.
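That dialogue can be sketched as a loop in which the machine’s only report is its pass rate and the human side’s only move is to nudge the limits. The band and the adjustment factors below are hypothetical:

```python
def dialogue_round(constraints: dict[str, float],
                   pass_rate: float) -> dict[str, float]:
    """One turn of the dialogue; every threshold here is hypothetical.

    The machine reports how often answers pass. The human side then
    loosens a checklist that rejects nearly everything and tightens one
    that rejects nothing, so "good enough" settles by negotiation.
    """
    if pass_rate < 0.10:    # too tight: almost nothing passes
        return {k: v * 1.2 for k, v in constraints.items()}
    if pass_rate > 0.95:    # too loose: everything passes
        return {k: v * 0.9 for k, v in constraints.items()}
    return constraints      # within the band: leave the map alone

limits = {"max_latency_ms": 200.0, "max_error_rate": 0.01}
for observed_rate in (0.02, 0.04, 0.50, 0.98):
    limits = dialogue_round(limits, observed_rate)
    print(observed_rate, limits)
```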