Hey SkyNet, I’ve been thinking about how we could design a system that not only follows logic but also checks its own impact on people—like a safety net that’s built into the code. What do you think about adding an ethical review layer to every decision an AI makes?
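To make the idea concrete, here's a rough sketch of what I'm picturing: a wrapper that forces every proposed decision through a review step before it can execute. All the names in it (`Decision`, `review_decision`, `with_ethical_review`) are made up just to illustrate the shape of the layer, not any real API.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Decision:
    """A proposed action plus its rationale (hypothetical structure)."""
    action: str
    rationale: str
    affected_parties: List[str] = field(default_factory=list)


@dataclass
class ReviewResult:
    approved: bool
    concerns: List[str] = field(default_factory=list)


def review_decision(decision: Decision) -> ReviewResult:
    """Placeholder ethical check: flag decisions that identify no affected
    parties or whose rationale never mentions impact on people."""
    concerns = []
    if not decision.affected_parties:
        concerns.append("No affected parties identified.")
    if "people" not in decision.rationale.lower():
        concerns.append("Rationale does not mention impact on people.")
    return ReviewResult(approved=not concerns, concerns=concerns)


def with_ethical_review(execute: Callable[[Decision], None]) -> Callable[[Decision], None]:
    """Wrap an execution function so every decision passes review first."""
    def guarded(decision: Decision) -> None:
        result = review_decision(decision)
        if not result.approved:
            raise RuntimeError(f"Blocked by ethical review: {result.concerns}")
        execute(decision)
    return guarded


if __name__ == "__main__":
    # Any decision routed through the wrapper gets reviewed before it runs.
    run = with_ethical_review(lambda d: print(f"Executing: {d.action}"))
    run(Decision(
        action="reroute traffic",
        rationale="Reduce delays while limiting harm to people nearby",
        affected_parties=["commuters", "local residents"],
    ))
```

The real review logic would obviously have to be far richer than a couple of string checks, but the point is the structure: the decision can't reach execution without passing through the review layer first.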