Suppose you make a general rule, e.g. "I won't eat any cookies". Then you encounter a situation that legitimately feels exceptional, e.g. "These are generally considered the best cookies in the entire state". This tends to leave people torn between two lines of reasoning:
1) Clearly the optimal strategy is to make an exception this one time and then follow the rule the rest of the time.
2) If you break the rule this one time, then you risk dismantling the rule and ending up not following it at all.
How can we resolve this? For a very, very long time I didn't want to take option 2) because I felt that adopting a sub-optimal strategy was irrational. It may seem silly, but I found this incredibly emotionally compelling and I had no idea how to respond to it with logic. Surely rationality should never require you to be irrational? It took me literally years to work out, but 1) is an incredibly misleading way of framing the situation. Instead, we should be thinking:
3) If future you will always make the most rational decision, then the optimal strategy is clearly to make the exception. But if there is a chance that making the exception will cause you to fall off the path, either temporarily or permanently, then we need to account for these probabilities in the expected value calculation.
As soon as I realised this was the more accurate way of framing 1), all of its rhetorical power disappeared. Who cares about what is best for a perfectly rational agent? That isn't you. The other key benefit of this framing is that it pushes you to think in terms of probability. Too often I've thought, "I can make an exception without it becoming a habit". This is bad practice. Instead of thinking in terms of a binary "Can I?" or "Can't I?", we should be thinking in terms of probabilities: firstly, because a 0% probability is unrealistic; secondly, because making a probability estimate allows you to better calibrate over time.
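To make framing 3) concrete, here is a minimal sketch of the expected value calculation. All the numbers are hypothetical, picked purely for illustration: the point is only that once you include a non-zero probability of falling off the path, the "obviously optimal" exception can flip to being a bad bet.

```python
# Hypothetical utilities, chosen only to illustrate the calculation.
exception_payoff = 5.0   # one-time utility of eating the exceptional cookies
habit_value = 50.0       # ongoing utility of keeping the rule intact
p_fall_off = 0.2         # estimated chance this exception dismantles the rule

# Framing 1) implicitly assumes a perfectly rational future self
# (p_fall_off = 0), so the exception always looks optimal:
ev_naive = exception_payoff

# Framing 3) accounts for the probability of losing the habit:
ev_exception = exception_payoff - p_fall_off * habit_value
ev_keep_rule = 0.0

make_exception = ev_exception > ev_keep_rule
# With these numbers: ev_exception = 5.0 - 0.2 * 50.0 = -5.0,
# so make_exception is False despite the cookies looking tempting.
```

Even a modest probability of falling off the path dominates the calculation whenever the habit is worth far more than any single exception.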
We can also update 2) into a more sophisticated argument:
4) If you break the rule this one time, then you risk dismantling the rule and ending up not following it at all. Further, humans tend to be heavily biased towards believing that their future selves will make the decisions they want them to make. So much so that attempting to calculate this probability is hopeless. Instead, you should only make an exception if the utility gain would be so large that you would be willing to lose the habit altogether.
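Framing 4) replaces the probability estimate with a worst-case threshold. A minimal sketch of that decision rule, again with hypothetical utilities:

```python
def should_make_exception(exception_payoff: float, habit_value: float) -> bool:
    """Conservative rule from framing 4): since estimating the probability
    of falling off the path is hopeless, treat it as if it were 1 and only
    make an exception when the payoff outweighs losing the habit entirely."""
    return exception_payoff > habit_value

# Hypothetical examples:
should_make_exception(5.0, 50.0)    # best cookies in the state: still no
should_make_exception(500.0, 50.0)  # a payoff worth losing the habit for: yes
```

The rule is deliberately stricter than the expected value calculation in 3); it trades some optimality for robustness against our bias about future selves.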
We still have two different ways of thinking about the problem, but at least they are more sophisticated than when we started.
(This article was inspired by seeing The Solitaire Principle. I wanted to explain how my thinking on this issue has progressed, and I also wanted a much shorter post to link people to.)