Caused by: Purchase Fuzzies and Utilons Separately

As most readers will know by now, if you're donating to a charity, it doesn't make sense to spread your donations across several charities (assuming you're primarily trying to maximize the amount of good done). You'll want to pick the charity where your money does the most good, and then donate as much as possible to that one. Most readers will also be aware that this isn't intuitive to most people - many will instinctively try to spread their money across several different causes.
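(To make the arithmetic concrete, here is a minimal sketch in Python with made-up utilons-per-dollar figures; nothing below comes from the post itself. It illustrates why, as long as each charity's marginal utility is roughly constant at the scale of your donation, splitting a budget cannot beat giving everything to the single best charity.)

```python
# Hypothetical utilons-per-dollar figures for three charities; the numbers
# are invented purely for illustration.
utilons_per_dollar = {"A": 3.0, "B": 2.5, "C": 1.0}
budget = 30.0  # monthly donation in USD

def total_utilons(allocation):
    """Total good done, assuming constant marginal utility per charity."""
    return sum(utilons_per_dollar[c] * amount for c, amount in allocation.items())

split = {"A": 10.0, "B": 10.0, "C": 10.0}          # spread evenly
concentrated = {"A": budget, "B": 0.0, "C": 0.0}   # all to the best charity

print(total_utilons(split))         # 65.0
print(total_utilons(concentrated))  # 90.0 -- concentration dominates
```

Every dollar moved from the best charity to another one trades 3.0 utilons for at most 2.5 in this toy example, so any split can only do worse.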

I'm spending part of my income on charity, too. Admittedly, this isn't much - 30 USD each month - but then neither is my income as a student. Previously I had been spreading that sum across three different charities, each of them getting an equal amount. In at least two different venues, people had (not always knowingly) tried to talk me out of it, and I did feel that their arguments were pretty strong. Still, I didn't change my ways, even though mental pressure was building up, trying to push me in that direction. There were even some other charities I was considering donating to as well, even though I knew I probably shouldn't.

Then I read Eliezer's Purchase Fuzzies and Utilons Separately. Here was a post saying, in essence, that it's okay to spend some of your money in what amounted to an irrational way. Yes, go ahead and spread your money, and go ahead and use some of it just to purchase warm fuzzies. You're just human, after all. Just try to make sure you still donate more to a utilon maximizer than to purchasing the fuzzies.

Here I was, with a post that allowed me to stop rationalizing reasons for why spreading money was good, and instead spread it because I was honestly selfish and just buying a good feeling. Now, I didn't need to worry about being irrational in having diversified donations. So since it was okay, I logged in to PayPal, cancelled the two monthly donations I had going to the other organizations, and tripled the amount of money that I was giving to the Institute Which Shall Not Be Named.

Not exactly the outcome one might have expected.

A theme that has come up several times is that it's easier to lie to others if you believe in the lies yourself. Being in a community where rationality is highly valued, many people will probably want to avoid appearing irrational, lest they lose the respect of others. They want to signal rationality. One way to avoid admitting irrationality to others is by not admitting it to yourself. But then, if you never even admit your irrationality to yourself, you'll have a hard time getting over it.

If, on the other hand, the community makes it clear that it's okay to be irrational, as long as you're trying to get rid of that irrationality, then you can actually become more rational. You don't need to rationalize reasons why you're not being irrational; you can accept that you are irrational and then change it. Eliezer's post did that for me, for one particular irrationality [1]. So let me say that out loud: It's okay to be irrational, and to admit that. You are only human.

Failing to realize this is probably a failure mode for many communities which try to extinguish a specific way of thinking. If you make a behavior seem like it's just outright bad, something which nobody should ever admit to, then you'll get a large number of people who'll never admit to it - even when they should, in order to get over it.

And it's not just a community thing, it's also an individual thing. Don't just make it clear to others that some irrationality is okay: make it clear to yourself as well. It's okay to be irrational.

 

Footnote [1]: Note that Eliezer's post didn't extinguish the irrationality entirely. I still intend to use some of my money on purchasing warm fuzzies, once my total income is higher. But then I'll actually admit that that's what I'm doing, and treat my purchases of fuzzies and utilons as separate cases. And the utilon purchasing will be the one getting more money.

Comments (59)

Huh. Well, this sort of competitive outcome is something I prefer not to emphasize but the general idiom here sounds like it could be important.

I confess that when I first read this paragraph:

Here I was, with a post that allowed me to stop rationalizing reasons for why spreading money was good, and instead spread it because I was honestly selfish and just buying a good feeling. Now, I didn't need to worry about being irrational in having diversified donations. So since it was okay, I logged in to PayPal

I actually sat up and said "What on Earth?" out loud.

But I can see the causality. Removal of pressure -> removal of counterpressure -> collapse of irrationality. If it's okay to be irrational then it's okay to acknowledge that behavior X is irrational and then you stop doing it.

Well. I shall remember this when dealing with theists.

Removal of pressure -> removal of counterpressure -> collapse of irrationality.

This is precisely what I mean when I say that getting rid of people's negative motivation to accomplish a goal (i.e., the "I have to do this or else I'm a bad person" motivation) is critical to ending chronic procrastination... and even to removing the sense of "struggling" in a non-procrastinator.

It's counterintuitive, but true. The hypothesis in my model is that there's a bug in our cognitive architecture... what I call the "perform-to-prevent" bug.

Our avoidance-motivation system -- the "freeze or flight" system, if you will -- is not designed to support sustained action, or really to take any positive action at all. It's designed to make us avoid things, after all! So sustained activation leads to avoidance behaviors (rationalizing, procrastinating) rather than the desired positive action, even though to our logical conscious minds it seems like it should do the opposite. So we push ourselves MORE... which makes things worse!

The only point at which negative motivation works well is when the threat is imminent enough that the action you're taking actually feels like "running away" from the threat. Otherwise, the system seems to want to just "hide and wait for the predator to give up".

(Of course, what differs from person to person is their internal model of the "threat", and some people's threats are other people's minor annoyances not even worth thinking about. Seligman's 3 P's and the Dweck Fixed/Growth mindsets play a big part here as well.)

(EDIT: It occurred to me after posting this that it might not be clear that I'm not comparing Kaj Sotala's situation to procrastination per se. I'm only using it as a springboard to illustrate how negative motivation -- specifically, the kind that draws on lowered personal status/esteem connected with an action -- produces counterintuitive and irrational behaviors. Kaj's situation isn't the same as procrastination per se, but the diagram Eliezer drew does precisely match the pattern I see in chronic procrastination and its treatment.)

This is VERY relevant to my argument that it's OK to lie, because if you think it's not OK to lie, you won't allow yourself to see that the convenient thing to say might not be the truth... or even to look at it hard enough to check whether it's the convenient thing to say.

I don't disagree with the argument, but I don't think it holds for all people -- I for one have a taste for believing heresies that I find myself having to fight.

Mike's argument applies fairly independently of one's tastes. The premise is just that what you're motivated to say differs, in some instances or others, from what the evidence best suggests is true. Your non-truth-based speech motive could be to avoid hurting someone's feelings, or to assert that that clever heresy you were advocating is indeed a good line of thought, or... any of the other reasonable or unreasonable pulls that cause us humans to want to say some things and avoid others.

OK, so I guess I should have said "applies a lot less to some people". Also, this seems like one of those cases where one bias might cancel out another; fighting bias with bias means I'm in murky waters, but in the context of this thread we might already be in those murky waters.

ETA: From a cached selves point of view, it seems like building emotional comfort with lying might completely obviate the effect where false statements cause later beliefs that are consistent with those statements (and therefore false), or it might not (e.g., because you don't perfectly remember what was a lie and what was honest). If not then that seems like a serious problem with lying. Lying while in denial of one's capacity to lie is even worse, but the bad effect from more lying might outweigh the good effect from more comfortable lying.

I know this has been discussed before but it deserves a top-level post.

We need to think about categories of lies. Some of them will not help us believe the truth.

I've long felt that I can avoid lying better than most because I'm good at finding things that are technically true, and make people feel good, without denying the truths that are uncomfortable.

This logic also suggests we benefit from spending time in groups where convenient things to say are different.

I'm surprised nobody's yet connected this with Leave a Line of Retreat. It seems like a better example than the original one in the post...

I mostly agree with your practical conclusion, however I don't see purchasing fuzzies and utilons separately as an instance of irrationality per se. As a rationalist, you should model the inside of your brain accurately and admit that some things you would like to do might actually be beyond your control to carry out. Purchasing fuzzies would then be rational for agents with certain types of brains. "Oh well, nobody's perfect" is not the right reason to purchase fuzzies; rather, upon reflection, this appears to be the best way for you to maximize utilons long term. Maybe this is only a language difference (you tell me), but I think it might be more than that.


I agree. Getting warm fuzzies is not instrumentally irrational. We should just accept that our goal values are not purely altruistic, and that we value a unit of happiness for ourselves more than for strangers. As far as I can tell this is not irrational at all.

I don't feel that even superficially projecting the desirability of noticing a flaw onto the desirability of that flaw is warranted or useful. It's not OK to be irrational, but it's a fact of the human condition that we are all rather irrational, and many choices and beliefs have a nontrivial chance of being seriously wrong.

Striving to be rational by closing your eyes to your own mistakes is a fallacy of the bottom line; when applied to action, it becomes a confusion of rationalization with planning. This is a pervasive flaw, one to be wary of in all situations. If you are emotionally attached to a wrongly flattering self-image, it is a problem with that image, one to be systematically corrected by expecting more mistakes from yourself, but not at all because the mistakes are acceptable or desirable.

It's not OK to be irrational

What does that even mean? Reality doesn't contain any little xml tags on concrete objects, let alone ill-defined abstractions like "irrational".

Asserting that anything is "OK" or "Not OK" properly belongs to the dark arts of persuasion and motivation, not to the realm of objective reality.

If you are emotionally attached to a wrongly flattering self-image, it is a problem with that image, one to be systematically corrected by expecting more mistakes from yourself, but not at all because the mistakes are acceptable or desirable.

This is an extraordinary claim; the scientific evidence weighs overwhelmingly against you, in that it is clearly more useful to be drawn to live up to an incorrect, flattering future self-image, than to focus on an image of yourself that is currently correct, but unflattering.

What does that even mean? Reality doesn't contain any little xml tags on concrete objects, let alone ill-defined abstractions like "irrational".

This looks like a fully general argument against all reason, against characterizing anything with any property, applied to the purpose of attacking (a connotation of?) my judgment.

This is an extraordinary claim; the scientific evidence weighs overwhelmingly against you, in that it is clearly more useful to be drawn to live up to an incorrect, flattering future self-image, than to focus on an image of yourself that is currently correct, but unflattering.

What do you mean by currently correct? Correctness of a statement doesn't change over time.

I refuse to cherish a flattering self-image that is known to be incorrect. How would it be useful for me to start believing a lie? I'm quite motivated and happy as I am, thank you very much.

This looks like a fully general argument against all reason, against characterizing anything with any property,

...one that can be trivially remedied by rephrasing your original statement in E-Prime.

What do you mean by currently correct? Correctness of a statement doesn't change over time.

I mean that one can be aware that one is currently a sinner, while nonetheless aspiring to be a saint. The people who are most successful in their fields continuously aspire to be better than anyone has ever been before... which is utterly unrealistic, until they actually achieve it. Such falsehoods are more useful to focus on than the truth about one's past.

The people who are most successful in their fields continuously aspire to be better than anyone has ever been before... which is utterly unrealistic, until they actually achieve it. Such falsehoods are more useful to focus on than the truth about one's past.

If you win, but you were sure you'd lose, then you were no less wrong than if you had believed you could succeed where it couldn't happen. People are uncertain about their future, and about their ability, but this uncertainty, this limited knowledge, is about what can actually happen. If you really can succeed in achieving what was never seen before, your aspirations are genuine. What you know about your past is about your past, and what you make of your future is a different story entirely.

If you win, but you were sure you'd lose, then you were no less wrong than if you had believed you could succeed where it couldn't happen.

Have you ever heard the saying, "If you shoot for the moon and miss... you are still among the stars?" It's more useful to aim high and fail, than to aim low by being realistic.

You are repeating yourself without introducing new arguments.

I hate to harp on about (time and) relative distances in space but if you shoot for the Moon and miss, you are barely any closer to the stars than you were when you started.

More seriously, you don't seem to be answering Vladimir_Nesov's point at all, which is that if you think that such optimism can result in winning, then the optimism isn't irrational in the first place, and it was the initial belief of impossibility that was mistaken.

you don't seem to be answering Vladimir_Nesov's point at all, which is that if you think that such optimism can result in winning, then the optimism isn't irrational in the first place, and it was the initial belief of impossibility that was mistaken.

Was that really his point? If so, I missed it completely; probably because that position appears to directly contradict what he said in his previous comment.

More precisely, he appeared to be arguing that making wrong predictions (in the sense of assigning incorrect probabilities) is "not OK".

However, in order to get the benefit of "shooting for the moon", you have to actually be unrealistic, at the level of your brain's action planning system, even if intellectually you assign a different set of probabilities. (Which may be why top performers are often paradoxically humble at the same time as they act as if they can achieve the impossible.)

you don't seem to be answering Vladimir_Nesov's point at all, which is that if you think that such optimism can result in winning, then the optimism isn't irrational in the first place, and it was the initial belief of impossibility that was mistaken.

Was that really his point?

Yes, that really was one of the things I argued in my recent comments.

However, in order to get the benefit of "shooting for the moon", you have to actually be unrealistic, at the level of your brain's action planning system, even if intellectually you assign a different set of probabilities.

Are you arguing for the absolute necessity of doublethink? Is it now impossible to get to the high levels of achievement without doublethink?

See also: Striving to Accept.

Yes, that really was one of the things I argued in my recent comments.

Well, it seems a little tautological to me: only in hindsight can you be sure that your optimism was rational. At the time of your initial optimism, it may be "irrational" from a strictly mathematical perspective, even after taking into account the positive effects of optimism. Note, for example, the high rate of startup failure; if anybody really believed the odds applied to them, nobody would ever start one.

Are you arguing for the absolute necessity of doublethink?

I am not claiming that success requires "doublethink", in the sense of believing contradictory things. I'm only saying that an emotional belief in success is relevant to your success. What you think of the matter intellectually is of relatively little account, just as one's intellectual disbelief in ghosts has relatively little to do with whether you'll be able to sleep soundly in a "haunted" house.

The main drivers of our actions are found in the "near" system's sensory models, not the "far" system's abstract models. However, if the "near" system is modelling failure, it is difficult for the "far" system to believe in success.... which leads to people having trouble "believing" in success, because they're trying to convince the far mind instead of the near one. Or, they succeed in wrapping the far system in double-think, while ignoring the "triple think" of the near system still predicting failure.

In short, the far system and your intellectual thoughts don't matter very much. Action is not abstraction.

See also: Striving to Accept.

If you have to strive to believe something -- with either the near OR far system -- you're doing it wrong. The near system in particular is ridiculously easy to change beliefs in; all you have to do is surface all of the relevant existing beliefs first.

Striving, on the other hand, is an indication that you have conflicting beliefs in play, and need to remove one or more existing ones before trying to install a new one.

(Note: I'm not an epistemic rationalist, I'm an instrumental one. Indeed, I don't believe that any non-trivial absolute truths are any more knowable than Gödel and Heisenberg have shown us they are in other sorts of systems. I therefore don't care which models or beliefs are true, only which ones are useful. To the extent that you care about the "truth" of a model, you will find conversing with me frustrating, or at least uninformative.)

Sigh.

Well, it seems a little tautological to me: only in hindsight can you be sure that your optimism was rational.

Wrong. When you act under uncertainty, the outcome is not the judge of the propriety of your reason, although it may point out a probable problem.

What you think of the matter intellectually is of relatively little account, just as one's intellectual disbelief in ghosts has relatively little to do with whether you'll be able to sleep soundly in a "haunted" house.

I understand that the connection isn't direct, and in some cases may be hard to establish at all, but you are always better off bringing all sides of yourself to agreement.

I therefore don't care which models or beliefs are true, only which ones are useful. To the extent that you care about the "truth" of a model, you will find conversing with me frustrating, or at least uninformative.

Yet you can't help but care which claims about models being useful are true.

I understand that the connection isn't direct, and in some cases may be hard to establish at all, but you are always better off bringing all sides of yourself to agreement.

Perhaps. My point was that your intellectual conclusion doesn't have much direct impact on your behavior, so the emotional belief has more practical relevance.

Yet you can't help but care which claims about models being useful are true.

No, I care which ones are useful to me, which is only incidentally correlated with which claims about the models are true.

Yet you can't help but care which claims about models being useful are true.

No, I care which ones are useful to me, which is only incidentally correlated with which claims about the models are true.

You misunderstood Vladimir Nesov. His point was that "which model is (really, truly) useful" is itself a truth claim. You care which models are in fact useful to you -- and that means that on a meta-level, you are concerned with true predictions (specifically, with true predictions as to which instrumental models will or won't be useful to you).

It's an awkward claim to word; I'm not sure if my rephrases helped.

His point was that "which model is (really, truly) useful" is itself a truth claim. You care which models are in fact useful to you -- and that means that on a meta-level, you are concerned with true predictions (specifically, with true predictions as to which instrumental models will or won't be useful to you).

That may be true, but I don't see how it's useful. ;-)

Actually, I don't even see that it's always true. I only need accurate predictions of which models will be useful when the cost of testing them is high compared to their expected utility. If the cost of testing is low, I'm better off testing them myself than worrying about whether they're in fact going to be useful.

In fact, excessive pre-prediction of what models are likely to be useful is probably a bad idea; I could've made more progress in improving myself, a lot sooner, if I hadn't been so quick to assume that I could predict the usefulness of a method without having first experienced it.

By way of historical example, Benjamin Franklin concluded that hypnosis was nonsense because Mesmer's (incorrect) model of how it worked was nonsense... and so he passed up the opportunity to learn something useful.

More recently, I've tried to learn from his example by ignoring the often-nonsensical models that people put forth for their methods, focusing instead on whether the method itself produces the claimed results, when approached with an open mind.

Then, if possible, I try to construct a simpler, saner, more rigorous model for the method -- though still without any claim of absolute truth.

Less-wrongness is often useful; rejecting apparent wrongness, much less so.

Note, for example, the high rate of startup failure; if anybody really believed the odds applied to them, nobody would ever start one.

AFAIK, "really believe" is used to mean both "emotionally accept" and "have as a deliberative anticipation-controller". I take it you mean the first, but given the ambiguity, we should probably not use the term. Just a suggestion.

Heisenberg

Off-topic: See The So-Called Heisenberg Uncertainty Principle.

AFAIK, "really believe" is used to mean both "emotionally accept" and "have as a deliberative anticipation-controller". I take it you mean the first, but given the ambiguity, we should probably not use the term. Just a suggestion.

Here's the thing: intellectual beliefs aren't always anticipation controllers. Sometimes, they're just abstract information marked as "correct" -- applause lights or teacher's passwords.

So, by "really believe", I mean, what your automatic machinery will use to make the predictions that will actually drive your actions.

This also connects with "emotionally accept" -- but isn't precisely the same thing. You can emotionally accept something without actually expecting it to happen... and it's this autonomous "expectation" machinery that I'm referring to. i.e., the same sort of thing that makes your brain "expect" that running away from the haunted house is a good idea.

These sorts of expectations and predictions are always running, driving your current behavior. However, conscious anticipation (by definition) is something you have to do on purpose, and therefore has negligible impact on your real-time behaviors.

Not sure I get the distinction you're drawing. Supposing you say you know you won't win, but then you buy a lottery ticket anyway. Is that a failure of emotional acceptance of the number representing your odds, or a failure of anticipation control?

If you were akratically compelled to buy the ticket, failure of emotional acceptance. Failure of anticipation control at a deliberative level is the kind of thing that produces statements about invisible dragons. It's hard to think of a plausible way that could happen in this situation – maybe Escher-brained statements like "it won't win, but it still might"?

Re: if you're donating to a charity, it doesn't make sense to spread your donations across several charities [...]

Um, yes it does - that is why people do it. Charitable donations are a form of signalling what a fine, generous, rich fellow you are. They also allow you to affiliate with other fine, generous and rich types at benefits. The more organisations you are seen to donate to, the more fine, generous and rich you appear to be.

Yes, he should edit that sentence, but the rest of the post demonstrates that he clearly understands your point.

Edited the sentence.

I wish I could up-vote this post multiple times.

I like it; but I almost down-voted it because I got the point by the 4th paragraph, and it kept going for 4 more paragraphs.

I wouldn't have understood it otherwise.

Back when I was (alas) a Christian, I used to say: You should try reading the Bible, at least some of the time, as if you're a skeptic, so that your idea of what it says doesn't get distorted by your need to make it say things you find easy to believe.

(It turns out that that advice is good for other reasons, but that wasn't my point at the time.)

I think this is exactly parallel to Kaj's observation. I think the advice "Learn to love failure" for entrepreneurs is getting at something similar, though my brain's too fuzzy right now to be quite sure.

This technique seems to apply in a whole lot of places. What's its most general statement? Something like this, but I'm not convinced I've got it down to its essence:

When there's some thing X that you trust, so that you aim to make your beliefs match X, beware that doing so may tend instead to distort your estimate of X to match your pre-existing beliefs. One way around this is to allow your beliefs to diverge from X a little, at least temporarily, so that you can figure out what X says without worrying about whether that matches your beliefs.

... At which point it strikes me that allowing your beliefs to diverge from X, when you're sure that X is very reliable, is in fact deliberate irrationality (and, note, not the same deliberate irrationality as Kaj is talking about: they're at different levels of meta-ness, as it were), and that perhaps we have here an example of when deliberate self-deception, or something like it, might help you.

I think this touches on an important issue, especially in a community of rationalists. After a while, you run the risk of cargo-cult rationalism or, as Eliezer says in the Twelve Virtues, rationalization instead of rationalism. We have to be willing to recognize that reason isn't an object or a constant state or a property of being, but simply an intermittent amelioration of consciousness that we can encourage.

That said, the decision-making process Kaj Sotala followed does seem to be, well, irrational. It might be prudent in a power system to resist an injunction simply for the sake of resisting an injunction, to assert independence and undermine the authority of the injunction-maker, but here in the happy field beyond power and weakness that is abstract or intrapersonal discussion, it seems silly to defy for the sake of defiance. We can recognize and erase irrationalities for ourselves by admitting their place in the human system, but that doesn't mean they're in any way "okay": sometimes it's... acceptable to be "not-okay".

One book that helped me in the direction Kaj suggests is Radical Honesty. Blanton describes his own and others’ motives in accurate detail, including the petty parts we tend to gloss over, without clean-up or judgment; and after reading it, it was easier for me to name to myself my own detailed motives. I’d recommend the book to anyone who suspects that moralism may be interfering with their self-awareness. (There’s a lot wrong with the book, and I’m not necessarily recommending the practices it advocates. Just its descriptions.)

If by believing the falsehood that it's OK to be irrational, we're more likely to believe the truth that we are irrational, then that is an argument in favor of believing that it's OK to be irrational, but it's not obvious to me that this effect often or even ever dominates the more obvious effect that believing that it's OK to be irrational causes one to be more irrational because one thinks that it's OK.

It seems to me that here you're trying to justify a belief by its (prima facie) consequences rather than by the evidence in favor of it, which is something I always find frightening.

It isn't admitting that irrationality is good in itself. It is admitting that it's natural and expected in people, and that it isn't shameful.

Once it's ok for someone to be avowedly irrational, it is possible to work on the issue. If it is perceived as wrong, then people are more likely to start doing weird stuff to cover up what they perceive as a shameful flaw, and in our case, that may mean people devoting more time and resources towards rational acting than towards learning the art and applying it for real.

Fair enough; I'd accept this as an argument for shaming irrationality less. I'd be interested in more evidence about how strong the effect is, though.

ETA: the more we shame theft, the more people in moral grey areas will be motivated to believe that what they're doing isn't theft, but we don't accept that as a sufficient argument for saying "it's okay to steal at least a little". Maybe irrationality is different but maybe not.

One thing that makes irrationality at least a bit different: it's not quite subject to direct personal choice in the same way as theft, or even direct knowledge, but it can be ameliorated over time if you're motivated to pursue suspicions.

X being "OK" is never a false or a true claim. How does the "OKness" or lack of "OKness" of X cash out in anticipation?

If "OK" means "morally good" then it cashes out in the same way any claim about morality cashes out. ISTM that on this interpretation "irrationality is not OK" can logically be true at the same time as "it's OK to consider irrationality OK".

It doesn't matter I guess, after infotropism's comment I agree this isn't the most reasonable interpretation.

There are many, many things for which we need a constant rolling sense of "OK for all moments before now, but not OK for all moments from now into the future". So I forgive myself for all past instances of irrationality, which permits me to admit them to myself without shame and so avoid the cognitive dissonance trap; but I don't give myself permission to be irrational in the future. This of course is a very poor fit for the way we are used to thinking about ourselves, and that's one of the biggest challenges in aspiring to rationality.

There are many, many things for which we need a constant rolling sense of "OK for all moments before now, but not OK for all moments from now into the future".

This is the "growth" mindset, actually. There's very little that it can't be usefully applied to.

This is the "growth" mindset, actually.

Where does that terminology come from?

Where does that terminology come from?

The work of Carol Dweck, see the book "Mindsets". I also recently wrote a blog post that ties her work together with Seligman's 3Ps/optimism model, and my own "pain/gain" and "successful/struggling" models, at: http://dirtsimple.org/2009/03/stumbling-on-success.html

I expect to be writing more on this soon, as this weekend I found a connection between the "fixed" mindset and "all-or-nothing" thinking in several areas I hadn't previously considered candidates for such.

"OK for all moments before now, but not OK for all moments from now into the future"

Under which conditions would the negation of the first part of the sentence be worthwhile?

Assuming a rational, human agent, are there any worthwhile behavior heuristics which lead to statements that follow this pattern: "This behavior will not be ok for me in the future, and has never been ok for me in the past"?

Assuming a rational, human agent, are there any worthwhile behavior heuristics which lead to statements that follow this pattern: "This behavior will not be ok for me in the future, and has never been ok for me in the past"?

That depends on something other than the words you use -- it depends on whether "not ok" is being interpreted as referring to the person or the behavior.

In terms of effectiveness at changing behavior, our friends in the religious conspiracy got at least one thing right: "love the sinner, hate the sin."

"As most readers will know by now, if you're donating to a charity, it doesn't make sense to spread your donations across several charities. You'll want to pick the charity where your money does the most good, and then donate as much as possible to that one."

This is not necessarily true. For some charities, the marginal utility is very high up to, say, the first $10 million. After that point, returns on each dollar donated might start to rapidly decline.

Admittedly this is mainly a problem for larger donors, though.

The rule becomes general as long as the income you have available for altruism is orders of magnitude lower than the budgets of the charities you might wish to fund.
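(A rough way to see this, sketched with an invented concave returns curve and hypothetical budget figures rather than any real data: at the scale of a $30 donation, even a charity with strongly diminishing returns looks effectively linear, so a small donor can still just compare marginal utilons-per-dollar and pick one charity.)

```python
import math

def output(funding):
    # Hypothetical concave production function: diminishing returns only
    # become noticeable at the multi-million-dollar scale.
    return math.log1p(funding / 1e6)

charity_budget = 10e6   # the charity's existing funding (made-up figure)
my_donation = 30.0      # a small donor's contribution

marginal_value = output(charity_budget + my_donation) - output(charity_budget)
# Over a $30 range the value per dollar is essentially constant, so the
# small donor can still just rank charities by marginal utilons-per-dollar
# and give everything to the best one.
print(marginal_value / my_donation)   # ~9.09e-08 per dollar
print(1 / (charity_budget + 1e6))     # derivative at the margin, also ~9.09e-08
```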

I sense that I may not have understood the core point. Is it:

1) It's ok to admit that you are/were irrational, because admitting it is what allows you to change it ("the first step is to admit you have a problem").

or

2) It's genuinely ok to be irrational.

I feel ~85% sure that it's 1). But, if it's 2), then what is meant by the word "ok"? That you're not a "bad person"?

I think the fuzzies vs. rationality thing goes some way towards explaining why some people, even those who appreciate rationality, might want to turn down an unfavorable offer in the ultimatum game. The sour feeling from accepting it may not be worth the money you didn't even lose. I am entirely fine with being irrational in that game. This makes me wonder how the general LW populace would play that game, if it was okay to not be rational.
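(One way to make that intuition concrete, with entirely made-up numbers rather than anything from this thread: fold the "sour feeling" into the payoff itself, and rejecting a lowball offer can come out ahead.)

```python
# Hypothetical numbers: a responder facing a lowball offer in the ultimatum game.
pot = 10.00               # total to be split (made-up figure)
offer = 1.00              # amount offered to the responder
sour_feeling_cost = 2.50  # dollar-equivalent cost of the bad feeling from accepting

accept_value = offer - sour_feeling_cost   # -1.50: money gained minus the sour feeling
reject_value = 0.0                         # no money, but no sour feeling either

# Once the sour feeling is counted as part of the payoff, rejecting maximizes
# this responder's utility -- the choice isn't obviously "irrational", it just
# isn't purely money-maximizing.
best = "reject" if reject_value > accept_value else "accept"
print(best)  # reject
```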

Maybe it's ok to admit irrationality when we lack the willpower to change. If you're going to be irrational, you should, at the least, admit it rather than pretend to be rational.

For next time, you can try to defend yourself from getting into a situation where you lack the willpower to be more rational. Or you can work on your willpower if you don't get a "next time."

How do we practically signal that to people?

It may not be reassuring enough to just say it; if the person thinks, even erroneously, that she was shot down or ignored because she was wrong, assurance that it's ok to be irrational may not be enough to help her. So our actions should reflect that intention too.

Irrationality is ok because we can help people out of it. So the first way to signal it's ok, is to help someone whenever we think that person is wrong. To work together, to fix the issue.

Edit: and another way would be to signal our own irrationality, and work on it publicly, so that people can see it's ok to talk about it.

What would be other ways?

My most recent post is one attempt at doing it.

That's interesting.

In my own case, it seems to work better for me to instead say something like "it's okay to admit to irrationality" or "I already know I'm irrational, so no big deal admitting to it. Though, when I can spot specific correctable things... well, correct them."

But YMMV, etc etc etc... But hey, maybe I'll try this too. :)