In the Ultimatum Game, the first player chooses how to split $10 between themselves and the second player, and the second player decides whether to accept the split or reject it—in the latter case, both parties get nothing.  So far as conventional causal decision theory goes (two-box on Newcomb's Problem, defect in Prisoner's Dilemma), the second player should prefer any non-zero amount to nothing.  But if the first player expects this behavior—accept any non-zero offer—then they have no motive to offer more than a penny.  As I assume you all know by now, I am no fan of conventional causal decision theory.  Those of us who remain interested in cooperating on the Prisoner's Dilemma, either because it's iterated, or because we have a term in our utility function for fairness, or because we use an unconventional decision theory, may also not accept an offer of one penny.
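For concreteness, here is a minimal sketch of the payoff structure just described, with two responder strategies written out. The 20% threshold and the strategy names are my own illustrative stand-ins, not anything from the post:

    # Minimal sketch of a single Ultimatum Game round (illustrative only;
    # the responder strategies below are stand-ins, not from the post).

    POT = 10.00  # dollars to be split

    def play_round(offer, responder_accepts):
        """Return (proposer_payoff, responder_payoff) for a given offer."""
        if responder_accepts(offer):
            return POT - offer, offer
        return 0.0, 0.0  # rejection: both parties get nothing

    # A conventional-CDT responder: any positive amount beats nothing.
    cdt_responder = lambda offer: offer > 0

    # A responder with a fairness threshold (a hypothetical 20% floor,
    # roughly matching the rejection behaviour described below).
    fair_responder = lambda offer: offer >= 0.20 * POT

    print(play_round(0.01, cdt_responder))   # (9.99, 0.01): penny accepted
    print(play_round(0.01, fair_responder))  # (0.0, 0.0):   penny rejected
    print(play_round(5.00, fair_responder))  # (5.0, 5.0):   even split accepted

A proposer who expects the first kind of responder has no reason to offer more than a penny; facing the second kind, the payoff-maximising offer jumps up to the threshold.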

    And in fact, most Ultimatum "deciders" offer an even split; and most Ultimatum "accepters" reject any offer less than 20%.  A 100 USD game played in Indonesia (average per capita income at the time: 670 USD) showed offers of 30 USD being turned down, although this equates to two weeks' wages.  We can probably also assume that the players in Indonesia were not thinking about the academic debate over Newcomblike problems—this is just the way people feel about Ultimatum Games, even ones played for real money.

    There's an analogue of the Ultimatum Game in group coordination.  (Has it been studied?  I'd hope so...)  Let's say there's a common project—in fact, let's say that it's an altruistic common project, aimed at helping mugging victims in Canada, or something.  If you join this group project, you'll get more done than you could on your own, relative to your utility function.  So, obviously, you should join.

    But wait!  The anti-mugging project keeps their funds invested in a money market fund!  That's ridiculous; it won't earn even as much interest as US Treasuries, let alone a dividend-paying index fund.

    Clearly, this project is run by morons, and you shouldn't join until they change their malinvesting ways.

    Now you might realize—if you stopped to think about it—that all things considered, you would still do better by working with the common anti-mugging project, than striking out on your own to fight crime.  But then—you might perhaps also realize—if you too easily assent to joining the group, why, what motive would they have to change their malinvesting ways?

    Well...  Okay, look.  Possibly because we're out of the ancestral environment where everyone knows everyone else... and possibly because the nonconformist crowd tries to repudiate normal group-cohering forces like conformity and leader-worship...

    ...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high.  Like a 50-way split Ultimatum game, where every one of 50 players demands at least 20% of the money.
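    To put rough numbers on that analogy (a back-of-the-envelope sketch using only the figures in the sentence above):

        # Back-of-the-envelope check on the 50-way Ultimatum analogy.
        players = 50
        minimum_share_demanded = 0.20   # each player's "joining price", as a fraction of the pot

        total_demanded = players * minimum_share_demanded
        print(f"Total demanded: {total_demanded:.0%} of the pot")      # 1000%
        print("A feasible split exists:", total_demanded <= 1.0)       # False

        # The largest uniform demand that 50 players could all have satisfied:
        print(f"Max satisfiable uniform demand: {1.0 / players:.0%} each")  # 2% each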

    If you think about how often situations like this would have arisen in the ancestral environment, then it's almost certainly a matter of evolutionary psychology.  System 1 emotions, not System 2 calculation.  Our intuitions for when to join groups, versus when to hold out for more concessions to our own preferred way of doing things, would have been honed for hunter-gatherer environments of, e.g., 40 people all of whom you knew personally.

    And if the group is made up of 1000 people?  Then your hunter-gatherer instincts will underestimate the inertia of a group so large, and demand an unrealistically high price (in strategic shifts) for you to join.  There's a limited amount of organizational effort, and a limited number of degrees of freedom, that can go into doing things any one person's way.

    And if the strategy is large and complex, the sort of thing that takes e.g. ten people doing paperwork for a week, rather than being hammered out over a half-hour of negotiation around a campfire?  Then your hunter-gatherer instincts will underestimate the inertia of the group, relative to your own demands.

    And if you live in a wider world than a single hunter-gatherer tribe, so that you only see the one group representative who negotiates with you, and not the hundred other negotiations that have taken place already?  Then your instincts will tell you that it is just one person, a stranger at that, and the two of you are equals; whatever ideas they bring to the table are equal with whatever ideas you bring to the table, and the meeting point ought to be about even.

    And if you suffer from any weakness of will or akrasia, or if you are influenced by motives other than those you would admit to yourself that you are influenced by, then any group-altruistic project which does not offer you the rewards of status and control, may perhaps find itself underserved by your attentions.

    Now I do admit that I speak here primarily from the perspective of someone who goes around trying to herd cats; and not from the other side as someone who spends most of their time withholding their energies in order to blackmail those damned morons already on the project.  Perhaps I am a little prejudiced.

    But it seems to me that a reasonable rule of thumb might be as follows:

    If, on the whole, joining your efforts to a group project would still have a net positive effect according to your utility function—

    (or a larger positive effect than any other marginal use to which you could otherwise put those resources, although this latter mode of thinking seems little-used and humanly-unrealistic, for reasons I may post about some other time)

    —and the awful horrible annoying issue is not so important that you personally will get involved deeply enough to put in however many hours, weeks, or years may be required to get it fixed up—

    —then the issue is not worth you withholding your energies from the project; either instinctively until you see that people are paying attention to you and respecting you, or by conscious intent to blackmail the group into getting it done.

    And if the issue is worth that much to you... then by all means, join the group and do whatever it takes to get things fixed up.

    Now, if the existing contributors refuse to let you do this, and a reasonable third party would be expected to conclude that you were competent enough to do it, and there is no one else whose ox is being gored thereby, then, perhaps, we have a problem on our hands.  And it may be time for a little blackmail, if the resources you can conditionally commit are large enough to get their attention.
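    For what it's worth, the rule of thumb above can be paraphrased as a small decision procedure. This is only a hedged sketch of my reading of it; the parameter names are invented for illustration and not anything the post defines:

        # Hedged paraphrase of the rule of thumb above as a decision procedure.
        # All parameter names are illustrative stand-ins, not definitions from the post.

        def your_price_for_joining(net_effect_of_joining,
                                   willing_to_fix_issue_personally,
                                   group_blocks_a_competent_fix=False):
            """Return a coarse recommendation under the rule of thumb above."""
            if net_effect_of_joining <= 0:
                return "don't join"   # no net win on your own utility function
            if not willing_to_fix_issue_personally:
                return "join; don't withhold your energies over the issue"
            if group_blocks_a_competent_fix:
                return "consider conditional commitment ('blackmail')"
            return "join and fix the issue yourself"

        # The typical case the post complains about: annoyed by the font,
        # but not willing to spend the hours to fix it personally.
        print(your_price_for_joining(net_effect_of_joining=1.0,
                                     willing_to_fix_issue_personally=False))
        # -> "join; don't withhold your energies over the issue"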

    Is this rule a little extreme?  Oh, maybe.  There should be a motive for the decision-making mechanism of a project to be responsible to its supporters; unconditional support would create its own problems.

    But usually... I observe that people underestimate the costs of what they ask for, or perhaps just act on instinct, and set their prices way way way too high.  If the nonconformist crowd ever wants to get anything done together, we need to move in the direction of joining groups and staying there at least a little more easily.  Even in the face of annoyances and imperfections!  Even in the face of unresponsiveness to our own better ideas!

    In the age of the Internet and in the company of nonconformists, it does get a little tiring reading the 451st public email from someone saying that the Common Project isn't worth their resources until the website has a sans-serif font.

    Of course this often isn't really about fonts.  It may be about laziness, akrasia, or hidden rejections.  But in terms of group norms... in terms of what sort of public statements we respect, and which excuses we publicly scorn... we probably do want to encourage a group norm of:

    If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.

    Comments (58)

    I think it's about risk to credibility. If I refuse to join, my reputation is entirely my own; it flatters my fierce independence of mind, in contrast to the sheeple. If I join, anything about the organisation might reflect on me, might be used to mock me. Joining is sticking my neck out; making an excuse not to is always the safer choice.

    So the group norm we really need to establish is that if you want to criticise someone for joining, only a solid case is acceptable; a cheap shot based on joining behaviour should reflect badly on the speaker.

    Exactly what I was thinking.

    (I think this is related to the bizarre phenomenon I occasionally see (particularly regarding the Topic That Must Not Be Named) of people saying they don't believe the conclusion of an argument because they don't think it will convince anybody – rather than because they're not convinced.)

    So the group norm we really need to establish is that if you want to criticise someone for joining, only a solid case is acceptable; a cheap shot based on joining behaviour should reflect badly on the speaker.

    Yes, but perceptions from outside the group are still just as problematic.

    Yes, but perceptions from outside the group are still just as problematic.

    We can only do two things about that, I think: challenge it where we see it, and worry about it less.

    So we want to encourage people to assemble more solid-sounding cases for not joining?

    I think what we want to encourage is that "I haven't the time because I'm working on X" is acceptable; or better yet, silence. "Your website is the wrong font" is what we need to get away from.

    If I'm saying why I shouldn't join, either of "I haven't the time" or silence is fine. If I want to say why you shouldn't join, we should set the bar high, so that if I use joining as a cheap shot against you I look bad. "You joined a website with a stupid font" is what people fear, and so that might be what we need to act against.

    Incidentally, what timezone are you in and when do you sleep? I'm always a bit surprised to get responses from you in the morning...

    Eliezer Yudkowsky does not sleep. He waits.

    Mm. I don't like 'waits'; it sounds like he's wasting his time, and it doesn't have enough LW/OB injokes. Maybe 'He updates priors.'?

    It should be obvious from looking at the timestamps of my comments. I don't sleep.

    I don't sleep.

    This worries me... sleepless in charge of the future of humanity is a serious offence

    That would be a serious offense, but it's a joke in Eliezer's case. A bad joke though, encouraging a serious problem. Losing most of one's productivity through inadequate sleep is a common nerdy error mode.

    Innit. I quite like the sleep deprivation high, but it's not a good state for thinking straight. And I also love sleeping and dreaming.

    There's always polyphasic sleep schedules. Assuming those actually work, which is not at all well-established...

    FWIW, my own experiments with polyphasic sleep have convinced me that they do work, but at the price of a distressing fraction of one's brainpower & creativity.

    As I'm sure you're aware, a lot of anecdotal accounts of polyphasic sleep have suggested no loss of cognitive or creative function (after a 1-3 week adjustment period), but those are difficult to evaluate without an external metric; certain kinds of cognitive impairment can paradoxically make you feel you're thinking more clearly (cf. "I drive better with a couple drinks in me").

    I've been intrigued by the idea but have been held back by issues of work schedule, inability to spare two weeks for adjustment, and lack of a way to clearly measure how stupid it makes me.

    As I'm sure you're aware, a lot of anecdotal accounts of polyphasic sleep have suggested no loss of cognitive or creative function (after a 1-3 week adjustment period), but those are difficult to evaluate without an external metric; certain kinds of cognitive impairment can paradoxically make you feel you're thinking more clearly (cf. "I drive better with a couple drinks in me").

    Oh yes, I was well aware of that. What I did was play 20 rounds of GBrainy a day and look at my scores. (Why GBrainy? Because I didn't have a few score of comparable IQ tests handy, and it was available in Ubuntu, and was reasonably fun to play.) I forget the exact stats, but it wasn't uncommon for my score to drop by 1/3 compared to when I was sleeping normally. What seemed to be most hard-hit was working memory, which really hurt on the mental arithmetic ones.

    (The obvious criticism is that I didn't actually adjust, but I don't think there's any way to prove that either way.)

    No one person is "in charge of the future of humanity". I know you were probably being somewhat flippant, but still.

    Do you mean people are actually saying "I won't join, because your fonts suck"? Or are they just dropping messages to the effect that your fonts suck into your suggestions box, and not joining - and the connection is surmised?

    Good point. Joining a group introduces a level of implied assent to the group's publicly visible aspects. As Eliezer suggests, if there's a net gain from the utility of the positive aspects of the group less the utility of the negative, on the balance it's worth consideration as long as the negatives aren't fundamental issues. The issue is managing that implied assent.

    Perhaps another way to look at this is to explore how to cultivate an individual persona that exhibits independence, but also exhibits a visible capability to deliberately subsume that independence to further group goals, i.e. determine how to show others that you can work with a group while disagreeing on non-core principles. It seems that a great deal of politics involves application of this paradigm.

    I think I agree overall, but I can't help but think that this seems to at least mildly conflict with this.

    That is, it seems to me that arguments analogous to those you made here could be made against supporting whichever 3rd party/4th party/nth party/independent candidate actually matches one's own position.

    In the Free Software movement the typical response to these kinds of demands is pretty simple, "There's the code, please do feel free to go fix it!"

    Likewise in the hippy anarchist movements if you suggest something like a rally or a sit-in the usual answer is "Sounds good, when are you going to organise it?"

    Which I tend to think is pretty much the right answer. If someone can't be bothered to do the things they suggest themselves then I can't really understand why they think they should be able to convince others to do it for them.

    The key is to make the cost for getting in there and starting to do those things really low, making the source available to everyone with a simple download, making it simple to contact the whole group and start organizing.

    Personally I've helped with and joined loads of different cults and projects, even started one, and found that the key to getting me to do stuff to help at least is to make it both simple to do and obvious how, to ensure that I realize I don't need permission to go do something.

    You've said yourself that you were surprised at how much of a barrier even having to send an email to OB was, compared to having a big "submit your article here" button on Lesswrong.

    I have vague plans to take a look at the source code for less-wrong and fix a couple of things that are annoying me, but it won't be till after the Subgenius show I'm organizing at least, when I have a bit more time.

    In the Free Software movement the typical response to these kinds of demands is pretty simple, "There's the code, please do feel free to go fix it!"

    One could argue that a large portion of the success of free software is because the ability to fork means that it is less damaged by internal dispute than other collective efforts.

    The ability to fork means that internal disputes tend to cut the number of people working on it in half. Sure there's two projects now, but that is not twice as good as one project.

    In practice, one fork gets an advantage, more people switch over to work on it, and that fork decisively wins with almost all the original contributors. Just like Bitcoin.

    In the Free Software movement the typical response to these kinds of demands is pretty simple, "There's the code, please do feel free to go fix it!"

    Except in those situations where the response is, "I know you fixed a critical bug, but I've simply reverted all of your edits because you used the wrong indent style"

    Given that the only indenting style 99%+ of programmers agree on is "the same style that the rest of the project uses", failing to adhere to the style used is a fairly egregious faux pas and possibly indicative of a disregard for standards and/or lack of attention to context within the program, raising red flags about possible bugs introduced by the patch.

    In any case, the correct response would be to reformat your code (any sane code editor can do this with one command) and resubmit the patch.

    The point is not that free software programmers specifically refuse to accept wrongly indented code, the point is that they often refuse to accept code based on wholly arbitrary reasons. Arguing that indentation really isn't an arbitrary reason is fighting the hypothetical; replace it in your mind with something that is.

    There's also the "we won't accept this bug report unless it fits this list of arbitrary requirements" gambit. (If you actually do manage to submit the bug report following all the requirements, it will still get ignored anyway; but doing it this way they can artificially deflate the number of unfixed bugs and blame things on the user for not following the directions.)

    Newcomblike problems

    I never liked the comparison of the Prisoner's Dilemma with Newcomb, and the Ultimatum Game seems even less like Newcomb.

    If you're up against an agent from a species that you know has evolved traits like fairness and spite, then the rational course of action is certainly not to offer a penny. That should be true on any sane theory of rational action.

    (For the record, I one-box on Newcomb, defect on the true Prisoner's Dilemma (unless the opponent is somehow using the very same decision-making process as me), and offer a fair deal against a human in the Ultimatum Game.)

    If you're up against an agent from a species that you know has evolved traits like fairness and spite, then the rational course of action is certainly not to offer a penny. That should be true on any sane theory of rational action.

    But is the rational course of action to accept a penny?

    The article mentions opportunity cost, but punts the issue into the long grass:

    (or a larger positive effect than any other marginal use to which you could otherwise put those resources, although this latter mode of thinking seems little-used and humanly-unrealistic, for reasons I may post about some other time)

    I agree that this mode of thinking gets little explicit use, but I don't think that people can do well in real life without it, so I think that we all tend to bodge up substitutes.

    The mode that comes naturally is to undertake courses of action that we anticipate having positive outcomes and to reject courses of action that we anticipate having negative outcomes. We compare against zero.

    Sometimes we have a choice of two good options and the uncomfortable realisation that we ought not to divide our efforts between them. Perhaps we recognise a convex situation in which half of each achieves less than all of the lesser. Perhaps it is simply clear that one option is better than the other. If we are comfortable with letting opportunity costs guide our actions we probably get on with the preferred action without commenting on the alternative.

    What though if we are uncomfortable with opportunity costs? We feel bad about neglecting to do something positive, and we can assuage this guilt by criticising the second-best option, denigrating its merits below zero. This permits us to reject the second-best option using our ordinary, lame comparison against zero.

    One cause of the negativity that stops our kind cooperating is that many of us have other things to do and are not comfortable with opportunity costs. Consequently, we cannot just get on with our preferred plan of action but must run down the alternative. We can cure this by becoming more comfortable with opportunity costs.

    That doesn't explain people hanging back from collective action on altruistic causes that have come top of their preference list, so it cannot be the sole answer, but I still wonder if it is the largest part of the answer.

    A joking objection: somebody who refuses to join over the sans-serif issue would be harmful to the cause if they joined anyway.

    A serious objection: it feels wrong and dangerous to join a group that you don't support 100% at the time of joining. This feeling is adaptively correct because group efforts often drift or get hijacked, and a group that's a little out of tune with you is more likely to drift away over time. Especially if the group is new.

    I suspect that this sort of drift would be less of a concern if we were less prone to staying in organizations that we wouldn't join.

    A group that's a little out of tune with you may well drift towards your position over time—especially if the group is new or small, and you make an effort to steer the boat.

    I like joining! I just don't like actually doing stuff. ;)

    That's fine, your money is useful too!

    In general I agree with this post, but I am reluctant to say so because we are both in a position that gives us incentives for wishing that the cats would just stop stalking and listen to the established cat-herd. I don't agree about the Dunbar issue.
    Few start-ups need 50 people but very few of the people in our set have the cooperative ability to work well in a start-up. Very few people outside our set have that ability either.

    I tried searching the post for both "Dunbar" and "150" and couldn't find either - could you clarify which bit of this post you're addressing with that? Thanks.

    I thought that might be the Dunbar in question (thus my reference to "150") but I still don't know what the point of disagreement is.

    And if the group is made up of 1000 people? Then your hunter-gatherer instincts will underestimate the inertia of a group so large, and demand an unrealistically high price (in strategic shifts) for you to join.

    I would say an equally if not more important issue is that your hunter-gatherer instincts underestimate the benefits (to you) of joining the thousand-person organization, presuming that they can't be much greater than the benefits of joining a forty-person one.

    Could be, but there are diseconomies of scale as well as economies of scale in large organizations.

    On a related note, the value of communications networks (email, IM, telephones, blogs, the post office) seems to increase on the order of n log(n) with respect to the number of users. One may even get negative value, because after something reaches a certain level of popularity, spammers start flocking to it and driving out legitimate communication. Consider all the junk mail you get - and note that junk faxes are illegal!
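    As a toy illustration of the n log(n) claim and the spam point above (the spam term and every constant are invented purely for illustration, not taken from the comment or any study):

        # Toy model of the claim above: network value grows roughly like n*log(n),
        # but past some popularity threshold spam imposes a per-user cost.
        # Every constant here is made up for illustration.
        import math

        def network_value(n, spam_threshold=10_000, spam_cost_per_user=20.0):
            base = n * math.log(n) if n > 0 else 0.0
            spam = max(0, n - spam_threshold) * spam_cost_per_user
            return base - spam

        for n in (100, 1_000, 10_000, 100_000):
            print(f"{n:>7} users -> value ~ {network_value(n):>12,.0f}")
        # With these made-up numbers, each marginal user beyond the spam threshold
        # destroys more value than they add, and total value eventually goes negative.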

    So I suspect our instincts are just plain noise when it comes to the value of joining large organizations.

    I agree strongly with this post. I have experienced similar objections in various projects I have tried to organize.

    What's the converse: what behaviour is needed, from the point of view of the group, to lower the costs of joining?

    One of the dark arts that I've practiced occasionally in committee meetings was to get all the "font changers" to put forward their suggestions, listen carefully, praise and accept the ideas, and leave the meeting with all my plans accepted but in a different font and grammar.

    For a rationalist group, this isn't ideal, and you don't want to go around accepting millions of random suggestions. But you could write back to the font changers saying: if you are willing to join the group, I will change the font (or at least have a continued discussion with you about which font is ideal). Maybe another norm should be:

    If the efforts involved in addressing the issue (either through changing it or discussing it) are less than the benefit of bringing the person on board, and the suggestions are not made in bad faith, then you should bring them on board.

    Of course, there's no reason that Eliezer specifically needs to address the issue personally - some delegation would be fine.

    The problem is that it doesn't scale. Would work fine for seducing 7 people, not so good for 500.

    See also Parkinson's Law of Triviality (aka "What colour is your bikeshed?")


    Now I do admit that I speak here primarily from the perspective of someone who goes around trying to herd cats; and not from the other side as someone who spends most of their time withholding their energies in order to blackmail those damned morons already on the project. Perhaps I am a little prejudiced.

    Somehow THIS inspires me to be a cat herder myself. We're talking in metaphor, right?

    You should first look whether there's some other Canadian mugging victim support group that does not keep their funds invested in a money market fund, before concluding that joining the first group and working on your own are the only possibilities.

    Doesn't this line of reasoning apply equally well to the person running what he/she wishes to be a collective effort? If so, it may cancel itself out.

    Is the person who refrains from joining, doing so because he doesn't see the collective compromising enough with his aims, or because he doesn't see the collective compromising in general? (Or even as being a "collective"?)

    I'm surprised to see you post this, Eliezer, because my recollection is that you recently said you don't want people to try to help you in your work, because only 1 in 100,000 is potentially useful, even supposing you could figure out a task for them to do.

    This seems mainly to be about the importance of compromise: that something is better than nothing. Refusing only makes sense when there are "multiple games", like the Prisoner's Dilemma; if you can't find an institution that is similar enough, then don't do it.

    But I think there is some risk to joining a cause that "seems" worth it. (I can't find it, but) I remember an article on LessWrong about the dangers of signing petitions, which can influence your beliefs significantly despite the smallness of the action.

    Um... Eliezer? I just joined (you know me as Vladimir Slepnev from OB) and already have a suggestion. Wouldn't it be better to remove "Top contributors" from the right section? It's nastily perverting my incentives; I can't speak for the others, but I'm sure I'm not alone. Of course this isn't a blackmail attempt :-)

    Does "Vote down" on LW mean "not interesting enough to go to the front page"? Because that's how I feel about this. On the other hand on Reddit "Vote down" tends to mean "Doesn't agree with the groupthink", so I'm very reluctant to use it.

    Voting should have nothing to do with any groupthink. Vote down for "I'd like the time it took to read that back." Vote up for "if that writing were removed from my memory, I would want to take the time to read it again."

    Maybe it shouldn't, but on reddit, and before that on slashdot, and everywhere else I've seen, that's how it ended up being used: Up = agree, Down = disagree. Now I want my time back, so down.

    on reddit, and before that on slashdot, and everywhere else I've seen, that's how it ended up being used: Up = agree, Down = disagree.

    The hope is that we'll be able to avoid this. For myself, I'm in the habit of upvoting well-argued comments that I nevertheless disagree with.

    The hope is that we'll be able to avoid this. For myself, I'm in the habit of upvoting well-argued comments that I nevertheless disagree with.

    I would like to believe that's what I'm doing, but I think I'm fooling myself. It's enough if our thresholds for upvoting and downvoting differ between comments we agree with and comments we disagree with, something like:

    • Somewhat annoying comment you agree with = ignore
    • Somewhat annoying comment you disagree with = downvote
    • Somewhat smart comment you agree with = upvote
    • Somewhat smart comment you disagree with = ignore

    As most comments are in this category (not completely brilliant, but not complete rubbish either), this is quite close to upvote on agree, downvote on disagree.

    In principle, I suppose there could be multi-dimensional voting, with at least different dimensions for degree of agreement, for how well-argued a comment is, and for degree of relevance to the topic (or at least sub-thread). Of course, if one goes far enough down that road just choosing the multidimensional vote starts to become an energy drain in and of itself... (www.ted.com has at least 8 dimensions for rating their talks - which is enough to dissuade me from rating them...)
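    A minimal sketch of what such a multi-dimensional vote record might look like (the dimension names come from the comment above; everything else is invented for illustration):

        # Minimal sketch of a multi-dimensional vote record (illustrative only).
        from dataclasses import dataclass

        @dataclass
        class CommentVote:
            agreement: int         # -1, 0, or +1: do you agree with the conclusion?
            argument_quality: int  # how well-argued is it, regardless of agreement?
            relevance: int         # how relevant is it to the topic or sub-thread?

        # The comment's own worry, in code form: filling in three fields per vote
        # is already noticeably more effort than a single up/down click.
        vote = CommentVote(agreement=-1, argument_quality=+1, relevance=+1)
        print(vote)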


    Vote down means whatever you want it to mean. Use it to maximise your own utility.

    Speaking as a cat, there are a lot of people who would like to herd me. What makes your project higher-priority than everyone else's?

    "Yes, but why bother half-ass involvement in my group?" Because I'm still interested in your group. I'm just also interested in like 50 other groups, and that's on top of the one cause I actually prefer to specialize with.

    ...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high.

    People in the atheist/libertarian/technophile/sf-fan/etc cluster obviously have a ton of different interests, and those interests are time/energy exclusive. Why shouldn't they have high requirements for yet another interest trying to add itself to the cluster?

    Those of us who remain interested in cooperating on the Prisoner's Dilemma, either because it's iterated, or because we have a term in our utility function for fairness, or because we use an unconventional decision theory, may also not accept an offer of one penny.

    OK. If someone is presenting you with this type of ultimatum, what kind of offer would you accept?