Epistle to the New York Less Wrongians

(At the suggestion and request of Tom McCabe, I'm posting the essay that I sent to the New York LW group after my first visit there, and before the second visit:)

Having some kind of global rationalist community come into existence seems like a quite extremely good idea.  The NYLW group is the forerunner of that, the first group of LW-style rationalists to form a real community, and to confront the challenges involved in staying on track while growing as a community.

"Stay on track toward what?" you ask, and my best shot at describing the vision is as follows:

"Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us."

(That last part is something I only realized was Really Important after visiting New York.)

Michael Vassar says he's worried that you might be losing track of the "rationality" and "world optimization" parts of this - that people might be wondering what sort of benefit "rationality" delivers as opposed to, say, paleo dieting.  (Note - less worried about this now that I've met the group in person.  -EY.)

I admit that the original Less Wrong sequences did not heavily emphasize the benefits for everyday life (as opposed to solving ridiculously hard scientific problems).  This is something I plan to fix with my forthcoming book - along with the problem where the key info is scattered over six hundred blog posts that only truly dedicated people and/or serious procrastinators can find the time to read.

But I really don't think the whole rationality/fun association you've got going - my congratulations on pulling that off, by the way, it's damned impressive - is something that can (let alone should) be untangled.  Most groups of people capable of becoming enthusiastic about strange new nonconformist ways of living their lives would have started trying to read each other's auras by now.  Rationality is the master lifehack which distinguishes which other lifehacks to use.

The way an LW-rationality meetup usually gets started is that there is a joy of being around reasonable people - a joy that comes, in a very direct way, from those people caring about what's true and what's effective, and being able to reflect on more than their first impulse to see whether it makes sense.  You wouldn't want to lose that either.

But the thing about effective rationality is that you can also use it to distinguish truth from falsehood, and realize that the best methods aren't always the ones everyone else is using; and you can start assembling a pool of lifehacks that doesn't include homeopathy.  You become stronger, and that makes you start thinking that you can also help other people become stronger.  Through the systematic accumulation of good ideas and the rejection of bad ideas, you can get so awesome that even other people notice, and this means that you can start attracting a new sort of person, one who starts out wanting to become awesome instead of being attracted specifically to the rationality thing.  This is fine in theory, since indeed the Art must have a purpose higher than itself or it collapses into infinite recursion.  But some of these new recruits may be a bit skeptical, at first, that all this "rationality" stuff is really contributing all that much to the awesome.

Real life is not a morality tale, and I don't know if I'd prophesy that the instant you get too much awesome and not enough rationality, the group will be punished for that sin by going off and trying to read auras.  But I think I would prophesy that if you got too large and insufficiently reasonable, and if you lost sight of your higher purposes and your dreams of world optimization, the first major speedbump you hit would splinter the group.  (There will be some speedbump, though I don't know what it will be.)

Rationality isn't just about knowing about things like Bayes's Theorem.  It's also about:

* Saying oops and changing your mind occasionally.

* Knowing that clever arguing isn't the same as looking for truth.

* Actually paying attention to what succeeds and what fails, instead of just being driven by your internal theories.

* Reserving your self-congratulations for the occasions when you actually change a policy or belief, because while not every change is an improvement, every improvement is a change.

* Self-awareness - a core rational skill, but at the same time, a caterpillar that spent all day obsessing about being a caterpillar would never become a butterfly.

* Having enough grasp of evolutionary psychology to realize that this is no longer an eighty-person hunter-gatherer band and that getting into huge shouting matches about Republicans versus Democrats does not actually change very much.

* Asking whether the beliefs you most cherish shouting about actually control your anticipations - whether they mean anything - never mind whether their predictions are actually correct.

* Understanding that correspondence bias means that most of your enemies are not inherently evil mutants but rather people who live in a different perceived world than you do.  (Albeit of course that some people are selfish bastards and a very few of them are psychopaths.)

* Being able to accept and consider advice from other people who think you're doing something stupid, without lashing out at them; and the more you show them this is true, and the more they can trust you not to be offended if they're frank with you, the better the advice you can get.  (Yes, this has a failure mode where insulting other people becomes a status display.  But you can also have too much politeness, and it is a traditional strength of rationalists that they sometimes tell each other the truth.  Now and then I've told college students that they are emitting terrible body odors, and the reply I usually get is that they had no idea and I am the first person ever to suggest this to them.)

* Comprehending the nontechnical arguments for Aumann's Agreement Theorem well enough to realize that when two people have common knowledge of a persistent disagreement, something is wrong somewhere - not that you can necessarily do better by automatically agreeing with everyone who persistently disagrees with you; but still, knowing that ideal rational agents wouldn't just go around yelling at each other all the time.

* Knowing about scope insensitivity and diminishing marginal returns doesn't just mean that you donate charitable dollars to "existential risks that few other people are working on", instead of "The Society For Curing Rare Diseases In Cute Puppies".  It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn't in front of you and you don't have the unpleasant memory of exerting willpower not to eat it.  (Seriously, I didn't emphasize all the practical applications of every cognitive bias in the Less Wrong sequences but there are a lot of things like that.)

* The ability to dissent from conformity; realizing the difficulty and importance of being the first to dissent.

* Knowing that to avoid pluralistic ignorance everyone should write down their opinion on a sheet of paper before hearing what everyone else thinks.
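The Aumann point a few bullets up has a simple quantitative core that can be made concrete. Here is a minimal sketch - the coin biases, flip sequences, and agent names are all illustrative assumptions of mine, not anything from the post - showing that two agents who share a prior and pool the evidence behind their differing posteriors end up agreeing exactly, instead of persistently disagreeing:

```python
from math import prod

# Toy illustration (hypothetical numbers): a coin is either "biased-up"
# (P(heads) = 0.6) or "biased-down" (P(heads) = 0.4), with a 50/50 common
# prior.  Each agent privately observes some flips.

P_HEADS = {"up": 0.6, "down": 0.4}

def likelihood(flips, hypothesis):
    """P(observed flips | hypothesis); flips given as 'H'/'T' characters."""
    p = P_HEADS[hypothesis]
    return prod(p if f == "H" else 1 - p for f in flips)

def posterior_up(flips, prior_up=0.5):
    """P(biased-up | flips), by Bayes's Theorem."""
    num = prior_up * likelihood(flips, "up")
    den = num + (1 - prior_up) * likelihood(flips, "down")
    return num / den

alice_flips = "HHTH"   # Alice's private evidence
bob_flips   = "TTHT"   # Bob's private evidence

# Before talking, their posteriors differ:
p_alice = posterior_up(alice_flips)            # about 0.69
p_bob   = posterior_up(bob_flips)              # about 0.31

# Once they pool the evidence behind those posteriors, they agree exactly:
p_shared = posterior_up(alice_flips + bob_flips)   # 0.5 on these flips

print(f"Alice alone: {p_alice:.3f}, Bob alone: {p_bob:.3f}, pooled: {p_shared:.3f}")
```

The theorem itself is about common knowledge of posteriors rather than sharing raw evidence, so this sketch only illustrates the direction of the result: ideal agents with a common prior do not sustain a disagreement once the information behind it is on the table.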

But then one of the chief surprising lessons I learned, after writing the original Less Wrong sequences, was that if you succeed in teaching people a bunch of amazing stuff about epistemic rationality, this reveals...

(drum roll)

...that, having repaired some of people's flaws, you can now see more clearly all the other qualities required to be awesome.  The most important and notable of these other qualities, needless to say, is Getting Crap Done.

(Those of you reading Methods of Rationality will note that it emphasizes a lot of things that aren't in the original Less Wrong, such as the virtues of hard work and practice.  This is because I have Learned From Experience.)

Similarly, courage isn't something I emphasized enough in the original Less Wrong (as opposed to MoR), but the thought has since occurred to me that most people can't do things which require even small amounts of courage.  (Leaving NYC, I had two Metrocards with small amounts of remaining value to give away.  I felt reluctant to call anything out, or to approach anyone and offer them a free Metrocard, and I thought to myself, well, of course I'm reluctant; this task requires a small amount of courage - and then I asked three times before I found someone who wanted them.  Not, mind you, that this was an important task in the grand scheme of things - just a little bit of rejection therapy, a little bit of practice in doing things which require small amounts of courage.)

Or there's Munchkinism, the quality that lets people try out lifehacks that sound a bit weird.  A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells.  Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else.  Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death.  Or figures out how to build the real-life version of the cycle of infinite wish spells.  Magic: The Gathering is a Munchkin game, and MoR is a Munchkin story.

It would be really awesome if the New York Less Wrong group figures out how to teach its members hard work and courage and Munchkinism and so on.

It would be even more awesome if you could muster up the energy to track the results in any sort of systematic way, so that you can do small-N Science (based on Bayesian likelihoods, thank you, not the usual statistical-significance bullhockey) and find out how effective different teaching methods are, or track the effectiveness of other lifehacks as well - the Quantified Self road.  This, of course, would require Getting Crap Done; but I do think that in the long run, whether we end up with really effective rationalists is going to depend a lot on whether we can come up with evidence-based metrics for how well a teaching method works, or whether we stay stuck in the failure mode of psychoanalysis, where we just go around trying things that sound like good ideas.
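As a concrete sketch of what small-N testing by Bayesian likelihoods could look like - the particular hypotheses, the 0.7 effect size, and the trial counts below are illustrative assumptions of mine, not anything the post prescribes - you can ask how strongly a handful of yes/no trials favors "the technique works" over "no effect" via a simple Bayes factor, rather than asking whether N is large enough for a significance test:

```python
from math import comb

def binom_likelihood(successes, trials, p):
    """P(successes | trials, per-trial success probability p)."""
    return comb(trials, successes) * p**successes * (1 - p)**(trials - successes)

def bayes_factor(successes, trials, p_works=0.7, p_null=0.5):
    """Likelihood ratio: 'technique works' (p_works) vs. 'no effect' (p_null).

    The effect size p_works = 0.7 is an illustrative assumption; in a real
    self-experiment you would choose it (or average over a distribution of
    effect sizes) to match the effect you actually care about detecting.
    """
    return (binom_likelihood(successes, trials, p_works)
            / binom_likelihood(successes, trials, p_null))

# E.g. a teaching method "succeeds" in 8 of 10 small trials:
bf = bayes_factor(8, 10)
print(f"Bayes factor (works vs. null): {bf:.2f}")
```

On these made-up numbers the evidence is only modest (a Bayes factor of roughly five), which is itself the useful lesson: the likelihood ratio tells you directly how much your small sample should move you, with no arbitrary significance threshold involved.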

And of course it would be really truly amazingly awesome if some of you became energetic gung-ho intelligent people who can see the world full of low-hanging fruit in front of them, who would go on to form multiple startups which would make millions and billions of dollars.  That would also be cool.

But not everyone has to start a startup, not everyone has to be there to Get Stuff Done, it is okay to have Fun.  The more of you there are, the more likely it is that any given five of you will want to form a new band, or like the same sort of dancing, or fall in love, or decide to try learning meditation and reporting back to the group on how it went.  Growth in general is good.  Every added person who's above the absolute threshold of competence is one more person who can try out new lifehacks, recruit new people, or just be there putting the whole thing on a larger scale and making the group more Fun.  On the other hand, there is a world out there to optimize, and also the scaling of the group is limited by the number of people who can be organizers (more on this below).  There's a narrow path to walk between "recruit everyone above the absolute threshold who seems like fun" and "recruit people with visibly unusually high potential to do interesting things".  I would suggest making an extra effort to recruit people who seem to have high potential, but nothing like a hard rule.  But if someone not only seems to like explicit rationality and want to learn more, but also seems like a smart executive type who gets things done, perhaps their invitation to a meetup should be prioritized?

So that was the main thing I had to say, but now onward to some other points.

A sensitive issue is what happens when someone can't reach the absolute threshold of competence.  I think the main relevant Less Wrong post on this subject is "Well-Kept Gardens Die By Pacifism."  There are people who cannot be saved - or at least people who cannot be saved by any means currently known to you.  And there is a whole world out there to be optimized; sometimes even if a person can be saved, it takes a ridiculous amount of effort that you could better use to save four other people instead.  We've had similar problems on the West Coast - I would hear about someone who wasn't Getting Stuff Done, but who seemed to be making amazing strides on self-improvement, and then a month later I would hear the same thing again, and isn't it remarkable how we keep hearing about so much progress but never about amazing things the person gets done -

(I will parenthetically emphasize that every single useful mental technique I have ever developed over the course of my entire life has been developed in the course of trying to accomplish some particular real task and none of it is the result of me sitting around and thinking, "Hm, however shall I Improve Myself today?"  I should advise a mindset in which making tremendous progress on fixing yourself doesn't merit much congratulation and only particular deeds actually accomplished are praised; and also that you always have some thing you're trying to do in the course of any particular project of self-improvement - a target real-world accomplishment to which your self-improvements are a means, not definable in terms of any personality quality unless it is weight loss or words output on a writing project or something else visible and measurable.)

- and the other thing is that trying to save people who cannot be saved can drag down a whole community, because it becomes less Fun, and that means new people don't want to join.

I would suggest having a known and fixed period of time, like four months, that you are allowed to spend on trying to fix anyone who seems fixable, and if after that their outputs do not exceed their inputs and they are dragging down the Fun level relative to the average group member, fire them.  You could maybe have a Special Committee with three people who would decide this - one of the things I pushed for on the West Coast was to have the Board deciding whether to retain people, with nobody else authorized to make promises.  There should be no one person who can be appealed to, who can be moved by pity and impulsively say "Yes, you can stay."  Short of having Voldemort do it, the best you can do to reduce pity and mercy is to have the decision made by committee.

And if anyone is making the group less Fun or scaring off new members, and yes this includes being a creep who offends potential heroine recruits, give them an instant ultimatum or just fire them on the spot.

You have to be able to do this.  This is not the ancestral environment where there's only eighty people in your tribe and exiling any one of them is a huge decision that can never be undone.  It's a large world out there and there are literally hundreds of millions of people whom you do not want in your community, at least relative to your current ability to improve them.  I'm sorry but it has to be done.

Finally, if you grow much further it may no longer be possible for everyone to meet all the time as a group.  I'm not quite sure what to advise about this - splitting up into meetings on particular interests, maybe, but it seems more like the sort of thing where you ought to discuss the problem as thoroughly as possible before proposing any policy solutions.  My main advice is that if there's any separatish group that forms, I am skeptical about its ability to stay on track if there isn't at least one high-level epistemic rationalist executive type to organize it, someone who not only knows Bayes's Theorem but who can also Get Things Done.  Retired successful startup entrepreneurs would be great for this if you could get them, but smart driven young people might be more mentally flexible and a lot more recruitable, if far less experienced.  In any case, I suspect that your ability to grow is going to be ultimately limited by the percentage of members who have the ability to be organizers, and the time to spend organizing, and who've also leveled up into good enough rationalists to keep things on track.  Implication: make an extra effort to recruit people who can become organizers.

And whenever someone does start doing something interesting with their life, or successfully recruits someone who seems unusually promising, or spends time organizing things, don't forget to give them a well-deserved cookie.

Finally, remember that the trouble with the exact phrasing of "become awesome" - though it does nicely for a gloss - is that Awesome isn't a static quality of a person.  Awesome is as awesome does.


> And if anyone is making the group less Fun or scaring off new members, and yes this includes being a creep who offends potential heroine recruits, give them an instant ultimatum or just fire them on the spot.

At the LW meetups I've been to so far, I've seen what I would call 'swarming' around each female present. It doesn't seem malicious, but they each end up being in the center of a group...

I guess this is something for other people to corroborate, I'm just a lonely data point waiting for my line.

Edit - please disregard this post

I don't think we've seen this in London, but obviously our actual female participants would be better placed to comment.

That's good to hear - thanks! Very keen to hear feedback on this sort of thing.

Hasn't happened at the (4) meetups I've been to.

I'll confirm that this phenomenon exists; I routinely participate in such "swarms". I do not know to what extent this is actually a problem, though.

I've been to 1 meetup with 5 participants, of which one was female (and married to another participant). So I don't really have much relevant data yet. My guess is if this sort of thing shows up, it happens with larger meetup sizes.

I'm not sure it's avoidable, though. I think the best improvement vector is to try to decrease the creepiness without trying to decrease the interest.

How many meetups have you been to and seen this? Do you think it produces negative effects; if so, what?

More data, yum yum. :D

I can postulate, based on past experience (not with LW meetups). It depends on the person. Some people like a lot of attention and find it energising... some people don't and can find it overwhelming and exhausting. If a person finds it overwhelming and exhausting they may be turned off coming next time.

Should this be added to the "community" sequence?

> Being able to accept and consider advice from other people who think you're doing something stupid, without lashing out at them; and the more you show them this is true, and the more they can trust you not to be offended if they're frank with you, the better the advice you can get. (Yes, this has a failure mode where insulting other people becomes a status display. [...])

It also has a more subtle and counterintuitive failure mode. People can derive status and get much satisfaction by handing out perfectly honest and well-intentioned advice, if this advice is taken seriously and followed. The trouble is, their advice, however honest, can be a product of pure bias, even if it's about something where they have an impressive track record of success.

Moreover, really good and useful advice about important issues often has to be based on a no-nonsense cynical analysis that sounds absolutely awful when spelled out explicitly.  Thus, even the most well-intentioned people will usually be happier to concoct nice-sounding rationalizations and hand out advice based on them, thus boosting their status not just as esteemed advice-givers, but also as expounders of respectable opinion.  In the end, you may well be better off with a rash "who is he to tell me what to do" attitude than with a seemingly rational, but in fact dangerously naive, reasoning that you should listen to people when they are clearly knowledgeable and well-intentioned.  (And yes, I did learn this the hard way.)

Things are of course different if you're lucky enough to know people who have the relevant knowledge and care about you so much that they'll really discard all the pious rationalizations and make a true no-nonsense assessment of what's best for you.  You can expect this from your parents and perhaps other close relatives; but otherwise, you're lucky if you have such good and savvy friends.

You can take any area of life where you could be faced with tough and uncertain choices, where figuring out the optimal behavior can't be reduced to a tractable technical problem, and where the truth about how things really work is often very different from what people say about it in public (or even in private). For example, all kinds of tough choices and problems in career, education, love, investment, business, social relations with people, etc., etc.

In all these areas, it may happen that you're being offered advice by someone who is smart and competent, has a good relevant track record, and appears to be well-intentioned and genuinely care about you. My point is that even if you're sure about all this, you may still be better off dismissing the advice as nonsense. Accordingly, when you dismiss people's advice in such circumstances with what appears as an irrationally arrogant attitude, you may actually be operating with a better heuristic than if you concluded that the advice must be good based on these factors and acted on it. Even if the advice-giver has some stake in your well-being, it actually takes a very large stake to motivate them reliably to cut all bias and nonsense from what they'll tell you.

Of course, the question is how to know if you're being too arrogant, and how to recognize real good advice among the chaff. To which there is no easy and simple answer, which is one of the reasons why life is hard.

I agree. I think that the grandparent is useful, but I'm a bit fuzzy on exactly what mental levers it's telling me to pull and why to pull them.

The problem of getting good data on how other people see you is a topic I've been thinking about a lot lately. I'd love to see a top-level post on this, because I think it's pretty essential for many areas of self-improvement, and I'd write it myself but I don't think I have a clear enough idea of the problems involved. I didn't think about this particular failure mode, for example.

Alternatively, are there any other resources that can help me get this information?

On the off chance this will be spotted in the sidebar: I'm a couple years late responding, but has anyone written anything useful on this subject? Is anyone in a position to do so?

Getting a correct model of others' models of oneself, and knowing it's correct, seems ridiculously difficult to me.

I agree that this is a difficult problem.  It seems to be that way because the incentive structure is misaligned for truth.  The costs of giving someone unbiased feedback are mostly paid by the giver of the feedback, but the benefits are mostly received by the receiver.  Thus, this is very difficult to get from people who are not close friends and allies - but those people are probably the ones who have an above-average view of you.

Thus, one of the low-hanging fruit here is rewarding negative feedback, which is in many ways more useful than positive feedback (and yet most people don't reward it).

It may be useful to ask people you trust questions like "How do you think other people view me?"  The deflection to other people makes it easier for them to voice their personal concerns under plausible deniability, as well as getting at the questions "How do I present myself to others?" and "What features of my personality and behavior are most salient?"

"Munchkinism" already has a commonly-known name. It's called hacking.

Yes, let's please call it "hacking," or anything other than "Munchkinism."

Feel free to make concrete alternative suggestions. "Hacking" is taken.

Isn't 'munchkin' sort of taken too? The impression I got from a little googling is that the word as used by RPG players is a derogatory term. Calling someone that isn't a compliment on their cleverness in exploiting the mechanics but mockery for missing much of the point of the game and being an annoyance to other players.

If that's true then calling cryonics munchkinism would sound like agreement with people who say that death gives meaning to life or something like that.

> Isn't 'munchkin' sort of taken too? The impression I got from a little googling is that the word as used by RPG players is a derogatory term.

The core of the insult is in the framing of the behavior as a negative (and an assertion of higher status of the speaker). The actual descriptive element of the behavior is a pretty close match to what we are talking about. This is perhaps enough of a reason to discard the word and create a synonym that doesn't have the negative association.

The problem with the min-maxing munchkin - or rather, the thing that provokes the munchkin-callers into insulting them - is that the munchkin thinks Role Playing Games are about actually taking on the role and doing what the character should do.  The whole @#%@# world is at stake, so you learn what you need to about the physics and the current challenges.  You work out the best way to eliminate the threat and, if possible, ensure a literal 'happily ever after' scenario.  Then you gain the power necessary to ensure that your chance of success is maximised.

But taking on the role of the character is not, the name-caller implies, what the game is about.  It is about working out what the game master expects, working out your own status within the group, and achieving a level of success that matches your station (and no higher).  The incentive for the speaker is to secure their higher position in the hierarchy and maintain their own behavior as the accepted model of sophistication.  Object-level actions are to be deprecated in favor of the universal game of social politics.

Many of the same behaviours and judgments apply to life as well. Optimising for whatever your own preferences are as an alternative to doing what you think you are supposed to do. Optimising your behavior for status gain only if and when status gain is what you want or need.

I don't play roleplaying games myself.  I much prefer cards, board games, or games that are physical.  Both the social aspect and the games themselves are far more fun, and the roleplaying is just slow, with the 'role' of an individual borderline insulting.  If I want to socialise I'll socialise.  If I want to hear someone else's story I'll read a book.  If I want to play a roleplaying game I'll download one on the computer.  If I want to guess the boss's password I'll get a job - and I damn well expect to be paid well for it.

I may be a little biased.  The last time I did, in fact, play a group RPG, one of my companions thought it was ok to steal something of mine.  I gave her fair warning and plenty of chances to comply, but I ended up having to fight both her and the two allies she recruited (while the other two in the group stayed out of it).  Once I defeated them I took my pick of their stuff by way of appropriate reprisal.  It's exactly what the character I was roleplaying would have done, and I wouldn't roleplay a pushover, but at the same time the overall social experience isn't especially rewarding.  I haven't once had to beat the crap out of three of my friends in the real world, which suits me just fine! :)

There is no problem with "Munchkinism."  The problem is that old RPGs' rules imply a poorly designed tactical battle-simulation game with some elements of strategy (witness the lack of challenge once the system is fully understood), while the advertising implies a social-interaction and story-telling game without giving the necessary rules to support it.  Thus different people think they're playing different games together, and social interaction devolves into what people imagine they would do given a hypothetical situation without consequences (at least until the consequences are made explicit, violating their expectations, as you note in your example).

> devolves into what people imagine they would do given a hypothetical situation without consequences

Put all my points into charisma and charm skills and go find me some wenches? Oh, you mean saving the world. Got it.

Actually that is another problem with RPG designs. There are social skills and stats provided but they are damn near pointless in practice. Even when you want to role play a lovable rogue who can charm, manipulate and deceive his way out of problems you may as well put your skills into battle axes. Because the only person that you need to use social skills on is the DM and that is an out of character action. Unless you somehow manage to find a DM who considers the interaction to be about the character trying to persuade an NPC and not the player trying to persuade him and just lets the player roll some dice already.

"What is the skill check for "seduce the maidservant and get her to show you the secret entrance to the castle"?"
... "No, I don't need to tell you what lines I'm going to use... since I would just have to lie so as to not offend the sensibilities of the company. Dice. I want to use dice and charm wenches!"
... "What? Oh, this is just too much hassle. Let's do what we know works. Guys, you take the guard on the left and I'll take the guard on the right. Rescue the princess and kill everything that tries to stop us."

Of course, what actions players enjoy actually role-playing out, and what actions they prefer to just encapsulate into a die-roll, varies a lot among potential players.

Most RPG systems I've seen seem optimized for players who enjoy making tactical decisions (do I wield a sword or cast a spell? do I go through this door or that one, and do I check it for traps before I open it?), and so devote an enormous amount of attention to the specifics of different weapon types but don't care very much about the specifics of different wench-charming lines.

I could imagine it being different: e.g., the session starts with three or four hours of hanging out at the local tavern swapping stories, and otherwise navigating the tribal monkey politics of a simulated adventuring party, finding out which vendors have the best equipment and give the best deals, bartering with salespeople, etc., etc., etc. ... and then everyone rolls against their "explore dungeon" to determine how successful they were, how much loot they got, who died, how many monsters they killed, etc. ("No, I don't need to tell you which door I'm going to enter through. Dice. I want to use dice and explore dungeons!")

But I expect they would appeal to a vastly different audience.

The analogy doesn't fit.  The salient difference here isn't one of emphasis on different aspects of adventuring.  It's that the bulk of the significant decisions for everything except the tactics boil down to guessing the DM's password.  And that just isn't that fun.  Nor is it compulsory (school) or economically worthwhile (paid employment) - the other times when password-guessing is the whole point of the game.

The reason the disgruntled charmer had to fall back on tactical combat is because that is the one aspect of the situation over which the players actually have influence. Because no matter how much attention you pay to that aspect it still amounts to trying to guess how some roleplaying nerd thinks you should pick up wenches! Something just isn't right there.

On the other hand, designing an entire gaming system around a solid theory of social dynamics has real potential as a learning tool, if run by people with solid social competence themselves. "Look! It's a 9 HB. They have a 30-second timeout. Quick, use a +3 neg-hit, then follow up with that new 2d8 identity-conveying routine you've been preparing all week! Let me run interference on the AMOG to hold aggro while you establish rapport." (No, on second thought, let's not go to Camelot. 'Tis a silly place.)

I agree completely with you that "how some roleplaying nerd thinks you should pick up wenches" bears no meaningful relationship to real social dynamics, so it's all password-guessing.

From my perspective, the same thing was true of slicing swords through armor, raising allied morale, casting spells, praying for divine intervention, avoiding diseases in the swamp, etc. None of those simulated activities bore any meaningful relationship to the real thing they ostensibly simulated.

But I'll grant that in the latter cases, there were usually formal rules written down, so I didn't have to guess the passwords: I could read them in a book, memorize them, and optimize for them. (At least, assuming the GM followed them scrupulously.)

But I'll grant that in the latter cases, there were usually formal rules written down, so I didn't have to guess the passwords: I could read them in a book, memorize them, and optimize for them. (At least, assuming the GM followed them scrupulously.)

Then, of course, there are the actual strategic roleplaying choices. Not the mere tactical ones of how to fight some orcs. The ones where you have to choose where to go next. Roughly speaking you are often best off choosing what the rational course of action is and then picking the opposite. It's a lot more fun, the battles are both more likely and more of a challenge, and you get far more experience! If the DM already has a plan for how long his adventure will take to complete, and a rough idea of what you'll be fighting at the end, then the more danger you encounter in the meantime the better. So go sleep in that haunted wood, then walk into what is obviously a trap.

Does anyone remember where Eliezer joked about leaving his spare coins around under random objects? He also made the point that in roleplaying games you are usually best served by going around and doing everything else first, instead of doing the thing that is the shortest path to getting what you want.

Roughly speaking you are often best off choosing what the rational course of action is and then picking the opposite.

I consider this a symptom of poor scenario design - the availability of unpredictably optimal actions is the key technical difference (there are of course social differences) between open-ended and computer-mediated games. If the setting is incompatible with the characters' motivations, it's impossible to maintain the fiction that they're even really trying, and either the setting's incentives or the characters' motivations (or both in tandem) need revision.

Running a good open-ended game in the presence of imaginative and intelligent players is hard. You either leave lots of material unused, or rob the game of its key strength by over-constraining the set of possible actions.


Of course, it helps to be clear about what you actually want.

IME most computer RPG designers assume their players want to "beat the game": that is, to do whatever the game makes challenging as efficiently as possible. And they design for that, clearly signaling what the assigned challenges are and providing a steadily progressing path of greater challenge and increased capacity to handle those challenges. (As you and EY point out, this often involves completely implausible strategic considerations.)

This is also true of a certain flavor of TT RPG, where the GM designs adventures as a series of challenging obstacles and puzzles which the players must overcome/solve in order to obtain various rewards. (And as you suggested earlier, one could also imagine a social RPG built on this model.)

In other (rarer) flavors of TT, and in most forum-based RPGs, it's more like collaborating on a piece of fiction: the GM designs adventures as a narrative setting which the players must interact with in order to tell an interesting story.

It can be jarring when the two styles collide, of course.

It can be jarring when the two styles collide, of course.

There is far more than a difference of styles at work.

Well, that's portentous. Is this meant as a back-reference to the things you've already discussed in this thread, or as an intimation of things left unsaid?

Is this meant as a back-reference to the things you've already discussed in this thread, or as an intimation of things left unsaid?

The former, but I suppose both apply. Either way, I thought enough had been said, and I wanted to exit the conversation without particularly implying agreement, but without making a fuss either. A simple assertion of position was appropriate. While strictly true, saying "further conversation would just involve spinning new ways of framing stuff for the purpose of arguing for a position, and would generally be boring and uninformative" would have carried connotations I didn't want to convey at the time. The conversation to that point was positive and had merely exhausted its potential. Quit before it becomes just an argument.

Since you asked.

Yeah, I think roleplayers and writers share the position that sadism is one of the most important virtues.

Yeah, I think roleplayers and writers share the position that sadism is one of the most important virtues.

I read some Ian Irvine a while back - the punishment he deals out to his two protagonists goes through sadistic and out the other side. But on the other hand he did let the pair hook up and have a stable, secure relationship whenever one or the other wasn't either kidnapped or out alone on the run in the forest with no food and probably a broken leg. I didn't quite make it through the series but I assume they lived happily (albeit in intermittent agony and constant adversity) ever after. So he's just sadistic, not cruel. :)

From my perspective, the same thing was true of slicing swords through armor, raising allied morale, casting spells, praying for divine intervention, avoiding diseases in the swamp, etc. None of those simulated activities bore any meaningful relationship to the real thing they ostensibly simulated.

Like Melf's Minute Meteors doing fire damage. Those things are still supercooled by the time they hit the ground. Those trolls should be fine! (Until you use Melf's Acid Arrow).

That's an issue that traditional-game GMs go back and forth on all the time - some say "but it's more interesting if you role-play it out", and some say "but you're not making the fighter actually stab people when he wants to make an attack". Personally, in that sort of game I like to have players in-character to an extent, but their social stats should be the thing that determines their character's success at social tasks.

There are a ton of spectacular indie games that deal with this in other ways!

Wuthering Heights Roleplay is just incredible. Your main stats are Despair and Rage. There are general rules for matching tasks to stat rolls, and specific rules for Duels, Murder, Art, and Seduction. The general trajectory of a game is: a bunch of terrible people obsessed with their own problems (or rather, their Problems) start falling in love with each other and making dramatic revelations, until eventually they're all hacking each other to pieces. It's a fun evening.

The Mountain Witch is another interesting one. All "conflicts" are decided by a simple roll-off, d6 versus d6. You get more d6s (keep the highest) if you're working with other people. Players keep track of how much they Trust each other player, and you can spend someone else's Trust in you to help them out with bonuses in conflicts, to gain control of the narration of the outcome of their conflicts, or to give yourself bonuses when directly opposing them. (There's a lot more to this one, but that's the gist of the conflict mechanic.) Cool stuff.
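For the curious, the core of that conflict mechanic is easy to sketch in a few lines of Python. To be clear, the function names, the tie handling, and the way bonuses apply are my own reading of the summary above, not the actual rules text:

```python
import random

def roll_pool(num_dice, bonus=0, rng=random):
    """Roll num_dice d6, keep the highest, and add any flat bonus."""
    return max(rng.randint(1, 6) for _ in range(num_dice)) + bonus

def conflict(my_dice, their_dice, my_bonus=0, their_bonus=0):
    """A simple roll-off: higher kept die wins. Ties going to 'them'
    is my assumption, not something stated in the summary above."""
    mine = roll_pool(my_dice, my_bonus)
    theirs = roll_pool(their_dice, their_bonus)
    return "me" if mine > theirs else "them"

# A lone ronin (1d6) against two ronin working together (2d6, keep highest),
# with a +1 for the loner from spent Trust:
print(conflict(1, 2, my_bonus=1))
```

The "keep the highest" pooling is what makes cooperation attractive without making it overwhelming: a second die raises your expected result but never your maximum.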

The Mountain Witch is another interesting one. All "conflicts" are decided by a simple roll-off, d6 versus d6.

What, no d20s? Or even a d8? Where's the geeky fun in that? :P

Instead, you use poker chips to represent trust! I find that appealing somehow...

Yeah, that one just uses d6 - though there's an interesting "duel" mechanic where you and an opponent roll secretly, then decide together whether you'll each roll again - to emulate two ronin staring each other down before deciding the battle with a single cut. (The game has a very specific setting - you're a group of ronin who've been hired to climb Mt. Fuji and kill the Witch (though he's a dude?) that lives on top. You all have secret ulterior motives! I think it's been adapted to similar scenarios such as bank heists.)

Uh, Wuthering Heights uses d100, and you roll under or over your Despair / Rage depending on what you want to do. For example, killing someone means rolling below Rage (easier to do the angrier you are), whereas noticing other people's feelings and such requires a roll over Despair. Ooh, plus, if you're into nerdy dice-related stuff, there's a big Random Table of Problems, like "You are an alcoholic", "You are a homosexual", "You are Irish", "You are in love with a member of your family", or "You are a poet", and everyone has to roll d100 a few times to get their Problems.
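A minimal sketch of those two checks in Python, assuming my reading of the roll-under / roll-over summary above is right (the function names are mine, not the game's):

```python
import random

def rage_check(rage, rng=random):
    """Violent acts succeed on a d100 roll strictly *under* Rage
    (the angrier you are, the easier it gets)."""
    return rng.randint(1, 100) < rage

def despair_check(despair, rng=random):
    """Noticing other people's feelings succeeds on a d100 roll
    strictly *over* Despair (the gloomier you are, the harder it gets)."""
    return rng.randint(1, 100) > despair

# A character with Rage 80 murders easily, but with Despair 90
# almost never notices anyone else's feelings. Very Brontë.
```

The elegance is that the same two stats serve as both resources and handicaps: raising Rage makes you better at duels and worse at everything that requires calm.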

Oh! And if you just want to chuck lots of different kinds of dice around, you can't go wrong with Dogs in the Vineyard - where you play itinerant teenage pseudo-Mormon enforcers of the faith in a west that never was. All your traits have some amount of dice of some size next to them, and when they come up in a conflict, you roll them into your pool and can use them when raising / seeing. For instance - possessions of any sort are 1d6, 1d4 if they're sorta worthless, 1d8 if they're excellent (criterion: in order to be excellent, a thing has to be good enough that people might remark on how excellent it is), 2 dice if they're big, and an extra d4 if it's a gun (so a big, excellent pistol is 2d8+1d4).
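If it helps, the possession rules quoted above reduce to a tiny function. This is my own sketch in Python; the die sizes follow the summary above, but the names and structure are mine:

```python
import random

def possession_dice(worthless=False, excellent=False, big=False, gun=False):
    """Return a possession's dice pool as a list of die sizes, per the
    summary above: base 1d6, d4 if worthless, d8 if excellent,
    two dice if big, plus an extra d4 if it's a gun."""
    size = 4 if worthless else 8 if excellent else 6
    dice = [size] * (2 if big else 1)
    if gun:
        dice.append(4)
    return dice

def roll(dice, rng=random):
    """Roll each die in the pool into your conflict pool."""
    return [rng.randint(1, d) for d in dice]

# A big, excellent pistol: 2d8 + 1d4
print(possession_dice(excellent=True, big=True, gun=True))  # [8, 8, 4]
```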

Huh, kinda geeked out there. ^-^;

Instead, you use poker chips to represent trust!

Ok, poker chips qualify as a legitimate nerd-coolness alternative. I'm convinced. :P

so a big, excellent pistol is 2d8+1d4

A Mormon with a deagle
you know that's unheard of!

A Mormon with a deagle you know that's unheard of!

Hah! And of course, this being a roleplaying game, I defy you to find a player who won't take a big, excellent gun.

The other problem with Munchkinism is that, once your character actually achieves godlike power by breaking the game system, there's no actual challenge left. It's like solving a Rubik's Cube by peeling the colored stickers off of the sides and sticking them back on in the "solved" position; there's really no point to it. So you self-handicap and choose to play a character that isn't Pun-Pun.

It's like solving a Rubik's Cube by peeling the colored stickers off of the sides and sticking them back on in the "solved" position;

Munchkinism is definitely not the same as cheating. You don't break the Rubik's Cube physics; you work within them. A munchkin would probably google "solve rubik's cube" and then apply a dozen-or-so-step algorithm that will solve the cube from any given starting configuration. In fact, peeling the stickers isn't even cheating properly. The result doesn't even constitute a solved Rubik's Cube; it constitutes an ugly block that used to be a Rubik's Cube. It is far better to simply dismantle the cube and click it back into place correctly. (This is actually necessary if some clown has taken one of the blocks out and swapped it around such that the entire cube is unsolvable. A cruel trick.)

A legitimate challenge is to set yourself the task of solving it without external knowledge. The one I would go with (if I were interested in playing with the cubes beyond being able to solve them all at will) is to learn to solve the cube blindfolded. You get to look at the cube once for a couple of seconds, then you have to do the whole thing by touch (and no, there is no Braille there to help you). As a bonus, this is exactly the sort of task that grants general improvements in mental focus!

The other problem with Munchkinism is that, once your character actually achieves godlike power by breaking the game system, there's no actual challenge left.

That's the kind of thing I like to demonstrate once in principle and then propose a rule change. My usual example is playing 500 and the open misère call. I usually propose some limitation on the frequency of misère calls (and allow any 10 call to beat it). If the other players don't want the limitation, I proceed to play open misère every time it is rational to do so (about 1 in 4 hands, depending on the score at the time), and ask them if they have changed their minds every time I win.

I like self-handicaps. At least in the form of giving yourself a genuinely challenging task and then trying to overcome it. My character selections (in RPGs when I have played them, and in CRPGs) tend to be based on novelty or emotional appeal. All the choices after that can be made intelligently.

RPGs are kind of a weird case; they're not "games" in the same sense as a competitive game, because there's no one fixed thing specified in the rules that you're supposed to be maximizing (this is part of why I don't play RPGs :P ). With competitive games you start getting derogatory nicknames for those who don't do everything possible to win (e.g. "scrubs"), though I don't know of any short term for those who do (aside from just "people who play to win"), except in the context of Magic, where they're known as "Spikes". Of course, if we're speaking of "winning at life", there it is also not clear what should be maximized! People aren't very good at knowing their own goals. So that's something of a disanalogy.

Munchkinism's more vivid in my mind. Then again, I love to make up new words.

Munchkinism's more vivid in my mind. Then again, I love to make up new words.

On the other hand, if "real" munchkins were anything like "munchkins" in this sense, they would have taken a level in badass, then dealt with both wicked witches themselves without waiting for a fortuitous outside intervention. And made the yellow brick road straight.

It's not the same thing. Picking locks is a hack. Cryonics is something more, which is why even most people who can pick locks don't go for it.

I wonder if it's accurate to say that for hacks, it's the means that's considered "cheating", whereas for cryonics, it's the end itself that's considered "cheating".

That seems like a good distinction between Munchkinism and Hacking, as I've seen them used by their respective cultures. Munchkinism is about using the rules to accomplish an "unacceptable" goal, whereas Hacking is about accomplishing acceptable goals via "unacceptable" methods. Thank you for helping me cement why the two terms felt like very separate ones :)

No, it's quite the same thing.

Hackers typically had little respect for the silly rules that administrators like to impose, so they looked for ways around. For instance, when computers at MIT started to have "security" (that is, restrictions on what users could do), some hackers found clever ways to bypass the security, partly so they could use the computers freely, and partly just for the sake of cleverness (hacking does not need to be useful). However, only some hackers did this—many were occupied with other kinds of cleverness.... [snip several examples]

-- rms, "On Hacking"

Does not the quoted passage describe cryonics? Isn't death a "silly rule"? I think your sense of the word "hacking" is too strict.
