(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills. The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil. We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt. This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired. See here for details.)
Exercise Prize: Check Consequentialism
In philosophy, "consequentialism" is the belief that doing the right thing makes the world a better place, i.e., that actions should be chosen on the basis of their probable outcomes. It seems like the mental habit of checking consequentialism, asking "What positive future events does this action cause?", would catch numerous cognitive fallacies.
For example, the mental habit of consequentialism would counter the sunk cost fallacy - if a PhD wouldn't really lead to much in the way of desirable job opportunities or a higher income, and the only reason you're still pursuing your PhD is that otherwise all your previous years of work will have been wasted, you will find yourself encountering a blank screen at the point where you try to imagine a positive future outcome of spending another two years working toward your PhD - you will not be able to state what good future events happen as a result.
Or consider the problem of living in the should-universe; if you're thinking, I'm not going to talk to my boyfriend about X because he should know it already, you might be able to spot this as an instance of should-universe thinking (planning/choosing/acting/feeling as though within / by-comparison-to an image of an ideal perfect universe) by having done exercises specifically to sensitize you to should-ness. Or, if you've practiced the more general skill of Checking Consequentialism, you might notice a problem on asking "What happens if I talk / don't talk to my boyfriend?" - provided that you're sufficiently adept to constrain your consequentialist visualization to what actually happens as opposed to what should happen.
The skill of Checking Consequentialism isn't quite as simple as telling people to ask, "What positive result do I get?" By itself, this mental query is probably going to return any apparent justification - for example, in the sunk-cost-PhD example, asking "What good thing happens as a result?" will just return, "All my years of work won't have been wasted! That's good!" Any choice people are tempted by seems good for some reason, and executing a query about "good reasons" will just return this.
The novel part of Checking Consequentialism is the ability to discriminate "consequentialist reasons" from "non-consequentialist reasons" - being able to distinguish that "Because a PhD gets me a 50% higher salary" talks about future positive consequences, while "Because I don't want my years of work to have been wasted" doesn't.
It's possible that asking "At what time does the consequence occur and how long does it last?" would be useful for distinguishing future-consequences from non-future-consequences - if you take a bad-thing like "I don't want my work to have been wasted" and ask "When does it occur, where does it occur, and how long does it last?", you will with luck notice the error.
Learning to draw cause-and-effect directed graphs, a la Judea Pearl and Bayes nets, seems like it might be helpful - at least, Geoff was doing this while trying to teach strategicness and the class seemed to like it.
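As an illustration of the kind of cause-and-effect graph being described (the specific graph below is my invention for illustration, not one Geoff drew), the consequentialist test can be phrased as a reachability question: a reason counts as consequentialist only if there is a path from the action to some future event. A minimal sketch:

```python
# A toy cause-and-effect graph: edges point from causes to effects.
# Consequentialist reasons are paths from an action to a future event;
# a "reason" with no outgoing path to any future event is suspect.
edges = {
    "finish PhD": ["higher salary", "academic job options"],
    "higher salary": ["buy a house"],
    "years already spent": [],  # a sunk cost: causes nothing in the future
}

def future_consequences(node, graph):
    """Collect every event reachable downstream of a node."""
    seen = set()
    stack = [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(sorted(future_consequences("finish PhD", edges)))
# ['academic job options', 'buy a house', 'higher salary']
print(future_consequences("years already spent", edges))  # set() -- no future events
```

The point of drawing the graph is exactly this asymmetry: "higher salary" sits downstream of the action, while "years already spent" has no arrows leading anywhere.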
Sometimes non-consequentialist reasons can be rescued as consequentialist ones. "You shouldn't kill because it's the wrong thing to do" can be rescued as "Because then a person will transition from 'alive' to 'dead' in the future, and this is a bad event" or "Because the interval between Outcome A and Outcome B includes the interval from Fred alive to Fred dead."
On a five-second level, the skill would have to include:
- Being cued by some problem to try looking at the consequences;
- Either directly having a mental procedure that only turns up consequences, like trying to visualize events out into the future, or
- First asking 'Why am I doing this?' and then looking at the justifications to check if they're consequentialist, perhaps using techniques like asking 'How long does it last?', 'When does it happen?', or 'Where does it happen?';
- Expending a small amount of effort to see if a non-consequentialist reason can easily translate into a consequentialist one in a realistic way.
- Making the decision whether or not to change your mind.
- If necessary, detaching from the thing you were doing for non-consequentialist reasons.
In practice, it may be obvious that you're making a mistake as soon as you think to check consequences. I have 'living in the should-universe' or 'sunk cost fallacy' cached to the point where as soon as I spot an error of that pattern, it's usually pretty obvious (without further deliberative thought) what the residual reasons are and whether I was doing it wrong.
Pain points & Pluses:
(When generating a candidate kata, almost the first question we ask - directly after the selection of a topic, like 'consequentialism' - is, "What are the pain points? Or pleasure points?" This can be errors you've made yourself and noticed afterward, or even cases where you've noticed someone else doing it wrong, but ideally cases where you use the skill in real life. Since a lot of rationality is in fact about not screwing up, there may not always be pleasure points where the skill is used in a non-error-correcting, strictly positive context; but it's still worth asking each time. We ask this question right at the beginning because it (a) checks to see how often the skill is actually important in real life and (b) provides concrete use-cases to focus discussion of the skill.)
Checking Consequentialism looks like it should be useful for countering:
- Living in the should-universe (taking actions because of the consequences they ought to have, rather than the consequences they probably will have). E.g., "I'm not going to talk to my girlfriend because she should already know X" or "I'm going to become a theoretical physicist because I ought to enjoy theoretical physics."
- The sunk cost fallacy (choosing to prevent previously expended, non-recoverable resources from having been wasted in retrospect - i.e., avoiding the mental pain of reclassifying a past investment as a loss - rather than acting for the sake of future considerations). E.g., "If I give up on my PhD, I'll have wasted the last three years."
- Cached thoughts and habits; "But I usually shop at Whole Foods" or "I don't know, I've never tried an electric toothbrush before." (These might have rescuable consequences, but as stated, they aren't talking about future events.)
- Acting-out an emotion - one of the most useful pieces of advice I got from Anna Salamon was to find other ways to act out an emotion than strategic choices. If you're feeling frustrated with a coworker, you might still want to Check Consequentialism on "Buy them dead flowers for their going-away party" even though it seems to express your frustration.
- Indignation / acting-out of morals - "Drugs are bad, so drug use ought to be illegal", where it's much harder to make the case that countries which decriminalized marijuana experienced worse net outcomes. (Though it should be noted that you also have to Use Empiricism to ask the question 'What happened to other countries that decriminalized marijuana?' instead of making up a gloomy consequentialist prediction to express your moral disapproval.)
- Identity - "I'm the sort of person who belongs in academia."
- "Trying to do things" for simply no reason at all, while your brain still generates activities and actions, because nobody ever told you that behaviors ought to have a purpose or that lack of purpose is a warning sign. This habit can be inculcated by schoolwork, wanting to put in 8 hours before going home, etc. E.g. you "try to write an essay", and you know that an essay has paragraphs; so you try to write a bunch of paragraphs but you don't have any functional role in mind for each paragraph. "What is the positive consequence of this paragraph?" might come in handy here.
(This list is not intended to be exhaustive.)
- Being able to state and then focus on a positive outcome seems like it should improve motivation, at least in cases where the positive outcome is realistically attainable to a non-frustrating degree and has not yet been subject to hedonic adaptation. E.g., a $600 job may be more motivating if you visualize the $600 laptop you're going to buy with the proceeds.
Also, consequentialism is the foundation of expected utility, which is the foundation of instrumental rationality - this is why we're considering it as an early unit. (This is not directly listed as a "pleasure point" because it is not directly a use-case.)
Constantly asking about consequences seems likely to improve overall strategicness - not just lead to the better of two choices being taken from a fixed decision-set, but also having goals in mind that can generate new perceived choices, i.e., improve the overall degree to which people do things for reasons, as opposed to not doing things or not having reasons. (But this is a hopeful eventual positive consequence of practicing the skill, not a use-case where the skill is directly being applied.)
Teaching & exercises:
This is the part that's being thrown open to Less Wrong generally. Hopefully I've described the skill in enough detail to convey what it is. Now, how would you practice it? How would you have an audience practice it, hopefully in activities carried out with each other?
The dumb thing I tried to do previously was to have exercises along the lines of, "Print up a booklet with little snippets of scenarios in them, and ask people to circle non-consequentialist reasoning, then try to either translate it to consequentialist reasons or say that no consequentialist reasons could be found." I didn't do that for this exact session, but if you look at what I did with the sunk cost fallacy, it's the same sort of silly thing I tried to do.
This didn't work very well - maybe the exercises were too easy, or maybe it was that people were doing it alone, or maybe we did something else wrong, but the audience appeared to experience insufficient hedonic return. They were, in lay terms, unenthusiastic.
At this point I should like to pause, and tell a recent and important story. On Saturday I taught an 80-minute unit on Bayes's Rule to an audience of non-Sequence-reading experimental subjects, who were mostly either programmers or in other technical subjects, so I could go through the math fairly fast. Afterward, though, I was worried that they hadn't really learned to apply Bayes's Rule and wished I had a small little pamphlet of practice problems to hand out. I still think this would've been a good idea, but...
On Wednesday, I attended Andrew Critch's course at Berkeley, which was roughly mostly-instrumental LW-style cognitive-improvement material aimed at math students; and in this particular session, Critch introduced Bayes's Theorem, not as advanced math, but with the aim of getting them to apply it to life.
Critch demonstrated using what he called the Really Getting Bayes game. He had Nisan (a local LWer) touch an object to the back of Critch's neck, a cellphone as it happened, while Critch faced in the other direction; this was "prior experience". Nisan said that the object was either a cellphone or a pen. Critch gave prior odds of 60% : 40% that the object was a cellphone vs. pen, based on his prior experience. Nisan then asked Critch how likely he thought it was that a cellphone or a pen would be RGB-colored, i.e., colored red, green, or blue. Critch didn't give exact numbers here, but said he thought a cellphone was more likely to be primary-colored, and drew some rectangles on the blackboard to illustrate the likelihood ratio. After being told that the object was in fact primary-colored (the cellphone was metallic blue), Critch gave posterior odds of 75% : 25% in favor of the cellphone, and then turned around to look.
Then Critch broke up the class into pairs and asked each pair to carry out a similar operation on each other: Pick two plausible objects and make sure you're holding at least one of them, touch it to the other person while they face the other way, prior odds, additional fact, likelihood ratio, posterior odds.
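The arithmetic behind Critch's update is just Bayes's Rule in odds form: posterior odds equal prior odds times the likelihood ratio. Critch didn't state an exact likelihood ratio, but his numbers (60% : 40% prior, 75% : 25% posterior) imply a ratio of about 2 : 1 in favor of the cellphone; the sketch below reconstructs that implied figure rather than quoting him:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes's Rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def to_probability(odds):
    """Convert odds-in-favor to a probability."""
    return odds / (1 + odds)

# Critch's prior: 60% : 40% that the object is a cellphone rather than a pen.
prior_odds = 0.60 / 0.40            # 1.5 : 1

# Implied assumption: a cellphone is twice as likely as a pen to be RGB-colored.
likelihood_ratio = 2.0              # P(RGB | cellphone) / P(RGB | pen)

posterior_odds = update_odds(prior_odds, likelihood_ratio)   # 3.0, i.e. 3 : 1
print(to_probability(posterior_odds))                        # 0.75 -- Critch's 75% : 25%
```

This is the computation each pair performs informally in the game: state prior odds, estimate a likelihood ratio for the observed fact, multiply, and check the posterior against your gut.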
This is the sort of in-person, hands-on, real-life, and social exercise that didn't occur to me, or Anna, or anyone else helping, while we were trying to design the Bayes's Theorem unit. Our brains just didn't go in that direction, though we recognized it as embarrassingly obvious in retrospect.
So... how would you design an exercise to teach Checking Consequentialism?
I would check to see if such a thing already exists or if there are people who have experience designing such things. I know of a Belgian non-profit 'Center for Informative Games' that not only rents games designed to teach certain skills but will also help you create your own.
From their site: On request C.I.S. develops games for others. The applicant provides the content of the game, while C.I.S. develops the conceptual and game technical part to perfection. The applicant has the opportunity to attend some game tests and to redirect when necessary.
They also offer coaching if you want to work on your own: Do you want to create a game concept on your own, but you don't know where to start? No worries, C.I.S. can give you a hand. During a number of concrete working sessions C.I.S. facilitates your development. In between sessions the applicant continues the work to, finally, end up with a solid game.
I have enjoyed their games in the past and can attest to their quality. The obvious problem is that it's a purely Belgian organization and the 'search' function only works with Dutch words. However, if you want to check them out I'd be happy to act as a go-between. A Brussels LW meetup started a couple of months ago, and I'm certain I could get a couple of members to help in the production process (again, if this seems interesting).
In a group, with a leader who knows the exercise:
Get a volunteer to act as a judge (or a few to act as a jury, in a large group). Have her leave the room. The leader presents the rest with a short set of Contrived Hypothetical Situations, each with finite options and either clearly-defined outcomes for each option, or a probabilistic distribution of outcomes for each option. The leader says, "Please write down your choice for each problem, sign your paper, and turn it in to me. Then I'll call in the judge, and have her decide on each problem. You get a point wherever her decision agrees with yours. The winner is the one with the most points." When the judge is called in, however, the leader doesn't tell her the actual problems. Rather, the leader just reports the outcomes (or distributions), and asks her to choose which outcome or distribution is best. The winners are announced based on that.
Example: One of the situations given is some variant of the trolley problem. When the judge comes in, she is just asked whether she'd prefer one person to get hit by a trolley, or five. Everybody laughs as she replies "...one?"
Example: The problem given to the g... (read more)
"Intense, long, certain, speedy, fruitful, pure—
Such marks in pleasures and in pains endure."
— Jeremy Bentham's mnemonic for the signs of "consequentialist reasons"
EDIT: It occurs to me that I should explain this more. Bentham was trying to popularize consequentialism and remind his readers of what sorts of things count as consequentialist reasons to prefer a particular outcome. Eliezer suggests that we sho... (read more)
Cleverness-related failure mode (that actually came up in the trial unit):
One shouldn't try too hard to rescue non-consequentialist reasons. This probably has to be emphasized especially with new audiences who associate "rationality" with Spock and university professors, or audiences who've studied pre-behavioral economics, and who think they score extra points if they come up with amazingly clever ways to rescue bad ideas.
Any decision-making algorithm, no matter how stupid, can be made to look like expected utility maximization through the transform "Assign infinite negative utility to departing from decision algorithm X". This in essence is what somebody is doing when they say, "Aha! But if I stop my PhD program now, I'll have the negative consequence of having abandoned a sunk cost!" (Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.) This is Cleverly Failing to Get the Point if "not wanting to abandon a sunk cost", i.e., the counterintuitive feel of departing from the brain's previous decision algorithm, is treated ... (read more)
One of the other models people have for the rationalizing sort of "rationality" is that of lawyers.
Lawyers are very good at logic — the LSAT, the entrance examination for U.S. law schools, leans heavily on logic puzzles — but the whole point of being a trial or appeals lawyer is to come up with clever (and socially respectable) arguments for whatever position your client may have at the moment.
This extends past real-world lawyerhood. The tabletop role-playing game crowd have the expression "rules lawyer" for a person who comes up with clever arguments for why their character should get away with whatever they want to at the moment.
My normal response is, "so what's bad about that?" and go a few rounds until the person has to struggle for an answer... the teachable moment where I can say, "you see what you're doing? you're just making stuff up. What's actually going to happen?"
(That being said, it would definitely have been helpful for me in the past if I had thought to confine questions of consequences to things happening at a point-in-time. I eventually figured out that I needed to ask that for things people were thinking about or remembering, but there was a long time where I also had the hit-them-with-a-stick frustration to this kind of response.)
The only suggestion I have for exercises is to make people write down their own thinking (or state their thinking out loud), and then read it back as a kind of grammar-checking exercise. Are these abstract nouns or concrete nouns? Do they describe a point in time or some sort of vague non-timey thing?
I've done some similar things with small groups, though, and one ... (read more)
Not attempting to answer the question, but I've been nursing a thought about the rationality org that might be useful: The nearby Stanford University has a world-renowned program in "decision sciences" http://decision.stanford.edu/ which is basically "how to make decisions scientifically"; they virtually invented influence diagrams, they teach biases as a part of the program, etc. The head of the program, Ronald Howard, also co-founded http://www.decisioneducation.org/ , his teen-oriented "rationality org".
There are probably things to learn from both.
If the rationality org has a value proposition for these organizations, they could be useful for teaching opportunities and for credibility-building.
I am reminded of a game we played in elementary school:
There are 100 pieces of candy in a jar, and 20 students. Each student gets to vote "share" or "selfish". If EVERYONE votes to share, the candy is split evenly. If ONE person votes "selfish", they get all the candy. If MORE than one person votes "selfish", no one gets candy, and the experiment is repeated until either the candy is distributed, or X iterations (3-5 seems normal) have passed.
Before each iteration, the students are allowed to discuss strategy. The solution, of course, is for a single trustworthy person to make a binding commitment to vote "selfish" and then evenly distribute the candy. Once someone has pre-committed to vote "selfish", everyone else knows that voting "selfish" themselves will result in no candy - unlike with a commitment to have everybody share.
I've always considered it a decent test of group rationality and social skills whether or not this consensus actually gets reached before the final iteration. I've seen groups that hit on this, had a single iteration with a few outliers testing it just to be sure that the person would really vote "selfish" like they said, and then implemented the strategy. I've seen others where 10-20% of the audience simply would not believe the person who made the pre-commitment, and so there was never a successful iteration.
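The payoff rule in the candy game is simple enough to write down directly; a minimal sketch of one iteration (the function names and list encoding are mine, not the commenter's):

```python
CANDY = 100

def one_round(votes):
    """votes: a list of 'share' / 'selfish' strings, one per student.
    Returns candy received per student, or None if the round fails."""
    selfish = [i for i, v in enumerate(votes) if v == "selfish"]
    if not selfish:
        return [CANDY // len(votes)] * len(votes)   # everyone shares: even split
    if len(selfish) == 1:
        payout = [0] * len(votes)
        payout[selfish[0]] = CANDY                  # lone defector takes all
        return payout
    return None                                     # multiple defectors: nobody gets candy

# The coordinated solution: one trusted student pre-commits to "selfish"
# and promises to redistribute the candy afterward.
votes = ["share"] * 19 + ["selfish"]
print(one_round(votes))  # the pre-committed student receives all 100 pieces
```

Seen this way, the pre-commitment works because it makes every other student's "selfish" vote land in the `None` branch, so "share" becomes their only candy-yielding option.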
An important question to ask that you are leaving out is "What are my alternatives to this course of action?"
The comparison of consequences requires an alternative set of consequences to compare to. Considering the question "Should I be in graduate school?" The answer may well be different if the alternative is getting a decent job than if the alternative is starving unemployment.
The listing of alternatives also helps catch cheating. If the alternative is implausible and disastrous (stay in grad school or be a hobo), then it is likely that Checking Consequentialism isn't being done seriously. The alternative compared needs to be a serious answer to the question "What would I do if I couldn't/wouldn't do this?"
This seems like a comparatively reliable procedure: imagine a collection of possible worlds generated by possible actions; explain what future events distinguish between these worlds in a way that makes one of them preferable to the others; then choose the action that leads there.
Paying attention to events that distinguish these possible futures from each other guards against errors such as comparing to status quo or should-world (neither of which is among the possible futures), or worse comparing an arbitrarily picked-out event (in one of the possible futures) to an arbitrary anchor (i.e. a cost that feels excessive in some absolute way, and not by comparison to alternatives). Focusing on future events and not past events or actions themselves guards against deontological and identity-based arguments, such as "this is a proper action", "he shouted first" or "because I'm a human being".
Saying "positive consequence" sounds like a bad habit to me: positive compared to what? The comparison should be among the alternatives, without anchoring on some baseline of neutrality such that some consequences are more "positive" than that.
Concreteness Game The object of this game is to train players to generate examples for explaining concepts quickly. The game requires at least two people, but may work better with groups of three or four.
To play, one of the players (Asker) names a concept to be explained, such as "How do you traverse a linked nodal network", "Explain the law of Conservation of Energy", or "What constitutes good financial advice?"
The other player (Generator) then tries to explain the concept/skill/other by using a nearby object to assist. The ... (read more)
You could use mazes where your score is -(total distance traveled) . First, give a simple maze with two obvious paths, A and B but Path A is much shorter. Then give a second maze identical to the first but you are taking over from another player who has already gone down Path B but the shortest way to the exit is to double back and then go down Path A. Then give the same maze but now there is an obstacle on Path A that you must go around if you take this path and so it's now optimal to go on Path B. The obstacle was placed there for some unfair reason ... (read more)
Kahneman suggests such an exercise for groups after pointing out that organizations generally act more rationally than individuals: the devil's advocate role, and thinking about the worst possible outcome. We don't always have the luxury of having others near us to check our thoughts. But we often have imaginary conversations with friends or parents, so it shouldn't be very difficult to assign a devil's advocate position to an imaginary voice. That should put in perspective the way we feel about the subject. It is a basic means of delaying the strong coherence of the first good narrative.
Maybe it would be great to have an imaginary Bayesian friend...
This wouldn't work as the only exercise, but could be useful if paired with another.
Presumably all students have other things they could be doing with their time, some of it possibly fun. Near the end of the lesson, perhaps just before the last exercise, with maybe 15-20 minutes left in the session, present this option: You can leave right now and get on with your day.
Obviously this is always an option, no one is required to stay against their will, but it's usually considered bad form and it's never given as an explicit option. Tell everyone to think abou... (read more)
I think of this as "looking effectward", one of the basic directions in concept space (opposite causeward, making it the inverse operation of asking "why").
Another inferential path: it may be valuable to differentiate them as attitudes and events. If my motivation for getting a PhD is "I will feel terrible if I do not get a PhD", that's an attitude motivation, which in theory I have internal control over. If my motivation for getting a PhD is that I will get hired for a different sort of job, that's an event motivation, for which control is primarily external. I don't have control over whether or not a variety of job requires a PhD, but I do have control over whether or not my attitude will be negat... (read more)
I think an important pre-skill is to decide what your goals are BEFORE checking the consequences. Otherwise, you can easily retroactively change your goals to fit whatever action you want.
For example, in the sunk costs PhD scenario, it would be easy for someone to say something like "If I pursue my PhD, it will make my family proud. This is very important to me, so I will pursue my PhD." However if you asked them what their goals are BEFORE they listed the consequences of each action they probably would not list "Make family proud", bu... (read more)
I don't have suggestions on the main question, but I strongly recommend that you design the curricula to be consumable by corporate executives. If you can convince companies that they need to send their execs to week-long rationality seminars, to help them improve their decision-making skills, you are going to make megabucks.
We did the Really Getting Bayes game at the Mountain View meetup this week. My impression of it was that the explanation was at first a little unclear, but that once we had gotten the sense of it, it was worthwhile.
One thing that I realized during the game was the degree to which I was using the availability heuristic to provide my likelihood ratios. For instance, one object I was given was either an electrical extension cord or an audio cable. In coming up with RGB likelihoods, I thought, "Electrical extension cords are usually black or white, hence ... (read more)
Why is teaching people to think like consequentialists a good idea again? Serious question.
If they're (relatively successful) mathematicians and programmers I don't see how it could go wrong, but I'm awfully worried about some of the rest of the population - specifically, people not being able to sustain it without letting other things slip.
second edit: I should clarify. It's teaching the habit that I'm thinking about. Everyone should be able to think like a consequentialist but is instilling these reflexes gonna be a net positive?
Why the fancy words? This just seems like a complicated way of saying: "Because the person would then be dead. And that is bad".
Does it have kuru? I'm only open to eating healthy human flesh in this scenario.
Also, if it poofs into existence from nowhere, is it creating matter out of nothing? It's creating something that still has usable energy in it, out of nothing? That could not only end world hunger and veganism, you might be able to use the newly-created corpses for fuel in some kind of power plant. Sure, you might have to go back to steam power to make it work, and sure, human bodies might not be the optimal fuel source, but if you're getting them from nowhere, that solves all our energy woes.
It also might make the planet gain mass, eventually, if you did enough of it for long enough. Hmm. Oh, well, you can use that to make spacecraft. Maybe. Or something.
That and blood pudding. And fertilizer.
I think actually, being able to poof human corpses into existence would be an improvement over the current state of affairs. It might still be sub-optimal, but it would be better.
Now I want to be able to poof human corpses into existence from nowhere. I also think maybe I should start a list of things I've said that I wouldn't have been able to predict that I would say if asked the day before.
Less Wrong: Rationality, polyamory, cannibalism.
To me, this comes down to what I am trying to learn as my anti-akrasia front kick: I cache the question "Why am I doing what I am doing?". While I lose some amount of focus to the question itself, I have gained key insights into many of my worst habits. For instance, my employer provides free soft drinks- I found that I would end up with multiple, open drinks at my desk. The cached question revealed I was using the action of getting a drink whenever I felt the need to stretch and leave my desk. Browsing reddit too much at work- cached question c... (read more)
Many of the pain points listed have a common trait: the decision would seem easier with less information. For example, the PhD decision is easier if you didn't know about the costs which have been sunk, the identity decisions are easier if you're not sure of your own identity, cached thought problems are easier without having that thought cached, etc...
But we know that information should never have negative value. So why not highlight that dissonance? Imagine the following exercise:
Handout: "You spent the last 3 years working toward a PhD. You passed... (read more)
If the world were going to end right after I took an action, which action would I choose? (Alt: If everybody saw what choice I was about to make, but then circumstances changed and my decision turned out not to matter, what choice would I want to have made?)
Did answering that question feel the same as answering the actual question? If so, I'm not really thinking about consequences.
So... how would I design an exercise to teach Checking Consequentialism?
Divide the group into pairs. One is the decider, the other is the environment. Let them play some game repeatedly; the prisoner's dilemma might be appropriate, but maybe it should be a little bit more complex. The algorithm of the environment is predetermined by the teacher and known to both of the players.
The decider tries to maximize utility over the repeated rounds; the environment tries to minimize the winnings of the decider, by using social interaction between the evaluated game round... (read more)
Maybe the easiest way to teach it is to teach how it applies to others. That is, train people to catch non-consequentialist reasoning in arguments that others make, and then hope that they apply that to themselves. The easiest way to do that is by reflexively asking, "so what?"
Here's a long-form exercise:
The other participant(s) ask questions to clarify the problem/decision.
The problem/decision can be either real or imaginary, but if imaginary the presenter must come up with appropriately detailed answers to questions.
Your Check Consequentialism sounds a lot like risk management. Risk is the effect of uncertainty on objectives (ISO 31000). The risk management process involves identifying risks, analysing how significant they are, and then treating the big ones so that they don't prevent you from attaining your objective. This is fairly straightforward to do. The difficult part is building a risk management culture where the risks are considered before making a decision, embarking on a project, etc. Just identifying the risks is often the big deal. Once you are aware that a risk exists you will probably deal with it. Sorry that I have not given you an activity, but perhaps I have given you a useful keyword to help your search.
Obligatory link: Urges vs. Goals: The analogy to anticipation and belief.
By the way, if you don't mind my asking, once you've come up with your rationality curriculum what do you plan to do with it? Are you making inroads with whoever you would need to talk to to get this in a school curriculum, for instance?
What, we're not even allowed to have identities now?
Identity shouldn't act as a normative consideration. "He's going to do X because he belongs to reference class Y" may be a valid outside-view observation, a way of predicting behavior based on identity. On the other hand, "I'm going to do X because I belong to reference class Y" is an antipattern: it's a descriptive explanation, a fatalist decision rule, one that may be used to predict but not to decide. An exception is where you might want to preserve your descriptive identity, but then the reason you do that is not identity-based.
So you can have an identity, the way you can have a pair of gloves or a Quirrell, just don't consider it part of morality.
It occurs to me that games with some significant strategic component might be useful for priming the "but what consequences does it have?" response. I'm thinking of games like Magic: the Gathering, Settlers of Catan, Risk, etc. (I'm sure the board game aficionados will have better examples than I). I say this because of personal experience with Magic players - as they get better at Magic, they tend to get better at life. Well, some of them do. The others perhaps compartmentalize too much, so maybe this won't help with everyone.
In any case, my mod... (read more)
I sometimes try to get myself to make better decisions by pretending I'm a character in a Choose Your Own Adventure book. (E.g. "If you decide to stay on the couch because you're too lazy to work, turn to page 30.") Unfortunately, in the real books it's rare that enough information is given for you to make a really good decision, and the authors also appear to like messing with you by having good decisions blow up in your face.
So, maybe a similar book that actually gave you enough information to make a good decision and rewarded good decisions and punished bad ones?
This sounds like a more useful, more intuitive, much more widely applicable reification of my own method of "What Would Your TV Tropes Page Say?"
If you are influenced by the fictional evidence, your TV Tropes page will say Wrong Genre Savvy.
I don't know if it's a consequentialism issue, but "if I was sensible" seems like a way of locking a problem in place.
Maybe there should be a separate category for noticing identity issues.
As far as I can tell, most people find it fairly easy to think about others' decisions in consequential terms, but have a lot more trouble thinking about their own that way. So, a good technique to switch to consequential thinking is to imagine that instead of thinking about your own decision, you're thinking about a decision that someone in your exact situation is making. Consider what advice you'd want to give this person, and what choice he or she should make. Dissociating yourself from the decision like this should remove the influence of most things... (read more)
Possible exercise: Assume that you have no source of income except what you can beg, steal, or find ownerless/abandoned. Assume that you have a friend in similar straits (we'll call this person Paul Poor), and that both you and Paul know of a very wealthy person (whom we'll call Richard Rich). One day, you find — apparently abandoned in the street — a loaded gun. Think of various reasons for you to use the loaded gun to force Richard to give money to Paul. Which of these reasons are non-consequentialist, and why? Now think of various reason... (read more)
Exercise: Notice results.
Example: (participants A and B)
A: What would be the consequences if you told everyone you were not wearing any underwear?
B: writing it down: I would be really embarrassed for the rest of the day, and everyone would laugh at me so I wouldn't want to show my face.
A: Go do that.
B: gives paper to A, does it
(some time later) A: What were the consequences of telling people...?
B: not reading the earlier paper I was a little embarrassed at the time, but people laughed so it was okay. Also, one person hit on me.
A: shows paper to B
They talk... (read more)
On a high level, practice asking: If I do X, what does the world look like 5 minutes from now? An hour from now? A day from now? etc.
If I don't do X, what does the world look like 5 minutes from now? An hour from now? A day from now? etc.
So, let's take the PhD example. Try talking about it without using the word "because".
If I decide to finish my PhD, 5 minutes from now I feel OK. An hour from now I'm eating dinner. A day from now I'm grinding away at my dissertation. A month from now, I'm grinding away at my dissertation. A year from now, maybe ... (read more)
It seems to me that since it's easier to notice it in other people, starting with illustrative examples would be good. This allows you to establish the basic idea, and establish a model in the audience's head. I'd suggest fictional examples would be easier to come up with, but using real examples might add credibility and help the audience engage. You could possibly even invite a few audience members up to discuss things where they might be stuck, but that runs into the usual risks of audience examples and would probably take a fair amount of time.
Once you'v... (read more)
There's something important here. Problem solving. That's the use of intelligence that got us to the Moon. That's the use of intelligence which gave us Bayes theorem. And the best way you can spend your time is focussing on this. It does not help you a whole lot... (read more)
Beware of motivated stopping. If someone wants to do A because B will happen, that is only the beginning. There are several directions it's worth exploring further, with one person exploring and another prompting them with questions such as these:
Will B actually happen (or be more likely to), given A?
What makes B a desired consequence? Some further consequence that it leads to? Some larger purpose for which B is a means? Or is B terminally desirable?
At some point one has to stop, but the very first consequences one thinks of may not be that point.
Split the class into groups and get each group working on something they all will easily become invested in. I'm thinking have them spend 10 minutes creating/building something as a group, and make it a competition (bragging-rights only) to solidify the investment.
Before anyone has enough time to finish, offer $100 to the first person to destroy their group's creation. (Obviously, it would be best if doing so could be done in a quick motion: like if they were building a large tower with jenga blocks or something.)
After 5 seconds, pause and have each person... (read more)
What kinds of exercises you use to teach a skill like "checking consequentialism" should probably be placed in the greater context of a rationality curriculum. You have to know where the students are coming from at each step.
That said-- making the assumption that the students are already familiar with the theory of heuristics and biases, and just need to learn how to apply them-- I think most of these can be taught with similar kinds of hypotheticals and problems.
For checking consequentialism, you might want to focus on problems involving sunk co... (read more)
Practicing this could be fun in pairs, dissecting an acted-out scenario. Two instructors act out previously conceived scenarios, with an Influencer and a Reactor. At some point, 'twill be implied the Reactor wishes to act on the scenario itself or the knowledge presented therein; the scenario will then halt, and the students put in pairs to brainstorm the beneficence and maleficence of possible actions. Each student will take turns (which can be timed) being the brainstormer and the consequentialist (utilitarian?); of course the pairs can have differen... (read more)
This could also be caused by confusing correlation with causation.
The best idea I have for teaching rationality (in the general sense) is to:
1) explain the concepts to people (i.e., explain the idea of consequentialist thinking, and the rationale behind it).
2) have people write essays about thoughts/ideas they have (they should be excited to write these essays), and then peer review the essays, pointing out errors in rationality. Like not supporting claims with evidence. Then have an instructor go over the essays and the evaluations to make sure they did a good job.
Also, I think what you're doing right now - crowd sourcing - is probably the best thing for idea generation.
A simple 4-step process:
Step 1) Write down a list of the consequences.
Step 2) Take this list and eliminate all descriptions of actions.
Step 3) Eliminate indirect descriptions of what someone has done: honor, merit, guilt, virtue (and all their derivations); this includes personality descriptions like being a good friend, winner, loser, hero, asshole, slut, murderer.
Step 4) Eliminate all "consequences" that are defined as the fulfillment or non-fulfillment of plans/goals.
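The four steps could be sketched as a simple screening filter (a toy illustration; the marker keywords and example entries are mine, and in practice a person, not keyword matching, would do the screening):

```python
# Sketch of the 4-step screen: keep only entries that describe actual
# world-outcomes, dropping actions, indirect conduct-labels, and
# plan-fulfillment statements.

ACTION_MARKERS = ("i do", "i finish", "i quit", "i tell")          # step 2
INDIRECT_MARKERS = ("honor", "guilt", "hero", "loser", "quitter")  # step 3
PLAN_MARKERS = ("plan", "goal")                                    # step 4

def is_consequence(item):
    s = item.lower()
    if any(s.startswith(m) for m in ACTION_MARKERS):
        return False  # a description of an action, not its result
    if any(m in s for m in INDIRECT_MARKERS):
        return False  # indirect description of what someone has done
    if any(m in s for m in PLAN_MARKERS):
        return False  # defined only as (non-)fulfillment of a plan
    return True

items = [
    "I finish my dissertation",          # eliminated by step 2
    "People will call me a quitter",     # eliminated by step 3
    "My five-year plan stays on track",  # eliminated by step 4
    "I earn more money each year",       # survives
    "I spend two more years at a desk",  # survives
]
print([i for i in items if is_consequence(i)])
```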
When reading this, Thomas Sowell's 3 questions came to mind.
Without identifying the other options, even identifying something as "the cause" becomes problematic. And it is always problematic because "the cause" is generally used as "that event which I assign the credit or blame to".
If you're only looking for the positive events, you're obviously biasing your search. You should be looking for all consequ... (read more)
One possible exercise:
In pairs or in groups, one person is asked by the instructor what he or she wants to buy in the near future. For example, the person wants a new digital camera.
Then the group should calculate the full cost of this camera, including all accessories and expendables.
After that, people in the group suggest alternative activities and expenses, based on this full cost, that the person could buy instead of the camera. For example, the person could buy a bike and ride around, instead of buying a camera and taking pictures around.
Then the person,
Possible exercise: Take one decision, two groups. First group works out all the details of what would happen in either case, best case scenario. Second group works out all the details what would happen in either case, worst case scenario. Don't be afraid to get creative or exaggerate, have fun with it. Then write down the key points, and both groups make their decision.
Then discuss both options between groups, being more realistic.
Is there a difference in approach? Reflect as a group, what have you learned? Will you use this in future decisions? If you hav... (read more)
So, first of all
The easiest way to help people learn this skill, I think, would be to teach people:
How to relax and open their muscles and joints
How to breathe properly
And, the easiest way to teach people this skill, I think, is to instead teach them about this skill. This means that exercises should be somewhat indirect. Exercises should definitely get people to experience the problem instead of getting people to learn the solution, and only make available this solution as an option. Partly because the proposed solution is not the o... (read more)
I remember reading about an experiment performed by behavioral economists (the ultimatum game) where person A divides some cash, and person B either accepts the division, so that each gets their allocated share of the money, or rejects it, so that neither party gets anything. You could say the consequentialist solution is to always accept the division of money, which most folks don't do, so this could make a good trial exercise. On the other hand, if person A is someone person B is going to have repeated interactions with, one could argue that the social capital of training pers... (read more)
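A quick sketch of the one-shot "check the consequences" logic for the responder (the payoff structure is the standard one for this game; the function names are mine):

```python
# In a one-shot ultimatum game, a narrowly consequentialist responder
# accepts any nonzero offer, since rejecting yields nothing.

def responder_payoff(offer, accept):
    """Responder's payoff: the offered share if accepted, zero otherwise."""
    return offer if accept else 0

def consequentialist_responder(offer):
    # Enumerate both actions, check the consequence of each, pick the better.
    return max((responder_payoff(offer, a), a) for a in (True, False))

print(consequentialist_responder(2))  # (2, True): accepting $2 beats rejecting
```

Note this ignores exactly what the comment flags: with repeated interactions, rejecting a lowball offer has future consequences (training the proposer) that a one-shot payoff table leaves out.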
I haven't had a chance to test it much, but I think I have an idea. Frame the question as what will happen if I do x that won't happen if I do y. At this point it seems like it should be possible to reuse the mental processes for making beliefs pay rent. Basically typecast "I do x" and "I do y" to beliefs. Then see what experiences I anticipate as a result of the beliefs "I do x" and "I do y". Then determine which set of anticipated experiences has higher utility.
The day-to-day cognitive skills I've mastered most completely (I will not say "rationalist skills," because this is true of my countless irrational skills too) are the ones which I learned during a moment of strong emotion — any emotion, excitement or curiosity or joy or surprise or depression or fear or betrayal.
In the case of this particular skill, it was betrayal. I brought it on myself — the details aren't important; suffice it that I spent two weeks living in the "should-universe" (I like this term) before a rude reminder of reali... (read more)
Well, this might be a completely trivial suggestion but, if the point is to get people using consequentialist thinking in their lives, why not have them each pick some big important decision in their lives (either an upcoming one or an ongoing one), preferably one they aren't entirely set in already or are uneasy with, so they would be more open to changing their mind, then get into groups and each take turns to discuss their decisions and options (hopefully without applying any judgement to them at this stage), then the other members trying to come up wit... (read more)
A fun and social exercise could involve Facebook timeline!