It's one thing to read about a subject, but one gains a deeper understanding by seeing it applied to real problems, and an even deeper understanding by applying it yourself. This applies in particular to the closely related subjects of rationality, cognitive biases, and decision theory. With this in mind, I'd like to propose that we create one or more discussion topics each devoted to discussing and analyzing one decision problem of one person, and see how all this theory we've been discussing can help. The person could be either a Less Wrong member or just an acquaintance of one of us.

I'll commit to actively participating myself. Does anyone want to put forth a problem to discuss?

 


Study cognitive biases and fallacies and use them to examine your thought patterns and actions. Cognitive dissonance is, in my opinion, the most important, as it allows people to make mistakes without recognizing them as such and thus to make them again and again. Every time you make a decision that has unexpected consequences, even if they are favorable, ask yourself honestly, was this really a good decision? What evidence did you have at the time to justify it? Would you make it again, and if so, why? Would other people agree with your reasoning?

People can go through life blissfully unaware of how catastrophically bad some of their past decisions--for example, of where to work, where to live, who to marry--have been because of this all-powerful cognitive bias. Your goal is not to obsess over the past (and there is a risk of that), but rather to learn from it, and cognitive dissonance impedes you from doing that.

As Kahneman points out in his new book, failures of reasoning are much easier to recognize in others than in ourselves. His book is framed around introducing the language of heuristics and biases to office water-cooler gossip. Practicing on the hardest level (self-analysis) doesn't seem like the best way to grow stronger.

I'll try to think of some problems for you. In the meantime, I'll point out that this seems to have a lot of overlap with Vaniver's offer at the end of the decision theory post. I'm not sure if Vaniver reads discussion posts, so you may want to contact Vaniver directly if you want to collaborate with them.


What kind of problems do you mean specifically? By "one decision problem of one person" do you mean problems that may arise in day-to-day life, or is the scope allowed to be wider than that?

I'm thinking things like deciding whether or not to take a job, what major to choose in college, whether to continue or end a relationship, choosing which of several business opportunities to pursue, etc. What do you have in mind by a wider scope? Can you give some examples?

Probability and statistics.

For example, if you are interested in a particular major, what sort of employment prospects can you reasonably expect from it? Can you afford the school you want to go to, and if not, what sort of student loan debt are you looking at, and will you be able to pay it off with your desired major? How many graduates are now unemployed or underemployed, saddled with student loan debt they can't pay off, because they got an unremunerative major from a school they couldn't afford, or worse, went to graduate school in that subject hoping for a good adjunct appointment and a win in the ever-dwindling tenure lottery?

These people never bothered to run the numbers and had at best only a vague understanding of what might be in store for them but convinced themselves that they were special, that they would be among the elect who "made it," regardless of how stacked the odds were against them.
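
Running the numbers here doesn't take much. Below is a minimal back-of-the-envelope sketch in Python; every figure in it (debt, interest rate, salary, tax estimate) is hypothetical, chosen only to show the shape of the calculation:

```python
# Back-of-the-envelope check: can an expected starting salary service
# a given student loan? Every number here is hypothetical.

def monthly_loan_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

debt = 60_000                       # total student loan
rate = 0.068                        # annual interest rate
expected_salary = 38_000            # median starting salary for the major
take_home = expected_salary * 0.75  # rough after-tax estimate

payment = monthly_loan_payment(debt, rate, years=10)
burden = payment / (take_home / 12)

print(f"Monthly payment: ${payment:,.0f}")       # ~$690
print(f"Share of take-home pay: {burden:.0%}")   # ~29%
```

If that share comes out well above the usual 10-15% rule of thumb, the plan needs rethinking before enrollment, not after.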

Yeah. I'm trying to decide what I want to do to mitigate existential risk so that I can pick a college major so that I can pick a college to go to. I'd love help at each step of the process. I'll be the guinea pig for the first decision.

I'm going to suggest some initial steps from Hammond, Keeney, and Raiffa's book, Smart Choices. The first step is problem definition, which I think accords with the advice discussed on this website to hold off on proposing solutions until you've examined the problem thoroughly. Here are some of their ideas:

  • Be creative about your problem definition; don't necessarily state it the first way it pops into your mind. Is the problem simply about mitigating existential risk? Or is it about giving your life purpose and meaning? Or is it that you want to prolong your own life as long as you can, and anything that poses a risk to the entire human race also poses a risk to you? Be honest here; don't give the socially acceptable answer if it's not the real answer.

  • Ask what triggered the decision. Why are you even considering it? This can provide a link to the essential problem, but beware of letting it lock you into thinking about the problem only in the way it first occurred to you. It sounds like the college issue, and perhaps the whole "what am I going to do with my life" question, is what prompted the decision. So is the problem really "where do I take my life next?", or "where should I go to college?", or "what career should I follow", with the issue of existential risk being an important consideration?

  • Question the constraints in your problem statement. Is going to college necessary for you? Can you find any other hidden constraints or assumptions in your problem statement?

  • Identify the essential elements. What are possible threats to human existence? Or, if this is about "what should I do with my life", figure out what your core values are, what makes you feel fulfilled, what things have you done in the past that made you feel good about yourself, etc.

  • Understand what other decisions hinge on this decision. You've already identified college major and choice of university as decisions that hinge on how you are going to contribute to mitigating existential risk.

  • Establish a sufficient but workable scope for your problem definition. In particular, make it clear what is not part of this decision, so as not to muddy the waters.

  • Gain fresh insights by asking others how they see the situation. You're already doing this.

Definition- I could be very creative/broad with it and say that I'd like to develop a personal philosophy that combines motivational techniques, qualities I value in people, what I value in my work, etc. But that would be too broad for this topic. (I am working on this, btw; right now I'm doing mind mapping, and I've got to do some debugging of my old irrational beliefs.) I also think that reducing existential risk is a means to an end. I want to be uploaded, but the only way that's going to happen is if I reduce existential risk (unless somehow there are limited computational resources and only a few people can be uploaded. Honestly, I don't know how accurate any predictions we make right now are going to be. I tend to favor the prediction that we'll be able to upload plenty of people).

What triggered the decision?- I'm graduating soon. I need to make a decision.

The question is really "How am I going to reduce existential risk and enjoy doing it?" I personally think that I'll be happy no matter what decision I make. I used to be depressed, but I've gained some control over my mood. Also, as long as I'm working hard to reduce ExR, live up to my potential, and help others do so as well, I'll be pretty happy. It's just a matter of how to do that most effectively. I suppose you could add "helping others live up to their potential" as a separate goal or a sub-goal of reducing ExR.

Constraints- I don't know if college is necessary for me. I'm applying for a Thiel Fellowship. However, my parents will not fund me unless I'm going to college, so unless I get the Fellowship, I'm going to college. The question of what to do in college is harder. Obviously there's more to it than just grades. I want to network so that I can meet mentors, business people, friends, girls, etc. I also would like to write books. I'm currently planning on writing an ebook on education and autodidactism (tentative title: ignorance is piss). I used to be in a mindset of choosing one career and becoming good at it (like the 10K hours rule; I actually have a copy of K. A. Ericsson's "Cambridge Handbook of Expertise and Expert Performance" that I got for Christmas). On the other hand, I think that a combination of computer science, math, and chemistry (with self-studied business and philosophy) would allow me to be highly generalized. I could do start-ups, consulting, programming, publishing. Then again, I'd rather be an expert at at least one thing than a jack of all trades.

Essential elements- On a personal level, I think I'm going to take care of that regardless of what major/career I choose by reading up on meditation, stoicism, psychology, PUA, NVC, IFS, etc. The essential elements of a good career... hmm. I have been thinking about this. I don't want to have some idiot boss. I want a job that allows me to learn on the job (20 years of experience, not 5 years of experience 4X). I'd like a career that advances the pace at which progress is made (as that would have the largest net impact). I was thinking about investigating nootropics, but the funding structure of medicine is such that little research is devoted to human enhancement/augmentation, so now I'm thinking of something like education reform. Also, I think the idea behind debategraphs/knowledge mapping would greatly benefit the advancement of knowledge if it were widely adopted.

conditionals/hinging decisions- dealt with.

scope- the scope is fairly large. I don't really see how it could be larger. I'm trying to think about ExR and the future of humanity to reduce it. I'm thinking about what I want to do after college. Personally, I've been thinking lately about how my plans never last for more than a couple months. My plan so far has been to acquire goal stability by reading a lot so that I identify all the unknown unknowns of the problem. Now that I have done that (to some extent), the next step is to make a mind-map with all these considerations in mind. Also, I feel like once I get to college, my plans will change drastically. Perhaps they won't. That would be good.

fresh insights- I've talked to people sitting next to me on the plane, my extended family, and several LWers. Slowly and steadily, I'm gaining goal stability.

The main points I'm getting from the above, in terms of problem definition, are these:

  • Reducing existential risk is a means to an end. The problem is how to ensure that your existence continues for a very, very long time (so that you get to do lots of fun stuff and have lots of interesting experiences?) At least, I presume that is why you want to get uploaded. Existential risk is one obstacle to achieving this, but not the only one.

  • Your problem is how to make a significant contribution to reducing existential risk and enjoy doing it. This seems a bit at odds with the previous statement, that reducing existential risk was a means to an end.

These two problem statements have very different implications for what you should do. For example, if the second one is really the problem you want to solve (have fun reducing existential risk), then it's conceivable that you might trade off some expected lifespan to achieve it... which runs counter to the first.
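
To make that tension concrete, here's a toy sketch (all numbers and option names invented) of how the two problem statements can rank the same two life plans differently:

```python
# Toy comparison (all numbers invented): the same two life plans,
# scored under each of the two candidate problem statements.

options = {
    "safe, dull path":   {"lifespan": 0.9, "enjoyment": 0.3},
    "fun, riskier path": {"lifespan": 0.7, "enjoyment": 0.9},
}

def live_as_long_as_possible(o):
    # Statement 1: continued existence is the whole objective.
    return o["lifespan"]

def have_fun_reducing_risk(o):
    # Statement 2: enjoyment weighted heavily, lifespan less so.
    return 0.3 * o["lifespan"] + 0.7 * o["enjoyment"]

for label, score in [("statement 1", live_as_long_as_possible),
                     ("statement 2", have_fun_reducing_risk)]:
    best = max(options, key=lambda name: score(options[name]))
    print(f"Under {label}, the better plan is: {best}")
```

Statement 1 picks the safe path; statement 2 picks the fun one. Until the problem statement is settled, "which option is best" isn't even well-defined.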

I'll be posting another comment later today on the second step: objectives. This is really a qualitative exploration of your utility function. You may find it useful to go back and forth between objectives and problem statement a bit until they are both clear in your mind.

There's a concept that may be useful for you: the idea of achieving failure. It's a phrase that Eric Ries uses to describe entrepreneurs who flawlessly execute the wrong plan. As HKR put it, "A good solution to a well-posed decision problem is almost always a smarter choice than an excellent solution to a poorly posed one."

I want to be uploaded, but the only way that's going to happen is if I reduce existential risk

What's the chain of causation, here?

I'm somewhat skeptical of the value of goal stability for you right now. Human lives are long.

My recommendations:

Philosophy of Life: This is an important thing worth thinking about, but not too much or too frequently. You should write a draft now, with lots of question marks. Rewrite it from scratch annually, and compare. How are your experiences changing your perception of your values? Be suspicious of convergence before, say, age 25. Keep them private so you can be honest.

College Major: It sounds like flexibility is important at this stage. I would recommend something like a compsci/physics or compsci/chem double major. It's easier to start with the hardest and move down than start with the middle and move up.

Don't let college define your responsibilities. It's easy to see that you're getting As in all of your courses and think that you've done enough; set measurable goals for mentors, friends, girls, and give yourself grades on those.

A note on the recommendation to study physics: physicists can do anything. Seriously. You see physicists making contributions to computer science, biology, statistics, computational finance, etc. You rarely see non-physicists making significant contributions to physics. I think the reason for this is that physicists learn lots of very useful mathematics that can be applied in a wide variety of contexts... and unlike someone who earns a mathematics degree, their emphasis is on applying math to solving problems, rather than on math for its own sake. The other factor may be that physicists are trained to think in terms of fundamental principles; they expect to find some hidden underlying pattern that will bring order out of the chaos.

The other reason, of course, is that there's nearly no work in physics itself.

After problem definition, the next step is to clarify your objectives. What do you really want? What do you hope for? In the terminology of decision analysis, this is the step where you try to understand the rough outlines of your utility function, as it relates to your decision problem. Without this understanding you can't effectively compare and evaluate the alternatives.

You may have multiple objectives, and there may be some conflict between them. Don't let that stop you from listing all of them; you'll deal with any necessary tradeoffs in a later step. Don't rush this step, and don't settle for the immediate, obvious answers; figuring out what you really want is harder than you might think.

Here are some of HKR's thoughts on tackling this step:

  • Objectives help you determine what information to seek. You'll generally seek information for two reasons: (1) to suggest alternatives to consider, and (2) to help you evaluate those alternatives. Knowing your objectives tells you what information you need for purpose (2).

  • Often decision makers take too narrow a focus. If your list of objectives is brief and cursory, you may find after making your decision that you left out important considerations. Don't concentrate solely on the tangible and quantitative (lifespan, income, etc.); make sure you consider the intangible and subjective also (Is this fun? Is this something I really care about?)

  • Don't limit your objectives by the ease of measurement. Income may be easier to measure than happiness, but the latter is important nonetheless.

  • If a prospective decision sits uncomfortably in your mind, you may have overlooked an important objective. For example, suppose that you were considering taking a particular job and that it looked great according to all of your criteria: good current income, prospects for advancement, opportunities to learn and expand your skills, etc. If you still feel uneasy about it, figure out why. Does it require you to live in a location that you find unappealing? Does something about the people there or the work environment bother you? The answers to such questions will point to possible additional objectives you should include.

HKR suggest the following five-step process for eliciting your objectives:

  1. Write down all the concerns you hope to address through your decision. Put together a wish list. Consider a great, if infeasible, alternative; what's so great about it? Consider a terrible alternative; why is it so bad? Ask people who have faced similar situations what they considered when they made their decision -- not what they chose, nor the choice they recommend for you, but what criteria and issues they considered.

  2. Convert your concerns into succinct objectives. A short phrase consisting of a verb and an object works well, e.g., "enjoy life", "save humanity", "live forever".

  3. Separate ends from means to establish your fundamental objectives. You may have heard these discussed as instrumental values vs. terminal values; Eliezer goes into great detail on the subject here. For example, when it comes to personal objectives, money itself is usually not a fundamental objective; it is, instead, a means of obtaining other things that you really care about (e.g., food, shelter, amusement) or that themselves are means (e.g., status) to getting other things you really care about (e.g., sex). A good exercise is to take each objective and ask why you want it; take the answer and ask why you want that; and continue until you arrive at something that needs no further justification -- you value it for itself. (This is similar to The 5 Whys, but aimed at discovering root values rather than root causes; see the sketch after this list.) Further notes: The means objectives can stimulate ideas for alternatives to consider, and help you understand the problem better. The fundamental objectives -- and only the fundamental objectives -- are used to evaluate and compare alternatives.

  4. Clarify what you mean by each objective. Ask, "what do I really mean by this?" Latch onto any fuzziness or ambiguity in the objective and resolve it. Imagine that you're explaining your objectives to someone who's being a bit obtuse.

  5. Test your objectives. Compare several alternatives; does the one that comes up best, according to your stated objectives, feel like it could be better than the others? A second test is to see if your objectives would suffice to explain a prospective decision to someone else. (Imagine this person to be entirely non-judgmental so that you aren't tempted to be less than honest about your true objectives.) If you feel that A is a better choice than B, but you can't adequately explain why in terms of your stated objectives, then you may be missing something.
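
As a minimal illustration of step 3, here's a sketch that models the "why" chains as a little graph and follows each one down to a fundamental objective; all the example objectives and links are invented:

```python
# Objectives as a means-ends graph: each edge answers "why do I
# want this?". Objectives with no outgoing edge are fundamental.
# All example objectives and links are invented for illustration.

why = {
    "earn money":            "have food and shelter",
    "gain status":           "enjoy life",
    "reduce ExR":            "live a very long time",
    "have food and shelter": "live a very long time",
}

def fundamental(objective):
    """Follow the 'why' chain until it bottoms out."""
    while objective in why:
        objective = why[objective]
    return objective

for obj in ("earn money", "gain status", "reduce ExR"):
    print(f"{obj!r} is a means to {fundamental(obj)!r}")

# Only the fundamental objectives ("enjoy life", "live a very long
# time") are used to compare alternatives; the means objectives
# mainly suggest alternatives to consider.
```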

At this point you have a lot of introspection to carry out. You may find it useful to think out loud (or, more accurately, in print) with us. You don't have to achieve perfection here, but I would urge you to carefully think through these first two steps before proceeding further. However, there may be some things that you just can't get right on the first pass, and will only realize are important after you have proceeded further through the decision process. That's okay; you can make this an iterative process, returning to earlier steps when necessary.

You're right, this is going to take some time. I do think that introspection is hard, but even after reading books like Timothy Wilson's "Strangers to Ourselves", I find those contrived experimental examples of people not knowing why they chose something not very helpful/informative when I'm trying to make everyday decisions. This comment asks the "so what?" question: how are we going to use this information about our lack of introspective power?

EDIT: I just realized that you were the one who made the comment I just linked to.

This sounds like a good idea. I'll start.

Okay, first off, what existential risks do you think are most likely? Nuclear war, natural pandemic, engineered pandemic, UFAI, nanotech, climate change, or something else?

Second question: what are your talents? Computer science, biology, chemistry, writing, politics/persuading people, or something else? Especially note talents/skills that could make you a lot of money.

Once you've answered one or both of those, I'll know where we can go from there.

Existential risk- I'm not sure. That's what I'm researching now. I've decided to become more strategic (because I'm not automatically strategic), so I have a copy of Bostrom's "Global Catastrophic Risks" that I'm reading right now. I also tried to read the FOOM debate between Hanson and Yudkowsky, but I felt that I wasn't knowledgeable enough to assign probabilities to the scenarios. I feel that in order to better understand the risks I need to 1. encourage people to use more structured forms of debate (like knowledge mapping or debategraphs.org), and 2. study math.

Talents- being young (17), smart, athletic, funny, ambitious, well-read. I'm learning programming and business right now through self-study (in addition to my coursework).

Being 18, I'm in a pretty similar situation to yours, and I've looked extensively into the whole problem.

The things you mention as talents are actually qualities; programming, on the other hand, counts as a skill (of course, if you're particularly good at it, it counts as a talent).

What you need to look for is your Element (to use Sir Ken Robinson's concept). While it is all well and noble to pick what you want to do on the basis of mitigating existential risk your best bet is to find something you're both good at and passionate about (your talent). Then work out how to use that to reduce existential risk. Not only will you end up happier for it, you'll also be more effective in saving the world.

Of course that doesn't mean other areas of skill aren't worth developing; so long as they're useful, they're good. But in terms of your career, go with what you're most passionate about. If you're serious about learning maths, you should take a look at http://www.khanacademy.org/ ; the videos there cover a great deal of material, from the very basics to calculus, matrices, and the like.

I wouldn't worry too much, if I were you, about which existential risk is greatest, but rather about which one you are most suited to negate. In fact, if you were to try to negate a potential horror that requires a certain area of expertise to counteract, while not being particularly good at it (that is, just average), you could end up screwing things up even worse. Eliezer's posts about how he nearly destroyed the world are a good example of that.

I hope some of that helps.

noble to pick what you want to do on the basis of mitigating existential risk your best bet is to find something you're both good at and passionate about (your talent).

I agree. If there were an ExR from asteroid impacts, yet we couldn't do much about it, I wouldn't think about trying to solve it. So I'm already of the mind that I'm trying to use my comparative advantage to reduce net ExR as much as possible. You could use an equation, I guess: X (how much attention it deserves) = Y (probability of the risk occurring) * Z (probability of my actions reducing Y).
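
Taking that equation at face value, a worked sketch might look like the following; all the probabilities are invented purely for illustration:

```python
# X = Y * Z from above: the attention a risk deserves is the chance
# it occurs times the chance my actions reduce it. All probabilities
# below are invented purely for illustration.

risks = {
    "asteroid impact":     {"p_occurs": 0.001, "p_i_reduce": 0.0001},
    "engineered pandemic": {"p_occurs": 0.05,  "p_i_reduce": 0.002},
    "UFAI":                {"p_occurs": 0.10,  "p_i_reduce": 0.0005},
}

def attention(r):
    return r["p_occurs"] * r["p_i_reduce"]

for name in sorted(risks, key=lambda n: attention(risks[n]), reverse=True):
    print(f"{name}: X = {attention(risks[name]):.1e}")
# Note how comparative advantage (the Z term) can dominate: a less
# likely risk can still deserve more of *your* attention.
```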

As far as happiness goes, I think that if I didn't choose a career that allowed me to reduce ExR effectively, I would feel guilty. I'd feel happy to go to work if I knew it was for the good of humanity. I also think that other things like meditation, stoicism, cognitive enhancement, etc. can provide happiness if you have a boring day job (like high school).

Well, I'm pretty sure I'm not going to be an FAI programmer. If one assumes that the biggest bottleneck to AGI is large insight from geniuses, then I think the best way to help AGI development would not be to try to be a genius (if you aren't one), but to try to work on education reform so that there are more geniuses working on FAI. Or to advertise to geniuses.

If one assumes that the biggest bottleneck to AGI is large insight from geniuses,

Don't be too sure about that. You might want to take a look at Little Bets: How Breakthrough Ideas Emerge from Small Discoveries. My own experience when dealing with difficult problems has been that an incremental approach can be much more productive than sitting at my desk thinking real hard about the parts I don't know how to do. I just go ahead and do the parts that I can do, and let the harder problems percolate in my mind. By the time I've got the manageable parts done, I've learned enough about the problem, turned vague abstractions into concrete realizations, and reduced the uncertainty enough that the hard parts often fall right into place. Consider a probability distribution over variables x1, ..., xn, with the variables representing solutions to parts of a problem. P(xn) may be quite diffuse and spread out, but P(xn | x1, x2, ..., xk) may be much more concentrated.
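
That claim about conditioning is easy to check numerically. Here's a minimal sketch using a two-variable Gaussian with correlation 0.9 (an arbitrary choice for illustration):

```python
import numpy as np

# Monte Carlo check that conditioning concentrates a distribution.
# x1 and x2 are standard normals with correlation rho = 0.9
# (an arbitrary choice for illustration).
rng = np.random.default_rng(0)
rho = 0.9
cov = [[1.0, rho], [rho, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T

print(f"Var(x2)          = {x2.var():.3f}")  # ~1.00: diffuse

# Condition on x1 landing near a particular value, i.e. on having
# "solved" the first part of the problem.
near = np.abs(x1 - 1.0) < 0.05
print(f"Var(x2 | x1 ~ 1) = {x2[near].var():.3f}")  # ~1 - rho**2 = 0.19
```

Solving the manageable part (pinning down x1) cuts the residual uncertainty in the hard part (x2) by roughly a factor of five in this toy case.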

I don't expect the problem of FAI to fall easily, but any incremental advance can help set the stage for the definitive advances.

Good point! I also appreciate how a lot of your ideas are referenced to books. Assuming you're right (I think I would have to know much more about AGI in order to evaluate the claim with any certainty), the next obvious question is: what's limiting progress? Are there not enough people making small discoveries (a good reason to go into the field)? Are they not sharing their discoveries (I don't know how you'd fix that one)? Is there not enough funding for them to do their work (a good reason to stay out, unless you plan on outcompeting everybody because you feel that's what you can do best to help)?

Also, I feel like many of the posts tagged "decision" or "decision theory" are more about logical AIs in contrived situations than the type of decision making we're trying to do here. Take the prisoner's dilemma, for example: there are more than two individuals playing in real life, and there are no arbitrary rules in real life. When does game theory apply in my everyday life (and in trying to make this decision)?
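
For reference, here's a minimal sketch of the standard one-shot prisoner's dilemma (the payoff numbers are the conventional textbook choice), showing exactly the contrived structure in question: defection dominates even though mutual cooperation pays both players more.

```python
# The standard one-shot prisoner's dilemma with conventional
# textbook payoffs. payoff[(a, b)] gives (row, column) payoffs
# when the row player plays a and the column player plays b.

C, D = "cooperate", "defect"
payoff = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def best_response(opponent_move):
    """Row player's best reply to a fixed opponent move."""
    return max((C, D), key=lambda m: payoff[(m, opponent_move)][0])

for move in (C, D):
    print(f"If the opponent plays {move}, play {best_response(move)}")
# Defecting is the best reply either way, so (defect, defect) is the
# equilibrium, even though (cooperate, cooperate) pays both more.
```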

Game theory was applied during the Cold War to analyze how best to play the "great game" with the Soviets. It has also been applied to bargaining / negotiating.

My employer does advanced marketing research and analysis, and some of our clients are large companies that have a limited number of major competitors (an oligopoly situation). In this situation game theory is useful in making decisions about positioning and marketing -- if you change your product lineup, prices, etc., your competitors will make changes of their own in response, and you have to take this into account.

I can't think of an application to everyday decisions of individuals, however. You're right that game theory is less relevant when there are many players so that no one player has a major impact on the others.

The reason it works in war and in business is that you've got a measurable outcome/reward: lives and dollars. We cannot, however, quantify happiness or success with a "how do you feel, from 1 to 10?" survey.

Ah I see, by wider scope I was thinking more along the lines of "What sort of economic system could be constructed to encapsulate most of the benefits of capitalism while having far fewer failings?" and "What is the best way to prevent a Nuclear Holocaust?"

You know, just the small things...