Urges vs. Goals: The analogy to anticipation and belief

by AnnaSalamon · 7 min read · 24th Jan 2012 · 71 comments


Tags: Anticipated Experiences, Signaling, Motivations

Partially in response to: The curse of identity

Related to: Humans are not automatically strategic, That other kind of status, Approving reinforces low-effort behaviors.

Joe studies long hours, and often prides himself on how driven he is to make something of himself.  But in the actual moments of his studying, Joe often looks out the window, doodles, or drags his eyes over the text while his mind wanders.  Someone sent him a link to which college majors lead to the greatest lifetime earnings, and he didn't get around to reading that either.  Shall we say that Joe doesn't really care about making something of himself?

The Inuit may not have 47 words for snow, but Less Wrongers do have at least two words for belief.  We find it necessary to distinguish between:

  • Anticipations, what we actually expect to see happen;
  • Professed beliefs, the set of things we tell ourselves we “believe”, based partly on deliberate/verbal thought.

This distinction helps explain how an atheistic rationalist can still get spooked in a haunted house; how someone can “believe” they’re good at chess while avoiding games that might threaten that belief [1]; and why Eliezer had to actually crash a car before he viscerally understood what his physics books tried to tell him about stopping distance going up with the square of driving speed.  (I helped Anna revise this - EY.)

A lot of our community technique goes into either (1) dealing with "beliefs" being an evolutionarily recent system, such that our "beliefs" often end up far screwier than our actual anticipations; or (2) trying to get our anticipations to align with more evidence-informed beliefs.

And analogously - this analogy is arguably obvious, but it's deep, useful, and easy to overlook in its implications - there seem to be two major kinds of wanting:

  • Urges: concrete emotional pulls, produced in System 1's perceptual / autonomic processes
    (my urge to drink the steaming hot cocoa in front of me; my urge to avoid embarrassment by having something to add to my accomplishments log)
  • Goals: things we tell ourselves we’re aiming at, within deliberate/verbal thought and planning
    (I have a goal to exercise three times a week; I have a goal to reduce existential risk)

Implication 1:  You can import a lot of technique for "checking for screwy beliefs" into "checking for screwy goals".

Urges, like anticipations, are relatively perceptual-level and automatic.  They're harder to reshape and they're also harder to completely screw up.  In contrast, the flexible, recent "goals" system can easily acquire goals that are wildly detached from what we actually do, wildly detached from any positive consequences, or both.  Some techniques you can port straight over from "checking for screwy beliefs" to "checking for screwy goals" include:

The fundamental:

  • "What's the positive consequence?"  This is the equivalent of "What's the evidence?" for beliefs.  All the other cases involve not asking it, or not asking hard enough.

The Hansonian:

  • Goals as clothes / goals as tribal affiliation:  “We are people who have free software (/ communism / rationality / whatever) as our goal.”  Before you install Linux, do you think "What's the positive consequence of installing Linux?" or does it just seem like the sort of thing a free-software-supporter would do?  (EY says:  What positive consequence is achieved by marching in an Occupy Wall Street march?  Can you remember anyone stating one, throughout the whole affair - "if we march, X will happen because of Y"?)
  • Goals as a signal of one’s value as an ally:  Sheila insists that she wants to get a job.  We inspect her situation and she's not trying very hard to get a job.  But she's in debt to a lot of her friends and is borrowing more to live on a month-to-month basis.  It's not hard to see why Sheila would internally profess strongly that she has a goal of getting a job.
  • Goals as personal fashion statements:  A T-Shirt that says “Give me coffee and no one gets hurt” seems to state a very strong desire for coffee.  This is clearly a goal professed directly to affect how others see you, and it's more a question of affecting a 'style' than anything directly tribal or status-y.

The satiating:

  • Having goals as optimism:  "I intend to lose weight" can be created by much the same sort of internal processes that would make you believe "I will lose weight", in cases where the goal (belief) would not yet seem very plausible to an outside view.
  • Having goals as apparent progress:  My current to-do list has "write thank-you notes for wedding gifts".  This makes me feel like I've appeased the demand for internal attention by having a goal.  (EY:  I have "send Anna and Carl their wedding gift" on my todo list.  This was very effective at appeasing the need to send them a wedding gift.)

Implication 2:  "Status" / "prestige" / "signaling" / "people don't really care about" is way overused to explain goal-urge delinkages that can be more simply explained by "humans are not agents".

This post was written partially in response to The Curse of Identity, wherein Kaj recounts some suboptimal goal-action linkages - wanting to contribute to the Singularity, then teaching himself to feel guilty whenever not working; founding the Finnish Pirate Party, then becoming the spokesperson which involved tasks he wasn't good at; helping Eliezer on writing his book, and feeling demotivated because it seemed like work "anyone could do" (which is just the sort of work that almost nobody is motivated to do).

Kaj forms the generalization "as soon as my brain adopted a cause, my subconscious reinterpreted it as the goal of giving the impression of doing prestigious work for the cause".  I worry that our community has a tendency to explain as e.g. status signaling or "people really don't care about X", observations that can also be explained by less malice/selfishness and more "our brains have known malfunctions at linking goals to urges".  People are as bad at looking into hospitals for their own health as for the sake of their parents' health; Kaj didn't actually gain much prestige from feeling guilty about his relaxation time.

We do have a status urge.  It does affect a lot of things.  People do tend to massively systematically understate it in much the same way that Victorians pretended that sex wasn't everywhere.  But that's not the same cognitive problem as "Our brain is pretty bad at linking effective behaviors to goals, and will sometimes reward us for just doing things that seem roughly associated with the goal, instead of actions that cause the consequence of the goal being achieved."  And our brains not being coherent agents is something that's even more massive than status.

Implication 3:  Humans cannot live by urges alone

Like beliefs, goals often get much wackier than urges.  I've seen a number of people react to this realization by concluding that they should give up on having goals, and lead an authentic life of pure desire.  This wouldn't work any more than giving up on having beliefs.  To precisely anticipate how long it takes a ball to fall off a tower, you have to manipulate abstract beliefs about gravitational acceleration.  I have an urge to drive a car that runs smoothly, but if I didn't also have a goal of having a well-maintained car, I would never get around to having it serviced - I have no innate urge to do that.

I really have seen multiple people (some of whom I significantly cared about) malfunctioning as a result of misinterpreting this point.  As a stand-alone system for pulling your actions, urges have all kinds of problems.  Urges can pull you to stare at an attractive stranger, to walk to the fridge, and even to sprint hard for first base when playing baseball.  But unless coupled with goals and far-mode reasoning, urges will not pull you to the component tasks required for any longer-term goods.  When I get into my car I have a definite urge for it not to be broken.  But absent planning, there would never be a moment when the activity I most desired was to take my car for an oil change.  To find and keep a job (let alone a good job), live in a non-pigsty, or learn any skills that are not immediately rewarding, you will probably need goals.  Even though human goals can easily turn into fashion statements and wishful thinking.

Implication 4:  Your agency failures do not imply that your ideals are fake.

Obvious but it needs to be said:  People are as bad at looking into hospitals for their own health as for the sake of their parents' health.  It doesn't mean that they don't really care about their parents, and it doesn't mean that they don't really care about survival.  They would probably run away pretty fast from a tiger, where the goal connected to the urge in an ancestrally more reliable way and hence made them more 'agenty'; and they might fight hard to defend their parents from a tiger too.

There's a very real sense in which our agency failures imply that human beings don't have goals, but this doesn't mean that our ungoaly ideals are any more ungoaly than anything else.  Ideals can be more ungoaly because they're sometimes about faraway things or less ancestral things - it's probably easier to improve your agency on less idealy goals that link more quickly to urges - but as entities which can look over our own urges and goals and try to improve our agentiness, there's no rule which says that we can't try to solve some hard problems in this area as well as some easy ones.[2]

Implication 5:  You can align urges and goals using the same sort of effort and training that it takes to align anticipations and beliefs.

Although I've heard people saying that we discuss willpower-failure too much on Less Wrong, most of the best stuff I've read has been outside Less Wrong and hasn't made contact with us.  For a starting guide to many such skills, see Eat That Frog by Brian Tracy [3].  Some basic alignment techniques include:

  • Get in the habit of asking "What is the positive consequence?"  (Probably more needs to be written about this so that your brain doesn't just answer "I'll be a free software supporter!" which is not what we mean to ask.)
  • Andrew Critch's "greedy algorithm":   Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"
  • Whenever you sit down to work, naming a single, high-priority accomplishment for that session.  Visualizing that accomplishment, and its positive rewarding consequences, until you have an urge for it to happen (instead of just having an urge to log today's hours).

And much the same way that a lot of craziness stems, not so much from "having a wrong model of the world", as "not bothering to have a model of the world", a lot of personal effectiveness isn't so much about "having the right goals" as "bothering to have goals at all" - where unpacking this somewhat Vassarian statement would lead us to ideas like "bothering to have something that I check my actions' consequences against, never mind whether or not it's the right thing" or "bothering to have some communication-related urge that animates my writing when I write, instead of just sitting down to log a certain number of writing hours during which I feel rewarded from rearranging shiny words".  

Conclusion:

Besides calling myself an aspiring rationalist, these days I also call myself an "aspiring consequentialist".

 


 

[1] IMO the case of somebody who has the belief "I am good at chess", but instinctively knows to avoid strong chess opponents that would potentially test the belief, ought to be a more central example in our literature than the person who believes they have a dragon in their garage (but instinctively knows that they need to specify that it's invisible, inaudible, and generates no carbon dioxide, when we show up with the testing equipment).

[2] See also Ch. 20 of Methods of Rationality:

Professor Quirrell:  "Mr. Potter, in the end people all do what they want to do. Sometimes people give names like 'right' to things they want to do, but how could we possibly act on anything but our own desires?"

Harry:  "Well, obviously I couldn't act on moral considerations if they lacked the power to move me. But that doesn't mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!"

[3] Thanks to Patri for recommending this book to me in response to an earlier post. It is perhaps not written in the most LW-friendly language -- but, given the value of these skills, I’d recommend wading in and doing your best to pull useful techniques from the somewhat salesy prose.  I found much of value there.


Comments

I have also found Eat That Frog to be an unusually good collection of the major productivity techniques. Incidentally, I also heard about the book from Patri via Divia.

For a shorter and more rationality-friendly version of the book, I summarized it here:

EDIT: http://becomingeden.com/summary-of-eat-that-frog/

Great summary; just read it and bookmarked it. Much thanks for writing this. I had thought I needed to reread Eat That Frog but had been reluctant to take the hours required; now I don't have to.

Cosmos: Thanks, I'm glad you found it useful! :)
witzvo: The link didn't work for me today. Does it have a new home, by any chance?
arundelo: http://becomingeden.com/summary-of-eat-that-frog/
witzvo: Thanks!
quentin: I second that thank you! Usually self-help books are way too fluffy for me to end up finishing (much less implementing), hopefully some of this will stick. Looks good so far :D

this analogy is arguably obvious, but it's deep, useful, and easy to overlook in its implications - there seem to be two major kinds of wanting:

and

Obvious but it needs to be said: People are as bad at looking into hospitals for their own health as for the sake of their parents' health.

I found neither of these things the least bit obvious. I hadn't realized Implication 4 until I had been reading Less Wrong for many months and it was not obvious in retrospect. I hadn't even considered the distinction between urges and goals at all, though it did seem obvious in retrospect - only in retrospect.

I say this because I have had a ton of trouble grasping the concept that things that are obvious to me aren't necessarily obvious to other people.

(Though I don't want to make the same mistake and assume that other people also have this problem.)

I really have seen multiple people (some of whom I significantly cared about) malfunctioning as a result of misinterpreting this point. As a stand-alone system for pulling your actions, urges have all kinds of problems. Urges can pull you to stare at an attractive stranger, to walk to the fridge, and even to sprint hard for first base when playing baseball. But unless coupled with goals and far-mode reasoning, urges will not pull you to the component tasks required for any longer-term goods. When I get into my car I have a definite urge for it not to be broken. But absent planning, there would never be a moment when the activity I most desired was to take my car for an oil change. To find and keep a job (let alone a good job), live in a non-pigsty, or learn any skills that are not immediately rewarding, you will probably need goals. Even though human goals can easily turn into fashion statements and wishful thinking.

I sort of run this way. Contrary to the description, though, I sometimes do get urges to clean, do laundry, etc. This usually occurs when I happen to be annoyed by the feel of dirt on my bare feet, or find my clothes hamper full, or some other stimulus trigger... (read more)

fortunately or unfortunately, I also have parents to provide me with reasons to have urges to do things I wouldn't otherwise have an urge to do.

A good point.

Social incentives that directly incentivize the immediate steps toward long-term goals seem to be key to a surprisingly large portion of functional human behavior.

People acquire the habit of wearing seatbelts in part because parents'/friends' approval incentivizes it; I don't want to be the sort of person my mother would think reckless. (People are much worse at taking safety measures that are not thus backed up by social approval; e.g. driving white or light-colored cars reduces one's total driving-related death risk by something on the order of 20%, but this statistic does not spread, and many buy dark cars.)

People similarly bathe lest folks smell them, keep their houses clean lest company be horrified, stick to exercise plans and study and degree plans and retirement savings plans partly via friends' approval, etc.; and are much worse at similar goals for which there are no societally cached social incentives for goal-steps. The key role social incentives play in much apparently long-term action of this is one reason people sometimes say... (read more)

multifoliaterose: Is lack of social skills typically the factor that prevents unhappily single folk from finding relationships? Surely this is true in some cases but I would be surprised to learn that it's generic.
[anonymous]: Long-term planning for status: long-term education plans (e.g., law school or medical school). For health: controlling weight; regular medical check-ups. [I omit the last because I don't understand what it means to "practice social skills."] You overstate the degree of goal-urge disconnect. Usually, when people ignore their professed goals, it's a case of "approving of approving." If goals were truly so disconnected from conduct as you imply (and have apparently convinced yourself is the case), they would serve little real function (except Hansonian signaling). You report that your friends came to grief by living by their urges alone, but if goals have minimal inherent power to guide conduct (that is, if they don't tend spontaneously to recruit urges in their support), then we would all (or most of us) be living like your unfortunate friends, since most people don't go through the self-help exercises of conscientiously attaching urges to goals. A hypothesis better accounting for the facts is that we often don't pursue our goals because our limited supply of will-power produces decision fatigue [http://disputedissues.blogspot.com/2011/12/decision-fatigue-its-implications-for.html]. We have to carefully focus our efforts and only pursue the goals most valuable at the margin. But that doesn't mean we practically ignore our paramount goals.
Swimmer963: Do you still feel this way, or do you feel that you understand what I meant in Action and Habit [http://lesswrong.com/lw/60y/action_and_habit/]? Have you changed any of your decision-making methods?
CronoDAS: I think I understand, sort of, but I haven't actually changed my decision-making methods. I don't even know how I would begin to go about doing that. Also, would changing my decision-making methods tend to increase or reduce urge-satisfaction?

I think I might be living by urges alone. Whenever I see something about "goals" or "self-discipline" or "self-improvement" I immediately shut down and get miserable. My brain says "I don't want to, dammit!" Of course, people tell me I am self-disciplined, but I see that as merely being practical; if it makes any sense, I'm willing to be practical but severely freaked out by aspirational or normative thinking.

Kaj forms the generalization "as soon as my brain adopted a cause, my subconscious reinterpreted it as the goal of giving the impression of doing prestigious work for the cause". I worry that our community has a tendency to explain as e.g. status signaling or "people really don't care about X", observations that can also be explained by less malice/selfishness and more "our brains have known malfunctions at linking goals to urges".

I actually agree with this, and have somewhat changed my mind about the explanation in my or... (read more)

[anonymous]:

Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"

I have been doing this deliberately for a few months because I was starting to get fed up with fighting my instincts every time I chose to program for an hour and wanted to spend that hour reading science fiction, so I actually started standing and exercising to watch anime or read books I... (read more)

AnnaSalamon: What are the negative stimuli? Have you looked into simple behaviorist methods for making studying less painful, instead of just making it more rewarding?
Giles: Or making the alternatives more painful?
AnnaSalamon: Nope.
[anonymous]: beeminder.com works for me...
[anonymous]: You can try it first :) Tell me if it is worth it.
AspiringKnitter: Assuming klfwip wants to maximize xyr own happiness, changing the situation by adding more pain wouldn't help. It might increase the amount of studying, but klfwip could also do that by enjoying studying more (possibly by altering xyr study habits), which would have a greater expected utility because it also makes klfwip happier.
[anonymous]: I study for perceived benefits that include happiness but are broad enough that I am willing to suffer in the short term for greater motivation. If someone put a gun to my head and ordered me to study, I would have to cooperate and probably be very productive, but I am just paranoid enough and value my current existence too much to let this happen. However, after forcing myself to act in ways that violate my natural hyperbolic discounting for months, it seems to have sunk in a bit, so it seems like even minor penalties for behavior I do not want to encourage are enough to change most of my habits if I am consistent enough. I would not argue everyone should place themselves in a self-defined bootcamp to try and improve their abilities, but it has been an interesting experiment at least. Many organizations use similar tactics to brainwash members because it works, and it seems to be at least somewhat effective even when self-administered.
[anonymous]: The greatest negative stimuli of studying are hard to address for me; actual failure to comprehend a difficult set of problems for weeks is itself enough to make me want to give up completely at times, and the simplest ways to eliminate this would be by no longer caring about results or actually succeeding at everything I do. The first would remove most of my incentive to learn in the first place; the second I would love to do, but I don't expect this to be feasible any time in the near future. There are probably effective ways to make studying more fun that I have not really explored, though... Studying in groups with other people dealing with the same problems seems to be effective but can be hard to do practically. Nicotine can be used to artificially associate actions with pleasant feelings, along with other drugs. If nicotine actually works as well as Gwern and some others suggest I may try it, but age and then financial constraints have been too limiting.
  • Anticipations, what we actually expect to see happen;
  • Professed beliefs, the set of things we tell ourselves we “believe”, based partly on deliberate/verbal thought.

This distinction helps explain how an atheistic rationalist can still get spooked in a haunted house;

I apologize if this seems nitpicky, but the implication seems to be that in Yvain's post he is merely "professing" to not believe in ghosts, but "anticipating" that they exist. I believe the actual point of the post was that Yvain both professes and anticipates the none... (read more)

roystgnr: Adding a third category, (Urges, Feelings, Goals), we get a rewording of (Things I "want", Things I "like", Things I "want to want" or "approve of"), IIRC also from previous LessWrong discussions. So (Internalizations, Anticipations, Professed beliefs) seems like a close enough analogy. Your gut-level internalization/urge tells you to jump at the scary noise or to eat lots of the junk food, but that doesn't mean you wouldn't actually be surprised if you saw a real ghost or felt really contently satiated afterwards, and throughout both actions that voice in the back of your mind is telling you what an idiot you're being. This is starting to sound like (Id, Ego, Superego) as well, which is a little worrisome. It's a better model for human behavior than a unified mind, but reinventing pop psychology is probably not something to be proud of, and I'm sure any binary/trinary dichotomy is still an over-simplification. I'm not just a triumvirate; I contain multitudes.
Spurlock: I can't deny feeling a wave of "Uh oh" when you mention the similarity to Freud... but let's keep in mind "The world's greatest fool may say the Sun is shining... [http://wiki.lesswrong.com/wiki/Reversed_stupidity_is_not_intelligence]" etc. The idea that there is a difference between our conscious and unconscious selves is hardly a novel observation on this site (Type 1 vs. Type 2 reasoning [http://lesswrong.com/lw/7e5/the_cognitive_science_of_rationality/], the whole nature of cognitive biases, etc.), and the same is true of the difference between our actual current selves and our aspirations/goals ("I want to become stronger" [http://wiki.lesswrong.com/wiki/Tsuyoku_naritai]). It does seem like a realistic and useful trichotomy, Freud or no Freud. And if we need additional levels to describe ourselves more accurately, I certainly have no problem including them as they become necessary :-) Edit: For anyone who may be interested, I believe the prior discussion roystgnr is referring to is also Yvain [http://lesswrong.com/lw/6nz/approving_reinforces_loweffort_behaviors/].
Viliam_Bur: Why exactly is it taboo to say that Freud made a good approximation of something?
RichardKennaway: It's no more taboo than it is to say the Sun goes round the Earth. We just know better than to take Freud seriously about anything. (Or so I generally understand without having looked closely. If you want to justify the claim that Freud made a good approximation of something, go ahead, but the argument won't be with me.)
Viliam_Bur: Do you agree or disagree with the following things?
  • People sometimes do things which are not fully conscious, though if we think about these actions, we might find some hidden motive. Seems like reason is only one of the forces that move our mind; desire and group values are other significant forces.
  • Healing psychical problems by hypnosis is not safe. The "healed" problems usually reappear later.
  • People often think about sex (surely much more often than is polite to admit in Victorian society).
  • Our dreams are related to our emotions.
Because for me, this is the historical contribution of Freud to psychology. It does not mean he invented it all, but at least he popularized it, and I guess it was pretty controversial at that time.
RichardKennaway: The second point relates to the Victorian fad for Mesmerism [1], the fourth is wisdom of the ages, and the other two are Freud lite. Where are his id, superego, and ego now? One might as well credit medieval alchemists with modern chemistry. What do you think of the well-known claims by various critics that he "set psychiatry back one hundred years", or that psychoanalysis is the "most stupendous intellectual confidence trick of the twentieth century"? (Quotes from here [http://en.wikipedia.org/wiki/Sigmund_Freud#Legacy].) [1] Hypnotherapy still exists, but it's curious that there has never been a single substantial mention of it on LessWrong. The Google box brings up just two mentions-in-passing. I guess the idea of getting into a verge-of-falling-asleep state while listening to a voice droning suggestions into one's ear isn't going to appeal much here, for all the magical powers attributed to it in fiction and by NLP practitioners (do I repeat myself?). Searching for "hypnosis" gives a lot more hits, but from a quick glance, little discussion.
juliawise: I'm not sure that's true. In the pre-Freud examples I can think of, dreams were interpreted as predicting actual future events. (Think Joseph interpreting Pharaoh's dream [http://www.enduringword.com/commentaries/0141.htm], or the portentous dreams in Shakespeare's Julius Caesar [http://www.online-literature.com/shakespeare/julius_caesar/6/], or lots of folk methods [http://www.ancientfolklore.co.uk/howtodreamfuturespouse.htm] for dreaming about a future spouse.) Freud's claim that dreaming about a crop failure meant something about your fears or emotions, rather than actual future weather conditions, was a new idea.
Viliam_Bur: At the time when Freud worked, Mesmerism was a popular topic; today it is not. Of course today criticizing Mesmerism would be a waste of time. (Hopefully a hundred years later people will consider criticizing homeopathy or creationism a waste of time. But it does not mean that people who are criticizing it today are wasting time.) I do not know enough about the history of medicine to estimate how popular Mesmerism was among physicians in that era. By the way, at the beginning Freud also used and advocated the hypnotic cure, but later he said "Oops". He completely reworked his theories at least twice. Sure, but how did people use this wisdom? There were many attempts to explain dreams, but it seems to me they either required some irreproducible personal talent or a dictionary saying "X means Y" without any explanation of the relationship between X and Y or of how to explain things you don't find in the dictionary. Saying that dreams are censored metaphorical scenarios of our suppressed wishes coming true, and actually using this framework to explain some specific dreams, seems like an improvement to me. Used by psychoanalysts; briefly revived and popularized by Eric Berne [http://en.wikipedia.org/wiki/Games_People_Play_%28book%29] in the 1960s. Yes, Freud was not a scientist. Scientists make hypotheses, construct experiments, evaluate them statistically, etc. Freud was a physician -- he tried to cure his patients when the general state of knowledge in his area was pathetic: mostly useless, often harmful. So he made up some heuristics, they seemed to work (though it could also be a placebo effect), compiled them into theories, and published books. He trained a few followers, and some people found his theories (with some updates) useful for a few decades. I would classify his teachings as an "expert opinion", not "science". And if you'd prefer the word "pseudoscience", I wouldn't say you are wrong. This is how psychology was done at that time. Unfortunately, many criticism
[anonymous]: This seems too narrow a conception of science: did Darwin do science that way? What Freud didn't succeed in is to elevate psychology from a preparadigm state (in Thomas Kuhn's sense). But Freud's main concern was mental conflict, and I don't think its study has today reached the stage of genuine science. Cognitive-behavioral approaches to treatment largely ignore mental conflict, and the result is that they are more a collection of tricks than a theory. Because students typically set the bar too high for psychoanalysis, Freud's own principal trick, free association, is vastly under-utilized.

Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"

I'm adopting this. Could someone point me to the source? I tried to google for Andrew Critch's "greedy algorithm" but haven't found anything except this LW post. Update: Sent a PM to Andrew, asked for more details.

Update 2:

I tried this for a while but alas, it didn't stick - I... (read more)

Critch, aka Academian, taught it in minicamp and unfortunately has yet to write it up anywhere. I wish he would. pm academian and ask him to :)

Great post! "Aspiring consequentialist" has a nice ring to it.

I just realized I also have "Send Carl & Anna their wedding gift" on my to-do list.

We know a thing or two about the neurobiology behind the divide between urges and goals. Those interested can read about it here.

0Solvent9yIs it a coincidence both of those were posted on the same day?
1lukeprog9yYes, though we each knew that the other was planning/writing something like the post each of us ended up publishing.

More generally, for the basic decision-making tools we have a collection (automatic application, automatic correction, deliberative application, deliberative correction). For goals, that's (wanting, liking, approving, approving of approving); for beliefs, (anticipation, learning/surprise, professed belief, correspondence with referent (Tarskian truth)).

For example, correcting wrong belief in belief (professed belief) that doesn't reflect more accurate anticipation then corresponds to getting rid of fake professed utility functions that don't reflect the act... (read more)

Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"

Closely related to the parenting advice of "Catch them being good" - which works wonders on kids. I expect it will generalize well to adults.

A lot of our community technique goes into either (1) dealing with "beliefs" being an evolutionarily recent system, such that our "beliefs" often end up far screwier than our actual anticipations; or (2) trying to get our anticipations to align with more evidence-informed beliefs.

Wow. I hadn't heard this expressed quite like this before... We have one territory and two maps, and we can help get both maps in sync with reality by getting them both in sync with each other.

Is (2) related to taking ideas seriously?

To me there seems to be ... (read more)

And much the same way that a lot of craziness stems, not so much from "having a wrong model of the world", as "not bothering to have a model of the world", a lot of personal effectiveness isn't so much about "having the right goals" as "bothering to have goals at all" - where unpacking this somewhat Vassarian statement would lead us to ideas like "bothering to have something that I check my actions' consequences against, never mind whether or not it's the right thing" or "bothering to have some commun

... (read more)
1NancyLebovitz9yI'd say it's useful, but that is not a simple explanation.
3Jonathan_Graehl9yPerhaps I really meant "persuasive", not "simple". Simple would be: Try to have a model of the world. Try to have goals. Don't worry yet about mistakes. By trying at all, you'll probably do better than most people.

I think "goals" are the wrong way to look at it.

Very few people have a complete, coherent system of terminal values. The few who do usually seem to suffer from their excessive rigidity. I can't commit to an exhaustive set of goals, all the way to the end of my life. I've had to discard and change my plans too many times. What looks like a great idea today may turn out to be fruitless on inspection.

Instead of goals I think about resources. I don't know specifically what I'm going to want to do, but whatever it is, money will be helpful. As w... (read more)

This should be on the front page.

1Swimmer9639yI expect it will be, fairly soon.

Thank you for this post. The central insight that we should consider instrumental rationality by analogy to epistemic rationality is something that had never occurred to me before. I wish I had thought of it.

Besides an aspiring rationalist, these days I call myself an "aspiring consequentialist".

I think I'll do that too.

Well, it seems to me that a rational goal professed by a rationalist should correspond to a few anticipations (that the goal is achievable, that achieving the goal will achieve some rational supergoal, serve an urge, or otherwise be positive). Not an analogy, but a straightforward correspondence.

Unless of course one adopts goals of the form - suppose I am the leader of the tribe and you are regular member and I tell you to defend this hill. Or vice versa. And we adopt defending of the hill as a goal without any knowledge as to why we are defending this hill an... (read more)

0[anonymous]9yThis would seem true if the only force involved in evolution were natural selection. But sexual selection seems to have played a considerable role in human evolution. A horizon limited to reproduction doesn't seem very sexy to my intuitions.
0Dmytry9yWell, in principle (a) one can fake whatever signals it takes, and (b) mate selection goes both ways; an effective reproducer should more often mate with another effective reproducer.

Visualizing that accomplishment, and its positive rewarding consequences, until you have an urge for it to happen

I so have to try this hack. No agency without urgency?

This fits in reasonably well with an anti-akrasia framework I've been thinking over: Rephrase goal X as "I honestly believe that I will achieve X", and then carry on thinking until you actually have a reasonably solid case for believing that. This particular trick translates to breaking down the statement into "I will force myself to develop an urge to do X. And once I have ... (read more)

1FeepingCreature9yAs a rationalist, you can frame that as "I prefer to reward future versions of me that have achieved this by having correctly predicted their behavior. "

I personally had the experience of believing "If the last day where I remember having gone to bed was a Tuesday, today shouldn't be Monday but Wednesday". Before the belief got challenged by hard reality I had never paid any conscious attention to it. Getting it challenged, on the other hand, produced one of the three strongest feelings of cognitive dissonance that I have felt in my life.

We all have a bunch of beliefs which are very reasonable but for which there are edge cases where the beliefs don't hold.

I think the common term for those beliefs ... (read more)

I strongly endorse your second and fourth points; thanks for posting this. They're related to Yvain's post Would Your Real Preferences Please Stand Up?.

When I was in San Francisco, I recall the phrase "goals not roles" popping up a lot.

I find that it's a fairly easy way to remember that it's even a question whether I'm trying to accomplish something, or just do some things that make it look like I'm trying to accomplish it.

Important and timely (the next Melbourne LW meetup will focus on setting good goals, an exercise which has always confounded me).

I find particularly interesting the "wedding gift todo" example, where imagined achievement of the goal stands in for actually achieving the stated goal (giving a wedding gift). We want to have and act on "goals" rather than "urges". But setting goals is the kind of activity where "urges" can dominate. To me this looks like the analogue of belief-in-belief. We want our reasoning processes t... (read more)

I agree with your article. I think that this example doesn't quite illustrate it:

Before you install Linux, do you think "What's the positive consequence of installing Linux?" or does it just seem like the sort of thing a free-software-supporter would do?

The first few times I did this, it was the second motivation. After a while, it became the first, namely that I got a system I had better control over, incorporating high-quality software. However, the first motivation was very good for the second. Without (lots of) people doing the sort of t... (read more)

[-][anonymous]9y 0

Urges vary in strength, but it isn't usual to speak of one goal being stronger than another—except in the sense that it's powered by more urges. But goals, too, would seem to vary in strength. A goal's strength would bear some relationship to the expected value of striving to attain it.

You overstate the disconnection between urges and goals because you don't consider the consequences of goals having intrinsic strength, apart from their extrinsic association with urges. A stronger goal exerts a stronger pull to recruit urges to its service. Unless we're neurotic, we don't typically ignore our strongest goals because of a dearth of supporting urges.

Thought provoking post.

I got a lot out of this post, and it's obviously very high quality, but I have one humble gripe.

"and why Eliezer had to actually crash a car before he viscerally understood what his physics books tried to tell him about stopping distance going up with the square of driving speed. (I helped Anna revise this - EY.)"

I feel as if the parenthetical statement at the end of the quoted text would be unnecessarily alienating to an outside reader. Maybe it's that it feels unprofessional (I'm not really sure), but it seems like the kind of thing that ... (read more)

1Prismattic9yThe parenthetical is clearly there to show that she is not using this anecdote without EY's permission, since it might be taken as status-reducing.
2ahartell9yYeah, but maybe it would have been better as a footnote. And would newer readers know what "EY" meant?
2Ben_Welchner9yGiven it's right after an anecdote about someone whose name starts with "E", I think they could make an educated guess.
0ahartell9yProbably. When I first started reading LW, it took me a while, I think, to figure out "EY", though it is a pretty obvious connection. Anyway, I don't really think it's a big deal, just that it might be sub-optimal.

My workplace seems, at times, to be well-designed to align my urges and goals for me.

(also: Congratulations, Anna and Carl on your wedding!)

1[anonymous]9yHow so?
0fburnaby9yWhen I'm there, I feel like working, and when I'm anywhere else, I don't. I haven't ever stopped to try and figure out what it is about the place, but I've assumed that someone must be thinking about it. If you'd like some guesses:

  • it's a very sterile environment with no distractions
  • I feel pressure to demonstrate that I'm working right this second, which may help me stay in near-mode

(One necessity for all this to work is, of course, that my goals be related to furthering my career and to accomplishing and learning stuff that's positively correlated with my employer's goals.)

I love this post's anticipations : professed beliefs :: urges : professed goals. Planning seems more necessary (although I guess it's actually rare) than talking about your beliefs (which is easy to do to excess).

All the other cases involve not asking it, or not asking hard enough.

"the cases" is unclear. I assume you mean the rest of the "ways to screw up in choosing goals" yet to be listed.

Ideals can be more ungoaly because they're sometimes about faraway things or less ancestral things - it's probably easier to improve your agen

... (read more)

The Inuit may not have 47 words for snow

The Inuit do not have 47 words for snow! Please, don't propagate this falsehood, especially on a 'rationality' blog.

Edit: Sorry I read incorrectly. My apologies! It says 'may not'...