Followup to: Sunk Cost Fallacy

Related to: Rebelling Against Nature, Shut Up and Do the Impossible!

(expanded from my comment)

"The world is weary of the past—
O might it die or rest at last!"
— Percy Bysshe Shelley, from "Hellas"

Probability theory and decision theory push us in opposite directions. Induction demands that you cannot forget your past; the sunk cost fallacy demands that you must. Let me explain.

An important part of epistemic rationality is learning to be at home in a material universe. You are not a magical fount of originality and free will; you are a physical system: the same laws that bind the planets in their orbits, also bind you; the same sorts of regularities in these laws that govern the lives of rabbits or aphids, also govern human societies. Indeed, in the last analysis, free will as traditionally conceived is but a confusion—and bind and govern are misleading metaphors at best: what is bound as by ropes can be unbound with, say, a good knife; what is "bound" by "nature"—well, I can hardly finish the sentence, the phrasing being so absurd!

Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination), but some of my friends tell me there should be some sort of payoff for all this work of inference. And indeed, there should be: if you know how something works, you might be able to make it work better. Enter instrumental rationality, the art of doing better. We all want to do better, and we all believe that we can do better...

But we should also all know that beliefs require evidence.

Suppose you're an employer interviewing a jobseeker for a position you have open. Examining the jobseeker's application, you see that she was expelled from four schools, was fired from her last three jobs, and was convicted of two felonies. You ask, "Given your record, I regret having let you enter the building. Why on Earth should I hire you?"

And the jobseeker replies, "But all those transgressions are in the past. Sunk costs can't play into my decision theory—it would hardly be helping for me to go sulk in a gutter somewhere. I can only seek to maximize expected utility now, and right now that means working ever so hard for you, O dearest future boss! Tsuyoku naritai!"

And you say, "Why should I believe you?"

And then—oh, wait. Just a moment, I've gotten my notes mixed up—oh, dear. I've been telling this scenario all wrong. You're not the employer. You're the jobseeker.

Why should you believe yourself? You honestly swear that you're going to change, and this is great. But take the outside view. What good have these oaths done for all the other millions who have sworn them? You might very well be different, but in order to justifiably believe that you're different, you need to have some sort of evidence that you're different. It's not a special question; there has to be something about your brain that is different, whether or not you can easily communicate this evidence to others with present technology. What do you have besides the oath? Are you doing research, trying new things, keeping track of results, genuinely searching at long last for something that will actually work?

For if you do succeed, it won't have been a miracle: you should be able to pin down at least approximately the causal factors that got you to where you are. And it has to be a plausible story. You won't really be able to say, "Well, I read all these blogposts about rationality, and that's why I'm such an amazing person now." Compare: "I read the Bible, and that's why I'm such an amazing person now." The words are different, but translated into math, is it really a different story? It could be. But if it is, you should be able to explain further; there has to be some coherent sequence of events that could take place in a material universe, a continuous path through spacetime that took you from there to here. If the blog helped, how specifically did it help? What did it cause you to do that you would not otherwise have done?

This could be more difficult than it now seems in your current ignorance: the more you know about the forces that determine you, the less room there is for magical hopes. When you have a really fantastic day, you're more likely to expect tomorrow to be like that as well if you don't know about regression towards the mean.
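
As a minimal illustration of that last point (the setup and numbers here are invented for the example, not part of the original argument), consider simulating noisy days and looking at what follows the unusually good ones:

```python
import random

# A minimal illustration of regression toward the mean (setup and numbers
# invented for this example): each "day" is an underlying average quality
# plus independent noise.
random.seed(0)
MEAN_QUALITY, NOISE = 5.0, 2.0
days = [random.gauss(MEAN_QUALITY, NOISE) for _ in range(100_000)]

# Pick out the unusually good days (top 10%) and look at the day right after.
threshold = sorted(days)[int(0.9 * len(days))]
great_days = [d for d in days if d >= threshold]
next_days = [days[i + 1] for i in range(len(days) - 1) if days[i] >= threshold]

print(f"all days, on average:      {sum(days) / len(days):.2f}")
print(f"the fantastic days:        {sum(great_days) / len(great_days):.2f}")
print(f"the days right after them: {sum(next_days) / len(next_days):.2f}")
# The day after a fantastic day is, on average, just an ordinary day: the
# fantastic part was mostly noise, and the noise does not carry over.
```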

I'm not trying to induce despair with this post; really, I'm not. It is possible to do better; I myself am doing better than I was this time last year. I just think it's important to understand exactly what doing better really involves.

I feel bad blogging about rationality, given that I'm so horribly, ludicrously bad at it. I'm also horribly, ludicrously bad at writing. But it would hardly be helping for me to just shut up in despair—to go sulk in a gutter somewhere. I can only seek to maximize expected utility now, and for now, that apparently means writing the occasional blogpost. Tsuyoku naritai!


Am I wrong, or are you conflating disregarding past costs in evaluating costs and benefits with failing to remember past costs when making predictions about future costs and benefits?

It seems pretty clear that the sunk cost consideration is that past costs don't count in terms of how much it would now cost you to pursue vendor A vs. vendor B, while induction requires you to think, "every time we go with vendor A, he messes up, so if we go with vendor A, he will likely mess up again".

What's the conflict?

I think I may have been too brief/unclear, so I am going to try again:

The fallacy of sunk costs is, in some sense, to count the fact that you have already expended costs on a plan as a benefit of that plan. So, no matter how much it has already cost you to pursue project A, avoiding the fallacy means treating the decision about whether to continue pursuing A, or to pursue B (assuming both projects have equivalent benefits) as equivalent to the question of whether there are more costs remaining for A, or more costs remaining for B.

The closest-to-relevant thing induction tells us is how to convert our evidence into predictions about the remaining costs of the projects. This doesn't conflict, because induction tells us only that, if projects like A tend to get a lot harder from the point you are at, your current project is likely to get a lot harder from the point you are at.

There just isn't a conflict there.
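
To make that distinction concrete, here's a toy sketch (the vendor history and numbers are invented purely for illustration): the amount already sunk into A never enters the comparison, while induction over past overruns is what supplies the estimates of the remaining costs.

```python
# Toy sketch of the distinction above (vendor history and numbers invented
# for illustration): sunk costs are excluded from the comparison; induction
# over past experience supplies the *estimates* of the remaining costs.

def estimate_remaining_cost(past_overruns, base_quote):
    """Induction step: past overruns with this vendor inform the prediction
    of what finishing with them will actually cost."""
    if not past_overruns:
        return base_quote
    average_overrun = sum(past_overruns) / len(past_overruns)
    return base_quote * (1 + average_overrun)

# Project A: 80 already spent (unrecoverable, so deliberately never used
# below), and vendor A has overrun by 50-70% on every past engagement.
# Project B: nothing spent yet, no history of overruns.
sunk_cost_A = 80
remaining_A = estimate_remaining_cost([0.5, 0.6, 0.7], base_quote=100)
remaining_B = estimate_remaining_cost([], base_quote=120)

choice = "A" if remaining_A < remaining_B else "B"
print(f"remaining cost of A: {remaining_A:.0f}, remaining cost of B: {remaining_B:.0f}")
print(f"continue with project {choice}")  # B: only the remaining costs count
```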

If project A is what you've always done in the past, even when having the knowledge that project B is superior, you can induce that you will probably continue with project A in the future.

You can also induce from what incentives you seem to respond to how to increase the probability that you will do B. For instance, if telling your friends that you plan to do a project has a high correlation with your doing that project, then you can increase your probability that you will do B by telling your friends that you plan to do B.

If project A is what you've always done in the past, even with the knowledge that project B is superior, you can induce that you will probably continue with project A.

Probably, but shouldn't? If project B is superior, is that not the superior choice?

I think I understand the point you are making, I just want to be clear that there is a distinction between what is supposed to happen and what is likely to happen.

Yes.

To put it another way, the prognosis of recovering irrationalists is, so far, not good.

As I read it, this was the main point of the original post. Most of us (I suspect) still get rather different answers when we honestly ask "what action should I take" and "what action would I take".

[gjm]

In the paragraph beginning "Epistemic rationality", I think the last occurrence of "epistemic" should say "instrumental". Or have I misunderstood?

Fixed; thanks.

This could be more difficult than it now seems in your current ignorance: the more you know about the forces that determine you, the less room there is for magical hopes.

Actually, the reverse is true: any psychological technology that can be distinguished from magic, is insufficiently advanced. ;-)

That is, if you can't make yours (or other people's) behavior changes look like magic, you have an insufficient mastery of the forces that determine our behavior. Just think about priming, for example!

You mention "I read the Bible, and that's why I'm such an amazing person now." Yet, religious conversions clearly do exist, and for some people, they stick... perhaps for the same reason that some people can just decide one day to quit cigarettes, and that's that.

If you have a model that requires strenuous effort or lengthy struggle to induce change in a human being, that's a pretty good indication there's something seriously wrong with your model.

Heck, for the past few months I've become increasingly aware of the ways in which my own "miracle" techniques are really a lot slower and more involved than they actually need to be. The catch is, you have to believe that things can work faster, in order for them to work faster!

I used to think that the reason some people needed help with certain kinds of mind hacks was that there was interference between thinking about the process steps and experiencing the mental representations being hacked.

But lately, I see that the role a helper such as myself really plays, is keeping the person from distracting themselves with anosognosiac explanations, excuses, and objections, unrelated to the process they're supposed to be following. And some processes that I thought only worked on some types of problems, actually seem to work on a wider variety of things, as long as you keep the person from following self-interfering thoughts.

It's like I was saying about suspension of disbelief: it isn't so much necessary to believe in the possibility of a miracle, as it is to simply stop the disbelief process from interfering. The disbelief process pops up and says, "this isn't going to work" or "I'm no good at this stuff" or "I'm too messed up for this to work on me", and then the person goes into a different mental state than the one we were trying to work on.

Anyway, I've noticed recently that if, instead of reassuring people about these things or even addressing those objections at all, I just (almost literally) ask them to shut up and do it anyway, the very next thing that happens is they're saying, "sonofagun, that worked." (I think I need my own version of "shut up and multiply" to use for this!)

Anyway... my point is mainly that there's no need to despair about the "forces that determine you". They are actually pretty damn shallow and manipulable, as long as you are able to accept things being easy, instead of hard. (Again, think of priming! Priming is EASY, not hard.)

However, to the extent that you make difficulty a virtue (which has been my major sin all these many years of study), you will experience difficulty.

Just think: throughout thousands of years of human existence, how many people have managed to change their behavior without the aid of a psychologist, let alone a rationalist?

Maybe it's only as hard as we expect it to be.

[gjm]

If you have a model that requires strenuous effort or lengthy struggle to induce change in a human being, that's a pretty good indication there's something seriously wrong with your model.

Credible only in so far as "one can consistently induce change in a human being without strenuous effort or lengthy struggle" is, and I don't think the latter is anything like obviously right. On the face of it, it seems obviously wrong: people often do require effort and struggle to change, and evolutionarily speaking that seems like what one should expect. (You don't want random other people to be able to change your behaviour too easily, and easy self-modification is liable to make for too-easy modification by others.)

... my own "miracle" techniques ...

You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn't seem all that rare for you to post something saying "I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I'm better now." To my cynical eye, there seems to be some tension here.

Again, think of priming! Priming is EASY, not hard.

OK, so there are some ways, commonly harmful but maybe sometimes exploitable for good, in which our mental states can be messed with non-rationally for a shortish period. Remind me, please, how that is supposed to be good evidence that we can consistently change our behaviours, motivations, etc., in ways we actually want to, with lasting effect?

On the face of it, it seems obviously wrong: people often do require effort and struggle to change, and evolutionarily speaking that seems like what one should expect.

Yes... and no. See below:

(You don't want random other people to be able to change your behaviour too easily, and easy self-modification is liable to make for too-easy modification by others.)

Exactly. The conscious mind is both an offensive weapon (for persuading others) and a defense against persuasion. Separating conscious/social ("far") beliefs from action-driving ("near") beliefs allows the individual to get along with the group while remaining unconvinced enough to continue acting impulsively for their own benefit under low-supervision circumstances.

In other words, willpower works better when you're being watched... which is exactly what we'd expect.

The offense/defense machinery evolved with or after language; initially language probably worked directly on the "near" system, which led to the possibility of exploitation via persuasion... and an ensuing arms race of intelligence driven by the need for improved persuasive ability and improved skepticism, balanced by the benefit of remaining able to be truly convinced of things for which sufficient sensory ("near") evidence is available.

You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn't seem all that rare for you to post something saying "I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I'm better now." To my cynical eye, there seems to be some tension here.

Only if you don't get that belief systems aren't always global. Some beliefs are more global than others.

Also, it's important to bear in mind that knowing how to change something, knowing what to change, and knowing what to change it to, are all different skills. I've known methods for the first for quite some time now, and the last year or two I've focused more on the second. This year, I've finally started making some serious progress on the third one as well, which is actually part of understanding the second. (I.e., if you know where you're going, it's easier to know what's not there yet.)

For example, Dweck's work on fixed vs. growth mindsets: that stuff isn't a matter of global beliefs in a literal sense. Each area of your life that you perceive as "fixed" may be a distinct belief on the emotional level, so each one needs to be changed as it's encountered. In the month or so since I read her book, I've identified over half a dozen such mindsets: intelligence, time, task granularity, correctness, etc... each of which was a distinct "belief" at the emotional level regarding its "fixed"-ness.

Changing each one was "magical" in the sense that it opened up a range of choices that wasn't available to me before... but I couldn't simply read her book and decide, "woohoo, I will change all my fixed mindsets to growth ones". The brain does not have a "view source" button; you cannot simply "list" all your beliefs on the basis of an abstract pattern like fixedness vs. growthness, or ones that involve supernatural thinking, or any other non-sensory abstractions. (Abstractions are in the "far" system, not the "near" one.)

OK, so there are some ways, commonly harmful but maybe sometimes exploitable for good, in which our mental states can be messed with non-rationally for a shortish period. Remind me, please, how that is supposed to be good evidence that we can consistently change our behaviours, motivations, etc., in ways we actually want to, with lasting effect?

We are constantly self-priming. Techniques that work, work because they change the data we prime ourselves with.

When you discovered that Santa Claus didn't exist, did you try to stay up late to see him any more, or did your behavior change immediately, with lasting effect?

Basically, you stopped priming yourself with the thoughts that generated those behaviors, because your brain was no longer predicting certain events to occur. It is our never-ending stream of automatically-generated internal predictions that is the main internal source of priming. Change that prediction stream, and you change the behavior.

External methods of change work by forcing new predictions; internal methods (including CBT, NLP, hypnosis, etc.) work by manipulating the internal representations that are used to generate the predictions.

And yet, it doesn't seem all that rare for you to post something saying "I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I'm better now." To my cynical eye, there seems to be some tension here.

As a programmer, I will charitably note that it's not uncommon for a more serious bug to mask other more subtle ones; fixing the big one is still good, even if the program may look just as badly broken afterwards. Judging from his blog, he's doing well enough for himself, and if he was in a pretty bad state to begin with his claims may be justified. There's a difference between "I fixed the emotional hang-up that was making this chore hard to do" and "I've fixed a crippling, self-reinforcing terror of failure that kept me from doing anything with my life".

That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture--but then, the same could be said of Eliezer, and he's clearly won many people over with his ideas.

As a programmer, I will charitably note that it's not uncommon for a more serious bug to mask other more subtle ones; fixing the big one is still good, even if the program may look just as badly broken afterwards.

And one of the unfortunate things about the human architecture is that the more global a belief/process is, the more invisible it is... which is rather the opposite of what happens in normal computer programming. That makes high-level errors much harder to spot than low-level ones.

First year or so, I spent way too much time dealing with "hangups making this chore hard to do", and not realizing that the more important hangups are about why you think you need to do them in the first place. So it has been taking a while to climb the abstraction tree.

For another thing, certain processes are difficult to spot because they're cyclical over a longer time period. I recently realized that I was addicted to getting insight into problems, when it wasn't really necessary to understand them in order to fix them, even at the relatively shallow level of understanding I usually worked with. In effect, insight was just a way of convincing myself to "lower the anti-persuasion shields".

The really crazy/annoying thing is I keep finding evidence that other people have figured ALL of this stuff out before, but either couldn't explain it or convince anybody else to take it seriously. (That doesn't make me question the validity of what I've found, but it does make me question whether I'll be able to explain/convince any more successfully than the rest did.)

That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture

Heh, you think mine are grandiose, you should hear the claims that other people make for what are basically the same techniques! I'm actually quite modest. ;-)

"That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture--but then, the same could be said of Eliezer, and he's clearly won many people over with his ideas."

Precisely the point. We're not interested in how to attract people to doctrines (or at least I'm not), but in determining what is true and finding ever-better ways to determine what is true.

The popularity of some idea is absolutely irrelevant in itself. We need evidence of coherence and accuracy, not prestige, in order to reach intelligent conclusions.

The popularity of some idea is absolutely irrelevant in itself.

Compelling, but false. Ideas' popularity not only contributes network effects to their usefulness (which might be irrelevant by your criteria), but it also provides evidence that they're worth considering.

You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn't seem all that rare for you to post something saying "I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I'm better now." To my cynical eye, there seems to be some tension here.

Are you saying that he displays bad behavior because he keeps fixing himself? I thought that was a good thing.

With more relevance:

Credible only in so far as "one can consistently induce change in a human being without strenuous effort or lengthy struggle" is, and I don't think the latter is anything like obviously right. On the face of it, it seems obviously wrong: people often do require effort and struggle to change.

I agree with your statement, but only in the sense that an individual person will require effort and struggle to change with regard to most magical treatments, yet may respond quickly to a particular treatment. To throw in my own personal experience, people change pretty quickly once you find the trick that works on them.

[gjm]

Are you saying that he displays bad behavior because he keeps fixing himself?

No, of course not. I'm saying that if you have miraculous brain-fixing techniques and deploy them as effectively as you know how to on yourself for years, then after those years you should surely (1) be conspicuously much happier / better adjusted / more rational / more productive than everyone else, and (2) not still need fixing all the time.

Now, of course, I don't know for sure that Philip isn't Winning much more than all the rest of us who haven't become Clear by getting rid of our body thetans -- oh, excuse me, wrong form of miraculous psychological fixing -- so maybe it's just natural cynicism that makes me doubt it. Philip, what say you? Is your brain much better than the rest of ours now?

No, of course not. I'm saying that if you have miraculous brain-fixing techniques and deploy them as effectively as you know how to on yourself for years, then after those years you should surely (1) be conspicuously much happier / better adjusted / more rational / more productive than everyone else, and (2) not still need fixing all the time.

Yes, of course, because we all know that if you have a text substitution tool like 'sed', you should be able to fix all the bugs in a legacy codebase written over a period of 30-some years by a large number of people, even though you have no ability to list the contents of that codebase, in just a couple of years working part-time, while you're learning about the architecture and programming language used. Yeah, that should be a piece of cake.

Oh yeah, and there are lots of manuals available, but we can't tell you which ones sound sensible but were actually written by idiots who don't know what they're talking about, and which ones sound like they were written by lunatic channelers but actually give good practical information.

Plus, since the code you're working on is your own head, you get to deal with compiler bugs and bugs in the debugger. Glorious fun! I highly recommend it. Not.

It certainly doesn't help that I started out from a more f'd up place than most of my clients. I've had a few clients who've gotten one session with me or attended one workshop who then considered themselves completely fixed, and others that spent only a few months with me before deciding they were good to go.

It also doesn't help that you can't see your own belief frames as easily as you can see the frames of others. It's easy to be a coach or guru to someone else. Ridiculously so, compared to doing it to yourself.

[gjm]

See, the thing is that you don't just say "I've got some ways of tweaking how my brain works. They aren't very good, and I don't really have any understanding of what I'm doing, but I find this interesting." (Which would be the equivalent of "I've got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can't read".)

Which is not all that surprising, given that you're trying to make a living from helping people fix their brains, and you wouldn't get many clients by saying "I don't really have any more idea what I'm doing than some newbie wannabe hacker trying to wrangle the source code for Windows with no tools more powerful than sed". But I really don't think you should both claim that you understand brains and know how to fix them and you have "miracle" techniques and so on and so forth, and protest as soon as that's questioned "oh, but really it's like trying to work on an insanely complicated pile of legacy software with only crappy tools".

See, the thing is that you don't just say "I've got some ways of tweaking how my brain works. They aren't very good, and I don't really have any understanding of what I'm doing, but I find this interesting." (Which would be the equivalent of "I've got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can't read".)

Actually, I do say that; few of my blog posts do much else besides describe some bug I found, what I did to fix it, and throw in some tips about the pitfalls involved.

But I really don't think you should both claim that you understand brains and know how to fix them and you have "miracle" techniques and so on and so forth, and protest as soon as that's questioned "oh, but really it's like trying to work on an insanely complicated pile of legacy software with only crappy tools".

If I told you I had a miracle tool called a "wrench", that made it much easier to turn things, but said you had to find which pipes or bolts to turn with it, and whether they needed to be tightened or loosened, would you say that that was a contradiction? Would you expect that having a wrench would instantly make you into a plumber, or an expert on a thousand different custom-built steam engines? That makes no sense.

Computer programmers have the same problem: what their clients perceive as "simple" vs. "difficult/miracle" is different from what is actually simple or a miracle for the programmer. Sometimes they're the same, and sometimes not.

In the same way, many things that people on this forum consider "simple" changes can in fact be mind-bogglingly complicated to implement, while other things that they consider to be high-end Culture-level transhumanism are fucking trivial.

Funny story: probably the only reason I'm here is because in Eliezer's work I recognized a commonality: the effort to escape the mind-projection fallacy. In his case, it was such projections applied to AI, but in my case, it's such projections applied to self. As long as you think of your mind in non-reductionistic terms, you're not going to have a useful map for change purposes.

(Oh, and by the way, I never claimed to "fix brains" - that's your nomenclature. I change the contents of brains to fix bugs in people's behavior. Brains aren't broken, or at least aren't fixable. They just have some rather nasty design limitations on the hardware level that contribute to the creation of bugs on the software level.)

[gjm]

I think this discussion is getting too lengthy and off-topic, so I shall be very brief. (I'll also remark: I'm not actually quite as cynical about your claims as I am probably appearing here.)

If I told you I had a miracle tool called a "wrench" [...]

If you told me you had a miracle tool called a wrench, and an immensely complicated machine with no supporting documentation, whose workings you didn't understand, and that you were getting really good results by tweaking random things with the wrench (note: they'd better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can't even see doesn't work) ... why, then, I'd say "Put that thing down and back away slowly before you completely fuck something up with it".

I never claimed to "fix brains" - that's your nomenclature.

Yes, that's my nomenclature (though you did say "the code you're working on is your own head"...), and I'm sorry if it bothers you. Changes to the "contents of brains", IIUC, are mostly made by changing the actual brain a bit; the software/hardware distinction is nowhere near as clean as it is with digital computers.

(note: they'd better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can't even see doesn't work)

It's not that you can't see the code at all, it's that you can't list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.

Such single changes sometimes generalize broadly, if you happen to hit a "function" that's used by a lot of different things. But as with any legacy code base, it's hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.

I'd say "Put that thing down and back away slowly before you completely fuck something up with it".

Well, when I started down this road, I was desperate enough that the risk of frying something was much less than the risk of not doing something. Happily, I can now say that the brain is a lot more redundant -- even at the software level -- than we tend to think. It basically uses a, "when in doubt, use brute force" approach to computation. It's inelegant in one sense, but VERY robust -- massively robust compared to any human-built hardware OR software.

It's not that you can't see the code at all, it's that you can't list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.

Such single changes sometimes generalize broadly, if you happen to hit a "function" that's used by a lot of different things. But as with any legacy code base, it's hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.

While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand. Roughly half of my job is fixing other people's "fixes" because they really had no concept of what was happening or how to use the tools in the box correctly.

While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand.

Brain code doesn't crash, and the brain isn't capable of locking in a tight loop for very long; there are plenty of hardware-level safeguards that are vastly better than anything we've got in computers. Remember, too, that brains have to be able to program themselves, so the system is inherently both simple and robust.

In fact, brains weren't designed for conscious programming as such. What "mind hacking" essentially consists of is deliberately directing the brain to information that convinces it to make its own programming changes, in the same way that it normally updates its programming -- e.g. by noticing that something is no longer true, a mistake in classification has been made, etc. (The key being that these changes have to be accomplished at the "near" thinking level, which operates primarily on simple sensory/emotional patterns, rather than verbal abstractions.)

In a sense, to make a change at all, you have to convince the brain that what you are asking it to change to will produce better results than what it's already doing. (Again, in "near", sensory terms.) Otherwise, it won't "take" in the first place, or else it will revert to the old programming or generate new programming once you get it "in the field".

I don't mean you have to convince the person, btw; I mean you have to convince the brain. Meaning, you need to give it options that lead to a prediction of improved results in the specific context you're modifying. In a sense, it'd be like talking an AI into changing its source code; you have to convince it that the change is consistent with its existing high-level goals.

It isn't exactly like that, of course -- all these things are just metaphors. There isn't really anything there to "convince"; it's just that what you add into your memory won't become the preferred response unless it meets certain criteria, relative to the existing options.

Truth be told, though, most of my work tends to be deleting code, not adding it, anyway. Specifically, removing false predictions of danger, and thereby causing other response options to bump up in the priority queue for that context.

For example, suppose you have an expert system that has a rule like "give up because you're no good at it", and that rule has a higher priority than any of the rules for performing the actual task. If you go in and just delete that rule, you will have what looks like a miraculous cure: the system now starts working properly. Or, if it still has bugs, they get ironed out through the normal learning process, not by you hacking individual rules.
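
For concreteness, a toy rendering of that expert-system analogy (the rules and priorities here are invented for the sketch, not a claim about how any of this is actually implemented in a brain):

```python
# Toy expert system for the analogy above (rules and priorities invented for
# this sketch): the highest-priority applicable rule is the one that fires.

rules = [
    # (priority, condition, action)
    (100, lambda ctx: ctx["task"] == "write", "give up: 'I'm no good at this'"),
    (10,  lambda ctx: ctx["task"] == "write", "open the editor and start drafting"),
    (5,   lambda ctx: True,                   "wander off and do something else"),
]

def act(context, ruleset):
    applicable = [(priority, action)
                  for priority, condition, action in ruleset
                  if condition(context)]
    return max(applicable)[1]  # highest-priority applicable rule wins

print(act({"task": "write"}, rules))
# -> give up: 'I'm no good at this'

# Delete the single high-priority blocking rule: the ordinary task rules that
# were there all along now get to fire -- the "miraculous cure" in the analogy.
rules_without_block = [r for r in rules if not r[2].startswith("give up")]
print(act({"task": "write"}, rules_without_block))
# -> open the editor and start drafting
```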

I suppose what I'm trying to say is that there isn't anything I'm doing that brains can't or don't already do on their own, given the right input. The only danger in that is if you, say, motivated yourself to do something dangerous without actually knowing how to do that thing safely. And people do that all the time anyway.

Ah, that makes sense.

If you have miraculous brain-fixing techniques and deploy them [...] on yourself for years, then after those years you should surely [...] (2) not still need fixing all the time.

I think my only response to what you said is that some things need fixing forever since the perfect picture is being viewed from a fuzzy perspective. Personally, I doubt that the self-improvement ladder ever ends.

But I understand and concede the point.

I agree with your statement, but only in the sense that an individual person will require effort and struggle to change with regard to most magical treatments, yet may respond quickly to a particular treatment. To throw in my own personal experience, people change pretty quickly once you find the trick that works on them.

I would further extend that to suggest that in fact, there is an essential nature to all tricks that work, and that the key is only that you get a person to actually DO that trick, without running other mental programs at the same time. I am finding that if I (in the coaching context) simply push someone through a process, and ruthlessly deny them the opportunity to digress, disbelieve, dispute, etc., then they will get a result that they were previously failing to obtain, due to being distracted by their own skepticism. (Not in the sense of doubting me or the technique, but doubting their own ability to do the technique, or to actually change, etc.)

This leads me to believe that most of the difference between schools of self-help is largely persuasive in function: you have to first convince a person to lower their anti-persuasion shields in order to get them to actually carry out any effective self-persuasion. Because in effect, self-management equals self-persuasion.

Interesting. Is the link you provided a good start along that topic? Is there a better introductory place? The specific request:

I would like to learn more about disabling anti-persuasion shields as related to running (or not running) mental "programs". As in, which mental programs are anti-persuasion shields and is disabling anti-persuasion shields similar to disabling other mental programs.

Is the link you provided a good start along that topic?

It's not bad. I linked to a page that can be read from the top, to give a good intro to the idea of "suggestion" vs. "autosuggestion", but really most of the book is quite good. Although it was written almost 100 years ago, Coue's book has some pretty stunning insights into what actually works. There are just a few points that I would add to what he says in the book.

First, he doesn't address the distinctions between verbal and sensory imagination, commanding and questioning. Second, he doesn't say much to address the issue of dealing with existing beliefs and responses.

The first omission results in people mindlessly repeating phrases or "affirmations" and thinking they are doing autosuggestion -- they are not. Imagination must be used, and by imagination, I mean not intentional visualization (which would be counterproductively invoking what he calls the "will"), but rather the passive contemplation or musing on an idea, like "what would it be like if...?" Or "how good will it be when...?" (These are leading questions, of course, but then, that's the point: to lead your imagination to respond on its own.)

The second omission is that when you attempt to imagine what something is like, your internal response may be a feeling or idea that it is impossible, impractical, nonsensical, a bad idea, that you can't do it, or some other form of interference.

At this point, it's really only necessary to wonder what it would be like if the desired thing were already had, anyway. In other words, one acknowledges the response, but does not treat it as if it were true. You then repeat the attempt at inquiry. This is how you bypass the shields, as it were.

Is disabling anti-persuasion shields similar to disabling other mental programs?

Yes and no. It used to be that I spent all my time (and encouraged others to spend theirs) on modifying the memories that induced the kind of critical responses and negative predictions that stopped them from doing things. What I have begun wondering only today, is whether it might not be simpler just to bypass such blocks and not give them any credence to start with.

In other words, it has occurred to me that maybe it is not the initial negative response that's an issue for people being blocked; maybe it's just their response to that response. In other words, person A gets a negative response to the idea of doing something, and then responds to that by giving up or feeling like it's useless. Whereas person B might get the same initial negative response, and then respond to it by imagining how good the result is going to be. Paradoxically, the more negative responses person B gets, the greater their motivation will become.

So, I'm currently self-experimenting with that idea - of focusing on the 2nd order responses to blocks instead of the 1st order blocks themselves. If it works, it should be a big increase in efficiency, since the 2nd-order responses are more likely to be system-global, meaning fewer program changes needed to effect system-wide change.

But that's still to be tested. Right now, I've just noticed that bypassing blocks by simply ignoring the 1st-order response is quite possible. I've done it with various things today and it has worked quite well so far.

I would like to learn more about disabling anti-persuasion shields as related to running (or not running) mental "programs".

To disable your own shields, you just refrain from internal critique and stay focused on whatever process of autosuggestion you're undertaking. Suspend disbelief, in other words.

Think of autosuggestion as requiring a sterile internal environment. If you think something like "I don't know how to do this" while trying to imagine something, you will be priming yourself with "not knowing how to do it"!

Remember, just seeing words to do with "old" made people walk more slowly... if you pipe stronger messages into your own head, you will get stronger results.

What makes it work is not "belief" but experience without disbelief. After all, priming can occur without conscious notice -- if it were consciously noticed, the person might choose to disregard it.

But since you don't choose to disregard your own beliefs about what is possible or what you can do, you (as Coue says) "imagine that you cannot, and of course you cannot".

However, if you do disbelieve your interrupting beliefs, and allow yourself to contemplate the thing you want to believe or do without disbelief, then you will successfully "autosuggest" something.

(See also Coue on imagination vs. will -- if you think of the will as conscious/verbal/directed thought, and the imagination as subconscious/sensory/wondering thought, then what he says will make sense.)

It's not bad. I linked to a page that can be read from the top, to give a good intro to the idea of "suggestion" vs. "autosuggestion", but really most of the book is quite good.

Thanks. I will probably respond after processing the information in your post and your book, so heads up for a reply in the deep future. :)

For if you do succeed, it won't have been a miracle: you should be able to pin down at least approximately the causal factors that got you to where you are. And it has to be a plausible story.

The causal factors that got you to where you are might not be that obvious, which is why teaching success is not trivial at all.

Also, I agree with most of what pjeby is saying in this thread. Successful people didn't become successful through the sheer brute force of their willpower. If someone who's successful says he achieved it "through a lot of hard work and determination", people usually imagine "a lot of hard work" to mean things like "do stuff you don't enjoy all day, be all stressed out, endure sleepless nights, not have a life". They imagine that in order to become successful, first you have to go through an extended period of being miserable (i.e. doing the hard work), and then you stop being miserable because you've become successful. This strikes me as completely wrong. Successful people do not achieve success by punching through concrete walls of misery using their extraordinary willpower. Fighting akrasia does not need to involve the horrifyingly heroic efforts some people suffering from akrasia imagine it to necessarily involve.

They imagine that in order to become successful, first you have to be go through an extended period of being miserable [...] This strikes me as completely wrong. Successful people do not achieve success by punching through concrete walls of misery [...]

I agree entirely.

This intuitively feels to me very similar to the questions I have about things like memory and the way people act when the situational context has been gamed to cause unethical behavior (see "The Lucifer Effect").

One wants to believe that one's personal memory is not only accurate, but indeed unbiased, but to what extent does the realization that it may not be actually help to mitigate the fact that it may not be? Does my awareness of things such as the Stanford Prison Experiment have any correlation with whether I will or will not be sucked into the group mindset under similar circumstances in reality?

Indeed, what would one do if the answer was "No"?

Jonnan

An important, so-often-useful distinction. This reminds me of the Buddhist notion of fetters. Fetters are personal features that impair your attainment of enlightenment and bind you to suffering. You can cast them off, but in order to do so, you have to cut the crap and practice doing without them, with the full knowledge that it may take many lifetimes to free yourself. It is not sufficient to announce your adherence to the creed of enlightenment. The only things that make you do better are the things that make you do better. Everything else is window-dressing, or at best a means to that end.

On another note...

I feel bad blogging about rationality, given that I'm so horribly, ludicrously bad at it. I'm also horribly, ludicrously bad at writing.

Is that hyperbolic self-effacement I detect?

The sunk cost fallacy comes from hanging on to a plan that is already put in motion, a plan that you constructed in your mind, purchased for, and are in the process of implementing. The error is in conflating unrelated things through processing them as parts of one mental entity, and thus valuing some of them too highly.

When you take an outside view, you are using the strength of your mind in processing a representative pick of evidence. You can see what to expect by constructing a valid model, as opposed to taking in anecdotal evidence that misleads your mind.

The difference between these modes of thinking is in appealing to the weaknesses and strengths of the human brain in finding the right answers. This is the difference that determines the failure in the first case, and the relative success in the second.

However, even the power of outside view is rather limited. You are hiding valid evidence from your mind, lest it be misled. In many cases, there are ways of finding more, perhaps presenting the conclusions to another outside look.

If you decided to be a perfect employee, perhaps resolved to be a cooperator in a one-off cooperation scenario, this is private info about you that may be practically impossible to signal. In the statistics you present to the judgment from the outside view, this info is absent. Does your resolve, or mental clarity, make a difference? How to predict that? There is no fully general answer; you'd have to work on each specific case. But there is also no fundamental conflict.

I just think it's important to understand what doing better really involves.

For me, it has been the acceptance of other people, which has given me trust in myself, enabled me to relate better to other people, reduced my akrasia, and made me more effective and happier. Internalising "It's alright to be you", "You have a right to be here". I am here to seek greater rationality, but do not think that rationality alone improves my life.

And the jobseeker replies, "But all those transgressions are in the past. Sunk costs can't play into my decision theory—it would hardly be helping for me to go sulk in a gutter somewhere. I can only seek to maximize expected utility now, and right now that means working ever so hard for you, O dearest future boss! Tsuyoku naritai!"

I am failing to understand how two felonies in my past are sunk costs. How are events costs? Am I missing a layer of abstraction?

There are costs involved in switching from being a criminal to being law-abiding and vice versa. If I'd be better off switching overall but I don't because I've already switched in the past, I'm falling for the sunk cost fallacy.

They are sunk costs to the jobseeker in that he cannot do anything about them and they have a negative value. If he were to take them into account, he would no doubt throw up his hands and shout "but who would hire ME?" So he must ignore them as he would any sunk cost when deciding what to do; namely, where to apply for a job.

At least that is how I understand it.

The sunk cost fallacy is when you assign a higher value than you would otherwise to something because of the price you paid for it. In this case, the job seeker is not concerned with the value of anything she gained from the felonies, so the fallacy does not apply. The job seeker's situation is not like having already paid for a ticket to a movie that she does not really want to see.

The job seeker should take into account how prospective employers will perceive her reputation, focus on those who are more likely to give her a chance to build a more positive reputation, and be prepared to answer, from the employer's perspective, why they should do so. The past events have consequences for expected future utility that should not be ignored.

"What do you have besides the oath? Are you doing reasearch, trying new things, keeping track of results, genuinely searching at long last for something that will actually work?" I agree with this, at least as a start. I would say "genuinely searching" is a tricky thing. Diet is one of the most sought areas to improve upon, and one of the most controversial for someone just looking around. One may think that looking at pubmed makes their search more rational, but that may just lead them to fall prey to existing biases in statistically confused researchers.

"For if you do succeed, it won't have been a miracle: you should be able to pin down at least approximately the causal factors that got you to where you are. And it has to be a plausible story." I only partially agree with this. Having a plausible-sounding story helps one commit and follow-through, but I see little evidence success is connected with actually understanding the causes of that success-which is why teaching excellence is so difficult. I say this agreeing that knowing the causal factors would give you more power.

[anonymous]

Enter epistemic rationality, the art of doing better. We all want to better, and we all believe that we can do better...

I think you meant to say instrumental where you said epistemic in the bottom of the third paragraph.

This was first noticed by gjm.


[kurige]

Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination)

What motivation is there to seek out an abomination? I read the linked comment and I disagree strongly... The curious, persistent rationalist should find the truth seeking process rewarding, but shouldn't it be rewarding because you're working toward something wonderful? Worded another way - of what value is truth seeking if you hold the very object you seek in contempt?

If you take the strictly classical, rational view of the world, then you lose the ability to say that truth is "beautiful". Not a great loss, considering "beauty" is an ill-defined, subjective term - but if you continue to cut everything out of your life that has no rational value, then you very quickly become a pseudo-Vulcan.

Truth, at the highest level, has an irrational, indefinable quality. It's this quality that makes it seductive, worthwhile, valuable, desirable. Truth is something you grok. Heinlein was a loony, but I do thank him for that word.

but some of my friends tell me there should be some sort of payoff for all this work of inference. And indeed, there should be: if you know how something works, you might be able to make it work better. Enter epistemic rationality, the art of doing better. We all want to better, and we all believe that we can do better...

I like to think that I seek truth. Others are here to "win" or "be better". Maybe we're all talking about the same thing. Maybe not.

This comment is a bit off-topic from the rest of the post, and quickly becoming dangerously Zen, but I would much appreciate it if somebody more knowledgeable on the subject could offer some disambiguation either here or in a separate post.

[S]houldn't [truthseeking] be rewarding because you['re] working toward something wonderful?

But if you expect the truth to be wonderful, then what do you do when you come across strong evidence for some horrifying hypothesis that makes you want to cry? And if there is no hypothesis that horrifies you, then you really must be a Vulcan ...

[I]f you continue to cut everything out of your life that has no rational value then you very quickly become a pseudo-Vulcan.

This is not how I understand the term rationality. I find it helpful to keep a strict type distinction: you cut everything untrue out of your beliefs, and fold everything beautiful into your utility function.

While I can imagine hypotheses that would horrify me if they turned out to be true, I cannot think of an actual case of encountering strong evidence for such a hypothesis. Even for the examples I can think of, if they were in fact true I believe I would prefer to know the truth than to continue to believe the comforting falsehood. Can you give an example of a horrifying hypothesis that you would prefer not to know the truth of even if it was in fact true?

Can you give an example of a horrifying hypothesis that you would prefer not to know the truth of even if it was in fact true?

No; like you, I want to believe the truth. (Or at least, I want to want to believe the truth. If everyone who professed to seek truth wholeheartedly really did so, the world would be very different. I cannot claim to be wholeheartedly rational; I can only claim that I try, after my fashion.) There are theories that scare me that I do want to believe if and only if they are true---I'd rather not talk about them in this comment.

Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination)

What motivation is there to seek out an abomination?

Presumably the position mentioned is simply that one can value truth without valuing particular truths in the sense that you want them to be true. It might be true that an earthquake will kill hundreds, but I don't love that an earthquake will kill hundreds.

Presumably the position mentioned is simply that one can value truth without valuing particular truths in the sense that you want them to be true. It might be true that an earthquake will kill hundreds, but I don't love that an earthquake will kill hundreds.

Yes, thank you, that's what I was trying to get at. "[U]sually an abomination" was poetic exaggeration--in retrospect, a very poor choice of words on my part.

Worded another way - of what value is truth seeking if you hold the very object you seek in contempt?

You cannot fix, or kill, what you haven't found. The phrase "truth hunting" might be appropriate.

Though if the point is that contempt of the territory does not imply contempt of the map, then I agree.

Contempt of the map? It is the map itself that should be irrelevant, while the possibilities implied by it for the territory are to be valued and selected from.