What I've learned from Less Wrong

Related to: Goals for which Less Wrong does (and doesn’t) help

I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.

1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”

2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.

3. Most people's beliefs aren't worth considering - Since I'm no longer interested in collecting interesting "beliefs" to show off how fascinating I am or give myself better odds of out-doing others, it no longer makes sense to be a meme-collecting, universal egalitarian the same way I was before. This includes dropping the habit of seriously considering all others' improper beliefs that don't tell me what to anticipate and are only there for sounding interesting or smart.

4. Most of science is actually done by induction - Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!”. To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.

5. I have free will - Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about, and that's actually comforting, since I had been unconsciously ignoring this out of fear that the evidence was going against what I wanted to believe. Looking back, I think this was actually kind of depressing me and probably contributing to my attitude that having interesting rather than correct beliefs was fine, since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as "settled" and move on is not because this is a questionable result... they're just in a world where most philosophers are still having trouble figuring out if god exists or not. So it's not really easy to make progress on anything when there is more noise than signal in the "philosophical community". Come to think of it, the AI community and most other scientific communities have this same problem... which is why I no longer read breaking science news -- it's almost all noise.

6. Probability / Uncertainty isn't in objects or events - It's only in minds. Sounds simple after you understand it, but I feel like this one insight often allows me to have longer trains of thought now without going completely wrong. (The small sketch just after this list tries to make the point concrete.)

7. Cryonics is reasonable - Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others... such as my future selves in forward branching multi-verses.
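
To make item 6 a little more concrete, here is a toy sketch -- nothing rigorous, just an illustration of my own in Python: the same fixed event gets different, and equally correct, probabilities from observers with different knowledge, because the probability describes a state of knowledge rather than a property of the die.

```python
# Toy illustration (mine, purely for exposition): one fixed roll of a die, two
# observers with different information, two different -- and both correct --
# probabilities for the same event.  Probability describes a state of
# knowledge, not a property of the die.
import random

random.seed(0)
roll = random.randint(1, 6)             # the event itself is simply whatever it is

# Observer A only knows that a fair die was rolled.
p_a = 1 / 6                             # P(roll == 4 | A's knowledge)

# Observer B is additionally told whether the roll came up even or odd.
p_b = 1 / 3 if roll % 2 == 0 else 0.0   # P(roll == 4 | B's knowledge)

print(f"roll={roll}  P_A(4)={p_a:.3f}  P_B(4)={p_b:.3f}")
```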


There are countless other important things that I've learned but haven't documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. Although, to be fair, it didn't happen by accident or by reading the recent comments and promoted posts but almost exclusively by reading all the core sequences and then participating more after that.

And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.

So if you've been thinking about reading the sequences but haven't been making the time to do it, I second Anna's suggestion that you get around to that. And the rationality exercise she linked to was easily the single most effective hour of personal growth I had this year, so I highly recommend that as well if you're game.

 

So, what have you learned from Less Wrong? I'm interested in hearing others' experiences too.

232 comments

LW has helped me a lot. Not in matters of finding the truth; you can be a good researcher without reading LW, as the whole history of science shows. (More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?) No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I believe that Eliezer has succeeded in creating, and communicating through the Sequences, a valuable technique for seeing through words to their meanings and trying to think correctly about those instead. When you do that, you inevitably notice how much of what you considered to be "meanings" is actually yay/boo reactions, or cached conclusions, or just fine mist that dissolves when you look at it closely. Normal folks think that the question about a tree falling in the forest is kinda useless; nerdy folks suppress their flinch reaction and get confused instead; extra nerdy folks know exactly why the question is useless. Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive. I liked reading Moldbug before LW. Now I find him... occasionally entertaining, I guess?

Better people than I are already turning this into a sort of martial art. Look at Yvain cutting down ten guys with one swoop, and then try to tell me LW isn't useful!

cousin_it:

Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive.

Trouble is, the question still remains open: how to understand politics so that you're reasonably sure you've grasped its implications for your personal life and destiny well enough? Too often, LW participants seem to me like they take it for granted that throughout the Western world, something resembling the modern U.S. regime will continue into the indefinite future, all until a technological singularity kicks in. But this seems to me like a completely unwarranted assumption, and if it turns out to be false, then the ability to understand where the present political system is heading and plan for the consequences will be a highly valuable intellectual asset -- something that a self-proclaimed "rationalist" should definitely take into account.

Now, for full disclosure, there are many reasons why I could be biased about this. I lived through a time and place -- late 1980s and early 1990s in ex-Yugoslavia -- where most people were blissfully unaware of the storm that was just beyond the horizon, even though any cool-headed objective observer should have been able to foresee it. My own life was very negatively affected by my family's inability to understand the situation before all hell broke loose. This has perhaps made me so paranoid that I'm unable to understand why the present political situation in the Western world is guaranteed to be so stable that I can safely forget about it. Yet I have yet to see arguments for this conclusion that would pass the standards that LW people normally apply to other topics.

I agree with you on this, but honestly, it's a difficult enough topic that semi-specialists are needed. Trying as a non-specialist to figure out how stable your political system is, rather than trying to find a specialist you can trust, will get you about as far as it would in law, etc.

Trickier than the 'how stable' question is that of what is likely to result from a failure. To the extent that such knowledge is missing the problem of what to do about it gains faint hints reminiscent of Pascal's Mugging.

That sounds plausible, but should probably have a time frame added.

Now, for full disclosure, there are many reasons why I could be biased about this.

With emphasis on "could be" as opposed to "am". Different past experiences leading to different conclusions isn't necessarily "bias". This is a bit of a pet peeve of mine. I often see the naive, the inexperienced, quite often the young, dismiss the views of the more experienced as "biased" or by some broad synonym.

The implicit reasoning seems to be as follows: "Here is the evidence. The evidence plus a uniform prior distribution leads to conclusion A. Yet this person sees the evidence and draws conclusion B different from A. Therefore he is letting his biases affect his judgment."

One problem with the reasoning is that "the evidence" is not the (only) evidence. There is, rather, "evidence I'm aware of" and "evidence I'm not aware of but the other person might be aware of". It's entirely possible for that other evidence to be decisive.
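
To put rough numbers on the point, here is a toy sketch (a made-up illustration of mine, not anything the naive reasoner actually computes): two observers who share a prior and agree on every likelihood, but who have seen different amounts of evidence, arrive at different posteriors with no bias anywhere in sight.

```python
# Toy numbers for the point above (my own example): two observers share a prior
# and agree on every likelihood, but one has seen an extra piece of evidence.
# Their posteriors diverge with no "bias" involved.

def posterior(prior_h, observations):
    """Update P(H) given independent observations.

    Each observation is a pair (P(obs | H), P(obs | not H)).
    """
    p_h, p_not_h = prior_h, 1 - prior_h
    for p_obs_given_h, p_obs_given_not_h in observations:
        p_h *= p_obs_given_h
        p_not_h *= p_obs_given_not_h
    return p_h / (p_h + p_not_h)

shared_prior = 0.5
e1 = (0.8, 0.4)   # evidence both observers are aware of
e2 = (0.1, 0.7)   # evidence only the second observer is aware of

print(posterior(shared_prior, [e1]))       # ~0.67: leans towards conclusion A
print(posterior(shared_prior, [e1, e2]))   # ~0.22: leans towards conclusion B
```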

Your comment is an instance of the "forcing fallacy" which really deserves a post of its own: claiming that we should spend resources on a problem because a lot of utility depends, or could depend, on the answer. There are many examples of this on LW, but to choose an uncontroversial one from elsewhere: why aren't more physicists working on teleportation? The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all. I don't see a viable attack for applying LW-style rationality to political prediction, do you?

The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all.

This is valid where there are experts who can confidently estimate that there are no attacks. There are lots of expert physicists, so if steps towards teleportation were feasible, someone would've noticed. In case there are no experts to produce such confidence, the correct course of action is to create them (perhaps from more general experts, by way of giving them a research focus).

The rule "If it's an important problem, and we haven't tried to understand it, we should" holds in any case, it's just that in case of teleportation, we already did try to understand what we presently can, as a side effect of widespread knowledge of physics.

This is one of the reasons I actually rather like the politics in Heinlein's writing; while it occasionally sounds preachy, and I routinely disagree with the implicit statement that the proposed system has higher utility than current ones, it does expose some really interesting ideas. This has led me to wonder, on occasion, about other potential government systems and to attempt to determine their utility compared to what we have.

Of course, I'm not really a student of political science and therefore am ill-equipped for this purpose, and I estimate insufficient utility in undertaking the scholarship needed to correct this (mostly due to opportunity cost; I am active in a field where I can contribute significant utility today, and it's more efficient to update and expand my knowledge there than to branch into a completely different field in any depth). Nonetheless, inefficient though it may be, it's an open question that I find my mind wandering to on occasion.

The conclusion I've reached is that if the US government (as we currently recognize it) continues until the technological singularity, it will be because the singularity comes soon (it would need to arrive within ~50 years on my low-confidence estimate; at 150 years out I'm 90% confident the US government either won't exist or won't be recognizable). There are too many problems with the system; it wasn't optimized for the modern world, to the extent it was optimized at all, and of course the "modern world" keeps advancing too. The US has tried to keep up (universal adult suffrage, several major changes to how political parties are organized (nobody today seriously proposes a split ticket), the increasing authority of the federal government over the states, etc.), but such change is reactive and takes time. It will always lag behind the bleeding edge, and if it gets too far behind, the then-current institution will either be overthrown or will lose its significance and become something like the 21st century's serious implementations of the feudal system (rare, somewhat different from how it was a few hundred years back, and nonetheless mostly irrelevant).

(More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?)

Saying that "Having incorrect views isn't that crippling, look at Scott Aaronson!" is a bit like saying "Having muscular dystrophy isn't that crippling, look at Stephen Hawking!" It's hard to learn much by generalizing from the most brilliant, hardest working, most diplomatically-humble man in the world with a particular disability. I know they're both still human, but it's much harder to measure how much incorrect views hurt the most brilliant minds. Who would you measure them against to show how much they're under-performing their potential?

Incidentally, knowing Scott Aaronson, and watching that Blogging Heads video in particular was how I found out about SIAI and Less Wrong in the first place.

How would Aaronson benefit from believing in MWI, over and above knowing that it's a valid interpretation?

Upvoted. This is definitely the right question to ask here... thanks for reminding me.

I hesitate to speculate on what gaps exist in Scott Aaronson's knowledge. His command of QM and complexity theory greatly exceeds mine.

[...]

OK hesitation over. I will now proceed to impertinently speculate on possible gaps in Scott Aaronson's knowledge and their implications!

Assuming he still believes that collapse-postulate theories of QM are just as plausible as Many Worlds, I could say that he might not appreciate the complexity penalty that collapse theories require... except Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English? I know my mind doesn't automatically do this and it's not a habit that most people have. Another possibility is that perhaps it's not obvious to him that Occam's razor should apply this broadly? So these would point to limitations in more fundamental layers of his scientific thinking ability. This could lead to him having trouble distinguishing good new theories worth investigating from bad ones... or make forming compact representations for his own research findings more difficult. He consequently discovers less, more slowly, and describes what he discovers less well.

OK... wild speculation complete!

My actual take has always been that he probably understands things correctly in QM but is just exceedingly well-mannered and diplomatic with his academic colleagues. Even if he felt Many Worlds was now a more sound theory, he would probably avoid being a blow-hard about it. He doesn't need to ruffle his buddies' feathers -- he has to work with these guys, go to conferences with them, and have his papers reviewed by them. Also, he may know it's pointless to get others to switch to a new interpretation if they don't see the fundamental reason why it's right to switch. And the arguments needed to convince others have inference chains too long to present in most venues.

Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English?

Just to be clear: there are two unrelated notions of "complexity" blurred together in the above comment. The Complexity Zoo covers computational complexity theory -- how the run-time of an algorithm scales with the size of its input (and thereby sorts problems into classes like P, EXPTIME, etc.).

Kolmogorov complexity is unrelated: it is the minimum number of bits (in some fixed universal programming language) required to describe a given object. Eliezer's argument for MWI rests on Kolmogorov complexity and has nothing to do with computational complexity theory.

I'm sure Scott Aaronson is familiar with both, of course; I just want to make sure LWers aren't confused about it.
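
To make the distinction concrete, here is a rough sketch of my own (compressed size is only a crude upper-bound stand-in for description length, and nothing here depends on Aaronson's actual views): Kolmogorov complexity is about how briefly an object can be described, and says nothing about how long any computation takes to run.

```python
# Rough illustration (mine, not the commenter's).  Kolmogorov complexity is about
# description length: how short a program can be and still produce a given object.
# Compressed size is only a crude upper-bound proxy for that, but it shows the idea.
import random
import zlib

random.seed(0)
structured = ("many worlds " * 1000).encode()                          # highly regular
noisy = bytes(random.getrandbits(8) for _ in range(len(structured)))   # essentially incompressible

print("structured:", len(structured), "bytes ->", len(zlib.compress(structured)))
print("noisy:     ", len(noisy), "bytes ->", len(zlib.compress(noisy)))

# Computational complexity, by contrast, is about how running time scales with
# input size.  This loop is O(n) whether the data it reads is regular or random;
# the two notions simply measure different things.
def count_bytes(data):
    total = 0
    for _ in data:
        total += 1
    return total

assert count_bytes(structured) == count_bytes(noisy) == 12000
```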

No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I couldn't agree more. The "extra nerdy folks know exactly why the question is useless" theme is similarly incisive.

I wonder if the main reason for why a post like Yvain's is upvoted is not because it is great but because everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind? It seems natural to me to think that way; the post states what I always thought but was never able to express that clearly, and that's why I like it. The problem is, how do we get people to read it who disagree? I've recently introduced a neuroscientist to Less Wrong via that post. He read it and agreed with everything. Then he said it's naive to think that this will be adopted any time soon. What he meant is that all this wit is useless if we don't get the right people to digest it. Not people like us who agree anyway, probably before ever reading that post in the first place.

Regarding Eliezer's post, I even have my doubts that it is very useful given confused nerdy folks. The gist of that post seems to be that people should pinpoint their disagreements before they end up talking at cross-purposes. But it gives the impression that propositional assertions do not yield sensory experience. Yet human agents are physical systems just as trees are. If you tell them certain things you can expect certain reactions. I believe that article might be inconsistent with other assertions made in this community, like taking the logical implications of general beliefs seriously. The belief that the decimal expansion of Pi is infinite will never pay rent in future anticipations.

I'm also skeptical about another point in the original post, namely that most people's beliefs aren't worth considering. This, I believe, might be counterproductive. Consider that most people express this attitude towards existential risks from artificial intelligence. So if you link people to that one post, out of context, and then they hear about the SIAI, what might they conclude if they take that post seriously?

The point about truth is another problematic idea. I really enjoyed The Simple Truth, but in the light of all else I've come across I'm not convinced that truth is a useful term to adopt anywhere but in the most informal discussions. If you are like me and grew up in a religious environment, you are told that absolute truth exists. Then if you have your doubts and start to learn more you are told that skepticism is an epistemological position, and 'there is no truth' / 'there is truth' are metaphysical/linguistic positions. When you learn even more and come across concepts like the uncertainty principle, Gödel's incompleteness theorems, the halting problem, or Tarski's truth theorem, the nature of truth becomes even more uncertain. Digging even deeper won't revive the naive view of truth either. And that is just the tip of the iceberg, as you will see once you learn about Solomonoff induction and Minimum Message Length.

ETA: Fixed the formatting. My last paragraph was eaten before!

I wonder if the main reason for why a post like Yvain's is upvoted is not because it is great but because everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind?

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally). The progress is made by putting such arguments into words, to be followed by other people faster and more reliably than they were arrived at, even if arriving at them is in some contexts almost inevitable.

Additionally, clarity offered by a carefully thought-through exposition isn't something to expect without a targeted effort. This clarity can well serve as the enabling factor for making the next step.

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally).

And to avoid people giving in to their motivated cognition, you present the steps in order, and the conclusion at the end. To paraphrase Yudkowsky's explanation of Bayes Theorem:

By this point, the conclusion may seem blatantly obvious or even tautological, rather than exciting and new. If so, this argument has entirely succeeded in its purpose.

This method of presenting great arguments is probably the most important thing I learned from philosophy, incidentally.

"That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally)."

Also how great propaganda works.

If you are going to describe a "great argument" I think you need to put more emphasis on it being tied to the truth rather than being agreeable. I would say truly great arguments tend not to be agreeable, b/c the real world is so complex that descriptions without lots of nuance and caveats are pretty much always wrong. Whereas simplicity is highly appealing and has a low cognitive processing cost.

put more emphasis on it being tied to the truth rather than being agreeable.

Oh. I only agree with argument steps that are truthful.

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally).

There are nevertheless also conclusions that you agreed with all along. Sometimes hindsight bias makes you think you agreed all along when you really didn't. But other times you genuinely agreed all along.

You can skip to the end of Yvain's post (the one referenced here) and read the summary - assuming you haven't read the post already. Specifically, this statement: "We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective." If you agree with this statement without first reading Yvain's argument for it, then that's evidence that you already agreed with Yvain's conclusions without needing to be led gradually step by step through his long argument.

It seems natural to me to think that way; the post states what I always thought but was never able to express that clearly, and that's why I like it

The best essays will usually leave you with that impression. As will the best teachers.

Be careful. So will the less-than-best essays and teachers. It's a form of hindsight bias: you think this thing is obvious, but your thoughts were actually quite inchoate before that. A meme - particularly a parasitic meme - can get itself a privileged position in your head by feeding your biases to make itself look good, e.g. your hindsight bias.

When you see a new idea and you feel your eyes light up, that’s the time to put it in a sandbox - yes, thinking a meme is brilliant is a bias to be cautious of. You need to know how to take the thing that gave you that "click!" feeling and evaluate it thoroughly and mercilessly.

(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)

Be careful. So will the less-than-best essays and teachers.

Less often. Learning bullshit is more likely to come with the impression that you are gaining sophistication. If something is so banal as to be straightforward and reasonable you gain little status by knowing it.

Yes, people have biases and believe silly things but things seeming obvious is not a bad sign at all. I say evaluate mercilessly those things that feel deep and leave you feeling smug that you 'get it'. 'Clicking' is no guarantee of sanity but it is better than learning without clicking.

Yes, I suspect I'm being over-cautious having been thinking about memetic toxic waste quite a lot of late. This suggests that when I'm describing the scary stuff in detail, I'll have to take care not to actually scare people out of both neophilia and decompartmentalisation.

That said, I recall the time I was out trolling the Scientologists and watched someone's face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes - they're rare, but they exist.

That said, I recall the time I was out trolling the Scientologists and watched someone's face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes - they're rare, but they exist.

Scary indeed. I suspect what we are each 'vulnerable' to will vary quite a lot from person to person.

Yes. I do think that a particularly dangerous attitude to memetic infections on the Scientology level is an incredulous "how could they be that stupid?" Because, of course, it contains an implicit "I could never be that stupid" and "poor victim, I am of course far more rational". This just means your mind - in the context of being a general-purpose operating system that runs memes - does not have that particular vulnerability.

I suspect you will have a different vulnerability. It is not possible to completely analyse the safety of an arbitrary incoming meme before running it as root; and there isn't any such thing as a perfect sandbox to test it in. Even for a theoretically immaculate perfectly spherical rationalist of uniform density, this may be equivalent to the halting problem.

My message is: it can happen to you, and thinking it can't is more dangerous than nothing. Here are some defences against the dark arts.

[That's the thing I'm working on. Thankfully, the commonest delusion seems to be "it can't happen to me", so merely scaring people out of that will considerably decrease their vulnerability and remind them to think about their thinking.]

This sort of thing makes me hope that the friendly AI designers are thinking like OpenBSD-level security researchers. And frankly, they need Bruce Schneier and Ed Felten and Dan Bernstein and Theo de Raadt on the job. We can't design a program not to have bugs - just not to have ones that we know about. As a subset of that, we can't design a constructed intelligence not to have cognitive biases - just not to have ones that we know about. And predatory memes evolve, rather than being designed from scratch. I'd just like you to picture a superintelligent AI catching the superintelligent equivalent of Scientology.

My message is: it can happen to you, and thinking it can't is more dangerous than nothing.

With the balancing message: Some people are a lot less vulnerable to believing bullshit than others. For many on lesswrong, their brains are biased relative to the population towards devoting resources to bullshit prevention at the expense of engaging in optimal signalling. For these people, actively focussing on second-guessing themselves is a dangerous waste of time and effort.

Sometimes you are just more rational and pretending that you are not is humble but not rational or practical.

I can see that I've failed to convince you and I need to do better.

In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that" and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but it is.

You really aren't running OpenBSD with those less rational people running Windows.

I do think being able to make such statements of confidence in one's immunity takes more detailed domain knowledge. Perhaps you are more immune and have knowledge and experience - but that isn't what you said.

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

Put it this way, I have knowledge and experience of this stuff and I bother second-guessing myself.

(I can see that this bit is going to have to address the standard objection more.)

I can see that I've failed to convince you and I need to do better.

This is a failure mode common when other-optimising. You assume that I need to be persuaded, put that as the bottom line and then work from there. There is no room for the possibility that I know more about my relative areas of weakness than you do. This is a rather bizarre position to take given that you don't even have significant familiarity with the wedrifid online persona, let alone me.

In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that" and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but it is.

It isn't so much that I dislike what you are saying as it is that it seems trivial and poorly calibrated to the context. Are you really telling a lesswrong frequenter that they may have security holes as though you are making some kind of novel suggestion that could trigger insecurity or offence?

I suggest that I understand the entirety of the point you are making and still respond with the grandparent. There is a limit to how much intellectual paranoia is helpful and under-confidence is a failure of epistemic rationality even if it is encouraged socially. This is a point that you either do not understand or have been careful to avoid acknowledging for the purpose of presenting your position.

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

I would be more inclined to answer such questions if they didn't come with explicitly declared rhetorical intent.

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

I would be more inclined to answer such questions if they didn't come with explicitly declared rhetorical intent.

No, I'm actually interested in knowing. If "nothing", say that.

Regarding Scientology, I had the impression that they usually portray themselves to those they're trying to recruit as being like a self-help community ("we're like therapists or Tony Robbins, except that our techniques actually work!") before they start sucking you into the crazy?

Wait... did you just use Tony Robbins as the alternative to being sucked into the crazy?

I'm sure that whatever it is that Tony Robbins preaches is less crazy than the Xenu story. (Although Scientology doesn't seem any crazier than the crazier versions of mainstream religions...)

I'm sure that whatever it is that Tony Robbins preaches is less crazy than the Xenu story.

Here's a video in which he lays out what he sees as the critical elements of human motivation and action. Pay extra attention to the slides -- there's more stuff there than he talks about.

(It's a much more up-to-date and compact model than what he wrote in ATGW, by the way.)

I got through 11:00 of that video. If that giant is inside me I do not want him woken up. I want that sucker in a permanent vegetative state.

Many years ago I had a friend who is a television news anchor person. The video camera flattens you from three dimensions to two, and it also filters the amount of non-verbal communication you can project onto the storage media. To have energy and charisma on the replay, a person has to project something approaching mania at recording time. I shudder to think what it would be like to sit down in the front row of the Robbins talk when he was performing for that video. He comes across as manic, and the most probable explanation for that is amphetamines.

The transcript might read rational, but that is video of a maniac.

He comes across as manic

A bit of context: that's not how he normally speaks.

There's another video (not publicly available, it's from a guest speech he did at one of Brendon Burchard's programs) where he gives the backstory on that talk. He was actually extremely nervous about giving that talk, for a couple different reasons. One, he felt it was a big honor and opportunity, two, he wanted to try to cram a lot of dense information into a twenty minute spot, and three, he got a bad introduction.

Specifically, he said the intro was something like, "Oh, and now here's Tony Robbins to motivate us", said in a sneering/dismissive tone... and he immediately felt some pressure to get the audience on his side -- a kind of pressure that he hasn't had to deal with in a public speaking engagement for quite some time. (Since normally he speaks to stadiums full of people who paid to come see him -- vs. an invited talk to a group where a lot of people -- perhaps most of the audience -- sees him as a shallow "motivator".)

IOW, the only drug you're seeing there is him feeling cornered and wanting to prove something -- plus the time pressure of wanting to condense material he usually spends days on into twenty minutes. His normal way of speaking is a lot less fast paced, if still emotionally intense.

One of his time management programs that I bought over a decade ago had some interesting example schedules in it, that showed what he does to prepare for his time on stage (for programs where he's speaking all day) -- including nutrition, exercise, and renewal activities. It was impressive and well-thought out, but nothing that would require drugs.

One of Tony Robbins' books has been really helpful to me. Admittedly the effects mostly faded after the beginning, but applying his techniques put me into a rather blissful state for a day or two and also allowed for a period of maybe two weeks to a month during which I did not procrastinate. I also suspect I got a lingering boost to my happiness setpoint even after that. These are much better results than I've had from any previous mind-hacking technique I've used.

Fortunately I think I've been managing to figure out some of the reasons why those techniques stopped working, and have been on an upswing, mood and productivity-wise, again. "Getting sucked into the crazy" is definitely not a term I'd use when referring to his stuff. His stuff is something that's awesome, that works, and which I'd say everyone should read. (I already bought my mom an extra copy, though she didn't get much out of it.)

Awaken the Giant Within.

You need to apply some filtering to pick out the actual techniques out of the hype, and possibly consciously suppress instinctive reactions of "the style of this text is so horrible it can't be right", but it's great if you can do that.

I will post a summary of the most useful techniques at LW at some point - I'm still in the process of gathering long-term data, which is why I haven't done so yet. Though I blogged about the mood-improving questions some time back.

You need to apply some filtering to pick out the actual techniques out of the hype

It's not so much hype as lack of precision. Robbins tends to specify procedures in huge "steps" like, "step 1: cultivate a great life". (I exaggerate, but not by that much.) He also seems to think that inspiring anecdotes are the best kind of evidence, which is why I had trouble taking most of ATGW seriously enough to really do much from it when I first bought it (like a decade or more ago).

Recently I re-read it, and noticed that there's actually a lot of good stuff in there, it's just stuff I never paid any attention to until I'd stumbled on similar ideas myself.

It's sort of like that saying commonly (but falsely) attributed to Mark Twain:

"When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much the old man had learned in seven years."

Tony seems to have learned a lot in the years since I started doing this sort of thing. ;-)

It's not so much hype as lack of precision. Robbins tends to specify procedures in huge "steps" like, "step 1: cultivate a great life". (I exaggerate, but not by that much.)

That's odd - I didn't get that at all, and I found that he had a lot of advice about various concrete techniques. Off the top of my head: pattern interrupts, morning questions, evening questions, setback questions, smiling, re-imagining negative memories, gathering references, changing your mental vocabulary.

I found that he had a lot of advice about various concrete techniques.

He does, but they're mostly in the areas that I ignored on my first few readings of the book. ;-)

(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)

I'm very interested in that, I think I need it. I just read this article about Mere Christianity by C. S. Lewis, and I was like "what the hell is wrong with me, that I didn't see at least some of those points myself?" It really scared me, and made me wonder what other nonsense I believe in, that I ought to have seen through right away...

It might be worth doing some analysis on the authoritative voice (the ability to sound right), and I speak as someone who's been a CS Lewis, GK Chesterton, Heinlein, Rand, and Spider Robinson fan. At this point, I suspect it's a pathology.

Dude. AN ASSERTION IS PROVEN BY SOUNDING GOOD. It's a form of the Steve Jobs reality distortion superpower: come up with a viewpoint so compelling it will reshape people's perception of the past as well as the present.

(I must note that I'm not actually advocating this.)

Argument by assertion amusement from my daughter: "I'm running around the kitchen, but I'm not being annoying by running around the kitchen." An argument by assertion of rich depth, particularly from a three-year-old.

Did you ever get around to reading either of the papers I linked you to there btw?

Nuh. Still in the Pile(tm) with yer talk, which I have watched the first 5 min of ... I hate video so much.

Did you dislike your talk's content or your presentation? So far it looks like something that should be turned into a series of blog posts, complete with diagrams.

Neither really, it's the video itself I dislike. I've put the slides on Scribd, and I'm thinking of re-recording the soundtrack. Only trouble is, I'd have to watch the video first to remember what I said... and I hate video so much.

This was over a year ago but I see that you're still around. I wanted to ask you more about this. How does Spider Robinson fit in with the others? I would also add Orwell, Kipling, and Christopher Hitchens. Maybe even Eliezer a bit.

A big part of it is that these authors talk about truth a lot and the harm of denying that it's there, and rail against and strawman other groups for refusing to accept the truth or even that truth exists.

What do you mean by a pathology? You think there was something wrong with those authors? Are you talking about overconfidence?

Spider Robinson is very definite and explicit about how things ought to be. Unfortunately, he extends this to the idea that people who are worth knowing like good jazz, Irish coffee, and puns.

I meant that there may be a pathology at my end -- being so fond of the authoritative voice that I could be a fan of writers with substantially incompatible ideas, and not exactly notice or care.

I suspect you may be reading his exaggerated enthusiasm for these things as a blanket statement about people who aren't worth knowing. For instance, I might, in a burst of excitement, say that people who don't like the song Waterfall aren't worth talking to, but I wouldn't mean it literally. It would be a figure of speech.

For instance, in one of the Callahan books he states (in the voice of the author, not as a character, IIRC) that if he had a large sum of money he'd buy everyone in the US a copy of "Running, Jumping, Standing Still" on CD because it would make the world so much better. I read this as hyperbole for how much he likes that CD, and I don't take it literally.

I may be misremembering or have missed something in his writing, though.

As far as you liking the voice, I doubt it's a pathology. I feel the same way you do and it's not surprising to me that a lot of people would find that kind of objectivity and confidence appealing. It is a bias, if you confuse the pleasure of reading those writers with their actual ideas, but since I vehemently disagree with most of the above writers I'm not too worried about it. (Do you still read or like those writers?)