LW has helped me a lot. Not in matters of finding the truth; you can be a good researcher without reading LW, as the whole history of science shows. (More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?) No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.
I believe that Eliezer has succeeded in creating, and communicating through the Sequences, a valuable technique for seeing through words to their meanings and trying to think correctly about those instead. When you do that, you inevitably notice how much of what you considered to be "meanings" is actually yay/boo reactions, or cached conclusions, or just fine mist that dissolves when you look at it closely. Normal folks think that the question about a tree falling in the forest is kinda useless; nerdy folks suppress their flinch reaction and get confused instead...
cousin_it:
Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive.
Trouble is, the question still remains open: how do you understand politics well enough to be reasonably sure you've grasped its implications for your personal life and destiny? Too often, LW participants seem to me to take it for granted that throughout the Western world, something resembling the modern U.S. regime will continue into the indefinite future, right up until a technological singularity kicks in. But this seems to me like a completely unwarranted assumption, and if it turns out to be false, then the ability to understand where the present political system is heading and to plan for the consequences will be a highly valuable intellectual asset -- something a self-proclaimed "rationalist" should definitely take into account.
Now, for full disclosure, there are many reasons why I could be biased about this. I lived through a time and place -- late 1980s and early 1990s in ex-Yugoslavia -- where most people were blissfully unaware of the storm that was just beyond the horizon, even though any cool-...
I agree with you on this, but honestly, it's a difficult enough topic that semi-specialists are needed. Trying as a non-specialist to figure out how stable your political system is, rather than trying to find a specialist you can trust, will get you about as far as it would in law etc.
(More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?)
Saying that "Having incorrect views isn't that crippling, look at Scott Aaronson!" is a bit like saying "Having muscular dystrophy isn't that crippling, look at Stephen Hawking!" It's hard to learn much by generalizing from the most brilliant, hardest working, most diplomatically-humble man in the world with a particular disability. I know they're both still human, but it's much harder to measure how much incorrect views hurt the most brilliant minds. Who would you measure them against to show how much they're under-performing their potential?
Incidentally, knowing Scott Aaronson, and watching that Bloggingheads video in particular, was how I found out about SIAI and Less Wrong in the first place.
How would Aaronson benefit from believing in MWI, over and above knowing that it's a valid interpretation?
I wonder if a post like Yvain's is upvoted not mainly because it is great but because everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind?
That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally). Progress is made by putting such arguments into words, so that other people can follow them faster and more reliably than they were first arrived at, even if arriving at them is in some contexts almost inevitable.
Additionally, clarity offered by a carefully thought-through exposition isn't something to expect without a targeted effort. This clarity can well serve as the enabling factor for making the next step.
Be careful. So will the less-than-best essays and teachers. It's a form of hindsight bias: you think this thing is obvious, but your thoughts were actually quite inchoate before that. A meme - particularly a parasitic meme - can get itself a privileged position in your head by feeding your biases to make itself look good, e.g. your hindsight bias.
When you see a new idea and you feel your eyes light up, that’s the time to put it in a sandbox - yes, thinking a meme is brilliant is a bias to be cautious of. You need to know how to take the thing that gave you that "click!" feeling and evaluate it thoroughly and mercilessly.
(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)
Be careful. So will the less-than-best essays and teachers.
Less often. Learning bullshit is more likely to come with the impression that you are gaining sophistication. If something is so banal as to be straightforward and reasonable you gain little status by knowing it.
Yes, people have biases and believe silly things but things seeming obvious is not a bad sign at all. I say evaluate mercilessly those things that feel deep and leave you feeling smug that you 'get it'. 'Clicking' is no guarantee of sanity but it is better than learning without clicking.
Yes. I do think that a particularly dangerous attitude to memetic infections on the Scientology level is an incredulous "how could they be that stupid?" Because, of course, it contains an implicit "I could never be that stupid" and "poor victim, I am of course far more rational". This just means your mind - in the context of being a general-purpose operating system that runs memes - does not have that particular vulnerability.
I suspect you will have a different vulnerability. It is not possible to completely analyse the safety of an arbitrary incoming meme before running it as root; and there isn't any such thing as a perfect sandbox to test it in. Even for a theoretically immaculate perfectly spherical rationalist of uniform density, this may be equivalent to the halting problem.
My message is: it can happen to you, and thinking it can't makes you more vulnerable, not less. Here are some defences against the dark arts.
[That's the thing I'm working on. Thankfully, the commonest delusion seems to be "it can't happen to me", so merely scaring people out of that will considerably decrease their vulnerability and remind them to think about their thinki...
I can see that I've failed to convince you and I need to do better.
This is a failure mode common in other-optimising. You assume that I need to be persuaded, put that down as the bottom line, and then work from there. There is no room for the possibility that I know more about my relative areas of weakness than you do. This is a rather bizarre position to take given that you don't even have significant familiarity with the wedrifid online persona, let alone me.
In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that" and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but there it is.
It isn't so much that I dislike what you are saying as it is that it seems trivial and poorly calibrated to the context. Are you really telling a lesswrong frequenter that they may have security holes as though you are making some kind of novel suggestion that could trigger insecurity or offence?
I suggest that I understand the entirety of the point you are making and still respond with the grandparent. There is a limit to how much intellectual paranoia is he...
Going back and looking at the sequences is funny. Across many posts, comments accuse Eliezer of simplifying and attacking straw-men. But as someone who was religious when he was first reading OB, and who got deconverted specifically because of the arguments therein, I think that Eliezer had it right and the accusers had it wrong: many of the arguments he refutes seem like straw-men to people who associate with other rationalists, but to those steeped in irrationality they are basically the world. Witness, for instance, a former Christian's revisiting of CS Lewis to find that he not only fails to provide a strong defense of Christianity, he's basically a joke to anyone who knows enough history or biology or sociology or psychology. But when you're in an affective death spiral, you often can't notice such things.
While I worry about the self-congratulation of threads like these, I want to nominate a lesson I learned from Robin Hanson (and Daniel Klein, another GMU economist), which will probably affect me professionally as much as my religious deconversion affected me personally:
It is ok to believe things that are obvious, even if they are unpopular.
It seems non-controversial, but when you actually find yourself in a discussion with an intelligent person of similar interests, whose arguments have the backing of high-status individuals, the temptation to switch sides is enormous.
See, this is the way to get people to read the posts in the sequences: give them a reason and speak personally. For example, you've just given me another attack of tab explosion ...
My gains from LessWrong have come in several roughly distinct steps, all of which have come as I've been working my way through the Sequences. (Taking notes has really helped me digest and cement the information.)
1) Internalizing that there is a real world out there, and like Louie said, that ideas can be right or wrong. Making beliefs pay rent, referents and references, etc. A perspective on beliefs that they should accurately reflect the world and be used for achieving things I care about; all else is fluff. That every correct map should agree with every other, so that life does not seem such a disconnected jumble of different domains. Overall these kinds of insights really helped to give focus to my thoughts and clear out the clutter of my mind.
2) Now that I have a conception of what beliefs should do, LessWrong helps me be aware of and combat various biases that interfere with the formation of accurate beliefs, and with taking coherent action based on those beliefs. I've made large gains here, but of course I'm not finished.
3) Forming a coherent, productive, happy me. Bootstrapping and snowballing effects. As I learn more, I seek out more good information, better. On this point, see An...
I've been looking around the site for a while, since several people I know participate here. What I've learned, unfortunately, is that I'm unlikely to be able to learn from this site unless something changes. Which is too bad, because I don't think I'm unable to learn in general.
I have no academic background whatsoever, and no expertise in science or philosophy. I am not an intellectual. I am good at noticing jargon, but terrible at picking it up and being able to use and understand it. I have no particular skill in abstract thinking. While tests aren't everything, I score in the range of borderline intellectual functioning on IQ tests and I do so for a reason: I am quite lacking in several standard cognitive abilities.
I also have obvious cognitive strengths, writing among them, but they don't match up with the ones necessary to navigate this site. From my perspective, reading this site is like trying to read a book with several words per sentence chopped out, and the words that remain being used in /ways/ that don't match well with my ability to comprehend.
Normally I would just turn around and walk away. I don't think anyone here has any particular desire to see someone like me...
Very nice post! My personal favorite things I've learned about from reading LessWrong:
Causality: Models, Reasoning, and Inference, a book by Judea Pearl written in 2000 which is frequently referenced by the SIAI and on LessWrong.
Politics as charity: that in terms of expected value, altruism is a reasonable motivator for voting (as opposed to common motivators like "wanting to be heard").
That a significant number of people are productively working on philosophical problems relevant to our lives.
Lots of little sanity checks to keep in mind, like Conservation of Expected Evidence, i.e. that before seeing evidence, the probability-weighted average of your possible posterior confidences equals your prior confidence; a small numeric check appears below. (But see this comment on things you can expect from your beliefs.)
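For concreteness, here is a minimal numeric check of that theorem in Python; the coin-bias numbers are made up purely for illustration:

```python
# Conservation of Expected Evidence: E[P(H|E)] over possible evidence = P(H).
# Hypothetical setup: a coin is either heads-biased (H) or fair (~H).
prior = 0.3                      # P(H): coin is heads-biased
p_heads_given_h = 0.8            # P(heads | H)
p_heads_given_not_h = 0.5        # P(heads | ~H)

p_heads = prior * p_heads_given_h + (1 - prior) * p_heads_given_not_h
p_tails = 1 - p_heads

posterior_if_heads = prior * p_heads_given_h / p_heads           # P(H | heads)
posterior_if_tails = prior * (1 - p_heads_given_h) / p_tails     # P(H | tails)

expected_posterior = p_heads * posterior_if_heads + p_tails * posterior_if_tails
print(expected_posterior)  # ~0.3: exactly the prior, whatever the flip shows
```

Whichever way the flip comes out, the probability-weighted average of the possible posteriors lands back on the prior; if you already expect to be more confident after looking, you should just update now.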
I can't claim to be "converted to rationality" or any particular school of thought by LessWrong, because most of the ideas in the sequences were not new to me when I read them, but it was extremely impressive and relieving to see them all written down in one place, and they would have made a huge impact on me if I'd read them growing up!
And many of the people in this community rub me the wrong way.
Yes, like you, for stealing my post idea! Kidding, obviously.
At the risk of contributing to this community becoming a bit too self-congratulatory, here are some of the more significant concepts that I've grokked from reading LW:
No Universally Compelling Arguments and Ghosts in the Machine. Shamefully, it never even occurred to me to de-anthropomorphize the idea of a mind.
You Provably Can't Trust Yourself and No License To Be Human, along that same theme.
The Luminosity sequence is a bit under-celebrated, I think, relative to its value. I've found it to be one of the most important things I've read here, and applying those concepts has aided me in improving my life in not-insignificant ways.
Affective Death Spirals! I cannot praise this enough for giving me the skills to recognize this phenomenon and keep myself from engaging in this at the negative end.
Most of all, LW has taught me that being the person that I want to be takes work. To actually effect any amount of change in the world requires understanding the way it really is, whether you're doing science or trying to understand your own personali...
Interestingly, although reading the Sequences and other LW articles significantly affected my thinking style and general outlook over time, I've probably learned as much if not more from participating -- writing posts and comments, and receiving feedback.
...which feels strange to say, because I was skeptical in the beginning of the whole transition of Overcoming Bias into LW. For one thing, I didn't like the idea of having to "move". And I was highly suspicious of the karma system, because I was afraid of having my status numerically measured. I ...
Great post!
My experience on Less Wrong has been that many of the top-voted articles initially have seemed sort of mundane and obvious if mildly pleasant to read, but that returning to them and having them reverberate in my mind has been very helpful to me in framing the issues that come up in my day to day life. Over and over again I've had the experience of being subliminally aware of a given phenomenon discussed on Less Wrong but that reading a well-written explanation is very helpful to me in drawing the key issues at hand into focus.
Eliezer's article
I've learned that people significantly more knowledgeable and intelligent than me do exist, and not just as some mythical statistical entity at the fringes of what I'll realistically encounter in my everyday life.
The internet - and indeed communications technology in general - is beneficial like that, even if it takes some searching to find a suitable domain.
I have learned that philosophy remains a big unsolved problem where no one seems to have really gotten anywhere for a long time, yet concerted effort by determined smart people might lead to us answering some of the most important questions that have always plagued human philosophers. I have learned that solving philosophy (where philosophy includes questions like "what is human value?", "what is the nature of intelligence?", "what are the simple equations that unify the physical laws of our universe/multiverse?") is important on a mind-bogglingly cosmological scale.
Keep on thinking, friends.
I have learned that philosophy remains a big unsolved problem where no one seems to have really gotten anywhere for a long time
I disagree. I think most of what has historically been considered "philosophy" has been solved at this point, it just doesn't seem that way because once we understand a philosophical problem well enough to solve it, it doesn't seem like a philosophical problem anymore. Usually it turns into a scientific problem, or an easy question of inference from scientific knowledge, thus losing its aura of respectable mysteriousness.
Even when most people's beliefs are junk, you won't know which until you've considered each belief in detail. You probably just amplify your confirmation bias when you reject beliefs without examining them.
Thinking outside your own set of beliefs is also good training.
I learned that humans are all very alike.
I learned that natural selection uses up diversity.
I learned more graceful words and arguments for what I wanted to say than I had before. For instance, I previously explained that I think about religion logically because I used to be Catholic and we do that; now I can say that it is because logic is useful for thinking about everything, and tell people my backstory later if they ask.
I learned that emotion and rationality are not enemies. Vulcans we are not.
I learned that normally rational people will take sides in emotional name-calling once you blame them. Much like everyone else. (See most any mention of gender.)
That it is possible to take confusing issues and write clearly about them.
That this may require sequences.
I've found out about PJ Eby's ideas, and even though I only recently managed to use them to make a substantial change, I'm pretty sure it's the largest positive change in my entire life so far.
I'm a member of his group, so I've gotten personal assistance. What I've done is basically first diagnose my problems using his so-called RMI technique, which I'm pretty sure he's mentioned several times here in the comments, and which basically consists of sincerely questioning yourself about your problem and passively noticing what comes to mind without trying to rationalize it away logically.
Through that technique I found out that I've unconsciously judged all my decisions in life for "goodness"; that is, I've constantly feared that I'll not be a good person if I make the wrong decisions. Unfortunately, the number of rules for things which make me a bad person has been very large, so I've basically lived a passive, lonely life waiting for someone to come and tell me what to do. One particularly frustrating thing has been that I've felt that I'm a bad person if I actually try to take control over my life, and that includes using PJ's methods. So for about six months I've been completely clear on what my problem is and how to solve it, and believed on a rational level that it would work, while at the same time feeling completely uninterested in actually doing anything about it. ...
I learned that meditation can be fun, and there are instructions available.
I learned that trying to get an exact definition of a term can be futile, since the meaning in one's mind is structured more like a simple artificial neural network than like the expected kind of verbal definition; a toy sketch of the difference follows below. Examples: "what is science fiction", "what is a fish".
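To illustrate that point, here is a toy sketch (the features and weights are entirely invented) of how graded similarity to a prototype behaves unlike a checklist definition:

```python
# "Fish-ness" as graded similarity to a prototype, not a yes/no definition.
# All features and weights below are invented for illustration.
fish_prototype = {"has_fins": 1.0, "lives_in_water": 1.0,
                  "lays_eggs": 0.9, "has_scales": 0.8}

def fishiness(features):
    """Return graded membership in [0, 1]: overlap with the prototype."""
    overlap = sum(min(features.get(k, 0.0), w) for k, w in fish_prototype.items())
    return overlap / sum(fish_prototype.values())

trout   = {"has_fins": 1.0, "lives_in_water": 1.0, "lays_eggs": 1.0, "has_scales": 1.0}
shark   = {"has_fins": 1.0, "lives_in_water": 1.0, "lays_eggs": 0.5, "has_scales": 0.3}
dolphin = {"has_fins": 1.0, "lives_in_water": 1.0}  # no eggs, no scales

for name, f in [("trout", trout), ("shark", shark), ("dolphin", dolphin)]:
    print(name, round(fishiness(f), 2))  # trout 1.0, shark 0.76, dolphin 0.54
```

No sharp boundary appears anywhere; a dolphin is simply less fish-like than a shark, which matches how the word actually behaves in people's heads better than any exact definition does.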
The site's tagline states that Less Wrong is a blog devoted to refining the art of rationality. Rationality is about winning, and you and I and the rest of humanity can only win if we are able to solve the problem of provably Friendly AI. What I have learnt is that one should take risks from artificial intelligence seriously. And I still believe that it is the most important message Less Wrong is able to convey.
Why shouldn't the discussion of risks posed by AI be a central part of this community? If risks from artificial intelligence are the most dangerous existential risk...
"Most peoples' beliefs aren’t worth considering ... dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and are only there for sounding interesting or smart."
Seems you assume that most people's beliefs are "improper." Did LW offer you evidence for that conclusion? And don't you also need to assume you have a way to generate beliefs that is substantially better at avoiding the desire to sound interesting or smart?
The most important thing I learned from LessWrong is that my brain isn't always right.
This was a huge thing for me.
I already had the reductionist viewpoint, that I was just a brain. But I only had a part of it. I basically presumed that my thought processes were right: they couldn't be wrong, since if they were wrong, correcting them would merely be a matter of changing some of the biological structures and firing patterns. But since I was that structure and those patterns, the 'corrected' version wouldn't be me; it would be someone else. The way I was, was the ...
Not only is the free will problem solved, but it turns out it was easy.
Haha. Ha.
Although it is easy to resolve it to your own satisfaction, it is more difficult to resolve it to other peoples' satisfaction. Which suggests that there is a problem, at least if you want to avoid retreating to fully general counterarguments like "you disagree with me, so you must be irrational." A quote comes to mind here: "The first principle is that you must not fool yourself - and you are the easiest person to fool." - Richard Feynman.
A good resour...
Right, I've read the solution sequence to "free will" and all I've managed to glean from it is that a) I'm physics, whose ontology I'm quite ignorant of, and b) free will is conceptually incoherent and needs dissolving. I certainly don't feel like or believe I have free will, or that I could influence the creation of FAI by desire, for example. Is there something Louis (me) is missing that Louie isn't from the sequence? I find the sequence too long and prosaic to fit in my head and make a visceral impact. Is there a more concise alternative, or even just an alternative that would make Louis.belief == Louie.belief? I'm struggling guys, please help.
I will read more into this strange rationalist blog, but so far the blog does seem rather arrogant to me. Every other post claims to "solve" a traditional philosophical problem "easily". This post doesn't even bother to do that; it just states the problems have been easily solved. What have I seen so far: the problem of induction, the problem of free will, what the correct meta ethics is, an exact analysis of belief, a proof of metaphysical realism. I hope the writers in this blog are aware that countless people throughout history, with...
Number 4 is totally wrong.
"In order to be able to think up a hypothesis which has a significant chance of being correct, I must already possess a sufficient quantity of information" is obvious, following immediately from the mathematics of information. But that's emphatically not the same thing as "I obtain my hypothesis by applying a 'principle of induction' to generalize the data I have so far."
The way induction was supposed to work was that your observation statements served as the premises of a kind of inference. Just as one can use...
Related to: Goals for which Less Wrong does (and doesn’t) help
I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.
1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”
2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.
3. Most people's beliefs aren't worth considering - Since I'm no longer interested in collecting interesting "beliefs" to show off how fascinating I am or to give myself better odds of out-doing others, it no longer makes sense to be a meme-collecting universal egalitarian the same way I was before. This includes dropping the habit of seriously considering all others' improper beliefs that don't tell me what to anticipate and are only there for sounding interesting or smart.
4. Most of science is actually done by induction - Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!”. To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.
5. I have free will - Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about and that’s actually comforting since I had been unconsciously ignoring this out of fear that the evidence appeared to be going against what I wanted to believe. Looking back, I think this was actually kind of depressing me and probably contributing to my attitude that having interesting rather than correct beliefs was fine since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as “settled” and move on is not because this is a questionable result... they’re just in a world where most philosophers are still having trouble figuring out if god exists or not. So it’s not really easy to make progress on anything when there is more noise than signal in the “philosophical community”. Come to think of it, the AI community and most other scientific communities have this same problem... which is why I no longer read breaking science news anymore -- it's almost all noise.
6. Probability / Uncertainty isn’t in objects or events - It’s only in minds. Sounds simple after you understand it, but I feel like this one insight often allows me to have longer trains of thought now without going completely wrong.
6. Cryonics is reasonable - Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It's only a few hundred dollars a year for me. It's well within my budget for caring about myself and others... such as my future selves in forward-branching multiverses.
There are countless other important things that I've learned but haven't documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. Although, to be fair, it didn't happen by accident or by reading the recent comments and promoted posts but almost exclusively by reading all the core sequences and then participating more after that.
And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.
So if you've been thinking about reading the sequences but haven't been making the time to do it, I second Anna's suggestion that you get around to that. And the rationality exercise she linked to was easily the single most effective hour of personal growth I had this year, so I highly recommend that as well if you're game.
So, what have you learned from Less Wrong? I'm interested in hearing others' experiences too.