If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fourth incarnation of the welcome thread, the first three of which now have too many comments. The text is by orthonormal from an original by MBlume.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
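For instance, a few of the most common pieces of standard Markdown (the "Help" primer covers the rest):

```
*italics*   **bold**
[link text](http://example.com)
> quoted text (for replying to part of a comment)
```
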
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: publishing a post in the top-level forum requires the author to have 20 karma, and any upvotes or downvotes on a top-level post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome!  I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)
If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page.
There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address. 
Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.



  • Age: Years since 1995
  • Gender: Female
  • Occupation: Student

I actually started an account two years ago, but after a few comments I decided I wasn't emotionally or intellectually ready for active membership. I was confused and hurt for various reasons that weren't Less Wrong's fault, and I backed away to avoid saying something I might regret. I didn't want to put undue pressure on myself to respond to topics I didn't fully understand. Now, after many thousands of hours reading and thinking about neurology, evolutionary psychology, and math, I'm more confident that I won't just be swept up in the half-understood arguments of people much smarter than I am. :)

Like almost everyone here, I started with atheism. I was raised Hindu, and my home has the sort of vague religiosity that is arguably the most common form in the modern world. For the most part, I figured out atheism on my own, when I was around 11 or 12. It was emotionally painful and socially costly, but I'm stronger for the experience. I started reading various mediocre atheist blogs, but I got bored after a couple of years and wanted to do something more than shoot blind fish in tiny barrels. I wanted to build something... (read more)

Welcome to Less Wrong, and I for one am glad to have you here (again)! You sound like someone who thinks very interesting thoughts.

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

I can't say that this is something that has ever really bothered me. Your IQ is what it is. Whether or not there's an overall gender-based trend in one direction or another isn't going to change anything for you, although it might change how people see you. (If anything, I found that I got more attention as a "girl who was good at/interested in science"... which was irritating and made me want to rebel and go into a "traditionally female" field just because I could.)

Basically, if you want to accomplish greatness, it's about you as an individual. Unless you care about the greatness of others, and feel more pride or solidarity with females than with males who accomplish greatness (which I don't), the statistical tendency doesn't matter.

I don't want to lose the hope/idealism/inner happiness that makes me

... (read more)

I know that it's not particularly rational to feel more affiliation with women than men, but I do. It's one of the things my monkey brain does that I decided to just acknowledge rather than constantly fight. It's helped me have a certain kind of peace about average IQ differentials. The pain I described in the parent has mellowed. Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both. I wish I had the inner confidence to care about self-improvement more than competition, but as yet I don't.

ETA: I characterize "idealism" as a hope for the future more than a belief about the present.

Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both.

As long as you know your own skills, there is no need to use your gender as a predictor. We use the worse information only in the absence of better information, because the worse information can still be better than nothing. We don't need to predict information we already have.

When we already know that e.g. "this woman has IQ 150", or "this woman has won a mathematical olympiad" there is no need to mix general male and female IQ or math curves into the equation. (That's only what you do when you see a random woman and you have no other information.)

If there are hundred green balls in the basket and one red ball, it makes sense to predict that a randomly picked ball will be almost surely green. But once you have randomly picked a ball and it happened to be red... then it no longer makes sense to worry that this specific ball might still be green somehow. It's not; end of story.

If you had no experience with math yet, then I'd say that based on your gender, your chances to be a math genius are small. But that's not the situation; you already had some math experience. So make your guesses based on that experience. Your gender is already included in the probability of you having that specific experience. Don't count it twice!

To be perfectly accurate, any person's chances of being a math genius are going to be small anyway, regardless of that person's gender. There are very few geniuses in the world.
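The "don't count it twice" point above can be sketched numerically. With made-up numbers (every figure below is a hypothetical illustration, not data), Bayes' rule shows that once you condition on strong individual evidence like winning an olympiad, the group base rate barely moves the posterior:

```python
# Toy Bayes sketch: strong individual evidence screens off the group prior.
# All probabilities here are invented for illustration only.

def posterior_high_ability(prior_high, p_win_given_high=0.5, p_win_given_not=0.001):
    """P(high ability | won the olympiad), by Bayes' rule."""
    numerator = prior_high * p_win_given_high
    denominator = numerator + (1 - prior_high) * p_win_given_not
    return numerator / denominator

# Hypothetical base rates: 2% in one group, 1% in the other.
p_group_a = posterior_high_ability(0.02)  # ≈ 0.91
p_group_b = posterior_high_ability(0.01)  # ≈ 0.83
```

With a 2:1 difference in the prior, both posteriors still land above 0.8 once the win is observed; the individual evidence dominates, which is why mixing the group curve back in would be double-counting.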
What's true of one apple isn't true of every apple.
It is particularly not rational to ignore the effect of your unconscious on your relationships. That fight is a losing battle (right now), so if having happy relationships is a goal, pursuing it requires that you pay attention. There is almost no average IQ differential, since men pad out the bottom as well: greater chromosomal genetic variation in men leads to stupidity as often as intelligence. Really, this gender disparity only matters at the far extremes. Men may pad out the top and bottom 1% (or something like that) in IQ, but applied mathematicians aren't all top 1% (or even 10%, in my experience). It is easy to mistake finally being around people who think like you do (as in high IQ) for being less intelligent than them, but this is a trick!
Sorry, you're right, I did know that. (And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.) I was thinking about "IQ differentials" in the very broad sense, as in "it sucks that anyone is screwed over before they even start." I also suffer from selection bias, because I seek out people in general for intelligence, so I see the men to the right of the bell curve, while I just sort of abstractly "know" there are more men than women to the left, too.

And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.

Another possible explanation comes to mind: people with high IQs consider the "stupid" borderline to be significantly above 100 IQ. Then if they associate equally with men and women, the women will more often be stupid; and if they associate preferentially with clever people, there will be fewer women.

(This doesn't contradict selection bias. Both effects could be at play.)

You'd have to raise the bar really far before any actual gender-based differences showed up. It seems far more likely that the cause is a cultural bias against intellectualism in women (women will under-report IQ by 5ish points and men over-report by a similar margin, women are poorly represented in "smart" jobs, etc.). That makes women present themselves as less intelligent and makes everyone perceive them as less intelligent.
Does anyone know of a good graph that shows this? I've seen several (none citing sources) that draw the crossover in quite different places. So I'm not sure what the gender ratio is at, say, IQ 130.
La Griffe Du Lion has good work on this, but it's limited to math ability, where the male mean is higher than the female mean as well as the male variance being higher than the female variance. The formulas from the first link work for whatever mean and variance you want to use, and so can be updated with more applicable IQ figures, and you can see how an additional 10 point 'reporting gap' affects things.
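Formulas of that kind boil down to comparing normal-distribution tails. A minimal sketch, where the means and standard deviations are purely illustrative placeholders you'd swap for whatever figures you trust:

```python
import math

def normal_tail(threshold, mean, sd):
    """P(X > threshold) for X ~ Normal(mean, sd), via the complementary error function."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

def tail_ratio(threshold, mean_a=100, sd_a=16, mean_b=100, sd_b=14):
    """Ratio of group A to group B above a cutoff.

    The parameters are hypothetical: equal means, slightly higher
    variance in group A, chosen only to show the shape of the effect.
    """
    return normal_tail(threshold, mean_a, sd_a) / normal_tail(threshold, mean_b, sd_b)
```

With these placeholder numbers the ratio is exactly 1 at the mean and climbs steadily as the cutoff rises, which is why where you draw the "crossover" changes the picture so much; adding a reporting gap just shifts one mean before taking the tails.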
Unfortunately, intelligence in areas other than math seem to be an "I know it when I see it" kind of thing. It's much harder to design a good test for some of the "softer" disciplines, like "interpersonal intelligence" or even language skills, and it's much easier to pick a fight with results you don't like. It could be that because intelligence tests are biased toward easy measurement, they focus too much on math, so they under-predict women's actual performance at most jobs not directly related to abstract math skills.
Of course, IQ tests are specifically calibrated to remove or minimize gender bias (as are the SAT and ACT), and intelligence testing is horribly fraught with infighting and moving targets. I can't find any research that doesn't at least mention that social factors likely poison any experimental result. It doesn't help that "intelligence" is poorly defined and thus difficult to quantify. Considering that men are more susceptible to critical genetic failure, maybe the mean is higher for men on some tests because the low outliers had defects that made them impossible to test (such as being stillborn)?
The SAT doesn't seem to be calibrated to make sure average scores are the same for math, at least. At least as late as 2006, there's still a significant gender gap.
Apparently, the correction was in the form of altering essay and story questions to de-emphasize sports and business and ask more about arts and humanities. This hasn't been terribly effective. The gap is smaller in the verbal sections, but it's still there. Given that the entire purpose of the test is to predict college grades directly and women do better in college than men, explanations and theories abound.
Not a rigorously conducted study, but this (third poll) suggests a rather greater tendency to at least overestimate if not willfully over-report IQ, with both men and women overestimating, but men overestimating more.
You're right; my explanation was drawn from many PUA-types who had said similar things, but this effect is perfectly possible in non-sexual contexts, too. There's actually little use in using words like "stupid", anyway. What's the context? How intelligent does this individual need to be do what they want to do? Calling people "stupid" says "reaching for an easy insult," not "making an objective/instrumentally useful observation." Sure, there will be some who say they'll use the words they want to use and rail against "censorship", but connotation and denotation are not so separate. That's why I didn't find the various "let's say controversial, unspeakable things because we're brave nonconformists!" threads on this site to be all that helpful. Some comments certainly were both brave and insightful, but I felt on the whole a little bit of insight was brought at the price of a whole lot of useless nastiness.
Arguably, if it was "broken" this way it would be a mistake (specifically, of generalizing from too small a sample size). I have a job where I am constantly confronted with suffering and death, but at the end of the day, I can still laugh just like everyone else, because I know my experience is a biased sample and that there is still lots of good going on in the world.
I like this post more than I like most things; you've helped me, for one, with a significant amount of distress.

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

Consciously keeping your identity small and thus not identifying with everyone who happens to have the same internal plumbing might be helpful there.

PG is awesome, but his ideas do basically fall into the category of "easier said than done." This doesn't mean "not worth doing," of course, but practical techniques would be way more helpful. It's easier to replace one group with another (arguably better?) group than to hold yourself above groupthink in general.
My approach is to notice when I want to say/write "we", as opposed to "I", and examine why. That's why I don't personally identify as a "LWer" (only as a neutral and factual "forum regular"), despite the potential for warm fuzzies resulting from such an identification. There is an occasional worthy reason to identify with a specific group, but gender/country/language/race/occupation/sports team are probably not good criteria for such a group.
Thank you! I'll look for that.
Here is a typical LW comment that raises the "excessive group identification" red flag for me.
I always think of that in the context of conflict resolution, and refer to it as "telling someone that what they did was idiotic, not that they are an idiot." Self-identifying is powerful, and people are pretty bad at it because of a confluence of biases.
Great to see you here and great to hear you took the time to read up on the relevant material before jumping in. I'm confident you'll find that many people who comment quite a bit don't have such prudence, so don't be surprised if you outmatch a long-time commenter. (^_^) Yesss! This is exactly how I felt when I found this community.
I'm not sure about Disney, but you should still be able to enjoy Avatar. Avatar (TLA and Korra) is in many ways a deconstruction of magical worlds. They take the basic premise of kung-fu magic and then let it propagate to its logical conclusions. The TLA war was enabled by rapid industrialization, once one nation realized it could harness its breaking of the laws of thermodynamics for energy. The premise of S1 Korra is exploring social inequality in the presence of randomly distributed magical powers. In these ways, Avatar is less Harry Potter and more HPMoR.
They run strongly in families (although it's not clear exactly how, since neither of Katara's parents appears to have been a waterbender). It's not really random.
You are correct. I wouldn't consider it much different from personality. It's part heritable, part environment and upbringing, and part randomness. Now you've got me wondering if philosophers in the Avatar universe have debates on whether your element/bending is nature vs nurture.
Now I want an ATLA fanfic infused with Star Trek-style pensive philosophizing. :D I would argue that it has even more potential than HP for a rationalist makeover. Aang stays in the iceberg and Sokka saves the planet?
Honestly, I was disappointed with the ending of Season 1 Korra: (rot13) Nnat zntvpnyyl tvirf Xbeen ure oraqvat onpx nsgre Nzba gbbx vg njnl, naq gurer ner ab creznarag pbafrdhraprf gb nalguvat. I'm not necessarily idealistic enough to be happy with a world that has no consequences or really difficult choices; I'm just not cynical enough to find misanthropy and defeatism cool. That's why children's entertainment appeals to me - while it can be overly sugary-sweet, adult entertainment often seems to be both narrow and shallow, and at the same time cynical. Outside of science fiction, there doesn't seem to be much adult entertainment that's about things I care about - saving the world, doing something big and important and good. ETA: What Zach Weiner makes fun of here - that's what I'm sick of. Not just misanthropy and undiscriminating cynicism, but glorifying it as the height of intelligence. LessWrong seemed very pleasantly different in that sense.
I agree; I found the ending very disappointing, as well. The authors throw one of the characters into a very powerful personal conflict, making it impossible for the character to deny the need for a total accounting and re-evaluation of the character's entire life and identity. The authors resolve this personal conflict about 30 seconds later with a Deus Ex Machina. Bleh.
Are you sure that's rot13? It's generating gibberish in two different decoders for me, although I'm pretty sure I know what you're talking about anyway. ETA: Yeah, looks like a shift of three characters right. ETA AGAIN: Fixed now, thanks.
Sorry, I dumped it into Braingle and forgot to change the setting.
V gubhtug vg jnf irel rssrpgvir. Gubhtu irvyrq fb xvqf jba'g pngpu vg, univat gur qnevat gb fubj n znva punenpgre pbagrzcyngvat naq nyzbfg nggrzcgvat fhvpvqr jnf n terng jnl gb pybfr gur nep. Gurer'f nyernql rabhtu 'npgvba' pbafrdhraprf qhr gb gur eribyhgvba, fb vg'f avpr onynapvat bhg univat gur irel raq or gur erfhygvat punatrf gb Xbeen'f punenpgre. Jura fur erwrpgf fhvpvqr nf na bcgvba, fur ernyvmrf gung fur ubyqf vagevafvp inyhr nf n uhzna orvat engure guna nf na Ningne. Cyhf nf bar bs gur ener srznyr yrnqf va puvyqera'f gryrivfvba, gur qenzngvp pyvznk bs gur fgbel orvat gur qr-bowrpgvsvpngvba bs gur srznyr yrnq vf uhtr. Nyfb gur nagv-fhvpvqr zrffntr orvat gung onq thlf pbzzvg zheqre/fhvpvqr naq gur tbbq thlf qba'g vf tbbq gb svavfu jvgu. V'z irel fngvfsvrq jvgu gurz raqvat vg gung jnl. Znal fubjf raq jvgu jvgu ovt onq orvat orngra. Fubjf gung cergraq gb or zngher unir cebgntbavfgf qvr ng gur raq. Ohg Xbeen'f raqvat vf bar bs gur bayl gung fgevxrf zr nf npghnyyl zngher, orpnhfr vg'f qverpgyl n zbeny/cuvybfbcuvpny ceboyrz ng gur raq.
Gung'f na vagrerfgvat jnl gb chg vg, naq V guvax V'z unccvre jvgu gur raqvat orpnhfr bs gung. Ubjrire, V jnf rkcrpgvat Frnfba Gjb gb or Xbeen'f wbhearl gbjneq erpbirel (rvgure culfvpny be zragny be obgu) nsgre Nzba gbbx njnl ure oraqvat. Vg'f abg gung V qba'g jnag ure gb or jubyr naq unccl; vg'f whfg gung vg frrzrq gbb rnfl. V gubhtug Nzba/Abngnx naq Gneybpx'f fgbel nep jnf zhpu zber cbjreshy. Va snpg, gurve zheqre/fhvpvqr frrzrq gb unir fb zhpu svanyvgl gung V svtherq vg zhfg or gur raq bs gur rcvfbqr hagvy V ernyvmrq gurer jrer fvk zvahgrf yrsg. Va bgure jbeqf, vg'f terng gung gur fgbel yraqf vgfrys gb gur vagrecergngvba gung vg jnf nobhg vagevafvp jbegu nf n uhzna orvat qvfgvapg sebz bar'f cbjref, ohg gurl unq n jubyr frnfba yrsg gb npghnyyl rkcyvpvgyl rkcyber gung. Nnat'f wbhearl jnf nobhg yrneavat gb fgbc ehaavat njnl naq npprcg gur snpg gung ur vf va snpg gur Ningne, naq ur pna'g whfg or nal bgure xvq naq sbetrg nobhg uvf cbjre naq erfcbafvovyvgl. Xbeen'f wbhearl jnf gb or nobhg npprcgvat gung whfg orpnhfr fur vf gur Ningne, naq fur ybirf vg naq qrevirf zrnavat sebz vg, qbrfa'g zrna fur'f abguvat zber guna n ebyr gb shysvyy. Vg sryg phg fubeg. Nnat tnir vg gb Xbeen; fur qvqa'g svaq vg sbe urefrys.
V funerq BaGurBgureUnaqyr'f qvfnccbvagzrag jvgu gur raqvat, naq V jnfa'g irel vzcerffrq jvgu Xbeen'f rzbgvbany erfbyhgvba ng gur raq. Fur uvgf n anqve bs qrcerffvba, frrzvatyl pbagrzcyngrf fhvpvqr, naq gura... rirelguvat fhqqrayl erfbyirf vgfrys. Fur trgf ure oraqvat onpx, jvgubhg nal rssbeg be cynaavat, naq jvgu ab zber fvtavsvpnag punenpgre qrirybczrag guna univat orra erqhprq gb qrfcrengvba. Gur Ovt Onq vf xvyyrq ol fbzrbar ryfr juvyr gur cebgntbavfgf' nggragvba vf ryfrjurer, naq Xbeen tnvaf gur novyvgl gb haqb nyy gur qnzntr ur pnhfrq va gur svefg cynpr. Gur fbpvrgny vffhrf sebz juvpu ur ohvyg uvf onfr bs fhccbeg jrer yrsg hanqqerffrq, ohg jvgubhg n pyrne nirahr gb erfbyir gurz nf n pbagvahngvba bs gur qenzngvp pbasyvpg. Vs Xbeen unq orra qevira gb qrfcrengvba, naq nf n erfhyg, frnepurq uneqre sbe fbyhgvbaf naq sbhaq bar, V jbhyq unir sbhaq gung n ybg zber fngvfslvat. Gung'f bar bs gur ernfbaf V engr gur raqvat bs Ningne: Gur Ynfg Nveoraqre uvture guna gung bs gur svefg frnfba bs Xbeen. Vg znl unir orra vanqrdhngryl sberfunqbjrq naq orra fbzrguvat bs n Qrhf Rk Znpuvan, ohg ng yrnfg Nnat qrnyg jvgu n fvghngvba jurer ur jnf snprq jvgu bayl hanpprcgnoyr pubvprf ol frrxvat bgure nygreangvirf, svaqvat, naq vzcyrzragvat bar. Ohg Xbeen'f ceboyrzf jrer fbyirq, abg ol frrxvat fbyhgvbaf, ohg ol pbzvat va gbhpu jvgu ure fcvevghny fvqr ol ernpuvat ure rzbgvbany ybj cbvag. Jung Fcvevg!Nnat fnvq unf erny jbeyq gehgu gb vg. Crbcyr qb graq gb or zber fcvevghny va gurve ybjrfg naq zbfg qrfcrengr pvephzfgnaprf. Ohg engure guna orvat fbzrguvat gb ynhq, V guvax guvf ercerfragf n sbez bs tvivat hc, jurer crbcyr ghea gb gur fhcreangheny sbe fbynpr be ubcr orpnhfr gurl qba'g oryvrir gurl pna fbyir gurve ceboyrzf gurzfryirf. Fb nf erfbyhgvbaf bs punenpgre nepf tb, V gubhtug gung jnf n cerggl onq bar. Nyy va nyy V jnf n sna bs gur frevrf, ohg gur raqvat haqrefubg zl rkcrpgngvbaf.
Have you seen the new My Little Pony show? It's really good. It's sweet without being twee.
I've been through this kind of thing before, and Less Wrong did nothing for me in this respect (although Less Wrong is awesome for many other reasons). Reading Ayn Rand on the other hand made all the difference in the world in this respect, and changed my life.
I haven't read Ayn Rand, but those who do seem to talk almost exclusively about the politics, and I just can't work up the energy to get too excited about something I have such little chance of affecting. Would you mind telling me where/how Ayn Rand discussed evolutionary psychology or modular minds? I'm curious now. :)
She doesn't, is the short answer. She does, however, discuss the integration of personal values into one's philosophical system. I was struggling with a possibly similar issue; I had previously regarded rationalism as an end in itself, and emotions were just baggage that had to be overcome in order to achieve a truly enlightened state. If this sounds familiar to you, her works may help.

The short version: You're a human being. An ethical system that demands you be anything else is fatally flawed; there is no universal ethical system, and what is ethical for a rabbit is not ethical for a wolf. It's necessary for you to live, not as a rabbit, not as a rock, not as a utility or paperclip maximizer, but as a human being. Pain, for example, isn't to be denied - to do so is as sensible as denying a rock - but experienced as a part of your existence. (That you shouldn't deny pain is not the same as that you should seek it; it is simply a part of what you are.)

Objectivism, the philosophy she founded, is named for the claim that ethics are objective: not subjective, which is to say whatever you want them to be; not universal, which is to say a single ethical system for the whole universe that applies equally to rocks, rabbits, mice, and people; but objective, which is to say that ethics exists as a definable property for a given subject, given certain preconditions (ethical axioms; she chose "Life" as her ethical axiom).
I don't know that I would call that "objective." I mean, the laws of physics are objective because they're the same for rabbits and rocks and humans alike. I honestly don't trust myself to go much more meta than my own moral intuitions. I just try not to harm people without their permission or deceive/manipulate them. Yes, this can and will break down in extreme hypothetical scenarios, but I don't want to insist on an ironclad philosophical system that would cause me to jump to any conclusions on, say, Torture vs. Dust Specks just yet. I suspect that my abstract reasoning will just be nuts. My understanding of morality is basically that we're humans, and humans need each other, so we worked out ways to help one another out. Our minds were shaped by the same evolutionary processes, so we can agree for the most part. We've always seemed to treat those in our in-group the same way; it's just that those we included in the in-group changed. Slowly, women were added, and people of different races/religions, etc.
See this comment regarding this common confusion about 'objective'...
It's a sticky business, and different ethicists will frame the words different ways. On one view, objective includes "It's true even if you disagree" and subjective includes "You can make up whatever you want". On another, objective includes "It's the same for everybody" and subjective includes "It's different for different people". The first distinction better matches the usual meaning of 'objective', and the second distinction better matches the usual meaning of 'subjective', so I think the terms were just poorly-chosen as different sides of a distinction. Because of this, my intuition these days is to say that ethics is both subjective and objective, or "subjectively objective" as Eliezer has said about probability. Though I'd like it if we switched to using "subject-sensitive" rather than "subjective", as is now commonly used in Epistemology.
So, this isn't the first time I've seen this distinction made here, and I have to admit I don't get it. Suppose I'm studying ballistics in a vacuum, and I'm trying to come up with some rules that describe how projectiles travel, and I discover that the trajectory of a projectile depends on its mass. I suppose I could conclude that ballistics is "subjectively objective" or "subject-sensitive," since after all the trajectory is different for different projectiles. But this is not at all a normal way of speaking or thinking about ballistics. What we normally say is that ballistics is "objective" and it just so happens that the proper formulation of objective ballistics takes projectile mass as a parameter. Trajectory is, in part, a function of mass. When we say that ethics is "subject-sensitive" -- that is, that what I ought to do depends on various properties of me -- are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among individuals? Similarly, while we acknowledge that the same projectile will follow a different trajectory in different environments, and that different projectiles of the same mass will follow different trajectories in different environments, we nevertheless say that ballistics is "universal", because the equations that predict a trajectory can take additional properties of the environment and the projectile as parameters. Trajectory is, in part, a function of environment. When we say that ethics is not universal, are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among environments?
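The "parametrized but still objective" point can be made concrete with a toy formula (my own illustration, not from the thread, with made-up numbers). With linear air drag, terminal velocity is v_t = m·g/c: one universal rule for every projectile, but the answer depends on the projectile's mass and the environment's gravity, just as the comment describes.

```python
# Toy sketch: an "objective" law can be the same rule for everyone while
# still taking properties of the subject and environment as parameters.

def terminal_velocity(mass, drag_coeff, g=9.81):
    """Terminal velocity under linear drag: v_t = m * g / c.
    The formula is identical for every projectile; only the inputs vary."""
    return mass * g / drag_coeff

print(terminal_velocity(mass=0.1, drag_coeff=0.5))   # a light projectile
print(terminal_velocity(mass=10.0, drag_coeff=0.5))  # a heavy one, same rule
```

Nobody would call this formula "subject-sensitive" just because mass appears in it, which is exactly the puzzle the comment raises about ethics.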
I think it's an artifact of how we think about ethics. It doesn't FEEL like a bullet should fly the same exact way as an arrow or as a rock, but when you feel your moral intuitions they seem like they should obviously apply to everyone. Maybe because we learn about throwing things and motion through infinitely iterated trial and error, but we learn about morality from simple commands from our parents/teachers, we think about them in different ways.
So, I'm not quite sure I understood you, but you seem to be explaining how someone might come to believe that ethics are universal/objective in the sense of right action not depending on the actor or the situation at all, even at relatively low levels of specification like "eat more vegetables" or whatever. Did I get that right? If so... sure, I can see where someone whose moral intuitions primarily derive from obeying the commands of others might end up with ethics that work like that.
"the proper formulation of objective ballistics takes projectile mass as a parameter" I think the best analogy here is to say something like, the proper formulation of decision theory takes terminal values as a parameter. Decision theory defines a "universal" optimum (that is, universal "for all minds"... presumably anyway), but each person is individually running a decision theory process as a function of their own terminal values - there is no "universal" terminal value, for example if I could build an AI then I could theoretically put in any utility function I wanted. Ethics is "universal" in the sense of optimal decision theory, but "person dependent" in the sense of plugging in one's own particular terminal values - but terminal values and ethics are not necessarily "mind-dependent", as explained here.
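The "decision theory takes terminal values as a parameter" idea can be sketched as follows (a toy of my own, with hypothetical agent names, not anything from the comment): one "universal" decision procedure, where each agent plugs in its own utility function.

```python
# Sketch: the same decision procedure for every agent; only the
# terminal values (the utility function) differ between agents.

def best_action(actions, outcomes, utility):
    """Pick the action whose outcome the given utility function rates highest."""
    return max(actions, key=lambda a: utility(outcomes[a]))

outcomes = {
    "make_paperclips": {"paperclips": 10, "smiles": 0},
    "tell_jokes":      {"paperclips": 0,  "smiles": 5},
}

clippy = lambda o: o["paperclips"]   # one agent's terminal values
comedian = lambda o: o["smiles"]     # another agent's terminal values

print(best_action(outcomes.keys(), outcomes, clippy))    # make_paperclips
print(best_action(outcomes.keys(), outcomes, comedian))  # tell_jokes
```

The procedure `best_action` is "universal" in the comment's sense; the disagreement between the two agents lives entirely in the utility functions they pass in.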
I would certainly agree that there is no terminal value shared by all minds (come to that, I'm not convinced there are any terminal values shared by all of any given mind). Also, I would agree that when figuring out how I should best apply a value-neutral decision theory to my environment I have to "plug in" some subset of information about my own values and about my environment. I would also say that a sufficiently powerful value-neutral decision theory instructs me on how to optimize any environment towards any value, given sufficiently comprehensive data about the environment and the value. Which seems like another way of saying that decision theory is objective and universal, in the same sense that ballistics is. How that relates to statements about ethics being universal, objective, person-dependent, and/or mind-dependent is not clear to me, though, even after following your link.
Surprisingly, this isn't a bad short explanation of her ethics. I've been reading a lot of Aristotle lately (I highly recommend Aristotle by Randall, for anyone who is into that kind of thing), and Rand mostly just brought Aristotle's philosophy into the 20th century - of course, now that it's the 21st century, she is a little dated at this point. For example, Rand was offered fully paid-for cryonics by various people when she was close to death, but for unknown reasons she declined, very sadly (if you're looking for someone to take her philosophy into the 21st century, you will need to talk to, well... ahem... me). It's important to mention that politics is only one dimension of her philosophy and of her writing (although, naturally, it's the subject that all the pundits and mind-killed partisans obsess over) - and really it is the least important, since it is the most derivative of all of her other, more fundamental philosophical ideas on metaphysics, epistemology, man's nature, and ethics.
I'll willingly confess to not being interested in Aristotle in the least. Philosophy coursework cured me of interest in Greek philosophy. Give me another twenty years and I might recover from that. Have you read TVTropes' assessment of Objectivism? It's actually the best summary I've ever read, as far as the core of the philosophy goes.
No I haven't! That was quite good, thanks. By the way, I fully share your (and Eliezer's) sentiment regarding academic philosophy. I took a "philosophy of mind" course in college, thinking that would be extremely interesting, and I ended up dropping the class in short order. It was only after a long study of Rand that I ever became interested in philosophy again, once I realized I had a sane basis on which to proceed.
Specifically, her non-fiction work (if you find that sort of thing palatable) provides a lot more concrete discussion of her philosophy. Unfortunately, Ayn Rand is a little too... abrasive... for many people who don't agree entirely with her. She makes a lot of resonant points that get rejected because of all the other stuff she presents along with them.
I wonder why it is that so many people get here from TV Tropes. Also, you're not the only one to give up on their first LW account.
Possibly: TV Tropes approaches fiction the way LessWrong approaches reality.
How do you mean?
At a guess, I would say: looking for recurring patterns in fiction, and extrapolating principles/tropes. It's a very bottom-up approach to literature, taking special note of subversions, inversions, aversions, etc, as opposed to the more top-down academic study of literature that loves to wax poetic about "universal truths" while ignoring large swaths of stories (such as Sci Fi and Fantasy) that don't fit into their grand model. Quite frankly, from my perspective, it seems they tend to force a lot of stories into their preferred mold, falling prey to True Art tropes.
Because it uses as many examples from HP:MoR as it possibly could?
Welcome to Less Wrong! I would say something about a rabbit hole but it would be pointless, since you already seem to be descending at quite a high rate of speed.
We seem to have a lot of Airbender fans here at LW -- Alicorn was the one who started me watching it, and I know SarahC and rubix are fans. Welcome =)
Did you see Brave? I thought it was great.
I did. :) I was so happy to see a mother-daughter movie with no romantic angle (other than the happily married king and queen).
I thought she was going to have to end up married at the end and I was so. angry. Brave ranked up there with Mulan in terms of kids movies that I think actually teach kids good lessons, which is a pretty high honor in my book.

Personally, for their first female protagonist, I felt like Pixar could have done a lot better than a Rebellious Princess. It's cliche, and I would have liked to see them exercise more creativity, but besides that, I think the instructive value is dubious. Yes, it's awfully burdensome to have one's life direction dictated to an excessive degree by external circumstances and expectations. But on the other hand, Rebellious Princesses, including Merida, tend to rail against the unfairness of their circumstances without stopping to consider that they live in societies where practically everyone has their lives dictated by external circumstances, and there's no easy transition to a social model that allows differently.

Merida wants to live a life where she's free to pursue her love of archery and riding, and get married when and to whom she wants? Well she'd be screwed if she were a peasant, since all the necessary house and field work wouldn't leave her with the time, her family wouldn't own a horse, unless it was a ploughhorse she wouldn't be able to take out for pleasure riding, and she'd be married off at an early age out of economic rather than political necessity. And she'd be sim...

I thought that Brave was actually a somewhat subversive movie -- perhaps inadvertently so. The movie is structured and presented in a way that makes it look like the standard Rebellious Princess story, with the standard feminist message. The protagonist appears to be a girl who overcomes the Patriarchy by transgressing gender norms, etc. etc. This is true to a certain extent, but it's not the main focus of the movie. Instead, the movie is, at its core, a very personal story of a child's relationship with her parent, the conflict between love and pride, and the difference between having good intentions and being able to implement them into practice. By the end of the movie, both Merida and her mother undergo a significant amount of character development. Their relationship changes not because the social order was reformed, or because gender norms were defeated -- but because they have both grown as individuals. Thus, Brave ends up being a more complex (and IMO more interesting) movie than the standard "Rebellious Princess" cliche would allow. In Brave, there are no clear villains; neither Merida nor her mother are wholly in the right, or wholly in the wrong. Contrast this with something like Disney's Rapunzel, where the mother is basically a glorified plot device, as opposed to a full-fledged character.
How boring. Was there at least some monsters to fight or an overtly evil usurper to slay? What on earth remains as motivation to watch this movie?
The antagonist is the rapey cultural artifact of forced marriage. Vg vf fynva.
There should be a word for forcing other people to have sex (with each other, not yourself). The connotations of calling a forced arranged marriage 'rapey' should be offensive to the victims. It is grossly unfair to imply that the wife is a 'rapist' just because her husband's father forced his son to marry her for his family's political gain. (Or vice-versa.)
I wasn't specifying who was being rapey. Just that the entire setup was rapey.
That was clear and my reply applies. (The person to whom the term applies is the person who forces the marriage. Rape(y/ist) would also apply if that person was also a participant in the marriage.)
As per my post above, I'd argue that the "rapey cultural artifact of forced marriage" is less of a primary antagonist, and more of a bumbling comic relief character.
Cute rot13. I never would have predicted that in a Pixar animation!
There is an evil monster to fight, of a more literal sort, but it would be a bit of a stretch to call it the primary antagonist.
Upvoted. My thoughts on Brave are over here, but basically Merida is actually a really dark character, and it's sort of sickening that she gets away with everything she does. Interesting enough to repeat is my suggestion for a better setting: Of course, it's difficult to make a movie glorifying sweatshop labor, whereas princesses are distant enough to be a tame example.
I understand your critique, and I mostly agree with it. I actually would have been even happier if Merida had bitten the bullet and married the winner - but for different reasons. She would have married because she loved her mother and her kingdom, and understood that peace must come at a cost - it would still very much count as a movie with no romantic angle. She would have been like Princess Yue in Avatar, a character I had serious respect for. When Yue was willing to marry Han for duty, and then was willing to fnpevsvpr ure yvsr gb orpbzr gur zbba, that was the first time I said to myself, "Wow, these guys really do break convention." Merida would have been a lot more brave to accept the dictates of her society (but for the right reasons), or to find a more substantial compromise than just convincing the other lords to yrg rirelbar zneel sbe ybir. But I still think it was a sweet movie.
I agree that it was a sweet movie, and overall I enjoyed watching it. The above critique is a lot harsher than my overall impression. But when I heard that Pixar was making their first movie with a female lead, I expected a lot out of them and thought they were going to try for something really exceptional in both character and message, and it ended up undershooting my expectations on those counts. I can sympathize with the extent to which simply having competent important female characters with relatable goals is a huge step forward for a lot of works. Ironically, I don't think I really grasped how frustrating the lack of them must be until I started encountering works which are supposed to be some sort of wish fulfillment for guys. There are numerous anime and manga, particularly harem series, which are full of female characters graced with various flavors of awesomeness, without any significant male protagonists other than the lead who's a total loser, and I find it infuriating when the closest thing I have to a proxy in the story is such a lousy and overshadowed character. It wasn't until I started encountering works like those that it hit me how painful it must be to be hard pressed to find stories that aren't like that on some level.
One thing that disappointed me about this whole story was that it was the one and only Pixar movie that was set in the past. Pixar has always been about sci fi, not fantasy, and its works have been set in contemporary America (with Magic Realism), alternate universes, or the future. Did "female protagonist" pattern-match so strongly with "rebellious medieval princess" that even Pixar didn't do anything really unusual with it? Even though I was happy Merida wasn't rebelling because of love, it seems like they stuck with the standard old-fashioned feminist story of resisting an arranged marriage, when they could have avoided all of that in a work set in the present or the future, when a woman would have more scope to really be brave. All in all, it seems like their father-son movie was a lot stronger than their mother-daughter movie.
I don't think "This Loser Is You" is the right trope for that. Actually, I don't think TV Tropes has the right trope for that; as best I can tell, harem protagonists are the way they are not because they're supposed to stand for the audience in a representative sort of way but because they're designed as a receptacle for the audience to pour their various insecurities into. They can display negative traits, because that's assumed to make them more sympathetic to viewers that share them. But they can't display negative traits strong enough to be grounds for actual condemnation, or to define their characters unambiguously; you'll never see Homer Simpson as a harem lead. And they can't show positive traits except for a vague agreeableness and whatever supernatural powers the plot requires, because that breaks the pathos. Yes, Tenchi Muyo, that's you I'm looking at. More succinctly, we're all familiar with sex objects, right? Harem anime protagonists are sympathy objects.
I agree that This Loser Is You isn't quite the right trope. There's a more recent launch, Loser Protagonist, which doesn't quite describe it either, but uses the same name as I did when I tried to put the trope which I thought accurately described it through the YKTTW ages ago. If I understand what you mean by "sympathy objects," I think we have the same idea in mind. I tend to think of them as Lowest Common Denominator Protagonists, because they lack any sort of virtue or achievement that would alienate them from the most insecure or insipid audience members.
That's a very fair critique. A few things though: First, you might want to put that in ROT13 or add a [SPOILER](http://lh5.ggpht.com/_VZewGVtB3pE/S5C8VF3AgJI/AAAAAAAAAYk/5LJdTCRCb8k/eliezer_yudkowskyjpg_small.jpg) tag or something. Merida learned to value her relationship with her mother, which I think a lot of kids need to hear going into adolescence. When you put it this way it doesn't seem nearly as trite as your phrasing makes it sound. Well yeah, but the answer to "society sucks and how can I fix it" isn't "oh it sucks for everyone and even more for others, I'll just sit down and shut up". (Not that you argue it is.) From TV Tropes: This is exactly why I thought Brave was good - it moved away from this trope. It wasn't "I don't love this person, I love this other person!", it was "I don't have to love/marry someone to be a competent and awesome person". She was the hero of her own story, and didn't need anyone else to complete her. That doesn't have to be true for everyone, but the counterpoint needs to be more present in society. And I said it ranked up there. Not that it passed Mulan. :) And it gets that honor by being literally one of the two movies I can think of that has a positive message in this respect. Although I will concede that I'm not very familiar with a particularly high number of kids movies.
I edited my comment to rot13 the ending spoilers; I left in the stuff that's more or less advertised as the premise of the movie. You might want to edit your reply so that it doesn't quote the uncyphered text. I think that's a valuable lesson, but I felt like Brave's presentation of it suffered for the fact that Merida and her mother really only reconcile after Merida essentially gets her way about everything. Teenagers who feel aggrieved in their relationships with their parents and think that they're subject to pointless unfairness are likely to come away with the lesson "I could get along so much better with my parents if they'd stop being pointlessly unfair to me!" rather than "Maybe I should be more open to the idea that my parents have legitimate reasons for not being accommodating of all my wishes, and be prepared to cut them some slack." A more well rounded version of the movie's approximate message might have been something like "Some burdensome social expectations and life restrictions have good reasons behind them and others don't, learn to distinguish between them so you can focus your effort on solving the right ones." But instead, it came off more like "Kids, you should love and appreciate your parents, at least when you work past their inclination to arbitrarily oppress you."
Now that I think about it, very few movies or TV shows actually teach that lesson. There are plenty of works of fiction that portray the whiney teenager in a negative light, and there are plenty that portray the unreasonable parent in a negative light, but nothing seems to change. It all plays out with the boring inevitability of a Greek tragedy.

I'm Aaron Swartz. I used to work in software (including as a cofounder of Reddit, whose software powers this site) and now I work in politics. I'm interested in maximizing positive impact, so I follow GiveWell carefully. I've always enjoyed the rationality improvement stuff here, but I tend to find the lukeprog-style self-improvement stuff much more valuable. I've been following Eliezer's writing since before even the OvercomingBias days, I believe, but have recently started following LW much more carefully after a couple friends mentioned it to me in close succession.

I found myself wanting to post but don't have any karma, so I thought I'd start by introducing myself.

I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.

Instead of a spinoff, maybe Discussion should be split into more sections (one being primarily about instrumental rationality/self-help).
Topic-related discussion sections seem a good idea to me. Some here may be interested in rationality/cognitive bias but not in AI, or not in space exploration, or not in cryonics... This would also allow lifting "bans" like "no politics", if the topic stays in a dedicated section, not "polluting" the experience of those not interested in it.
I endorse this idea.
Yay, it is you! (I've followed your blog and your various other deeds on-and-off since 2002-2003ish and have always been a fan; good to have you here.)
LessWeak - good idea. On the name: cute, but I imagine it getting old. Still, it's not as embarrassing as something unironically Courage Wolf, like 'LiveStrong'.
Welcome to LessWrong! Apparently I used to comment on your blog back in 2004 - my, how time flies!
Reboot in peace, friend.

'Twas about time that I decided to officially join. I discovered LessWrong in the autumn of 2010, and so far I've felt reluctant to actually contribute -- most people here have far more illustrious backgrounds. But I figured that there are sufficiently few ways in which I could show myself as a total ignoramus in an intro post, right?

I don't consider my gender, age and nationality to be a relevant part of my identity, so instead I'd start by saying I'm INTP. Extreme I (to the point of schizoid personality disorder), extreme T. Usually I have this big internal conflict going on between the part of me that wishes to appear as a wholly rational genius and the other part, who has read enough psychology and LW (you guys definitely deserve credit for this) to know I'm bullshitting myself big time.

My educational background so far is modest, a fact for which procrastination is the main culprit. I'm currently working on catching up with high school level math... so far I've only reviewed trigonometry, so I'm afraid I won't be able to participate in more technical discussions around here. Aside from a few Khan Academy videos, I'm still ignorant about probability; I did try to solve that cancer ...

Swimmer963 (Miranda Dixon-Luinenburg)
Welcome! That's interesting... I don't think I've ever had someone respond to my pointing out flaws in this way. I've had people argue back plenty of times, but never tell me that we shouldn't be arguing about it. Can you give some examples of topics where this has happened? I would be curious what kind of topics engender this reaction in people.

I've seen this happen where one person enjoys debate/arguing and another does not. To one person it's an interesting discussion, and to the other it feels like a personal attack. Or, more commonly, I've seen onlookers get upset watching such a discussion, even if they don't personally feel targeted. Specifically, I'm remembering three men loudly debating about physics while several of their wives left the room in protest because it felt too argumentative to them.

Body language and voice dynamics can affect this a lot, I think - some people get loud and frowny when they're excited/thinking hard, and others may misread that as angry.

I ended up having to include a disclaimer in the FAQ for an older project of mine, saying that the senior staff tends to get very intense when discussing the project and that this doesn't indicate drama on our part but is actually friendly behavior. That was a text channel, though, so body dynamics and voice wouldn't have had anything to do with it. I think a lot of people just read any intense discussion as hostile, and quality of argument doesn't really enter into it -- probably because they're used to an arguments-as-soldiers perspective.

We used to say of two friends of mine that "They don't so much toss ideas back and forth as hurl sharp jagged ideas directly at one another's heads."

--Steven Erikson, House of Chains (2002)
Oh, it's not a topic-specific behavior. Whenever I go too far down a chain of reasoning ("too far" meaning as few as three causal relationships), people start complaining that I'm giving too much thought to it, and imply they are unable to follow the arguments. I'm just not surrounded by a lot of people who like long and intricate discussions. (Funnily, both my parents are the type that get tired listening to complex reasoning, and I turned out the complete opposite.)
Swimmer963 (Miranda Dixon-Luinenburg)
That is...intensely frustrating. I've had people tell me that "well, I find all the points you're trying to make really complicated, and it's easier for me to just have faith in God" or that kind of thing, but I've never actually been rebuked for applying an analytical mindset to discussions. Props on having acquired those habits anyway, in spite of what sounds like an unfruitful starting environment!
Thanks! Anyway, there's the internet to compensate for that. The wide range of online forums built around ideas of varied intellectual depth means you even get to choose your difficulty level...
This happens frequently in places where reasoning is suspect, or not valued. Kids in poor areas with few scholastic or academic opportunities find more validation in pursuits that are non-academic, and they tend to deride logic. It's parodied well by Colbert, but it's not uncommon. I just avoid those people, now know few of them. Most of the crowd here, I suspect, is in a similar position.
Swimmer963 (Miranda Dixon-Luinenburg)
I may be in a similar position of never having known anyone who was like this. Also, I'm very conflict averse myself (but like discussing), so any discussion I start is less likely to have any component of raised voices or emotional involvement that could make it sound like an argument.
The best way to get good at some particular type of math, or programming, or any skill, in my experience, is to put yourself in a position where you need to do it for something. Find a job that requires you to do a bit of programming, or pick a task that requires it. Spend time on it, and you'll learn a bit. Then go back and realize you missed some basics, and pick them up. Oh, and read a ton. You're interested in a lot of things, and trying to catch up with what you feel you should know, which is wonderful. What do you do with your time? Are you working? College?
I prefer the practice-based approach too, but from my position theoretical approaches are cheaper and much more available, if slower and rather tedious. In school they taught us that the only way to get better in an area is to do extra homework, and frankly my methods haven't improved much since. My usual way is to take an exercise book and solve everything in it, if that counts for practice; other than that, I only have the internet and a very limited budget. Senior year in high school. Right now I have 49 vacation days left, after which school will start, studying will get replaced with busywork and my learning rates will have no choice but to fall dramatically. So now I'm trying to maximize studying time while I still can... It's all kind of backwards, isn't it?
Where you go to college and the amount of any scholarships you get are a bigger deal for your long term personal growth than any of the specific subjects you will learn right now. In the spirit of long term decision making, figure out where you want to go to college, or what your options are, and spend the summer maximizing the odds of getting into your first-choice schools. I cannot imagine that it won't be a better investment of your time than any one subject you are studying (unless you are preparing for the SAT or some such test). So I guess you should spend the summer on Khan, and learning and practicing vocabulary to get better at taking the tests that will get you into a great college, where your opportunities to learn are greatly expanded.
I'm afraid all of this is not really applicable to me... My country isn't Western enough for such a wide range of opportunities. Here, institutes for higher education range from almost acceptable (state universities) to degree factories (basically all private colleges). Studying abroad in a Western country costs, per semester, somewhere between half and thrice my parents' yearly income. On top of everything, my grades would have to be impeccable and my performances worthy of national recognition for a foreign college to want me as a student so much as to step over the money issue and cover my whole tuition. (They're not, not by a long shot.) Thanks for the support, in any case...

I've commented infrequently, but never did one of these "Welcome!" posts.

Way back in the Overcoming Bias days, my roommate raved constantly about the blog and Eliezer Yudkowsky in particular. I pattern matched his behaviour to being in a cult, and moved on with my life. About two years later (?), a common friend of ours recommended Harry Potter and the Methods of Rationality, which I then read, which brought me to Lesswrong, reading the Sequences, etc. About a year later, I signed up for cryonics with Alcor, and I now give more than my former roommate to the Singularity Institute. (He is very amused by this.)

I spend quite a bit of time working on my semi-rationalist fanfic, My Little Pony: Friendship is Optimal, which I'll hopefully release on a timeframe of a few months. (I previously targeted releasing this damn thing for April, but... planning fallacy. I've whittled my issue list down to three action items, though, and it's been through its first bout of prereading.)

My Little Pony: Friendship is Optimal


Could I convince you to perhaps post on the weekly rationality diaries about progress, or otherwise commit yourself, or otherwise increase the probability that you'll put this fic up soon? :D

Hi! I got here from reading Harry Potter and the Methods of Rationality, which I think I found on TV Tropes. Once I ran out of story to catch up on, I figured I'd start investigating the source material.

I've read a couple of sequences, but I'll hold off on commenting much until I've gotten through more material. (Especially since the quality of discussions in the comment sections is so high.) Thanks for an awesome site!

Hi All,

I'm Will Crouch. Other than one other, this is my first comment on LW. However, I know and respect many people within the LW community.

I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.

I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain - there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one 'adheres' to (whatever that means).

Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?
Haha! I don't think I'm worthy of squeeing, but thank you all the same. In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case. Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100. Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9. Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
That's not obvious to me. IMO, the reason why in the real world “bringing into existence people whose lives aren't worth living just can't be a good thing” is that they consume resources that other people could use instead; but if in the hypothetical you fix the utility of each person by hand, that doesn't apply to the hypothetical. I haven't thought about these things that much, but my current position is that average utilitarianism is not actually absurd -- the absurd results of thought experiments are due to the fact that those thought experiments ignore the fact that people interact with each other.
I don't understand your comment. Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don't think that such a world would be better, then you must agree that average utilitarianism is false. Here's another, even more obviously decisive, counterexample to average utilitarianism. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many more people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.
I do think that the former is better (to the extent that I can trust my intuitions in a case that different from those in their training set).
Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable. "Separability" of value just means being able to evaluate something without having to look at anything else. I think that whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it's good or bad to bring that person into existence. But let's return to the intuitive case above, and make it a little stronger. Now suppose: Population A: 1 person suffering a lot (utility -10). Population B: that same person, suffering an arbitrarily large amount (utility -n, for any arbitrarily large n), and a very large number, m, of people suffering -9.9. Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. I.e. average utilitarianism is willing to add horrendous suffering to someone's already horrific life, in order to bring into existence many other people with horrific lives. Do you still get the intuition in favour of average here?
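The "for any n, there is some m" claim is a matter of simple arithmetic, and can be checked directly. A minimal sketch (the particular n and m below are made-up illustrative values, not anything from the comment above):

```python
# Average utilitarianism compares populations by mean utility.
# Population A: one person at utility -10, so its average is -10.
avg_a = -10.0

# Population B: that same person at utility -n, plus m people at -9.9 each.
# Rearranging (-n + m * -9.9) / (m + 1) > -10 shows any m > 10*n - 100 works;
# the values below are illustrative, not special.
n = 1_000_000
m = 20_000_000
avg_b = (-n + m * -9.9) / (m + 1)

# Average utilitarianism ranks B above A, despite B piling arbitrary extra
# suffering onto the first person and adding m more lives of horrific suffering.
print(avg_b > avg_a)  # True
```

As m grows, B's average climbs toward -9.9 no matter how large n is, which is exactly the separability failure being objected to.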
Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human - as in, in pop A you will get utility -10, in pop B you get an expected (1/m)(-n) + ((m-1)/m)(-9.9). These intuitions could correspond to a straightforward "maximize expected util of 'being someone in this world'", or something like "suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning; maximize this being's utility". Such perspectives would give the "non-intuitive" result in these sorts of thought experiments.
Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?
Perhaps people simply objected to the implied selfish motivations.
Perhaps! Though I certainly didn't intend to imply that this was a selfish calculation - one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.
Once you make such an unrealistic assumption, the conclusions won't necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.
When discussing such questions, we need to be careful to distinguish the following:
1. Is a world containing population B better than a world containing population A?
2. If a world with population A already existed, would it be moral to turn it into a world with population B?
3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I'd live somewhere in the world, but not who I'd be, would I choose population B?
I am inclined to give different answers to these questions. Similarly for Parfit's repugnant conclusion; the exact phrasing of the question could lead to different answers. Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say) and call this population C. Then the combination of B+C has lower average utility than A+C, and gets a clear negative answer on all the questions, so matching your intuition. I suspect that this is the situation we're actually in: a large, maybe infinite, population elsewhere that we can't do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth's population, and we can't make a judgement one way or another.
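The background-population effect is easy to verify numerically. The following sketch uses small, made-up utility figures purely for illustration (population sizes are scaled down from the thought experiment):

```python
def average(utilities):
    """Mean utility of a population given as a list of per-person utilities."""
    return sum(utilities) / len(utilities)

# Toy versions of the populations discussed above:
# A: one person at -10; B: that person at -100 plus 10,000 people at -9.9;
# C: a large, causally isolated background population at +10.
pop_a = [-10.0]
pop_b = [-100.0] + [-9.9] * 10_000
pop_c = [10.0] * 100_000

# Considered in isolation, average utilitarianism prefers B to A...
print(average(pop_b) > average(pop_a))  # True
# ...but with the fixed background population C added to both, the ranking flips.
print(average(pop_b + pop_c) > average(pop_a + pop_c))  # False
```

The flip happens because B's extra suffering-filled lives drag the combined average down much more than A's single life does, once a large positive background is in the denominator.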
While I am not an average utilitarian (I think), a world containing only one person suffering horribly does seem kinda worse.
Both worlds contain people "suffering horribly".
One world contains people suffering horribly. The other contains a person suffering horribly. And no-one else.
So, the difference is that in one world there are many people, rather than one person, suffering horribly. How on Earth can this difference make the former world better than the latter?!
Because it doesn't contain anyone else. There's only one human left and they're "suffering horribly".
Suppose I publicly endorse a moral theory which implies that the more headaches someone has, the better the world becomes. Suppose someone asks me to explain my rationale for claiming that a world that contains more headaches is better. Suppose I reply by saying, "Because in this world, more people suffer headaches." What would you conclude about my sanity?
Most people value humanity's continued existence.
I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?
Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory. Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that! Sorry if that was a bit of a complex response to a simple question!
Hi Will, I think most LWers would agree that "Anyone who tries to practice rationality as defined on Less Wrong" is a passable description of what we mean by 'rationalist'.
Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I'm not just being pedantic!
Not at all. (Eliezer is a sort of moral realist). It would be weird if you said "I'm a moral realist, but I don't value things that I know are objectively valuable". It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not. Just like math lets you crunch numbers, whether they're real statistics or made up. But believing you shouldn't make up statistics doesn't therefore mean you don't do math.
Could you provide a link to a blog post or essay where Eliezer endorses moral realism? Thanks!
Sorting Pebbles Into Correct Heaps notes that 'right' is the same sort of thing as 'prime' - it refers to a particular abstraction that is independent of anyone's say-so. Though Eliezer is also a sort of moral subjectivist; if we were built differently, we would be using the word 'right' to refer to a different abstraction. Really, this is just shoehorning Eliezer's views into philosophical debates that he isn't involved in.
"It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not." It seems to me that moral realism is an epistemic claim - it is a statement about how the world is - or could be - and that is definitely a matter that impinges on rationality.
This seems to be similar to Eliezer's beliefs. Relevant quote from Harry Potter and the Methods of Rationality:
I don't think that's what Harry is saying there. Your quote from HPMOR seems to me to be more about the recognition that moral considerations are only one aspect of a decision-making process (in humans, anyway), and that just because that is true doesn't mean that moral considerations won't have an effect.

Hello, everyone!

I'd been religious (Christian) my whole life, but was always plagued with the question, "How would I know this is the correct religion, if I'd grown up with a different cultural norm?" I concluded, after many years of passive reflection, that, no, I probably wouldn't have become Christian at all, given that there are so many good people who do not. From there, I discovered that I was severely biased toward Christianity, and in an attempt to overcome that bias, I became atheist before I realized it.

I know that last part is a common idiom that's usually hyperbole, but I really did become atheist well before I consciously knew I was. I remember reading HPMOR, looking up lesswrong.com, reading the post on "Belief in Belief", and realizing that I was doing exactly that: explaining an unsupported theory by patching the holes, instead of reevaluating and updating, given the evidence.

It's been more than religion, too, but that's the area where I really felt it first. Next projects are to apply the principles to my social and professional life.

Welcome! The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations. The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo-jumbo as opposed to western mumbo-jumbo. Stoicism might be the most rational joy machine I can find. Let me know if you ever un-convert.

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism.

What? What about all the usual happiness inducing things? Listening to music that you like; playing games; watching your favourite TV show; being with friends? Maybe you've ruled these out as not being spontaneous? But going to church isn't less effort than a lot of things on that list.

I suspect that a tendency towards mysticism just sort of spontaneously accretes onto anything sufficiently esoteric; you can see this happening over the last few decades with quantum mechanics, and to a lesser degree with results like Gödel's incompleteness theorems. Martial arts is another good place to see this in action: most of those legendary death touch techniques you hear about, for example, originated in strikes that damaged vulnerable nerve clusters or lymph nodes, leading to abscesses and eventually a good chance of death without antibiotics. All very explicable. But layer the field's native traditional-Chinese-medicine metaphor over that and run it through several generations of easily impressed students, partial information, and novelists without any particular incentive to be realistic, and suddenly you've got the Five-Point Palm Exploding Heart Technique.

So I don't think the mumbo-jumbo is likely to be strictly necessary to most eudaemonic approaches, Eastern or Western. I expect it'd be difficult to extract from a lot of them, though.

It would be difficult to do it on your own, but it's not very hard to find e.g. guides to meditation that have been bowdlerized of all the mysterious magical stuff.
Maybe it's incomprehensibility itself that makes some people happy? If you don't understand it, you don't feel responsible, and ignorance being bliss, all that weird stuff there is not your problem, and that's the end of it as far as your monkey bits are concerned.

Hello everyone,

Thought it was about time to do one of these since I've made a couple of comments!

My name's Carl. I've been interested in science and why people believe the strange things they believe for many years. I was raised Catholic but came to the conclusion around the age of ten that it was all a bit silly really, and as yet I have found no evidence that would cause me to update away from that.

I studied physics as an undergrad and switched to experimental psychology for my PhD, being more interested at that point in how people work than how the universe does. I started to study motor control and after my PhD and a couple of postdocs I know way more about how humans move their arms than any sane person probably should. I've worked in behavioural, clinical and computational realms, giving me a wide array of tools to use when analysing problems.

My current postdoc is coming to an end and a couple of months ago I was undergoing somewhat of a crisis. What was I doing, almost 31 and with no plan for my life? I realised that motor control had started to bore me but I had no real idea what to do about it. Stay in science, or abandon it and get a real job? That hurts after almost a de... (read more)


Greetings LWers,

I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.

I wasn't always in such a stimulating environment -- indeed I grew up in what can only be deemed intellectual deprivation, from which I narrowly escaped -- and, as a result of my disregard for authority and disdain for traditional classroom learning, I am largely self-taught. Unlike most autodidacts, though, I never was a voracious reader; on the contrary, I barely opened books at all, instead preferring to think things over in my head. This has left me an ignorant person -- something I'm constantly striving to improve on -- but has also protected me from many diseased ideas and even allowed me to better appreciate certain notions by having to rediscover them myself. (Case in point: throughout my adolescence I took great satisfaction in analysing my mental mechanisms and correcting for what I now know to be biases, yet I never came across the relevant literature, essentially missing out on a wealth of knowledge.)

For a long time I've a... (read more)

Nice! What part of FAI interests you?

Too soon to say, as I discovered FAI a mere two months ago -- this, incidentally, could mean that it's a fleeting passion -- but CEV has definitely caught my attention, while the concept of a reflective decision theory I find really fascinating. The latter is something I've been curious about for quite some time, as plenty of moral precepts seem to break down once an agent -- even a mere homo sapiens -- reaches certain levels of self-awareness and, thus, is able to alter their decision mechanisms.
Isn't that a proper IQ test? At least it is where I live. Funny how we like to talk about things we're good at. The real test is "time from passing test to time you leave to save the yearly fee." That's awesome. Don't miss Marcus' lectures, such a sharp mind. Also, midi - Imperial March (used to be?) playing on his home page.
Yes and no; it's some version of the Cattell, but it's not administered individually, has a lowish ceiling and they don't reveal your exact result. For the record, you needn't join in order to take their heavily subsidised admission test.
Is your info Aussie-specific? (EDIT: We're not quite antipodes, but not far off, either) They did when I took it, ceiling 145, was administered in a group setting. 'Twas free even, in my case, some kind of promo action.
Yep I had Australia in mind, though it's by no means the only country where it works that way. Also, various national Mensa chapters have stopped releasing scores -- something to do with egalitarianism, go figure... -- and pardon my imprecise language, but by lowish I meant around 145 SD15. (didn't mean it in a patronising manner, it's just that plenty of tests have a ceiling of 160 SD15 and some, e.g. Stanford-Binet Form L-M, are employed even above that cutoff)
I do wonder whether someone who'd score, say, 155 on a 160-ceiling test would score 145 on a 145-ceiling test. You project an aura of knowledgeability on the subject, so I'll just go ahead and ask you. Consider yourself asked.
I'm afraid I'm not sufficiently knowledgeable to answer that and I have no intention of becoming one of those self-proclaimed internet experts! (plus the rest of the internet, outside of LW, already does a good enough job at spreading misinformation)
"machine/emergent intelligence theorist" would not box you in as much. Friendliness is only one model, you know, no matter how convincing it may sound.
"machine intelligence researcher" is also much more employable -- which isn't saying much.
One can signal differently to make oneself more palatable to different audiences and, indeed, "machine/emergent intelligence theorist" is less confining, while "machine intelligence researcher" is more suitable for academia or industry; here at LW, however, I needn't conceal my specific interests, which happen to be in AI safety and friendliness.

Hello everyone! I've been a lurker on here for awhile, but this is my first post. I've held out on posting anything because I've never felt like I knew enough to actually contribute to the conversation. Some things about me:

I'm 22, female, and a recent college graduate with a degree in computer science. I'm currently employed as a software engineer at a health insurance company, though I am looking into getting into research some day. I mainly enjoy science, playing video games, and drawing.

I found this site through a link on the Skeptics Stack Exchange page. The post was about cryonics, which is how I got over here. I've been reading the site for about six months now and I have found it extremely helpful. It has also been depressing, though, because I've since realized many of the "problems" in the world were caused by the ineptitude of the species and aren't easily fixed. I've had some problems with existential nihilism since then and if anyone has any advice on the matter, I'd love to hear it.

My journey to rationality probably started with atheism and a real understanding of the scientific method and human psychology. I grew up Mormon, which has since give... (read more)

You describe "problems with existential nihilism." Are these bouts of disturbed, energy-sucking worry about the sheer uselessness of your actions, each lasting between a few hours and a few days? Moreover, did you have similar bouts of worry about other important seeming questions before getting into LW?
Yes, that is how I would describe it. It normally comes and goes, with the longest period lasting a few weeks. I'm not entirely sure if it's a byproduct of recent life events or if I am suffering from regular depression, but it's something I've had on and off for a few years. LW hasn't specifically made it worse, but it hasn't made it better either.
In that case, it sounds very, very similar to what I've learned to deal with -- especially as you describe feeling isolated from the people around you. I started to write a long, long comment, and then realized that I'd probably seen this stuff written down better, somewhere. This matches my experience precisely. For me, the most important realization was that the feeling of nihilism presents itself as a philosophical position, but is never caused or dispelled by philosophy. You can ruminate forever and find no reason to value anything; philosophical nihilism is fully internally consistent. Or, you can get exercise, and spend some time with friends, and feel better due not to philosophy, but to physiology. (I know this is glib, and that getting exercise when you just don't care about anything isn't exactly easy. The link above discusses this.) That above post, and Alicorn's sequence on luminosity -- effective self-awareness -- probably lay out the right steps to take, if you'd like to most-effectively avoid these crappy moods. Moreover, if you'd like to chat more, over skype some time, or via pm, or whatever, I'd be happy to. I'm pretty busy, so there may be high latency, but it sounds like you're dealing with things that are very similar to my own experience, and I've partly learned how to handle this stuff over the past few years.

Hi! Long-time lurker, first-time... joiner?

I was inspired to finally register by this post being at the top of Main. Not sure yet how much I'll actually post, but the passive barrier of, you know, not actually being registered is gone, so we'll see.

Anyway. I'm a dude, live in the Bay Area, work in finance though I secretly think I'm actually a writer. I studied cog sci in college, and that angle is what I tend to find most interesting on Less Wrong.

I originally came across LW via HPMoR back in 2010. Since then, I've read the Sequences, been to a few meetups, and attended the June minicamp (which, P.S., was awesome).

I'm still struggling a bit with actually applying rationality tools in my life, but it's great to have that toolbox ready and waiting. Sometimes... I hear it calling out to me. "Sean! This is an obvious place to apply Bayes! Seaaaaaaan!"


Hi all,

Not quite recently joined, but when I first joined, I read some, then got busy and didn't participate after that.

Age: Not yet 30. Former Occupation: Catastrophe Risk Modeling New Occupation: Graduate Student, Public Policy, RAND Corporation.

Theist Status: Orthodox Jew, happy with the fact that there are those who correctly claim that I cannot prove that god exists, and very aware of the confirmation bias and lack of skepticism in most religious circles. It's one reason I'm here, actually. And I'll be glad to discuss it in the future, elsewhere.

I was initially guided here, about a year ago, by a link to The Best Textbooks on Every Subject . I was a bit busy working at the time, building biased mathematical models of reality. (Don't worry, they weren't MY biases, they were those of the senior people and those of the insurance industry. And they were normalized to historical experience, so as long as history is a good predictor of the future...) So I decided that I wanted to do something different, possibly something with more positive externalities, less short term thinking about how the world could be more profitable for my employer, and more long-term thinking about how it ... (read more)


Hello and goodbye.

I'm a 30 year old software engineer with a "traditional rationalist" science background, a lot of prior exposure to Singularitarian ideas like Kurzweil's, with a big network of other scientist friends since I'm a Caltech alum. It would be fair to describe me as a cryocrastinator. I was already an atheist and utilitarian. I found the Sequences through Harry Potter and the Methods of Rationality.

I thought it would be polite, and perhaps helpful to Less Wrong, to explain why I, despite being pretty squarely in the target demographic, have decided to avoid joining the community and would recommend the same to any other of my friends or when I hear it discussed elsewhere on the net.

I read through the entire Sequences and was informed and entertained; I think there are definitely things I took from it that will be valuable ("taboo" this word; the concept of trying to update your probability estimates instead of waiting for absolute proof; etc.)

However, there were serious sexist attitudes that hit me like a bucket of cold water to the face - assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

Com... (read more)

Thanks for writing this. It's true that LW has a record of being bad at talking about gender issues; this is a problem that has been recognized and commented on in the past. The standard response seems to have been to avoid gender issues whenever possible, which is unfortunate but maybe better than the alternative. But I would still like to comment on some of the specific things you brought up:

assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

I think I know the post you're referring to, I didn't read this as sexist, and I don't think that indicates a male-techy failure mode on my part about sexism. Some men are just really, really bad at understanding women (and maybe commit the typical mind fallacy when they try to understand men, and maybe just don't know anyone who doesn't fall into one of those categories), and I don't think they should be penalized for being honest about this.

gender essentialist

I haven't seen too much of this. Edit: Found some more.

women-are-objects-not-people-like-us crap

Where? Edit: Found some of this too.

I think it has fallen very squarely into the "nothing more than sexism, th

... (read more)
Eliezer Yudkowsky
Try to keep in mind selection effects. The post was titled Failed Utopia - people who agreed with this may have posted less than those who disagreed. I confess to being somewhat surprised by this reaction. Posts and comments about gender probably constitute around 0.1% of all discussion on LessWrong.

Whenever I see a high quality comment made by a deleted account (see for example this thread where the two main participants are both deleted accounts), I'd want to look over their comment history to see if I can figure out what sequence of events alienated them and drove them away from LW, but unfortunately the site doesn't allow that. Here SamLL provided one data point, for which I think we should be thankful, but keep in mind that many more people have left and not left visible evidence of the reason.

Also, aside from the specific reasons for each person leaving, I think there is a more general problem: why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW, either through an explicit statement (SamLL's "still less am I enthused about identifying myself as part of a community where that's so widespread"), or by deleting their account? Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some possibilities:

  1. There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of any "interesting idea" external to LW.

  2. Labeling and occasionally hostile rhetoric: Google gives dozens of hits for terms like "lesswrongian" and "LWian", and there have been recurring dismissive attitudes regarding The Others and their intelligence and general ability. This includes all snide digs at "Frequentists", casual remarks to the effect of how people who don't follow certain precepts are "insane", etc.

  3. The demographic homogeneity probably doesn't help.

Wei Dai
I agree with these, and I wonder how we can counteract these effects. For example I've often used "LWer" as shorthand for "LW participant". Would it be better to write out the latter in full? Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation? For example, we could explain that "joining the LW community" ought to be interpreted as "making use of LW facilities and contributing to LW discussions and projects" rather than "adopting 'LW member' as part of one's social identity and endorsing some identifying set of ideas", and maybe link to some articles like Paul Graham's Keep Your Identity Small.

"Here at LW, we like to keep our identity small."

Nice one.
I think so. The other thing about the "snide digs" the grandparent is talking about is that they are not just bad for our image; they are also wrong (as in incorrect). I think the LW "hit rate" on specific enough technical matters is not all that good, to be honest.
One of the times the issue of overidentifying with LW came up here, about a year ago, I mentioned that my self-description is "LW regular [forum participant]". It means that I post regularly, but does not mean that I derive any sense of identity from it. "LWer" certainly sounds more like "this is my community", so I stay away from using it except toward people who explicitly self-identify as such. I also tend to discount quite a bit of what someone here posts, once I notice them using the pronoun "we" when describing the community, unless I know for sure that they are not caught up in the sense of belonging to a group of cool "rationalists".
I think the "LWer" appellation is just plain accurate (but then I've used the term myself). Any blog with a regular group of posters & commenters constitutes a community, so LW is a community. Posting here regularly makes us members of this community by default, and being coy about that fact would make me feel odd, given that we've strewn evidence of it all over the site. But I suspect I'm coming at this issue from a bit of an odd angle.
It may be because lot of LW regulars visibly think of it in terms of identity. LW is described by most participants as a community rather than a discussion forum, and there has been a lot of explicit effort to strengthen the communitarian aspect.
As a hypothesis, they may be ambivalent about discontinuing their hobby ("Two souls alas! are dwelling in my breast; (...)") and prefer to burn their bridges to avoid further ambivalence and decision pressures. Many prefer a course of action being locked in, as opposed to being continually tempted by the alternative.
Some people come from a background where they're taught to think of everything in terms of identity.
LW is a hub for several abnormal ideas. An implication that you're affiliated with LW is an implication that you take these ideas seriously, which no reasonable person would do.
Your comment's first sentence answers your second paragraph.
I guess you get considered fully unclean even if you're only observed breaking a taboo a few times.
Did you use a Rawlsian veil of ignorance when judging it? From a totally selfish point of view, I would very, very, very much rather be myself in this world than myself in that scenario (given that, among plenty of other things, I dislike most people of my gender), but think of, say, starving African children or people with disabilities. I don't know much about what it feels like to be in such dire straits so I'm not confident that I'd rather be a randomly chosen person in Failed Utopia 4-2 than a randomly chosen person in the actual world, but the idea doesn't sound obviously absurd to me.
Is that ... like ... allowed? edit: I agree with you and object to all the conditioning against contradicting "sacred" values (sexism = ugh, bad).
By whom? (Of course, that's not literally true, since the overwhelming majority of all 3.5 billion male humans alive are people I've never met or heard of and so I have little reason to dislike, but...)
Since I cannot imagine anything but a few cherry-picked examples that could have led to your impression, let me use some of my own (the number of cases is low): the extremely positive reception of Alicorn's "Living Luminously" sequence (karma +50 for the main post alone) and Anja's great and technical posts (karmas +13, +34, +29) all indicate that good content is not filtered along gender lines, which it should be if there were some pervasive bias. Even asserting that understanding anyone of the other gender is "like trying to understand an alien" does not imply any sort of male superiority complex. If you object to sexism as just pointing out that there are differences based both on culture and on genetics, well, you got me there. Quite obviously there are; I assume you don't live in a hermaphrodite community. Why is it bad when/if that comes up? Forbidden knowledge? Are you sure that's the rationalist thing to do? Gender imbalance and a few misplaced or easily misinterpreted remarks need not be representative of a community, just as a predominantly male CS program at Caltech and frat jokes need not be representative of college culture.
It's possible that user is sensitive to gender issues precisely because it's comparatively difficult and not entirely rationalist to leave a community like Caltech. It's generally the stance of gender-sensitive humans that no one should have to listen to the occasional frat joke if they don't want to. I agree with everything else in your post; that final "can't you take a frat joke?" strikes me as defensive and unnecessary.
You're right, it was too carelessly formulated.
Will you fix it? =) Is there an established protocol for fixing these sorts of things?
The edit button? :P
Is that a protocol, strictly speaking? "Pressing the edit button" would be a protocol with only one action (not sufficient). Maybe there will be a policy post on this soon.
You're right, strictly speaking, the protocol would be TCP/IP. :) (There is no mandatory or even authoritative social protocol for this situation. The typical behavior is editing and then putting an "EDIT:" with a brief explanation of the edit, but just editing with no explanation is also fine, particularly if nobody's replied yet, or if the edit is explained in child comments.)
Well, earlier today I clarified (euphemism for edited) a comment shortly after it was made, then found a reply that cited the old, unclarified version. You know what that looks like, once the tribe finds out? OhgodImdone. (In a hushed voice:) I just found out that EY can edit his comments without an asterisk appearing.
Why not stay around and try to help fix the problem?

Ordinarily I'd leave this for SamLL to respond to, but I'd say the chances of getting a response in this context are fairly low, so hopefully it won't be too presumptuous for me to speculate.

First of all, we as a community suck at handling gender issues without bias. The reasons for this could span several top-level posts and in any case I'm not sure of all the details; but I think a big one is the unusually blurry lines between research and activism in that field and consequent lack of a good outside view to fall back on. I don't think we're methodologically incapable of overcoming that, but I do think that any serious attempt at doing so would essentially convert this site into a gender blog.

To make matters worse, for one inclined to view issues through the lens of gender politics, Failed Utopia 4-2 is close to the worst starting point this site has to offer. Never mind the explicitly negative framing, or its place within the fun theory sequence: we have here a story that literally places men on Mars on gender-essentialist grounds, and doesn't even mention nonstandard genders or sexual preferences. No, that's not meant to be taken all that seriously or to inform people's real... (read more)

As far as I can tell, we as a species suck at handling gender issues without bias, the closest thing to an exception to that I recall seeing being some (not all) articles (but usually not the comments) on the Good Men Project and the discussions on Yvain's “The $Nth Meditation on $Thing” blog post series.
Yeah, I was fairly impressed with Yvain's posts on the subject; if we did want to devote some serious effort to tackling this issue, I can think of far worse starting points.
s/gender// Though I think that this particular forum sucks less at handling at least some issues.
Fixing the problem needs less people with a highly polarizing agenda, not more.

Hello! I'm David.

I'm 26 (at the time of writing), male, and an IT professional. I have three (soon to be four) children, three (but not four) of whom have a different dad.

My immediate links here were through the Singularity Institute and Harry Potter and the Methods of Rationality, which drove me here when I realized the connection (I came to those things entirely separately!). When I came across this site, I had read through the Wikipedia list of biases several times over the course of years, come to many conscious conclusions about the fragility of my own cognition, and had innumerable arguments with friends and family that changed minds, but I never really considered that there would be a large community of people that got together on those grounds.

I'm going to do the short version of my origin story here, since writing it all out seems both daunting and pretentious. I was raised rich and lucky by an entrepreneur/university professor/doctor father and a mother who always had to be learning something or go crazy (she did some of both). I dropped out of a physics major in college and got my degree in gunsmithing instead, but only after I worked a few years. Along the way, I've p... (read more)

Hi LWers,

I am Robert and I am going to change the world. Maybe just a little bit, but that’s ok, since it’s fun to do and there’s nothing else I need to do right now. (Yay for mini-retirements!)

I find some of the articles here on LW very useful, especially those on heuristics and biases, as well as material on self-improvement, although I find it quite scattered among loads of way too theoretical stuff. Does it seem odd that I have learned many more useful tricks and gained more insight from reading HPMOR than from reading 30 to 50 high-rated and “foundational” articles on this site? I am sincerely sad that even the leading rationalists on LW seem to struggle to get actual benefits out of their special skills and special knowledge (Yvain: Rationality is not that great; Eliezer: Why aren't "rationalists" surrounded by a visible aura of formidability?) and I would like to help them change that.

My interest is mainly in contributing more structured, useful content and also to band together with fellow LWers to practice and apply our rationalist skills. As a stretch goal I think that we could pick someone really evil as our enemy and take them down, just to show our superiority.... (read more)

Welcome! Because they don't project high status with their body language? Re: Taking out someone evil. Let's be rational about this. Do we want to get press? Will taking them out even be worthwhile? What sort of benefits from testing ideas against reality can we expect? I think humans who study rationality might be better than other humans at avoiding certain basic mistakes. But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable. Also, if the recent survey is to be believed, the average IQ at Less Wrong is very high. So if LW does accomplish something, it could very well be due to being smart rather than having read a bunch about rationality. (Sometimes I wonder if I like LW mainly because it seems to have so many smart people.)
Some LessWrongians believe it is.
That comment doesn't rule out selection effects, e.g. the IQ thing I mentioned.
IQ without study will not make you a super philosopher or super anything else.
Don't be too pessimistic to the newcomer, John. We're not completely useless. It doesn't grant any new abilities as such, admittedly, but if you're interested in making the right decision, then rationality is quite useful; to the extent that choosing correctly can help you, this is the place to be. Of course, how much the right choices can help you varies a bit, but it's hard to know how much you could achieve if you're biased, isn't it?
Hm. My correction on that would be: To the extent that your native decisionmaking mechanisms are broken and can be fixed by reading blog posts on Less Wrong, then this is the place to be. In other words, how useful the study of rationality is depends on how important and easily beaten the bugs Less Wrong tries to fix in human brains are. Many people are interested in techniques for becoming more successful and getting more out of life. Techniques range from reading The Secret to doing mindfulness meditation to reading Less Wrong. I don't see any a priori reason to believe that the ROI from reading Less Wrong is substantially higher than other methods. (Though, come to think of it, self-improvement guru Sebastian Marshall gives LW a rave review. So in practice LW might work pretty well, but I don't think that is the sort of thing you can derive from first principles, it's really something that you determine through empirical investigation.)
I'm evil by some people's standards. You'll have to get a little bit more specific about what you think constitutes evil. From what I've seen, real evil tends to be petty. Most grand atrocities are committed by people who are simply incorrect about what the right thing to do is.
You may follow HJPEV in calling world domination "world optimization", but running on some highly unreliable wetware means that grand projects tend to become evil despite best intentions, due to snowballing unforeseen ramifications. In other words, your approach seems to be lacking wisdom.
Jonathan Paulson:
You seem to be making a fully general argument against action.
Against any sweeping action without carefully considering and trying out incremental steps.
Thanks to all for the warm welcome and the many curious questions about my ambition! And special thanks to MugaSofer, Peterdjones, and jpaulsen for your argumentative support. I am very busy writing right now, and I hope that my posts will answer most of the initial questions. So I’ll rather use the space here to write a little more about myself.

I grew up a true Ravenclaw, but after grad school I discovered that Hufflepuff’s modesty and cheering industry also have their benefits when it comes to my own happiness. HPMOR made me discover my inner Slytherin because I realized that Ravenclaw knowledge and Hufflepuff goodness do not suffice to bring about great achievements. The word “ambition” in the first line of the comment is therefore meant in Professor Quirrell’s sense. I also have a deep respect for the principles of Gryffindor’s group (of which the names of A. Swartz and J. Assange have recently caught much mainstream attention), but I can’t find anything of that spirit in myself. If I have ever appeared to be a hero, it was because I accidentally knew something that was of help to someone.

@shminux: I love incremental steps and try to incorporate them into any of my planning and acting! My mini-retirement is actually such a step that, if successful, I’d like to repeat and expand.

@John_Maxwell_IV: Yay for empirical testing of rationality!

@OrphanWilde: “Don't be frightened, don't be sad, we'll only hurt you if you're bad.“ Or to put it into more utilitarian terms: if you are in the way of my ambition, for instance if I would have to hurt your feelings to accomplish any of my goals for the greater good, I would not hesitate to do what has to be done. All I want is to help people to be happy and to achieve their goals, whatever they are. And you’ll probably all understand that I might give a slight preference to helping people whose goals align with mine. ;-)

May you all be happy and healthy, may you be free from stress and anxiety, and may you achieve your
Anything more specific you have in mind?

Greetings. I am Error.

I think I originally found the place through a comment link on ESR's blog. I'm a geek, a gamer, a sysadmin, and a hobbyist programmer. I hesitate to identify with the label "rationalist"; much like the traditional meaning of "hacker", it feels like something someone else should say of me, rather than something I should prematurely claim for myself.

I've been working through the Sequences for about a year, off and on. I'm now most of the way through Metaethics. It's been a slow but rewarding journey, and I think the best thing I've taken out of it is the ability to identify bogus thoughts as they happen. (Identifying them is not always the same as correcting them, unfortunately.) Another benefit, not specifically from the sequences but from link-chasing, is the realization that successful mental self-engineering is possible; I think the tipping point for me there was Alicorn's post about polyhacking. The realization inspired me to try and beat the tar out of my akrasia, and I've done fairly well so far.

My current interests center around "updating efficiently." I just turned 30; I burnt my 20s establishing a living instead of learning all... (read more)

Welcome! It's acceptable and welcome to comment in the Sequences. The Recent Comments feature (link on the right sidebar, with distinct Recent Comments for the Main section and for the Discussion section) means that there's a chance that new comments on old threads will get noticed.
Welcome! Commenting on the Sequences isn't against any rules. You stand a chance of getting responses from those who watch the Recent Comments. However, in Discussion you'll see [SEQ RERUN] posts (which bring up old posts in the Sequences in chronological order) that encourage comments on the rerun, not the original. If you happen to be reading a post that's been recently re-run, you might get a better response in the rerun thread.

Hey everyone,

As I continue to work through the sequences, I've decided to go ahead and join the forums here. A lot of the rationality material isn't conceptually new to me, although much of the language is very much so, and thus far I've found it to be exceptionally helpful to my thinking.

I'm a 24 year old video game developer, having worked on graphics on a particular big-name franchise for a couple years now. It's quite the interesting job, and is definitely one of the realms in which I find the heady, abstract rationality tools to be extremely helpful. Rationality is what it is, and that seems to be acknowledged here, a fact I'm quite grateful for.

When I'm not discussing the down-to-earth topics here, people may find I have a sometimes anxiety-ridden attachment to certain religious ideas. Religious discussion has been extremely normal for me throughout my life, so while the discussion doesn't make me uncomfortable, my inability to come to answers that I'm happy with does, and has caused me a bit of turmoil outside of discussion. Obviously there is much to say about this, and much people may like to say to me, but I'd like to first get through all the sequences, get all of my quest... (read more)

Welcome! Glad to see you here. :D

I've been commenting for a few months now, but never introduced myself in the prior Welcome threads. Here goes: Student, electrical engineering / physics (might switch to math this fall), female, DC area.

I encountered LW when I was first linked to Methods a couple years ago, but found the Sequences annoying and unilluminating (after having taken basic psych and stats courses). After meeting a couple of LWers in real life, including my now-boyfriend Roger (LessWrong is almost certainly a significant part of the reason we are dating, incidentally), I was motivated to go back and take a look, and found some things I'd missed: mostly, reductionism and the implications of having an Occam prior. This was surprising to me; after being brought up as an anti-religious nut, then becoming a meta-contrarian in order to rebel against my parents, I thought I had it all figured out, and was surprised to discover that I still had attachments to mysticism and agnosticism that didn't really make any sense.

My biggest instrumental rationality challenge these days seems to be figuring out what I really want out of life. Also, dealing with an out-of-control status obsession.

To cover some typical LW clus... (read more)

I'm not quite sure what you're referring to by "the prominent belief patterns," but neither low confidence that signing up for cryonics results in life extension, nor low confidence that AI research increases existential risk, are especially uncommon here. That said, high confidence in those things is far more common here than elsewhere.
That is more or less what I am trying to say. It's just that I've noticed several people on Welcome threads saying things like, "Unlike many LessWrongers, I don't think cryonics is a good idea / am not concerned about AI risk."

Hi, I'm Edward and have been reading the occasional article on here for a while. I've finally decided to officially join as this year I'm starting to do more work on my knowledge and education (especially maths & science) and I like the thoughtful community I see here. I'm a programmer, but also have a passion for history. Just as I was finishing university, my thinking led me to abandon the family religion (many of my friends are still theists). I was going to keep thinking and exploring ideas but I ended up just living - now I want to begin thinking again.

Regards, Edward


I'm Abd ul-Rahman Lomax, introducing myself. I have six grandchildren, from five biological children, and I have two adopted girls, age 11 from China, and age 9 from Ethiopia.

I was born in 1944. Abd ul-Rahman is not my birth name; I accepted Islam in 1970. Not being willing to accept pale substitutes, I learned to read the Qur'an in Arabic by reading the Qur'an in Arabic.

Going back to my teenage years, I was at Cal Tech for a couple of years, sitting in Richard P. Feynman's two years of undergraduate physics classes, the ones that were made into the textbook. I had Linus Pauling for freshman chemistry, as well. Both of them helped create how I think.

I left Cal Tech to pursue a realm other than "science," but was always interested in direct experience rather than becoming stuffed with tradition, though I later came to respect tradition (and memorization) far more than at the outset. I became a leader of a "spiritual community," and a successor to a well-known teacher, Samuel L. Lewis, but was led to pursue many other interests.

I delivered babies (starting with my own) and founded a school of midwifery that trained midwives for licensing in Arizona.

Self-taught, I started an electronics d... (read more)

Welcome! That's a fascinating biography. I have been to one introductory Landmark seminar and wrote about the experience here.

Hello. I was brought here by HPMOR, which I finished reading today. Back in 1999 or something I found the site called sysopmind.com which had interesting reads on AI, Bayes theorem (that I didn't understand) and 12 virtues of rationality. I loved it for the beauty that reminded me of Asimov. I kept it in my bookmarks forever. (I knew him before he was famous? ;-))

I like SF (I have read many SF books, but most were from before 1990 for some reason) and I'm a computer nerd, among other things. I want to learn everything, but I have a hard time putting in the work. I am studying to become a psychologist, scheduled to finish in 2013. My favorite area of psychology is social psychology, especially how humans make decisions and how humans are influenced by biases or norms or high status people. I'm married and have a daughter born in 2011.

I like to watch tv-shows, but I have high standards. It is SF if it is based in science and rationality, otherwise it's just space drama/space action and I have no patience for it. I also like psychological drama, but it has to be realistic and believable. Please give recommendations if you like. (edited:) Also, someone could explain in what way Star Trek, Babylon 5 or Battlestar Galactica is really SF or Buffy is feminist, so I know if they are worth my while.

Of those, the only one I've seen is Star Trek. They can be a bit handwavey about the science sometimes; I liked it, but if you're looking for hard science then you might not. As far as recommendations go, may I recommend the Chanur series (books, not TV) by one C.J. Cherryh?
For realistic psychological drama, I haven't seen any show that beats Mad Men.
Not without knowing you well enough. Sherlock, on the other hand, should suit you just fine.
Ah, yes, thank you. I have seen Sherlock and loved it. Too few episodes though! =)

I highly doubt that I'll be posting articles or even joining discussions anytime soon, since right now, I'm just getting started on reading the sequences and exploring other parts of the site, and don't feel prepared yet to get involved in discussions. However, I'll probably comment on things now and then, so because of that (and, honestly, just because I'm a very social person), I figured I might as well post an introduction here.

I appreciate the way discussions are allowed to end here, because I've noticed in other debates that "tapping out" is seen as running away, and the main trait that gives me problems in my quest for rationality is that I'm inherently a competitive person, and get more caught up in the idea of "winning" than of improving my thinking. I'm working on this, but if I do get involved in discussions, the fact that they aren't seen as much as competitions here compared to other places should be helpful to me.

Anyway, I guess I'll introduce myself. I'm Alexandra, and I'm a seventeen year old high-school student in the United States (I applied to the camp in August, but I never received any news about it, so I assume that I wasn't acc... (read more)

I'm not affiliated with SIAI or the summer camps in any way, but IMO this sounds like a breakdown somewhere in the organization's communication protocols. If I were you, I wouldn't just assume that I wasn't accepted, I would ask for an explanation.
I'll contact them, then. I wasn't expecting to be accepted, but on the off chance that I was, it's hopefully not too late.
I like your description of yourself. You remind me a bit of myself, actually. I think I'd enjoy conversing with you. Though I have nothing on my mind at the moment that I feel like discussing. Hm, I kind of feel like my comment ought to have a bit more content than "you seem interesting" but that's really all I've got.

Hello LessWrong! (I posted this in the other July 2012 welcome thread as well. :P Though apparently it has too many comments at this point, or something to that effect.)

My name is Ryan and I am a 22 year old technical artist in the Video Game industry. I recently graduated with honors from the Visual Effects program at Savannah College of Art and Design. For those who don't know much about the industry I am in, my skill set is somewhere between a software programmer, a 3D artist, and a video editor. I write code to create tools to speed up workflows for the 3D things I or others need to do to make a game, or cinematic.

Now, I found lesswrong.com through the Harry Potter and the Methods of Rationality podcast. Up until that point I had never heard of rationalism as a current state of being... so far I greatly resonate with the goals and lessons that have come up in the podcast, and what I have seen about rationalism. I am excited to learn more.

I wouldn't go so far as to claim the label for myself as of yet, as I don't know enough and I don't particularly like labels for the most part. I also know that I have several biases; I feel like I know the reasons and causes for most, but I have not ... (read more)

I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism. Also, please don't call what we do here, "rationalism". Call it "rationality".
Welcome to LessWrong! There are a few of us here in the Game Industry, and a few more that like making games in their free time. I also played around with Houdini, though never produced anything worth showing.
Thanks for the welcome! Houdini can be a lot of fun- but without a real goal it is almost too open for anything of value to be easily made. Messing around in Houdini is a time sink without a plan. :) That said, I absolutely love it as a program.


My name is Trent Fowler. I started leaning toward scientific and rational thinking while still a child, thanks in part to a variety of aphorisms my father was fond of saying. Things like "think for yourself" and "question your own beliefs" are too general to be very useful in particular circumstances, but were instrumental in fostering in me a skepticism and respect for good argument that has persisted all my life (I'm 23 as of this writing). These tools are what allowed me to abandon the religion I was brought up in as a child, and to eventually begin salvaging the bits of it that are worth salvaging. Like many atheists, when I first dropped religion I dropped every last thing associated with it. I've since grown to appreciate practices like meditation, ritual, and even outright mysticism as techniques which are valuable and pursuable in a secular context.

What I've just described is basically the rationality equivalent of lifting weights twice a week and going for a brisk walk in the mornings. It's great for a beginner, but anyone who sticks with it long enough will start to get a glimpse of what's achievable by systematizing training and ramping ... (read more)

I am Yan Zhang, a mathematics grad student specializing in combinatorics at MIT (and soon to work at UC Berkeley after graduation) and co-founder of Vivana.com. I was involved with building the first year of SPARC. There, I met many cool people at CFAR, for which I'm now a curriculum consultant.

I don't know much about LW but have liked some of the things I have read here; AnnaSalamon described me as a "street rationalist" because my own rationality principles are home-grown from a mix of other communities and hobbies. In that sense, I'm happy to step foot into this "mainstream dojo" and learn your language.

Recently Anna suggested I may want to cross-post something I wrote to LW and I've always wanted to get to know the community better, so this is the first step, I suppose. I look forward to learning from all of you.

Welcome! It's good to see you here.
Haha hey QC. Remind me sometime to learn the "get ridiculously high points in karma-based communities and learn a lot" metaskill from you... you seem to be off to a good start here too ;)
Step 1 is to spend too much time posting comments. I'm not sure I recommend this to someone whose time is valuable. I would like to see you share your "street rationalist" skills here, though!


My name is Hannah. I'm an American living in Oslo, Norway (my husband is Norwegian). I am 24 (soon to be 25) years old. I am currently unemployed, but I have a bachelor's degree in Psychology from Truman State University. My intention is to find a job working at a day care, at least until I have children of my own. When that happens, I intend to be a stay-at-home mother and homeschool my children. Anything beyond that is too far into the future to be worth trying to figure out at this point in my life.

I was referred to LessWrong by some German guy on OkCupid. I don't know his name or who he is or anything about him, really, and I don't know why he messaged me randomly. I suppose something in my profile seemed to indicate that I might like it here or might already be familiar with it, and that sparked his interest. I really can't say. I just got a message asking if I was familiar with LessWrong or Harry Potter and the Methods of Rationality (which I was not), and if so, what I thought of them. So I decided to check them out. I thought the HP fanfiction was excellent, and I've been reading through some of the major series here for the past week or so. At one point I had a comment ... (read more)

Welcome here!

Hello LW,

Last Thursday, I was asked by User:rocurley if, in his absence, I wanted to organize a hiking event (originally my idea) for this week's DC metro area meetup; in the process I discovered I could not make posts, etc. here because I had zero karma. I chose to cancel the meetup on account of weather. I had registered my account previously, but realizing that I might have need to post here in the future, and that I had next to nothing to lose, I have decided to introduce myself finally.

I discovered LW through HPMOR, through Tvtropes, I believe. I've read some LW articles, but not others. Areas of interest include sciences (I have a BS in physics), psychology, personality disorders, some areas of philosophy, reading, and generally learning new things. One of my favorite books (if not /the/ favorite) is Godel, Escher, Bac