If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
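For instance, here are a few common Markdown constructs (just a sample of the standard syntax, which I'm fairly sure the comment box accepts; the "Help" primer has the authoritative list):

*single asterisks* produce italics, and **double asterisks** produce bold
[link text](http://example.com) produces a hyperlink
> a leading angle bracket produces a quoted block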

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.



Hello! I call myself Atomliner. I'm a 23 year old male Political Science major at Utah Valley University.

From 2009 to 2011, I was a missionary for the Mormon Church in northeastern Brazil. In the last month I was there, I was living with another missionary who I discovered to be a closet atheist. While I was trying to help him rediscover his faith, he had me read The God Delusion, which obliterated my own. I can't say that book was the only thing that enabled me to leave behind my irrational worldview, as I've always been very intellectually curious and resistant to authority. My mind had already been a powder keg long before Richard Dawkins arrived with the spark to light it.

Needless to say, I quickly embraced atheism and began to read everything I could about living without belief in God. I'm playing catch-up, trying to expand my mind as fast as I can to make up for the lost years I spent blinded by religious dogma. Just two years ago, for example, I believed homosexuality was an evil that threatened to destroy civilization, that humans came from another planet, and that the Lost Ten Tribes were living somewhere underground beneath the Arctic. Needless to say, my re-education process has ... (read more)

Welcome to LW! Don't worry about some of the replies you're getting; polls show we're overwhelmingly atheist around here.

This^ That said, my hypothetical atheist counterpart would have made the exact same comment. I can't speak for JohnH, but I can see someone with experience of Mormons not holding those beliefs being curious regardless of affiliation. And, of course, the other two - well, three now - comments are from professed atheists. So far nobody seems willing to try and reconvert him or anything.
Welcome to LessWrong! Good for you! You might want to watch out for assuming that everyone had a similar experience with religion; many theists will find this very annoying, and this seems to be a common mistake among people with your background-type. Huh. I must say, I found the GD pretty terrible (despite reading it multiple times to be sure), although I suppose that powder-keg aspect probably accounts for most of your conversion (deconversion?) I'm curious, could you expand on what you found so convincing in The God Delusion? I think we can all say that :)

Welcome to LessWrong!

Thank you! :)

Good for you! You might want to watch out for assuming that everyone had a similar experience with religion; many theists will find this very annoying, and this seems to be a common mistake among people with your background-type.

I apologize. I had no idea I was making this false assumption, but I was. I'm embarrassed.

I'm curious, could you expand on what you found so convincing in The God Delusion?

I replied to JohnH about this. I don't know if I could go into a lot of detail on why it was convincing, it was almost two years ago that I read it. But what really convinced me to start doubting my religion was when I prayed to God very passionately asking him whether or not The God Delusion was true and after I felt this tingly warm sensation telling me it was. I had done the same thing with The Book of Mormon multiple times and felt this same sensation, and I was told in church that this was the Holy Spirit telling me that it was true. I had been taught I could pray about anything and the Spirit would tell me whether or not it was true. After being told by the Spirit that The God Delusion was true, I decided that the only explanation is that what I thought of as the Spirit was just happening in my head and that it wasn't a sure way of finding knowledge. It was a very dramatic experience for me.

What kind of theist are you, personal or more of the general theism (which includes deism) variety? Any holy textstring you believe has been divinely inspired?
About as Deist as you can be while still being technically Christian. I'd be inclined to say there's something in all major religions, simply for selection reasons, but the only thing I'd endorse as "divinely inspired" as such would be the New Testament? I guess? Even that is filtered by cultural context and such, obviously.
If you can readily articulate your reasons for evaluating the New Testament differently from other scriptures, I'm interested. (It's possible that you've already done so, perhaps even in response to this question from me; feel free to point me at writeups elsewhere if you wish.)
How many of your younger Mormon peers and friends do you think are secretly atheists?
I've only had two of my Mormon peers/friends/relatives reveal to me after knowing them for a substantial amount of time that they are atheists. Based on that, I would guess the percentage of active Latter-day Saints that are closet atheists is pretty low, around 1%-3%?
That implies that you have more-or-less a hundred close friends/peers/relatives whom you have known for a substantial amount of time and would expect to tell you if they were closet atheists.
Eliezer Yudkowsky (10y):
Mormons have lots of friends, and lots of relatives.
Over twenty-three years the numbers add up. I think I could easily find more than a hundred active Latter-day Saints just counting members of my extended family that I routinely encounter every year.
I am Mormon, so I am curious where you got the beliefs that homosexuality would destroy civilization, that humans came from another planet, and that the Ten Tribes live underground beneath the Arctic. Those are not standard beliefs of Mormons (see for instance the LDS Church's Mormonsandgays.org), and only one of those have I ever even encountered before (Ten Tribes beneath the Arctic), but I couldn't figure out where that belief comes from or why anyone would feel the need to believe it. I also have to ask, the same as MugaSofer: could you explain how The God Delusion obliterated your faith? It seemed largely irrelevant to me.
I have visited mormonsandgays.org. That came out very recently. It seems that the LDS Church is now backing off of their crusade against homosexuality and same-sex marriage. In the middle of the last decade, though, I can assure you what I was taught in church and in my family was that civilizations owed their stability to the prevalence of traditional marriages. I was told that Sodom and Gomorrah were destroyed because homosexuality was not being penalized, and because of the same crime the Roman Empire collapsed. It is possible that these teachings, while not official doctrine, were inspired by the last two paragraphs of the LDS Church's 1995 proclamation The Family. In the second to last paragraph it says: I have a strong feeling my interpretation of this doctrine is also held by most active believing American Mormons, having lived among them my entire life.

I don't think that most Mormons believe that mankind came from another planet, but I started believing this after I read something from the Journal of Discourses, in which Brigham Young stated: This doctrine has for good reason been de-emphasized by the LDS Church, but never repudiated. I read this and other statements made by Brigham Young and believed it. I did believe he was a prophet of God, after all.

I began to believe that the Ten Tribes were living underneath the Arctic after reading The Final Countdown by Clay McConkie, which details the signs that will precede the Second Coming. In the survey he apparently conducted of active Latter-day Saints, around 15% believed the Ten Tribes were living somewhere underground in the north. This belief is apparently drawn from an interpretation of Doctrine & Covenants 133:26-27, which states: I liked the interpretation that this meant there was a subterranean civilization of Israelites and believed it was true.

I apologize that I gave examples of these extraordinary former beliefs right after I wrote "I'm playing catch-up, trying to expand my mind as fast as I


My name is Sandy and despite being a long time lurker, meetup organizer and CFAR minicamp alumnus, I've got a giant ugh field around getting involved in the online community. Frankly it's pretty intimidating and seems like a big barrier to entry - but this welcome thread is definitely a good start :)

IIRC, I was linked to Overcoming Bias through a programming pattern blog in the few months before LW came into existence, and subsequently spent the next three months of my life doing little else than reading the sequences. While it was highly fascinating and seemed good for my cognitive health, I never thought about applying it to /real life/.

Somehow I ended up at CFAR's January minicamp, and my life literally changed. After so many years, CFAR helped me finally internalize the idea that /rationalists should win/. I fully expect the workshop to be the most pivotal event in my entire life, and would wholeheartedly recommend it to absolutely anyone and everyone.

So here's to a new chapter. I'm going to get involved in this community or die trying.

PS: If anyone is in the Kitchener/Waterloo area, they should definitely come out to UW's SLC tonight at 8pm for our LW meetup. I can guarantee you won't be disappointed!

Hello, Less Wrong; I'm Laplante. I found this site through a TV Tropes link to Harry Potter and the Methods of Rationality about this time last year. After I'd read through that as far as it had been updated (chapter 77?), I followed Yudkowsky's advice to check out the real science behind the story and ended up here. I mucked about for a few days before finding a link to yudkowsky.net, where I spent about a week trying to learn what exactly Bayes was all about. I'm currently working my way through the sequences, just getting into the quantum physics sequence now.

I'm currently in the dangerous position of having withdrawn from college, and my productive time is spent between a part-time job and this site. I have no real desire to return to school, but I realize that entry into any sort of psychology/neuroscience/cognitive science field without a Bachelor's degree - preferably more - is near impossible.

I'm aware that Yudkowsky is doing quite well without a formal education, but I'd rather not use that as a general excuse to leave my studies behind entirely.

My goals for the future are to make my way through MIRI's recommended course list, and the dream is to do my own research in a related field. We'll see how it all pans out.

my productive time is spent between a part-time job and this site.

Perhaps I'm reading a bit much into a throwaway phrase, but I suggest that time spent reading LessWrong (or any self-improvement blog, or any blog) is not, in fact, productive. Beware the superstimulus of insight porn! Unless you are actually using the insights gained here in a measurable way, I very strongly suggest you count LessWrong reading as faffing about, not as production. (And even if you do become more productive, observe that this is probably a one-time effect: Continued visits are unlikely to yield continual improvement, else gwern and Alicorn would long since have taken over the world.) By all means be inspired to do more work and smarter work, but do not allow the feeling of "I learned something today" to substitute for Actually Doing Things.

All that aside, welcome to LessWrong! We will make your faffing-about time much more interesting. BWAH-HAH-HAH!

Learning stuff can be pretty useful. Especially stuff extremely general in its application that isn't easy to just look up when you need it, like rationality. If the process of learning is enjoyable, so much the better.
I think you may have misinterpreted a critical part of the sentence: 'do not allow the FEELING of "I learned something today" to substitute for Actually Doing Things.' Insight porn, so to speak, is that way because it makes you feel good, like you can Actually Do Things and like you have the tools to now Actually Do Things. But if you don't get up and Actually Do Things, you have only learned how to feel like you can Actually Do Things, which isn't nearly as useful as it sounds.
Sure, I agree. IMO, any self-improvement effort should be intermixed with lots of attempts to accomplish object-level goals so you can get empirical feedback on what's working and what isn't.

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading. Or at least stop where the many worlds musings start. The whole thing is way too verbose and controversial for the number of useful points it makes. Your time is much better spent reading about cognitive biases. If you want epistemology, try the new sequence.

Eliezer Yudkowsky (10y):
Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence. Given those particular circumstances, can I ask that you stop with that particular bit of helpful advice?

Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence.

Do you have a solid idea of how many technical readers get here via HPMOR but become uninterested in working for MIRI after reading the QM sequence? If not, isn't this potentially just the selection effect?

EY can rationally prefer the certain evidence of some Mihaly-Barasz-caliber researchers joining when exposed to the QM sequence over speculation about whether the loss of Mihaly Barasz (had he not read the QM sequence) would be outweighed by even more / better technical readers becoming interested in joining MIRI, taking into account the selection effect. Personally, I'd go with what has been proven/demonstrated to work as a high-quality attractor.
Eliezer Yudkowsky (10y):
Yep. I also tend to ignore nontechnical folks along the lines of RationalWiki getting offended by my thinking that I know something they don't about MWI. Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people. Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of the QM sequence. Mihaly said the rest of the Sequences seemed interesting but lacked sufficient visible I-wouldn't-have-thought-of-that nature. This is very plausible to me - after all, the Sequences do indeed seem to me like the sort of thing somebody might just think up. I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW. It's a pity I'll probably never have time to write up TDT.

I have a PhD in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing. You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of Copenhagen implies many worlds. This ignores a vast array of other interpretations.

It's also the sort of argument that seems very likely to sway someone with an intro class in college (one or two semesters of a Copenhagen-based shut-up-and-calculate approach), precisely because having seen Copenhagen and nothing else they 'know just enough to be dangerous', as it were.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read. If I have issues with the area I know the most about, how much should I trust the rest? Others' mileage may vary.

I have a phd in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing.

Actually, attempting to steelman the QM Sequence made me realize that the objective collapse models are almost certainly wrong, due to the way they deal with the EPR correlations. So the sequence has been quite useful to me.

On the other hand, it also made me realize that the naive MWI is also almost certainly wrong, as it requires uncountably many worlds to be created in any finite interval of time (unless I totally misunderstand the MWI version of radioactive decay, or any emission process for that matter). It has other issues, as well. Hence my current leanings toward some version of RQM, which EY seems to dislike almost as much as his straw Copenhagen, though for different reasons.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read.

Right, I've had a similar experience, and I heard it voiced by others.

As a result of re-examining EY's take on epistemology of truth, I ended up drifting from the realist position (map vs territory) to an instrumentalist position (models vs inputs&outputs... (read more)

How is that any more problematic than doing physics with real or complex numbers in the first place?
I defected from physics during my Master's, but this is basically the impression I had of the QM sequence as well.

Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people.

That sounds like reasonable evidence against the selection effect.

Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of it.

I strongly recommend against both the "advises newcomers to skip the QM sequence -> can't grasp technical argument for MWI" and "disagrees with MWI argument -> poor technical skill" inferences.


I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW.

I'm no IMO gold medalist (which really just means I'm giving you explicit permission to ignore the rest of my comment) but it seems to me that a standard understanding of QM is necessary to get anything out of the QM sequence.

It's a pity I'll probably never have time to write up TDT.

Revealed preferences are rarely attractive.

Revealed preferences are rarely attractive.

Adds to "Things I won't actually get put on a T-shirt but sort of feel I ought to" list.

As others noted, you seem to be falling prey to the selection bias. Do you have an estimate of how many "IMO gold medalists" gave up on MIRI because its founder, in defiance of everything he wrote before, confidently picks one untestable interpretation from a bunch and proclaims it to be the truth (with 100% certainty, no less, Bayes be damned), despite (or maybe due to) not even being an expert in the subject matter? EDIT: My initial inclination was to simply comply with your request, probably because I grew up being taught deference to and respect for authority. Then it struck me as one of the most cultish things one could do.

with 100% certainty, no less, Bayes be damned

Is this an April Fool's joke? He says nothing of the kind. The post which comes closest to this explicitly says that it could be wrong, but "the rational probability is pretty damned small." And counting the discovery of time-turners, he's named at least two conceivable pieces of evidence that could change that number.

What do you mean when you say you "just don't put nearly as much confidence in it as you do"?

The number of IMO gold medalists is sufficiently low, and the probability of any one of them having read the QM sequence is sufficiently small, that my own estimate would be less than one regardless of X. (I don't have a good model of how much more likely an IMO gold medalist would be to have read the QM sequence than any other reference class, so I'm not massively confident.)
Eliezer Yudkowsky (10y):
Well, I'm sorry to say this, but part of what makes authority Authority is that your respect is not always required. Frankly, in this case Authority is going to start deleting your comments if you keep on telling newcomers who post in the Welcome thread not to read the QM sequence, which you've done quite a few times at this point unless my memory is failing me. You disagree with MWI. Okay. I get it. We all get it. I still want the next Mihaly to read the QM Sequence and I don't want to have this conversation every time, nor is it an appropriate greeting for every newcomer.

Sure, your site, your rules.

Just to correct a few inaccuracies in your comment:

You disagree with MWI.

I don't, I just don't put nearly as much confidence in it as you do. It is also unfortunately abused on this site quite a bit.

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer", only those who appear to be stuck on it. Surely Mihaly had no difficulties with it, so none of my warnings would interfere with "still want the next Mihaly to read the QM Sequence".

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer"

The claim you made that prompted the reply was:

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading.

It is rather disingenuous to then express exaggerated 'let alone' rejections of the reply "nor is it an appropriate greeting for every newcomer".

Uhuh. That said, kudos to you for remaining calm and reasonable.
You have a point, it's easy to read my first comment rather uncharitably. I should have been more precise: "My standard advice to all newcomers [who mention difficulties with the QM sequence]..." which is much closer to what actually happens. I don't bring it up out of the blue every time I greet someone.
Hmm, the above got a lot of upvotes... I have no idea why.

Hmm, the above got a lot of upvotes... I have no idea why.

Egalitarian instinct. Eliezer is using power against you, which drastically raises the standards of behavior expected from him while doing so---including less tolerance of him getting things wrong.

Your reply used the form 'graceful' in a context where you would have been given a lot of leeway even to be (overtly) rude. The corrections were portrayed as gentle and patient. Whether the corrections happen to be accurate or reasonable is usually almost irrelevant for the purpose of determining people's voting behavior this far down into a charged thread.

Note that even though I approve of Eliezer's decision to delete comments of yours disparaging the QM sequence to newcomers I still endorse your decision to force Eliezer to use his power instead of deferring to his judgement simply because he has the power. It was the right decision for you to make from your perspective and is also a much more desirable precedent.

I deliberately invoke this tactic on occasion in arguments on other people's turf, particularly where the rules are unevenly applied. I was once accused by an acquaintance who witnessed it of being unreasonably reasonable.

It's particularly useful when moderators routinely take sides in debates. It makes it dangerous for them to use their power to shut down dissent.

Nailed it on the head. As my cursor began to instinctively hover over the "upvote" button on shminux's comment, I caught myself and thought, why am I doing this? And while I didn't come to your exact conclusion, I realized my instinct had something to do with EY's "use of power" and shminux's gentle reply. Some sort of underdog quality that I didn't yet take the time to assess but that my mouse-using hand wanted badly to blindly reward. I'm glad you pieced out the exact reasoning behind the scenes here. Stopping and taking a moment to understand behavior and then correct based on that understanding is why I am here. That said, I really should think for a long time about your explanation before voting you up, too!
If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down. It doesn't appear to be happening.

If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down.

A quirk (and often a bias) humans have is that we tend to assume that just because a social behavior or human instinct can be explained, it must thereby be invalidated. Yet everything can (in principle) be explained, and there are still things that are, in fact, noble. My parents' love for myself and my siblings is no less real because I am capable of reasoning about the inclusive fitness of those peers of my ancestors that happened to love their children less.

In this case the explanation given was, roughly speaking "egalitarian instinct + politeness". And personally I have to say that the egalitarian instinct is one of my favorite parts of humanity and one of the traits that I most value in those I prefer to surround myself with (Rah foragers!).

All else being equal the explanation in terms of egalitarian instinct and precedent setting regarding authority use describes (what I consider to be) a positive picture and in itself is no reason to downvote. (The comment deserves to... (read more)

I believe that I already knew I was acting on egalitarian instinct when I upvoted your comment.
They could just be a weird sort of lazy whereby they don't scroll back up and change anything. Or maybe they never see his post. Or something else. I don't think the %positive-not-going-down-yet is any indication that wedrifid's comment is not right.
This is the second time [http://lesswrong.com/lw/h3p/welcome_to_less_wrong_5th_thread_march_2013/8osv] you mention shminux having talked about QM for years. But I can't find any comments [http://lesswrong.com/user/shminux/overview/?after=t1_4ktb] or posts [http://lesswrong.com/user/shminux/submitted/?count=28&after=t3_8xn] he's made before July 2011. Does he have a dupe account or something else I don't know about?
Since you are asking... July 2011 is right for the join date, and some time later is when I voiced any opinion related to the QM sequence and MWI (I did read through it once and browsed now and again since). No, I did not have another account before that; as a long-term freenode ##physics IRC channel moderator, I dislike being confused about users' previous identities, so I don't do it myself (hence the silly nick chosen a decade or so ago, which has lost all relevance by now). On the other hand, I don't mind people wanting a clean slate with a new nick, just not using socks to express a controversial or karma-draining opinion they are too chicken to have linked to their main account.

I also encourage you to take whatever wedrifid writes about me with a grain of salt. While I read what he writes and often upvote when I find it warranted, I quite publicly announced here about a year ago that I will not be replying to any of his comments, given how counterproductive it had been for me. (There are currently about 4 or 5 people on my LW "do-not-reply" list.) I have also warned other users once or twice, after I noticed them in a similarly futile discussion with wedrifid. I would be really surprised if this did not color his perception and attitude. It certainly would for me, were the roles reversed.
I'm also interested in this. Hopefully it's not an overt lie or something.
I don't keep an exact mental record of the join dates. My guess from intuitive feel was "2 years". It's April 2013. It was July 2011 when the account joined. If anything you have prompted me to slightly increase my confidence in the calibration of my account-joining estimator. If the subject of how long user:shminux has been complaining about the QM sequence ever becomes relevant again I'll be sure to use Wei Dai's script, search the text and provide a link to the exact first mention. In this case, however, the difference hardly seems significant or important. I doubt it. If so I praise him for his flawless character separation.
Thanks for clarifying. I asked not because the exact timing is important but because the overstatement seemed uncharacteristic (albeit modest), and I wasn't sure whether it was just offhand pique or something else. (Also, if something funny had been going on, it might've explained the weird rancour/sloppiness/mindkilledness in the broader thread.)
Just an error. Note that in the context there was no particular pique. I intended acknowledgement of established disrespect [http://www.overcomingbias.com/2008/09/disagreement-is.html], not conveyance of additional disrespect. The point was that I was instinctively (as well as rationally) motivated to support shminux despite also approving of Eliezer's declared intent, which illustrates the strength of the effect. Fortunately nothing is lost if I simply remove the phrase you quote entirely. The point remains clear even if I remove the detail of why I approve of Eliezer's declaration. The main explanation there is just that incarnations of this same argument have been cropping up with slight variations for (what seems like) a long time. As with several other subjects there are rather clear battle lines drawn and no particular chance of anyone learning anything. The quality of the discussion tends to be abysmal, riddled with status games and full of arguments that are sloppy in the extreme. As well as the problem of persuasion through raw persistence.
Bluntly, IMO gold medalists who can conceive of working on something 'crazy' like FAI would be expected to better understand the QM sequence than that. Even more so they would be expected to understand the core arguments better than to get offended by my having come to a conclusion. I haven't heard from the opposite side at all, and while the probability of my hearing about it might conceivably be low, my priors on it existing are rather lower than yours, and the fact that I have heard nothing is also evidence. Carl, who often hears (and anonymizes) complaints from the outside x-risk community, has not reported to me anyone being offended by my QM sequence. Smart people want to be told something smart that they haven't already heard from other smart people and that doesn't seem 'obvious'. The QM sequence is demonstrably not dispensable for this purpose - Mihaly said the rest of LW seemed interesting but insufficiently I-wouldn't-have-thought-of-that. Frankly I worry that QM isn't enough but given how long it's taking me to write up the Lob problem, I don't think I can realistically try to take on TDT.
Again, you seem to be generalizing from a single example, unless you have more data points than just Mihaly.
Note that the original text was "gold," not "good". I assume IMO is the International Mathematical Olympiad [http://en.wikipedia.org/wiki/International_Mathematical_Olympiad](1). Not that this in any way addresses or mitigates your point; just figured I'd point it out. (1) If I've understood the wiki article, ~35 IMO gold medals are awarded every year.
Thanks, I fixed the typo.
The QM Sequence is two parts: (1) QM for beginners; (2) philosophy of science on believing things when evidence is equipoise (or absent) - pick the simpler hypothesis. I got part (1) from reading Dancing Wu-Li Masters [http://www.amazon.com/Dancing-Wu-Li-Masters-Overview/dp/0060959681], but I can clearly see the value to readers without that background. But teaching foundational science is separate from teaching Bayesian rationalism. The philosophy of the second part is incredibly controversial. Much more than you acknowledge in the essays, or acknowledge now. Treating the other side of any unresolved philosophical controversy as if it is stupid, not merely wrong, is excessive and unjustified. In short, the QM sequence would seriously benefit from the sort of philosophical background stuff that is included in your more recent essays, including some more technical discussion of the opposing position.

If you learned quantum mechanics from that book, you may have seriously mislearned it. It's actually pretty decent describing everything up to but excluding quantum physics. When it comes to QM, however, the author sacrifices useful understanding in favor of mysticism.

If you want to learn things/explore what you want to do with your life, take a few varied courses at Coursera [https://www.coursera.org/].
Hi, Laplante. Why do you want to enter psychology/neuroscience/cognitive science? I ask this as someone who is about to graduate with a double major in psychology/computer science and is almost certain to go into computer science as my career.

It's a forum where taking atheism for granted is widespread, and the 10% of non-atheists have some idea of what the 90% are thinking. Being atheist isn't part of the official charter, but you can make a function call to atheism without being questioned by either the 10% or the 90% because everyone knows where you're coming from. If I was on a 90% Mormon forum which theoretically wasn't about Mormonism but occasionally contained posters making function calls to Mormon theology without further justification, I would not walk in and expect to be able to make atheist function calls without being questioned on it. If I did, I wouldn't be surprised to be downvoted to oblivion if that forum had a downvoting function. This isn't groupthink; it's standard logical courtesy. When you know perfectly well that a supermajority of the people around you believe X, it's not just silly but logically rude to ask them to take Y as a premise without defending it. I would owe this hypothetical 90%-Mormon forum more acknowledgement of their prior beliefs than that.

I regard all of this as common sense.

As part of said minority, I fully endorse this comment.

I like your use of "function calls" as an analogy here, but I don't think it's a good idea; you could just as easily say "use concepts from" without alienating non-programmer readers.

I understand it now knowing that it's a programming reference (I program), but I wouldn't have recognized it otherwise. Thanks for the clarification.
Since I'm momentarily feeling remarkably empowered about my own life, I'm going to take this chance to officially bow out for a few weeks. We all knew it was coming—it's the typical reaction for an overwhelmed newbie like me, I know, and I'm always very determined not to give up, but I really think I had better take a break. My last week has hardly involved anything except LW and related sites, and we all know that having one's mind blown is a very strenuous task. I've learned a lot, and I will definitely be back after four weeks or so.

I've decided I'm not going to let myself be pressured into expressly arguing in favor of religion. I've said several times I'm not interested in that, and that I don't have these supposed strong arguments in favor of religion. If you guys want a good theist, check out William Lane Craig [http://commonsenseatheism.com/?p=392]. When I come back I will, however, explain my own beliefs and why I can't fully accept the LW way of thinking. Please don't misunderstand what I'm saying: I think you guys are right, more so than any group of people I've ever met.

But for now I'm going to shelve philosophy and take advantage of my situation. In the next four weeks I'm going to a) learn Lambda Calculus and b) study Arabic intensively. May the Force be with you 'til we meet again.

For the record, I once challenged Craig to a Bloggingheads but he refused.

I'm a male senior in high school. I found this site in November or so, and started reading the sequences voraciously.

I feel like I might be a somewhat atypical LessWrong reader. For one, I'm on the young side. Also, if you saw me and talked to me, you would probably not guess that I was a "rationalist" from the way I act/dress but, I don't know, perhaps you might. When I first found this website, I was pretty sure I wanted to be an art major, now I'm pretty sure I want to be an art/comp sci double major and go into indie game development (correlation may or may not imply causation). I also love rap music (and not the "good" kind like Talib Kweli) and I read most of the sequences while listening to Lil Wayne, Lil B, Gucci Mane, Future, Young Jeezy, etc. I occasionally record my own terrible rap songs with my friends in my friend's basement. Before finding this site, the word "rational" had powerful negative affect around it. Science was far and away my least favorite subject in school. I have absolutely no interest at the moment in learning any science or anything about science, except for maybe neuroscience, and maybe metaphysics. I've always found t... (read more)

lulz. You have my attention. You sound like quite an intelligent and awesome person. (bad rap, art, rationality. only an interesting person could have such a nonstandard combination of interests. Boring people come prepackaged...) Glad to have you around. It's only a matter of time ;) I remember that feeling. I'm more skeptical now, but I can't help but notice more awesomeness in my life due to LW. It really is quite cool isn't it? This is the part that's been elusive to me. What kind of things are you doing? How do you know you are actually getting benefits and not just producing that "this is awesome" feeling which unfortunately often gets detached from reality? keep your identity small [http://paulgraham.com/identity.html]. Where do you live? Do you attend meetups?
Thank you :) I guess essentially what I do is try to read self-help stuff. I try to spend half my "work time", so to speak, doing this, and half working on creative projects. I've read both books and assorted stuff on the internet. My goal for April is to read a predetermined list of six self-help books. I'm currently on track for this goal. So far I've read:

* part of the massive tome that is Psychological Self Help by Clayton Tucker-Ladd
* Success - How We Can Reach Our Goals by Heidi Halverson
* How to Talk to Anyone by Leil Lowndes
* 59 Seconds by Richard Wiseman
* Thinking Things Done by PJ Eby
* the first 300 pages of Feeling Good by David Burns (the last 200 seem to be mostly about the chemical nature of depression and have little practical value, so I'm saving them for later)

If meditation books count:

* Mindfulness in Plain English by Henepola Gunaratana
* most of Mastering the Core Teachings of the Buddha by Daniel Ingram

I also have been keeping a diary, which is something I've wanted to get in the habit of all my life but have never been able to do. Every day, in addition to summarizing the day's events, I rate my happiness out of ten, my productivity out of ten, and speculate on how I can do better. I've only been keeping the diary a month, which is too small of a sample size. However, during this time, I had three weeks off for spring break, and I told myself that I would work as much as I could on self-improvement and personal projects. I ended up not really getting that much done, unfortunately. However, I managed to put in a median of... probably about five hours every day, and more importantly, I was in a fantastic mood the whole break. It might even have been the best mood I've been in for an extended time in the last few years. In the past, every time I have had a break from school, I ended up in a depressed, lonely, lethargic state, where I surfed the internet for hours on end, in which I paradoxically want to go back to sch
I think you need to talk to daenerys [http://lesswrong.com/user/daenerys], IIRC, she runs the Ohio stuff. Actually doing, for one, though it sounds like you're doing that too. yet. Some day you will want to take over the world, and then you will need to talk to big winners. I've had this problem, too (I've got so much free time, why is it all getting pissed away?). Have you tried beeminder [https://www.beeminder.com/]? I cannot overstate how much that site is just conscientiousness in a can, so to speak. Thanks for the list. A variety of evidence is making me want to check out the self-help community more closely.
I have yet to read a self-help book that doesn't emphatically state "If you do not take care to apply these principles as much as you can in your daily life, you will not gain anything from reading this book." So, yeah, I agree, and by "reading self-help" I mean "reading self-help and applying the knowledge". I've seen it, and checked it out a little, but I can't think of any way to quantify the stuff that I have problems getting done. Also I wish there was an option to donate money to charity, but I guess they have to make money somehow.
I have yet to see this. Which major LW contributor is advocating racism, and where can I read about it?
I'm sorry, I can't really remember any specific links to discussions, and I don't really know exactly who believes in what ideas, but I feel like there are a lot of people here, and especially people who show up in the comments, who believe that certain races are inherently more or less intelligent/violent/whatever on average than others. I specifically remember nyan_sandwich saying that he believes this, calling himself a "proto-racist", but that's the only example I can recall. The "reactionary" philosophy is discussed a lot here too, and I feel like most people who subscribe to this philosophy are racist. Mencius Moldbug is the biggest name in this, I believe. Also I've seen a lot of links to this site http://isteve.blogspot.com/ which seems to basically be arguing in favor of racism. This blog post http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/ contains a discussion of these issues.

The one basically follows from the other, I think. This isn't a reactionary site by any means; the last poll showed single-digit support for the philosophy here, if it's fair to consider it a political philosophy exclusive with liberalism, libertarianism, and/or conservatism. However, neoreaction/Moldbuggery gets a less hostile reception here than it does on most non-reactionary sites, probably because it's an intensely contrarian philosophy and LW seems to have a cultural fondness for clever contrarians, and we do have several vocal reactionaries among our commentariat. Among them, perhaps unfortunately, are most of the people talking about race.

It's also pretty hard to dissociate neoreaction from... let's say "certain hypotheses concerning race", since "racism" is too slippery and value-laden a term and most of the alternatives are too euphemistic. The reasons for this seem somewhat complicated, but I think we can trace a good chunk of them to just how much of a taboo race is among what Moldbug calls the Cathedral; if your basic theory is that there's this vast formless cultural force shaping what everyone can and can't talk about without being brande... (read more)

If someone were to correctly point out genetic differences between groups (let's assume correctness as a hypothetical), would that be - in your opinion - 1) racist and reprehensible, 2) racist but not reprehensible, or (in the hypothetical) 3) not racist? Would your opinion differ if those genetic differences were relating to a) IQ, or b) lactose intolerance?
Yes to the second question, in that I would give the answer of 2 for A and 3 for B. Racism has at least three definitions colloquially that I can think of:

* 1: A belief that there is a meaningful way to categorize human beings into races, and that certain races have more or less desirable characteristics than others. This is the definition that Wikipedia uses. Not that many educated people are racist according to this definition, I think.
* 2: The tendency to jump to conclusions about people based on their skin color, which can manifest as a consequence of racism-1, or unconsciously believing in racism-1. Pretty much everyone is racist to some extent according to this definition.
* 3: Contempt or dislike of people based on their skin color, i.e. "I hate Asians". You could further divide this into consciously and unconsciously harboring these beliefs if you wanted.

In the sexism debate, these three definitions are sort of given separate names: "belief in differences between the sexes", "sexism", and "misogyny" respectively. Racism-3 seems to be pretty clearly evil, and racism-2 causes lots of suffering, but racism-1 basically by definition cannot be evil if it is a true belief and you abide by the Litany of Tarski or whatever. But because they have the same name, it gets confusing. Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is. Own up to your beliefs. (I am not racist-1, for the record.)

Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is.

"What it fucking is" is a straw man. ie. "and that certain races have more or less desirable characteristics than others" is not what the people you are disparaging are likely to say, for all that it is vaguely related.

Own up to your beliefs.

Seeing this exhortation used to try to shame people into accepting your caricature as their own position fills me with the same sort of disgust and contempt that you have for racism. Failure to "own up" and profess their actual beliefs is approximately the opposite of the failure mode they are engaging in (that of not keeping their mouth shut when socially expedient). In much the same way suicide bombers are not cowards.

According to Wikipedia, "racism is usually defined as views, practices and actions reflecting the belief that humanity is divided into distinct biological groups called races and that members of a certain race share certain attributes which make that group as a whole less desirable, more desirable, inferior or superior." This definition appears to exactly match the beliefs of the people I am talking about. I guess it's all in how you define superior, inferior, more desirable, etc. But most of the discourse revolves around intelligence, which is a pretty important trait, and I don't think these people believe that black people, for example, have traits that make up for their supposed lack of intelligence, or that Asians have flaws that make up for their supposed above-average intelligence (and no, dick size doesn't count). In particular, these people seem to believe that an innate lack of intelligence is to blame for the fact that so many African countries are in total chaos, and unless you believe in a soul or something, it's hard to imagine that a race physically incapable of sustaining civilization is not in some meaningful way "inferior".

If you hold a belief that is described with a name that has negative connotations, you have two options. You can either hide behind some sort of euphemism, or you can just come out and say "yes I do believe that, and I am proud of it". I think the second choice is much more noble, and if I were to adopt these beliefs, I would just go ahead and describe myself as a racist.

It's not really a major issue though, and I probably shouldn't have used the word "fucking" in my previous post. But anyway, since the term is completely accurate, the only reason I can think of to not call the people I'm describing racists is because it might offend them, which is deeply ironic.
There is also a third option: Keep your identity small [http://www.paulgraham.com/identity.html] and pick your battles [http://www.paulgraham.com/say.html]. Just because the society happens to disagree with you in one specific topic, that is no reason to make that one topic central to your life, and to let all other people define you by that one topic regardless of what other traits or abilities you have -- which will probably happen if you are open about that disagreement.

Imagine that you live in a society where people believe that 2+2=5, and they also believe that anyone who says 2+2=4 is an evil person and must be killed. (There seems to be a good reason for that. A hundred years ago there was an evil robot who destroyed half of the planet, and it is known that the robot believed that 2+2=4. Because this is the most known fact about the robot, people concluded that believing that 2+2=4 must be the source of all evil, and needs to be eradicated from the society. We don't want any more planetary destruction, do we?)

What are your choices? You could say that 2+2=4 and get killed. Or you could say that 2+2=4.999, avoid being killed, only get a few suspicious looks and be rejected at a few job interviews; and hope that if people keep doing that long enough, at one moment it will become acceptable to say that 2+2=4.9, or even 4.5, and perhaps one day no one will be killed for saying that it equals 4. The third option is to enjoy food and wine, and refuse to comment publicly on how much 2+2 is. Perhaps have a few trusted friends you can discuss maths with.
Okay, but all I'm saying is that if you do decide to talk about your beliefs, you should use a more honest term for your belief system. I definitely agree with you that racists should not go around talking publicly about their beliefs! You seem to have inferred something from my post that I didn't mean, sorry about that.
I think that “group as a whole” is the key phrase. Men are taller than women on average, and being tall is usually considered desirable; is pointing that out sexist? I'd say that until you treat that fact as a reason to consider a gender “as a whole” more desirable than another, it isn't.
Most people do consider a gender as a whole more desirable than another ... (and can also supply some "facts" on which that preference is based).
Possibly related: Overcoming Bias : Mate Racism [http://www.overcomingbias.com/2010/02/mate-racism.html].
Doesn't contradict what I said, because I never claimed that most people aren't sexist. (And BTW, I'm not sure whether what you mean by “desirable” is what was meant in WP's definition of racism. I'm not usually sexually attracted to males or Asians, but I consider this a fact about me, not about males or Asians, and I don't consider myself sexist or racist for that.) (EDIT: to be more pedantic, one could say that the fact that I'm normally only attracted to people with characteristics X, Y, and Z is a fact about me and that the fact that males/Asians seldom have characteristics X, Y and Z is a fact about them, though.)
If they believed you, consistency bias might make them lean more toward racist-2 and racist-3. Or it might shame them into lowering their belief in the entire reactionary memeplex, which would be epistemically sub-optimal. It might lower their status, or even their earning ability if justified accusations of racism became associated with their offline identities. There are many ways leveraging emotionally loaded terms can have negative effects.
Why not?
As far as racism-1 goes, I am told that high levels of melanin in the skin give strong protection against sunburn. So black people rarely get sunburnt - that's a desirable characteristic, to my mind. (There are still negative effects - such as a headache - from being in the sun too long. Just not sunburn.)
Science: [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2671032/]
Well, if you think races are a real thing, then calling this belief race realism seems fairly clear, and helps distinguish your belief from type-3 racism. Human biodiversity implies something more like support for eugenics, to me, since you're saying that humans are diverse, not that race is a functional Schelling point.
Stripped of connotations, "race realism" to me implies the belief that empirical clusters exist within the space of human diversity and that they map to the traditional racial classifications, but not necessarily that those clusters affect intellectual or ethical dimensions to any significant degree. I'm not sure if there's a non-euphemistic, value-neutral term for racism-1 in the ancestor's typology, but that isn't it. (The first thing that comes to mind is "scientific racism", which I'd happily use for ideas like this in a 19th- or early 20th-century context, but I have qualms about using it in a present-day context.)
Ah, good point.
If it helps, the LW user I most consistently associate with the "certain races are inherently more or less intelligent/violent/whatever on average than others" position (as gothgirl420666 says below) is Eugine Nier [http://lesswrong.com/user/Eugine_Nier/overview/]. A quick Google search ("site:lesswrong.com Eugine_Nier rac intelligence") turns up, for example, the claim that just about any proxy measure of intelligence, from SAT scores, to results of IQ tests, to crime rates, will correlate with race [http://lesswrong.com/lw/f8r/in_defense_of_moral_investigation/7r8o]. That said, were someone to describe Eugine Nier or their positions as "racist," I suspect they would respond that "racist" means lots of different things to different people and is not a useful descriptor.
Welcome! I'm unable to read while listening to music with words in it. I wonder how universal that is.
I know of at least three possible minds for this. Pretty sure we all assumed we were typical until talking about it.

  • One friend of mine is like you, and finds music horribly distracting to reading.
  • Another friend becomes practically deaf while reading, so music is just irrelevant.
  • I, on the third hand, can sing along to songs I know, while reading. I can possibly even do this for simple songs I don't know. I would suspect this is not optimal reading from a comprehension or speed perspective, but it's a lot of fun.
Pretty much the same here. I can only read when I tune out the lyrics. Well, not quite true, I can certainly read, but the content just doesn't register.

Hello, I'm E. I'll be entering university in September planning to study some subset of {math, computer science, economics}. I found Less Wrong in April 2012 through HPMoR and started seriously reading here after attending SPARC. I haven't posted because I don't think I can add too much to discussions, but reading here is certainly illuminating.

I'm interested in self-improvement. Right now, I'm trying to develop better social skills, writing skills, and work ethic. I'm also collecting some simple data from my day-to-day activities with the belief that having data will help me later. Some concrete actions I am currently taking:

  • Conditioning myself (focusing on smiling and positive thoughts) to enjoy social interaction. I don't dislike social interaction, but I'm definitely averse to talking to strangers. This aversion seems like it will hurt me long-term, so I'm trying to get rid of it.
  • Writing in a journal every night. Usually this is 200-300 words of my thoughts and summaries of the more important events that happened. I started this after noticing that I repeatedly tried and failed to recall my thoughts from a few months or years ago.
  • Setting daily schedules for myself. When I
... (read more)
Welcome! You sound remarkably driven. Math and CS are foundational fields which can be used for nearly anything, while economics past intro level is much more specialized. I'd suggest putting the least focus on economics unless/until you're sure you want to do something with it. (Warning: I am a programmer with an econ degree. I may be projecting, here.) Subjective happiness, maybe? The old "how good do you feel right now on a scale of 1-10" could be one way to quantify this. They are the worst thing.

Hi everyone. I have been lurking on this site for a long time, and somewhat recently have made an account, but I still feel pretty new here. I've read most of the sequences by now, and I feel that I've learned a lot from them. I have changed myself in some small ways as a result, most notably by donating small amounts to whatever charity I feel is most effective at doing good, with the intention that I will donate much more once I am capable of doing so.

I'm currently working on a Ph.D. in Mathematics, and I am also hoping that I can steer my research activities towards things that will do good. Still not sure exactly how to do this, though.

I also had the opportunity to attend my local Less Wrong meetup, and I have to say it was quite enjoyable! I am looking forward to future interactions with my local community.

Hi Adele. Given what you write in your introduction, it's likely that you have already heard of this organization, but if this is not the case: you may want to check out 80,000 Hours [http://80000hours.org/]. They provide evidence-based career advice for people that want to make a difference.
Welcome! I like your username. EDIT: I know several people in this community who dropped out of math grad school, and most of them were happy with the decision. I'm choosing to graduate with a PhD in a useless field because I find myself in a situation where I can get one in exchange for a few months of work. I know someone who switched to algebraic statistics, which is a surprisingly useful field that involves algebraic geometry.
I haven't looked at this issue in detail, but I seem to recall that not getting more education was one of the more common regrets among "Terman's geniuses", whoever those are. Link [http://www.psych.cornell.edu/sec/pubPeople/tdg1/Hattiangadi.pdf].
What is their reasoning?
I can't speak for them, but I expect it's something like this: One can make more money, do more good, have a more fun career, and have more freedom in where one lives by dropping out than by going into academia. And having a PhD when hunting for non-academic jobs is not worth spending several years as a grad student doing what one feels is non-valuable work for little pay. You'd have to speak to someone who successfully dropped out to get more details; and of course even if all their judgments are correct, they may not be correct for you.
There are several people on LW (myself included) who continue to be in graduate school in mathematics. If you're interested in just talking math, there'll be an audience for that. I would personally be interested in more academic networking happening here--even if most people on LW will end up leaving mathematics as such.


I'm Jennifer; I'm currently a graduate student in medieval literature and a working actor. Thanks to homeschooling, though, I do have a solid background and abiding interest in quantum physics/pure mathematics/statistics/etc., and 'aspiring rationalist' is probably the best description I can provide! I found the site through HPMoR.

Current personal projects: learning German and Mandarin, since I already have French/Latin/Spanish/Old English/Old Norse taken care of, and much as I personally enjoy studying historical linguistics and old dead languages, knowing Mandarin would be much more practical (in terms of being able to communicate with the greatest number of people when travelling, doing business, reading articles, etc.)

Hey, another homeschooled person! There seem to be a lot of us here. How was your experience? Mine was the crazy religious type, but I still consider it to have been an overall good thing for my development relative to other feasible options.
Me three-- I thought I was the only one, where are we all hiding? :)
My experience was, overall, excellent - although my parents are definitely highly religious. (To be more precise, my father is a pastor, so biology class certainly contained some outdated ideas!) However, I'm in complete agreement - relative to any other possible options, I don't think I could have gotten a better education (or preparation for postsecondary/graduate studies) any other way.
Yeah, I got taught young earth creationism instead of evolution. But despite this, I think I was better prepared academically than most of my peers.
Your self-description is one of the best arguments for homeschooling I have ever seen or could imagine being made. (See also: Lillian Pierce [http://www.princeton.edu/admission/whatsdistinctive/alumniprofiles/pierce/].) Welcome to LW, and please keep existing.
Impressive! How do you plan to learn Mandarin? Immersion? Rosetta Stone?
Combination of methods based on what has worked for me in the past with other languages! I've used Rosetta Stone before, for French & Spanish, and while it's definitely got advantages, I (personally - I also know people who love it!) also found it very time-consuming for very little actual learning, and it's also expensive for what it is. Basically: a) I have enough friends who are either native or fluent speakers of Mandarin that once I'm a little more confident with the basics, I will draft them to help me practice conversation skills :) b) My university offers inexpensive part-time courses to current students. c) Lots of reading, textbook exercises, watching films, listening to music, translating/reading newspapers, etc. in the language. d) I'm planning to go to China to teach English in the not-too-distant future, so while I'd like to have basic communication skills down before I go, immersion will definitely help!


I’ve been interested in how to think well since early childhood. When I was about ten, I read a book about cybernetics. (This was in the Oligocene, when “cybernetics” had only recently gone extinct.) It gave simple introductions to probability theory, game theory, information theory, boolean switching logic, control theory, and neural networks. This was definitely the coolest stuff ever.

I went on to MIT, and got an undergraduate degree in math, specializing in mathematical logic and the theory of computation—fields that grew out of philosophical investigations of rationality.

Then I did a PhD at the MIT AI Lab, continuing my interest in what thinking is. My work there seems to have been turned into a surrealistic novel by Ken Wilber, a woo-ish pop philosopher. Along the way, I studied a variety of other fields that give diverse insights into thinking, ranging from developmental psychology to ethnomethodology to existential phenomenology.

I became aware of LW gradually over the past few years, mainly through mentions by people I follow on Twitter. As a lurker, there’s a lot about the LW community I’ve loved. On the other hand, I think some fundamental, generally-accepted ideas her... (read more)

Hey everyone!

I'm ll, my real name is Lukas. I am a student at a technical university in the US and a hobbyist FOSS programmer.

I discovered Harry Potter and the Methods of Rationality accidentally one night, and since then I've been completely hooked on it. After I caught up, I decided to check out the Less Wrong community. I've been lurking since then, reading the essays, comments, hanging out in the IRC channel.

Welcome to Less Wrong III!
It's not III, it's lll.
We can just call him CL for short, to distinguish him from IIV.
Damn sans-serif fonts...
If I were reading this in inconsolata, I'd have known that. Thanks.
It seems like my username is already sparking some controversy. It's three lowercase L letters. My initials are LL, but I can't have a two-letter username, so LLL, but I thought uppercase would be too much, so lll it is.
Thank you! I am definitely enjoying this community. I am a recent Reddit expat, too, so I will focus my internet browsing time here. I don't think I will miss Reddit at all.
If your Reddit time commitment was anything like that of other people I know, you should be able to blow through all the sequences in about a day or two : )

Hey, my name is Roman. You can read my detailed bio here, as well as some research papers I published on the topics of AI and security. I decided to attend a local LW meet up and it made sense to at least register on the site. My short term goal is to find some people in my geographic area (Louisville, KY, USA) to befriend.

Nice to see more AI experts here.
Hi Roman. Would you mind answering a few more questions that I have after reading your interview [http://intelligence.org/2013/07/15/roman-interview/] with Luke? Carl Shulman and Nick Bostrom have a paper coming out arguing that embryo selection can eventually (or maybe even quickly) lead to IQ gains of 100 points or more. Do you think Friendly AI will still be an unsolvable problem for IQ 250 humans? More generally, do you see any viable path to a future better than technological stagnation short of autonomous AGI? What about, for example, mind uploading followed by careful recursive upgrading of intelligence?
Hey Wei, great question! Agents (augmented humans) with an IQ of 250 would be superintelligent with respect to our current position on the intelligence curve and would be just as dangerous to us, unaugmented humans, as any sort of artificial superintelligence. They would not be guaranteed to be Friendly by design and would be as foreign to us in their desires as most of us are from severely mentally retarded persons. For most of us (sadly?) such people are something to try and fix via science, not someone whose wishes we want to fulfill. In other words, I don’t think you can rely on an unverified (for safety) agent (even one with higher intelligence) to make sure that other agents with higher intelligence are designed to be human-safe. All the examples you give start by replacing humanity with something not-human (uploads, augments) and proceed to ask the question of how to save humanity. At that point you have already lost humanity by definition. I am not saying that is not going to happen; it probably will. Most likely we will see something predicted by Kurzweil (a merger of machines and people).
I think if I became an upload (assuming it's a high fidelity emulation) I'd still want roughly the same things that I want now. Someone who is currently altruistic towards humanity should probably still be altruistic towards humanity after becoming an upload. I don't understand why you say "At that point you already lost humanity by definition".
Wei, the question here is would rather than should, no? It's quite possible that the altruism that I endorse as a part of me is related to my brain's empathy module, much of which might break if I can no longer relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang's "Understand" - http://www.infinityplus.co.uk/stories/under.htm [http://www.infinityplus.co.uk/stories/under.htm] and, ahem, Watchmen's Dr. Manhattan.
Eliezer Yudkowsky:
Logical fallacy: Generalization from fictional evidence. A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change. If you start doing code modification, of course, some but not all bets are off.
Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities, you cannot derive evidence from it of course. I agree on the first-minute point, but do not see why it's relevant, because there is the 999999th minute by which value drift will take over (if altruism is strongly related to empathy). I guess upon waking up I'd make value preservation my first order of business, but since an upload is still evolution's spaghetti code it might be a race against time.
Perhaps the idea is that the sensory experience of no longer falling into the category of "human" would cause the brain to behave in unexpected ways? I don't find that especially likely, mind, although I suppose long-term there might arise a self-serving "em supremacy" meme.
+1 for linking to Understand; I remembered reading the story long ago, but I forgot the link. Thanks for reminding me!
We can talk about what high fidelity emulation includes. Will it be just your mind? Or will it be Mind + Body + Environment? In the most common case (with an absent body) the most typically human feelings (hunger, thirst, tiredness, etc.) will not be preserved, creating a new type of agent. People are mostly defined by their physiological needs (think of Maslow’s pyramid). An entity with no such needs (or with such needs satisfied by virtual/simulated abundant resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or relatively limited intelligence may lose all allegiance to humanity, since they would no longer be a part of it. So I guess I define “humanity” as comprised of standard/unaltered humans. Anything superior is no longer human to me, just as we are not first and foremost Neanderthals, but Homo sapiens.
Insofar as Maslow's pyramid accurately models human psychology (a point of which I have my doubts), I don't think the majority of people you're likely to be speaking to on the Internet are defined in terms of their low-level physiological needs. Food, shelter, physical security -- you might have fears of being deprived of these, or even might have experienced temporary deprivation of one or more (say, if you've experienced domestic violence, or fought in a war) but in the long run they're not likely to dominate your goals in the way they might for, say, a Clovis-era Alaskan hunter. We treat cases where they do as abnormal, and put a lot of money into therapy for them. If we treat a modern, first-world, middle-class college student with no history of domestic or environmental violence as psychologically human, then, I don't see any reason why we shouldn't extend the same courtesy to an otherwise humanlike emulation whose simulated physiological needs are satisfied as a function of the emulation process.
I don’t know you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, Drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of me and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don’t have much in common as far as being from the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.
It seems to me that there are a couple of things going on here. I spend a reasonable amount of time (probably a couple of hours of conscious effort each day; I'm not sure how significant I want to call sleep) meeting immediate physical needs, but those don't factor much into my self-image or my long-term goals; I might spend an hour each day making and eating meals, but ensuring this isn't a matter of long-term planning nor a cherished marker of personhood for me. Looked at another way, there are people that can't eat or excrete normally because of one medical condition or another, but I don't see them as proportionally less human. I do spend a lot of time gaining access to abstract resources that ultimately secure my physiological satisfaction, on the other hand, and that is tied closely into my self-image, but it's so far removed from its ultimate goal that I don't feel that cutting out, say, apartment rental and replacing it with a proportional bill for Amazon AWS cycles would have much effect on my thoughts or actions further up the chain, assuming my mental and emotional machinery remains otherwise constant. I simply don't think about the low-level logistics that much; it's not my job. And I'm a financially independent adult; I'd expect the college student in the grandparent to be thinking about them in the most abstract possible way, if at all.
Well, yes, a lot depends on what we assume the upload includes, and how important the missing stuff is. If Dave!upload doesn't include X1, and X2 defines Dave!original's humanity, and X1 contains X2, then Dave!upload isn't human... more or less tautologically. We can certainly argue about whether our experiences of hunger, thirst, fatigue, etc. qualify as X1, X2, or both... or, more generally, whether anything does. I'm not nearly as confident as you sound about either of those things. But I'm not sure that matters. Let's posit for the sake of comity that there exists some set of experiences that qualify for X2. Maybe it's hunger, thirst, fatigue, etc. as you suggest. Maybe it's curiosity. Maybe it's boredom. Maybe human value is complex and X2 actually includes a carefully balanced brew of a thousand different things, many of which we don't have words for. Whatever it is, if it's important to us that uploads be human, then we should design our uploads so that they have X2. Right? But you seem to be taking it for granted that whatever X2 turns out to be, uploads won't experience X2. Why?
Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (ex. Humans) requires you to be one. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can experience simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise a piece of software is by definition not a human being, it is at best a simulation of one.
Ah. So when you say "most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent" you're making a definitional claim that whatever the new agent experiences, it won't be a human feeling, because (being software) the agent definitionally won't be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does they won't be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc. Yes? Do I understand you now? FWIW, I agree that there are definitions of "human being" and "software" by which a piece of software is definitionally not a human being, though I don't think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I'm willing to use your definitions when talking to you. You go on to say that this agent, not being human, will not want the same things as a human. Well, OK; that follows from your definitions. One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants? Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn't reliable simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2? Because if it's important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?
Have you ever had the unfortunate experience of hanging out with really boring people; say, at a party ? The kind of people whose conversations are so vapid and repetitive that you can practically predict them verbatim in your head ? Were you ever tempted to make your excuses and duck out early ? Now imagine that it's not a party, but the entire world; and you can't leave, because it's everywhere. Would you still "feel altruistic toward humanity" at that point ?
It's easy to conflate uploads and augments, here, so let me try to be specific (though I am not Wei Dai and do not in any way speak for them). I experience myself as preferring that people not suffer, for example, even if they are really boring people or otherwise not my cup of tea to socialize with. I can't see why that experience would change upon a substrate change, such as uploading. Basically the same thing goes for the other values/preferences I experience. OTOH, I don't expect the values/preferences I experience to remain constant under intelligence augmentation, whatever the mechanism. But that's kind of true across the board. If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences. If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.
I'm going to throw out some more questions. You are by no means obligated to answer. In your AI Safety Engineering paper you say, "We propose that AI research review boards are set up, similar to those employed in review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide if the proposal falls under the standard AI – limited domain system or may potentially lead to the development of a full blown AGI." But would we really want to do this today? I mean, in the near future--say the next five years--AGI seems pretty hard to imagine. So might this be unnecessary? Or, what if later on when AGI could happen, some random country throws the rules out? Do you think that promoting global cooperation now is a useful way to address this problem, as I assert in this shamelessly self-promoted blog post [http://humanpetition.blogspot.hk/2013/09/the-singularity-returns.html]? The general question I am after is, How do we balance the risks and benefits of AI research? Finally you say in your interview, "Conceivable yes, desirable NO" on the question of relinquishment. But are you not essentially proposing relinquishment/prevention?
Just because you can’t imagine AGI in the next 5 years doesn’t mean that in four years someone will not propose a perfectly workable algorithm for achieving it. So yes, it is necessary. Once everyone sees how obvious AGI design is, it will be too late. Random countries don’t develop cutting edge technology; it is always done by the same superpowers (USA, Russia, etc.). I didn’t read your blog post so can’t comment on “global cooperation”. As to the general question you are asking, you can get most conceivable benefits from domain expert AI without any need for AGI. Finally, I do think that relinquishment/delaying is a desirable thing, but I don’t think it is implementable in practice.
Is there a short form of where you see the line between these two types of systems? For example, what is the most "AGI-like" AI you can conceive of that is still "really a domain-expert AI" (and therefore putatively safe to develop), or vice-versa? My usual sense is that these are fuzzy terms people toss around to point to very broad concept-clusters, which is perfectly fine for most uses, but if we're really getting to the point of trying to propose policy based on these categories, it's probably good to have a clearer shared understanding of what we mean by the terms. That said, I haven't read your paper; if this distinction is explained further there, that's fine too.
Great question. To me a system is domain specific if it can’t be switched to a different domain without re-designing it. I can’t take Deep Blue and use it to sort mail instead. I can’t take Watson and use it to drive cars. An AGI (for which I have no examples) would be capable of switching domains. If we take humans as an example of general intelligence, you can take an average person and make them work as a cook, driver, babysitter, etc, without any need for re-designing them. You might need to spend some time teaching that person a new skill, but they can learn efficiently and perhaps just by looking at how it should be done. I can’t do this with domain expert AI. Deep Blue will not learn to sort mail regardless of how many times I demonstrate that process.
(nods) That's fair. Thanks for clarifying.
I've heard repeatedly that the correlation between IQ and achievement after about 120 (z = 1.33) is pretty weak, possibly even with diminishing returns up at the very top. Is moving to 250 (z = 10) passing a sort of threshold of intelligence at some point where this trend reverses? Or is the idea that IQ stops strongly predicting achievement above 120 wrong? This is something I've been curious about for a while, so I would really appreciate your help clearing the issue up a bit.
In agreement with Vaniver's comment [http://lesswrong.com/lw/h3p/welcome_to_less_wrong_5th_thread_march_2013/9rk2], there is evidence that differences in IQ well above 120 are predictive of success, especially in science. For example:

  • IQs of a sample of eminent scientists were much higher [http://infoproc.blogspot.com/2008/07/annals-of-psychometry-iqs-of-eminent.html] than the average for science PhDs (~160 vs ~130)
  • Among those who take the SAT at age 13, scorers in the top 0.1% end up outperforming [http://infoproc.blogspot.com/2009/01/horsepower-matters-psychometrics-works.html] the top 1% in terms of patents and scientific publications produced as adults

I don't think I have good information on whether these returns are diminishing, but we can at least say that they are not vanishing. There doesn't seem to be any point beyond which the correlation disappears.
I just read the "IQ's of eminent scientists" and realized I really need to get my IQ tested. I've been relying on my younger brother's test (with the knowledge that older brothers tend to do slightly better but usually within an sd) to guesstimate my own IQ but a) it was probably a capped score like Feynman's since he took it in middle school and b) I have to know if there's a 95% chance of failure going into my field. I'd like to think I'm smart enough to be prominent, but it's irrational not to check first. Thanks for the information; you might have just saved me a lot of trouble down the line, one way or the other.
I'd be very careful generalizing from that study to the practice of science today. Science in the 1950s was VERY different: the length of time to the PhD was shorter, postdocs were very rare, and almost everyone stepped into a research faculty position almost immediately. In today's world, staying in science is much harder - there are lots of grad students competing for many postdocs competing for few permanent science positions. In today's world, things like conscientiousness, organization skills, etc. (grant writing is now a huge part of the job) play a much larger role in eventually landing a job than in the past, and luck is a much bigger driver (whether a given avenue of exploration pays off requires a lot of luck; selecting people whose experiments ALWAYS work is just grabbing people who have been both good AND lucky). It would surprise me if the worsening science career hasn't changed the makeup of an 'eminent scientist'.
At the same time, all of those points except the luck one could be presented as evidence that the IQ required to be eminent has increased rather than the converse. Grant writing and schmoozing are at least partially a function of verbal IQ, IQ in general strongly predicts academic success in grad school, and competition tends to winnow out the poor performers a lot more than the strong. Not that I really disagree, I just don't see it as particularly persuasive. That's just one of the unavoidable frustrations of human nature, though; an experiment which disconfirms its hypothesis worked perfectly, it just isn't human nature to notice negatives [http://en.wikipedia.org/wiki/Silver_Blaze].
I disagree for several reasons. Mostly, conscientiousness, conformity, etc. are personality traits that aren't strongly correlated with IQ (conscientiousness may even be slightly negatively correlated). Would it surprise you to know that the most highly regarded grad students in my physics program all left physics? They had a great deal of success before and in grad school (I went to a top 5 program), but left because they didn't want to deal with the administrative/grant stuff, and because they didn't want to spend years at low pay. I'd argue that a successful career in science is selecting for some threshold IQ and then much more strongly for a personality type.
No kidding.
Are you American? If you've taken the SAT, you can get a pretty good estimate of your IQ here [http://www.iqcomparisonsite.com/].
Mensa apparently doesn't consider the SAT to have a high-enough g loading to be useful as an intelligence test after 1994. Although the website's figures are certainly encouraging, it's probably best to take them with a bit of salt.
True, but note that, in contrast with Mensa, the Triple Nine Society continued to accept scores on tests taken up through 2005, though with a higher cutoff (of 1520) than on pre-1995 tests (1450). Also, SAT scores in 2004 were found [http://en.wikipedia.org/wiki/SAT#Correlations_with_IQ] to have a correlation of about .8 with a battery of IQ tests, which I believe is on par with the correlations IQ tests have with each other. So the SAT really does seem to be an IQ test (and an extremely well-normed one at that if you consider their sample size, though perhaps not as highly g-loaded as the best, like Raven's). But yeah, if you want to have high confidence in a score, probably taking additional tests would be the best bet. Here's a list of high-ceiling tests [http://www.eskimo.com/~miyaguch/], though I don't know if any of them are particularly well-normed or validated.
Is this what you intended to say? "Diminishing returns" seems to apply at the bottom the scale you mention. You've already selected the part where returns have started diminishing. Sometimes it is claimed that that at the extreme top the returns are negative. Is that what you mean?
Yeah, that's just me trying to do everything in one draft. Editing really is the better part of clear writing. I meant something along the lines of "I've heard it has diminishing returns and potentially [, probably due to how it affects metabolic needs and rate of maturation] even negative returns at the high end."
Most IQ tests are not very well calibrated above 120ish, because the number of people in the reference sample that scored much higher is rather low. It's also the case that achievement is a function of several different factors, which will probably become the limiting factor for most people at IQs higher than 120. That said, it does seem that in physics, first-tier physicists score better on cognitive tests than second-tier physicists, which suggests that additional IQ is still useful for achievement in the most cognitively demanding fields. It seems likely that augmented humans who do several times better than current humans on cognitive tests will also be able to achieve several times as much in cognitively demanding fields.
First, IQ tests don't go to 250 :-) Generally speaking standard IQ tests have poor resolution in the tails -- they cannot reliably identify whether you have the IQ of, say, 170 or 190. At some point all you can say is something along the lines of "this person is in the top 0.1% of people we have tested" and leave it at that. Second, "achievement" is a very fuzzy word. People mean very different things by it. And other than by money it's hard to measure.
I wonder how they propose to avoid the standard single-trait selective breeding issues, like accumulation of undesirable traits. For example, those geniuses might end up being sickly and psychotic.
It seems to me that this would not be a problem with iterated embryo selection [http://theuncertainfuture.com/faq.html#7], but I might be wrong. See also Yvain's "modal human" post [http://squid314.livejournal.com/345414.html].
Would it matter? C.f. goldmage [http://goldmage.elcenia.com/].
Note also that Roman co-authored 3 of the papers on MIRI's publications page [http://intelligence.org/all-publications/].
His paper http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf [http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf] seriously discusses ways to confine a potentially hostile superintelligence, a feat MIRI seems to consider hopeless. Did you guys have a good chat about it?

I think most everyone at MIRI and FHI thinks boxing is a good thing, even if many would say not enough on its own. I don't think you will find many who think that open internet connections are a matter of indifference for AI developers working with powerful AGI.

High-grade common sense (the sort you'd get by asking any specialist in computer security) says that you should design an AI which you would trust with an open Internet connection, then put it in the box you would use on an untrusted AI during development. (No, the AI will not be angered by this lack of trust and resent you. Thank you for asking.) I think it's safe to say that for basically everything in FAI strategy (I can't think of an exception right now) you can identify at least two things supporting any key point, such that either alone was designed to be sufficient independently of the other's failing, including things like "indirect normativity works" (you try to build in at least some human checks around this which would shut down any scary AI independently of your theory of indirect normativity being remotely correct, while also not trusting the humans to steer the AI because then the humans are your single point of failure).

See my interview with Roman here [http://intelligence.org/2013/07/15/roman-interview/].
Thanks. Pretty depressing, though.

Hi everyone, my name is Sara!

I am 21, live in Switzerland and study psychology. I am fascinated by the field of rationality and therefore wrote my Bachelor thesis on why and how critical thinking should be taught in schools. I started out with the plan to get my degree in clinical psychology and neuropsychology, but will now change to developmental psychology, because I was able to get my supervising tutor interested and secure his full support. This will allow me to base my Master project on the development and enhancement of critical thinking and rationality, too. Do you have any recommendations?

After my Master's degree I still intend to get an education as a therapist (money reasons) or to go into research (pushing the experimental research on rationality), and to give a lot of money to the most effective charities around. I wonder whether as a therapist it would be smarter to concentrate on children or on adults; both fields will be open to me after my university education (which will take me about 2.5-3 more years). I speak German, Swiss German, Italian, French and English (and understand some more languages), which will give me some freedom in the choice of where to actually work in the future.

...but I'm not ... (read more)

Hello, Sara. Do you have any specific ideas for this? Are you aiming at enhancing rationality in adults or in children? I don't have specific recommendations except perhaps people whose work is relevant; however, you would have encountered those around the site. P.S. I am mainly commenting here because this is the second time I've seen you on the internet within the last 4 hours.
Hello, Tenoke. I am aiming at enhancing rationality in children, but indeed have often had to fall back on research with older people. Until now I've been concentrating on the work of Stanovich, Facione, van Gelder and Twardy. Whose work do you think would be relevant, too? Thank you for your answer!
Well, Kahneman (and Tversky) would be the most obvious example out of those not mentioned. Otherwise Dennett, Gilovich, Slovic, Pinker, Taleb and Thaler would be some examples of people whose work has varying degrees of relevance to the subject. Those are the people who I can think of off the top of my head, but the best way to systematically find researchers of interest would be to look at the reverse citations of Kahneman and Tversky's work or something of the sort.
Ah, how could I forget them! Biases and heuristics play a big role in my interests for critical thinking of course. I'm a bit surprised: how come you included Dennett and Pinker? I know these two for work that's (very interesting but) mostly unrelated to my addressed topic. I'm curious, seems like I missed something important.
I was writing on auto-pilot, you are right that their work is significantly less relevant to the topic than the others'.

If people have a problem with it, that's not my fault.

It might or it might not be. As a general rule, if two people think that a single issue of fact is a settled question, in different directions, then either they have access to different information, or one or both of them is incorrect.

If the former is the case, then they can share their information, after which either they will agree, or one or both will be incorrect.

If we're incorrect about religion being a settled question, we want to know that, so we can change our minds. If Mormonism is incorrect, do you want to know that?


I'm a final year Mathematics student at Cambridge coming from an IOI, IMO background. I've written software for a machine learning startup, a game dev startup and Google. I was recently interested in programming language theory esp. probabilistic and logic programming (some experiments here http://peteriserins.tumblr.com/archive).

I'm interested in many aspects of startups (including design) and hope to move into product management, management consulting or venture capital. I love trying to think rationally about business processes and have started to write about it at http://medium.com/@p_e .

I found out about LW from a friend and have since started reading the sequences. I hope to learn more about practical instrumental rationality, I am less interested in philosophy and the meta theory. So far I've learned more about practical application of mathematics from data science and consulting, but expect rationality to take it further and with more rigor.

Great meeting y'all

Welcome! You may want to consider participating in a CFAR workshop [http://lesswrong.com/lw/h5t/new_applied_rationality_workshops_april_may_and/]. I think it's 1000% as effective for learning instrumental rationality as reading Less Wrong. They're optimized for teaching practical skills, and they tend to attract entrepreneurs. Also, I think you'd be a valuable addition to the community around CFAR, in addition to the online community around the Less Wrong website.
As someone who has done a CFAR workshop, and a lot of online rationality stuff (including, but not limited to, reading ~90% of the sequences), I second this. I'll also add that I do think having a strong theoretical background going in enhances the practical training.

Lumifer, please update that at this moment you don't grok the difference between "A => B (p=0.05)" and "B => A (p=0.05)", which is why you don't understand what a p-value really means, which is why you don't understand the difference between selection bias and base rate neglect, which is probably why the emphasis on using Bayes theorem in the scientific process does not make sense to you. You made a mistake, that happens to all of us. Just stop it already, please.

And don't feel bad about it. Until recently I didn't understand it either, and I had a gold medal from the International Mathematical Olympiad. Somehow it is not explained correctly at most schools, perhaps because the teachers don't get it themselves, or maybe they just underestimate the difficulty of proper understanding and the high chance of getting it wrong. So please don't contribute to the confusion.

Imagine that there are 1000 possible hypotheses, among which 999 are wrong, and 1 is correct. (That's just a random example to illustrate the concept. The numbers in real life can be different.) You have an experiment that says "yes" to 5% of the wrong hypotheses (this is what p=0.05 means), and a... (read more)
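To make that arithmetic concrete, here is a minimal sketch in Python. The false positive rate comes from the example itself; the 80% power figure is an assumption added purely for illustration, since the comment is cut off before giving one.

```python
# 1000 hypotheses, 999 wrong and 1 correct, each tested at p = 0.05.
n_wrong, n_correct = 999, 1
false_positive_rate = 0.05   # the experiment says "yes" to 5% of wrong hypotheses
power = 0.80                 # assumed: chance the experiment says "yes" to the true one

false_yes = n_wrong * false_positive_rate   # about 50 wrong hypotheses get confirmed
true_yes = n_correct * power                 # about 0.8 true hypotheses get confirmed

share_true = true_yes / (true_yes + false_yes)
print(f"Share of 'confirmed' hypotheses that are actually true: {share_true:.1%}")
# roughly 1.6% -- far from the 95% one might naively read into p = 0.05
```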

LOL. Yeah, yeah, mea culpa, I had a brain fart and expressed myself very poorly. I do understand what p-value really means. The issue was that I had in mind a specific scenario (where in effect you're trying to see if the difference in means between two groups is significant) but neglected to mention it in the post :-)
I feel like this could use a bit longer explanation, especially since I think you're not hearing Lumifer's point, so let me give it a shot. (I'm not sure I see a meaningful difference between base rate neglect and selection bias in this circumstance.) The word "grok" in Viliam_Bur's comment is really important. This part of the grandparent is true: But it's like saying "well, assume the diagnosis is correct. Then the treatment will make the patient better with high probability." While true, it's totally out of touch with reality - we can't assume the diagnosis is correct, and a huge part of being a doctor is responding correctly to that uncertainty.

Earlier, Lumifer said this, which is an almost correct explanation of using Bayes in this situation: The part that makes it the "almost" is the "5% of the times, more or less." This implies that it's centered around 5%, with random chance determining what this instance is. But selection bias means it will almost certainly be more, and generally much more. In fields that study phenomena that don't exist, 100% of the papers published will be of false results that were significant by chance. In many real fields, rates of failure to replicate are around 30%. Describing 30% as "5%, more or less" seems odd, to say the least.

But the proposal to reduce the p value doesn't solve the underlying problem (which was Lumifer's response). If we set the p value threshold lower, at .01 or .001 or wherever, we reduce the risk of false positives at the cost of increasing the risk of false negatives. A study design which needs to determine an effect at the .001 level is much more expensive than a study design which needs to determine an effect at the .05 level, and so we will have many fewer studies attempted, and many, many fewer published studies. Better to drop p entirely. Notice that stricter p thresholds go in the opposite direction from the publication of negative results, which is the real solution to the problem of selection bias.
My grandparent post was stupid, but what I had in mind was basically a stage-2 (or -3) drug trial situation. You have declared (at least to the FDA) that you're running a trial, so selection bias does not apply at this stage. You have two groups: one receives the experimental drug, one receives a placebo. Assume a double-blind randomized scenario and assume there is a measurable metric of improvement at the end of the trial. After the trial you have two groups with two empirical distributions of the metric of choice. The question is how confident you are that these two distributions are different. Well, as usual, it's complicated. Yes, the p-test is suboptimal in most situations where it's used in reality. However, it fulfils a need, and if you drop the test entirely you need a replacement, for the need won't go away.
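As a toy illustration of that two-group setup (the numbers are invented, and this is just scipy's standard two-sample t-test, not anything specific to FDA practice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
placebo = rng.normal(loc=10.0, scale=3.0, size=50)  # improvement metric, control group
drug = rng.normal(loc=11.5, scale=3.0, size=50)     # improvement metric, treatment group

t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The p-value here is P(data at least this extreme | no real difference),
# not P(no real difference | data) -- the distinction argued over above.
```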

Hi, I'm Andrew, a college undergrad in computer science. I found this site through HPMOR a few years ago.

Hi everyone, I'm Chris. I'm a physics PhD student from Melbourne, Australia. I came to rationalism slowly over the years by having excellent conversations with like minded friends. I was raised a catholic and fully bought into the faith, but became an atheist in early high school when I realised that scientific explanations made more sense.

About a year ago I had a huge problem with the collapse postulate of quantum mechanics. It just didn't make sense and neither did anything anyone was telling me about it. This led me to discover that many worlds wasn't as crazy as it had been made out to be, and led me to this very community. My growth as a rationalist has made me distrust the consensus opinions of more and more groups, and realising that physicists could get something so wrong was the final nail in the coffin for my trust of the scientific establishment. Of course science is still the best way to figure things out, but as soon as opinions become politicised or tied to job prospects, I don't trust scientists as far as I can throw them. Related to this is my skepticism that climate change is a big deal.

I am frustrated more by the extent of unreason in educated circles than I am in... (read more)

I'm pretty social and would love to meet more rationalist friends, but I have the perception that if I went to a meetup most people would be less extroverted than me, and it might not be much fun for me.

My experience at meetups has been pretty social. After all, meetups select for people outgoing enough to go out of the house in the first place. I'd encourage you to go once, if there's a convenient meetup around. The value of information is high; if the meetup sucks, that costs one afternoon, but if it's good, you gain a new group of friends.

Excellent point, I know that effect makes a huge difference in other contexts, so that resonates with me. Ok, well I'll give it a shot. There are no meetups near where I am in Germany at the moment, but I'll be back in Melbourne later in the year where there seems to be some regular stuff going on.
Welcome! What do you think of the Born probabilities?
I haven't gone through any of the supposed derivations, but I'm led to believe that the Born rule is convincingly derivable within many worlds. I have a book called "Many Worlds? Everett, quantum theory and reality", which contains such a derivation; I've been meaning to read it for a while and will get around to it some day. It claims: Which I think is a nice angle to view it from. At any rate, the Born rule is a fairly natural result to have, since the probabilities are simply the inner product of the wavefunction with itself, which is how you normally define the sizes of vectors in vector spaces. So I'm expecting the argument in the book to be related to the criteria that mathematicians use to define inner products, and how those criteria map to assumptions about the universe (i.e. no preferred spatial direction, that sort of thing). Maybe if I understand it I'll post something here about it for those who are interested — I've yet to see a blog-style summary of where the Born rule comes from. At any rate it doesn't come from anywhere in the way we're taught quantum mechanics at uni; it's simply an axiom that one doesn't question. So any derivation, however assumption-laden and weak, would be an improvement over standard Copenhagen.
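For reference, the standard textbook statement of the rule (not the book's derivation) is just that, writing a state in an orthonormal basis, each outcome's probability is the squared magnitude of its coefficient:

$$ |\psi\rangle = \sum_i c_i |i\rangle, \qquad P(i) = |\langle i|\psi\rangle|^2 = |c_i|^2, \qquad \sum_i P(i) = \langle\psi|\psi\rangle = 1. $$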


I'm a long-time singularitarian and (intermediate) rationalist looking be a part of the conversation again. By day I am an English teacher in a suburban American high school. My students have been known to Google me. Rather than self-censor I am using a pseudonym so that I will feel free to share my (anonymized) experiences as a rationalist high school teacher.

I internet-know a number of you in this community from the early years of the Singularity Institute. I fleetingly met a few of you in person once, perhaps. I used to write on singularity-related issues, and was a proud "sniper" of the SL4 mailing list for a time. For the last 6-7 years I've mostly dropped off the radar by letting "life" issues consume me, though I have continued to follow the work of the key actors from afar with interest. I allow myself some pride for any small positive impact I might have once had during a time of great leverage for donors and activists, while recognizing that far too much remains undone. (If you would like to confirm your suspicions of my identity, I would love to hear from you with a PM. I just don't want Google searches of my real name pulling up my LW acti... (read more)

Said Achmiz:
Welcome to Less Wrong! Is your user name a reference to "Darmok"?
Yes. It's amazing how memorable people find that one episode. Props to the writers.

Hi Less Wrong. I found a link to this site a year or so ago and have been lurking off and on since. However, I've self identified as a rationalist since around junior high school. My parents weren't religious and I was good at math and science, so it was natural to me to look to science and logic to solve everything. Many years later I realize that this is harder than I hoped.

Anyway, I've read many of the sequences and posts, generally agreeing and finding many interesting thoughts. It's fun reading about zombies and Newcomb's problem and the like.

I guess this sounds heretical, but I don't understand why Bayes theorem is placed on such a pedestal here. I understand Bayesian statistics, intuitively and also technically. Bayesian statistics is great for a lot of problems, but I don't see it as always superior to thinking inspired by the traditional scientific method. More specifically, I would say that coming up with a prior distribution and updating can easily be harder than the problem at hand.

I assume the point is that there is more to what is considered Bayesian thinking than Bayes theorem and Bayesian statistics, and I've reread some of the articles with the idea of trying to pin that down, but I've found that difficult. The closest I've come is that examining what your priors are helps you to keep an open mind.

Bayes theorem is just one of many mathematical equations, like for example the Pythagorean theorem. There is inherently nothing magical about it. It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes looks like this: "I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!" (I guess I am exaggerating a bit here, but many people 'doing science' would not understand immediately what is wrong with this. And that would be those who even bother to calculate the p-value. Not everyone who is employed as a scientist is necessarily good at math. Many people get paid for doing bad science.)

This kind of thinking has the following problem: even if you invent a hundred completely stupid hypotheses, if you design experiments that would prove a false hypothesis only with p = 0.05, that means about five of them would be proved by the experiments. If you show someone else all hundred experiments together [http://xkcd.com/882/], they may understand what is wrong. But you are more likely to send only the five successful ones to the journal, aren't you? -- But how exactly is the journal supposed to react to this? Should they ask: "Did you do many other experiments, even ones completely irrelevant to this specific hypothesis? Because, you know, that somehow undermines the credibility of this one."

The current scientific publishing process has a bias. Bayes theorem explains it. We care about science, and we care about science being done correctly.
That's not neglecting base rates, that's called selection bias combined with incentives to publish. Bayes theorem isn't going to help you with this. http://xkcd.com/882/ [http://xkcd.com/882/]
Uhm, it's similar, but not the same. If I understand it correctly, selection bias is when 20 researchers run an experiment with green jelly beans, 19 of them don't find a significant correlation, 1 of them finds it... and only that one publishes, and the 19 don't. The essence is that we had 19 pieces of evidence against the green jelly beans and only 1 piece of evidence for the green jelly beans, but we don't see those 19 pieces, because they are not published. Selection = "there is X and Y, but we don't see Y, because it was filtered out by the process that gives us information". But imagine that you are the first researcher ever who has researched the jelly beans. And you only did one experiment. And it happened to succeed. Where is the selection here? (Perhaps selection across Everett branches or Tegmark universes. But we can't blame the scientific publishing process for not giving us information from the parallel universes, can we?) In this case, base rate neglect means ignoring the fact that "if you take a random thing, the probability that this specific thing causes acne is very low". Therefore, even if the experiment shows a connection with p = 0.05, it's still more likely that the result just happened randomly. The proper reasoning could be something like this (all numbers pulled out of a hat) -- we already have pretty strong evidence that acne is caused by food; let's say there is a 50% probability for this. With enough specificity (giving each fruit a different category, etc.), there are maybe 2000 categories of food. It is possible that more than one of them causes acne, and our probability distribution for that is... something. Considering all this information, we estimate a prior probability of, let's say, 0.0004 that a random food causes acne. -- Which means that if the correlation is significant at level p = 0.05, that per se means almost nothing. (Here one could use Bayes' theorem to calculate that the p = 0.05 successful experiment shows the true cause o
That's a different case -- you have no selection bias here, but your conclusions are still uncertain -- if you pick p=0.05 as your threshold, you're clearly accepting that there is a 5% chance of a Type I error: the green jelly beans did nothing, but the noise happened to be such that you interpreted it as conclusive evidence in favor of your hypothesis. But that all is fine -- the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the time, more or less (not exactly, because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis, while you really want P(H|D), the probability of the hypothesis given the data). People rarely take entirely random things and test them for a causal connection to acne. Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate). As an exercise, try to be specific. For example, let's say I want to check if the tincture made from the bark of a certain tree helps with acne. How would I go about calculating my base rate / prior? Can you walk me through an estimation which will end with a specific number?
And this is the base rate neglect. It's not "results significant to p=0.05 will be wrong about 5% of the time". It's "wrong results will be significant at p=0.05 about 5% of the time". And most people will confuse these two things. It's like when people confuse "A => B" with "B => A", only this time it is "A => B (p=0.05)" with "B => A (p=0.05)". It is "if wrong, then significant in 5% of cases". It is not "if significant, then wrong in 5% of cases". Yes, you are right. Establishing the prior is pretty difficult, perhaps impossible. (But that does not make "A => B" equal to "B => A".) Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.
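A minimal sketch of that calculation in code, reusing the prior of 0.0004 and the p = 0.05 threshold from the comments above; the experiment's power is an extra assumption added here purely for illustration:

```python
# Posterior probability that the food really causes acne, given a
# "significant" result. The prior and alpha come from the discussion above;
# the power value is an assumption made up for this sketch.
prior = 0.0004   # P(this particular food causes acne)
power = 0.8      # P(significant result | it really causes acne) -- assumed
alpha = 0.05     # P(significant result | it does not cause acne)

posterior = prior * power / (prior * power + (1 - prior) * alpha)
print(round(posterior, 4))  # ~0.0064 -- "significant" still means "probably false"
```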
Um, what "strict limits" are you talking about, what will they look like, and who will be doing the imposing? To get back to my example, let's say I'm running experiments to check if the tincture made from the bark of a certain tree helps with acne -- what strict limits would you like?
A short answer: for example, p = 0.001; and if at the end of the year too many results fail to replicate, keep decreasing it. (Let's say that "fail to replicate" in this context means that the replication attempt cannot confirm the result even with p = 0.05 -- we don't want to make replications too expensive, just a simple sanity check.) A long answer would involve a lot of handwaving again (it depends on why you believe the bark is helpful; in other words, what other evidence you already have).
I know a few answers to this question, and I'm sure there are others. (As an aside, these foundational questions are, in my opinion, really important to ask and answer.)

1. What separates scientific thought and mysticism is that scientists are okay with mystery. If you can stand to not know what something is, to be confused, then after careful observation and thought you might have a better idea of what it is and have a bit more clarity. Bayes is the quantitative heart of the qualitative approach of tracking many hypotheses and checking how concordant they are with reality, and thus should feature heavily in a modern epistemic approach. The more precisely and accurately you can deal with uncertainty, the better off you are in an uncertain world.

2. What separates Bayes and the "traditional scientific method" (using scare quotes to signify that I'm highlighting a negative impression of it) is that the TSM is a method for avoiding bad beliefs, but Bayes is a method for finding the best available beliefs. In many uncertain situations you can use Bayes but you can't use the TSM (or it would be too costly to do so), and the TSM doesn't give any predictions in those cases!

3. Use of Bayes focuses attention on base rates, alternate hypotheses, and likelihood ratios, which people often ignore (replacing the first with maxent, the second with yes/no thinking, and the third with bare likelihoods).

4. I honestly don't think the quantitative aspect of priors and updating is that important, compared to the search for a 'complete' hypothesis set and the search for cheap experiments that have high likelihood ratios (little bets). I think that the qualitative side of Bayes is super important but don't think we've found a good way to communicate it yet. That's an active area of research, though, and in particular I'd love to hear your thoughts on those four answers.
What is the qualitative side of Bayes?
Unfortunately, the end of that sentence is still true: I think that What Bayesianism Taught Me [http://lesswrong.com/lw/iat/what_bayesianism_taught_me/] is a good discussion on the subject, and my comment there [http://lesswrong.com/lw/iat/what_bayesianism_taught_me/9k8x] explains some of the components I think are part of qualitative Bayes. I think that a lot of qualitative Bayes is incorporating the insights of the Bayesian approach into your System 1 thinking (i.e. habits on the 5 second level [http://lesswrong.com/lw/5kz/the_5second_level/]).
Well, yes, but most of the things there are just useful ways to think about probabilities and uncertainty, proper habits, things to check, etc. Why Bayes? He's not a saint whose name is needed to bless a collection of good statistical practices.
Rob Bensinger
It's more or less the same reason people call a variety of essentialist positions 'platonism' or 'aristotelianism'. Those aren't the only thinkers to have had views in this neighborhood, but they predated or helped inspire most of the others, and the concepts have become pretty firmly glued together. Similarly, the phrases 'Bayes' theorem' and 'Bayesian interpretation of probability' (whence, jointly, the idea of Bayesian inference) have firmly cemented the name Bayes to the idea of quantifying psychological uncertainty and correctly updating on the evidence. The Bayesian interpretation is what links these theorems to actual practice. Bayes himself may not have been a 'Bayesian' in the modern sense, just as Plato wasn't a 'platonist' as most people use the term today. But the names have stuck, and 'Laplacian' or 'Ramseyan' wouldn't have quite the same ring.
I like Laplacian as a name better, but it's already a thing [http://en.wikipedia.org/wiki/Laplace_operator].
If I were to pretend that I'm a mainstream frequentist and consider "quantifying psychological uncertainty" to be subjective mumbo-jumbo with no place anywhere near real science :-D I would NOT have serious disagreements with e.g. Vaniver's list [http://lesswrong.com/lw/iat/what_bayesianism_taught_me/9k8x]. Sure, I would quibble about accents, importances, and priorities, but there's nothing there that would be unacceptable from the mainstream point of view.

My biggest concern with the label 'Bayesianism' isn't that it's named after the Reverend, nor that it's too mainstream. It's that it's really ambiguous.

For example, when Yvain speaks of philosophical Bayesianism, he means something extremely modest -- the idea that we can successfully model the world without certainty. This view he contrasts, not with frequentism, but with Aristotelianism ('we need certainty to successfully model the world, but luckily we have certainty') and Anton-Wilsonism ('we need certainty to successfully model the world, but we lack certainty'). Frequentism isn't this view's foil, and this philosophical Bayesianism doesn't have any respectable rivals, though it certainly sees plenty of assaults from confused philosophers, anthropologists, and poets.

If frequentism and Bayesianism are just two ways of defining a word, then there's no substantive disagreement between them. Likewise, if they're just two different ways of doing statistics, then it's not clear that any philosophical disagreement is at work; I might not do Bayesian statistics because I lack skill with R, or because I've never heard about it, or because it's not the norm in my department.

There's a su... (read more)

Err, actually, yes it is. The frequentist interpretation of probability [http://en.wikipedia.org/wiki/Probability_interpretations#Frequentism] makes the claim that probability theory can only be used in situations involving large numbers of repeatable trials, or selection from a large population. William Feller: Or to quote from the essay that coined the term 'frequentist': Frequentism is only relevant to epistemological debates in a negative sense: unlike Aristotelianism and Anton-Wilsonism, which both present their own theories of epistemology, frequentism's relevance is almost only in claiming that Bayesianism is wrong. (Frequentism separately presents much more complicated and less obviously wrong claims within statistics and probability; these are not relevant, given that frequentism's sole relevance to epistemology is its claim that no theory of statistics and probability could be a suitable basis for an epistemology, since there are many events they simply don't apply to.) (I agree that it would be useful to separate out the three versions of Bayesianism, whose claims, while related, do not need to all be true or false at the same time. However, all three are substantively opposed to one or both of the views labelled frequentist.)
Depends which frequentist you ask. From Aris Spanos's "A frequentist interpretation of probability for model-based inductive inference [http://link.springer.com/article/10.1007/s11229-011-9892-x]": and
For those who can't access that through the paywall (I can), his presentation slides for it are here [http://www.kent.ac.uk/secl/philosophy/jw/2009/musp/Spanos.pdf]. I would hate to have been in the audience for the presentation, but the upside of that is that they pretty much make sense on their own, being just a compressed version of the paper. While looking for those, I also found "Frequentists in Exile" [http://errorstatistics.com/], which is Deborah Mayo's [http://www.phil.vt.edu/dmayo/personal_website/] frequentist statistics blog. I am not enough of a statistician to make any quick assessment of these, but they look like useful reading for anyone thinking about the foundations of uncertain inference.
Rob Bensinger
I don't understand what this "probability theory can only be used..." claim means. Are they saying that if you try to use probability theory to model anything else, your pencil will catch fire? Are they saying that if you model beliefs probabilistically, Math breaks? I need this claim to be unpacked. What do frequentists think is true about non-linguistic reality, that Bayesians deny?
I think they would be most likely to describe it as a category error. If you try to use probability theory outside the constraints within which they consider it applicable, they'd attest that you'd produce no meaningful knowledge and accomplish nothing but confusing yourself.
Rob Bensinger
Can you walk me through where this error arises? Suppose I have a function whose arguments are the elements of a set S, whose values are real numbers between 0 and 1, and whose values sum to 1. Is the idea that if I treat anything in the physical world other than objects' or events' memberships in physical sequences of events or heaps of objects as modeling such a set, the conclusions I draw will be useless noise? Or is there something about the word 'probability' that makes special errors occur independently of the formal features of sample spaces?
As best I can parse the question, I think the former option better describes the position.
IIRC a common claim was that modeling beliefs at all is "subjective" and therefore unscientific.
Rob Bensinger
Do you have any links to this argument? I'm having a hard time seeing why any mainstream scientist who thinks beliefs exist at all would think they're ineffable....
Hmm, I thought I had read it in Jaynes' PT:TLoS, but I can't find it now. So take the above with a grain of salt, I guess.
Yes, it is my understanding that epistemologists usually call the set of ideas Yvain is referring to "probabilism" and indeed, it is far more vague and modest than what they call Bayesianism (which is more vague and modest still than the subjectively-objective Bayesianism that is affirmed often around these parts.). BTW, I think this is precisely what Carnap was on about with his distinction between probability-1 and probability-2, neither of which did he think we should adopt to the exclusion of the other.
I think they would have significant practical disagreement with #3, given the widespread use of NHST, but clever frequentists are as quick as anyone else to point out that NHST doesn't actually do what its users want it to do. Hence the importance of the qualifier 'qualitative'; it seems to me that accents, importances, and priorities are worth discussing, especially if you're interested in changing System 1 thinking instead of System 2 thinking. The mainstream frequentist thinks that base rate neglect is a mistake, but the Bayesian both thinks that base rate neglect is a mistake and has organized his language to make that mistake obvious when it occurs. If you take revealed preferences seriously, it looks like the frequentist says base rate neglect is a mistake but the Bayesian lives that base rate neglect is a mistake. Now, why Bayes specifically? I would be happy to point to Laplace instead of Bayes, personally, since Laplace seems to have been way smarter and a superior rationalist. But the trouble with naming methods of "thinking correctly" is that everyone wants to name their method "thinking correctly," and so you rapidly trip over each other. "Rationalism," for example, refers to a particular philosophical position [http://en.wikipedia.org/wiki/Rationalism] which is very different from the modal position here at LW. Bayes is useful as a marker, but it is not necessary to come to those insights by way of Bayes. (I will also note that not disagreeing with something and discovering something are very different thresholds. If someone has a perspective which allows them to generate novel, correct insights, that perspective is much more powerful than one which merely serves to verify that insights are correct.)
Yeah, I said if I were to pretend to be a frequentist -- but that didn't involve suddenly becoming dumb :-) I agree, but at this point context starts to matter a great deal. Are we talking about decision-making in regular life? Like, deciding which major to pick, who to date, what job offer to take? Or are we talking about some explicitly statistical environment where you try to build models, fit them, evaluate them, do out-of-sample forecasting, all that kind of thing? I think I would argue that recognizing biases (Tversky/Kahneman style) and trying to correct for them -- avoiding them altogether seems too high a threshold -- is different from what people call Bayesian approaches. The Bayesian way of updating on the evidence is part of "thinking correctly", but there is much, much more than just that.
At least one (and I think several) of the biases identified by Tversky and Kahneman is of the form "people do X, a Bayesian would do Y, thus people are wrong," so I think you're overstating the difference. (I don't know enough historical details to be sure, but I suspect Tversky and Kahneman might be an example of the Bayesian approach allowing someone to discover novel, correct insights.) I agree, but it feels like we're disagreeing. It seems to me that a major Less Wrong project is "thinking correctly," and a major part of that project is "decision-making under uncertainty," and a major part of uncertainty is dealing with probabilities, and the Bayesian way of dealing with probabilities seems to be the best, especially if you want to use those probabilities for decision-making. So it sounds to me like you're saying "we don't just need stats textbooks, we need Less Wrong." I agree; that's why I'm here as well as reading stats textbooks. But it also sounds to me like you're saying "why are you naming this Less Wrong stuff after a stats textbook?" The easy answer is that it's a historical accident, and it's too late to change it now. Another answer I like better is that much of the Less Wrong stuff comes from thinking about and taking seriously the stuff from the stats textbook, and so it makes sense to keep the name, even if we're moving to realms where the connection to stats isn't obvious.
Hm... Let me try to unpack my thinking, in particular my terminology, which might not match the usual LW conventions exactly. I think of:

* Bayes' theorem as a simple, conventional, and entirely uncontroversial statistical result. If you ask a dyed-in-the-wool rabid frequentist whether Bayes' theorem is true he'll say "Yes, of course".

* Bayesian statistics as an approach to statistics with three main features. First is the philosophical interpretation of (some) probability as subjective belief. Second is the focus on conditional probabilities. Third is the strong preference for full (posterior) distributions as answers instead of point estimates.

* Cognitive biases (aka the Kahneman/Tversky stuff) as certain distortions in the way our wetware processes information about reality, as well as certain peculiarities in human decision-making. Yes, a lot of it is concerned with dealing with uncertainty. Yes, there is some synergy with Bayesian statistics. No, I don't think this synergy is the defining factor here.

I understand that historically, in the LW community, Bayesian statistics and cognitive biases were intertwined. But apart from historical reasons, it seems to me these are two different things and the degree of their, um, interpenetration is much overstated on LW. Well, we need them for which purpose? For real-life decision making? -- sure, but then no one is claiming that stats textbooks are sufficient for that. Some, not much. I can argue that much of LW stuff comes from thinking logically and following chains of reasoning to their conclusion -- or actually just comes from thinking at all instead of reacting instinctively / on the basis of a gut feeling or whatever. I agree that thinking in probabilities is a very big step and it *is* tied to Bayesian statistics. But still, it's just one step.
I agree with your terminology. When contrasting LW stuff and mainstream rationality, I think the reliance on thinking in probabilities is a big part of the difference. ("Thinking logically," for the mainstream, seems to be mostly about logic of certainty.) When labeling, it makes sense to emphasize contrasting features. I don't think that's the only large difference, but I see an argument (which I don't fully endorse) that it's the root difference. (For example, consider evolutionary psychology, a moderately large part of LW. This seems like a field of science particularly prone to uncertainty, where "but you can't prove X!" would often be a conversation-stopper. For the Bayesian, though, it makes sense to update in the direction of evo psych, even though it can't be proven, which is then beneficial to the extent that evo psych is useful.)
Yes, I think you're right. Um, I'm not so sure about that. The main accusation against evolutionary psychology is that it's nothing but a bunch of just-so stories [http://en.wikipedia.org/wiki/Just-so_story], aka unfalsifiable post-hoc narratives. And a Bayesian update should be on the basis of evidence, not on the basis of an unverifiable explanation.
It seems to me that if you think in terms of likelihoods, you look at a story and say "but the converse of this story has high enough likelihood that we can't rule it out!" whereas if you think in terms of likelihood ratios, you say "it seems that this story is weakly more plausible than its converse." I'm thinking primarily of comments like this [http://lesswrong.com/lw/l1/evolutionary_psychology/g92]. I think it is a reasonable conclusion that anger seems to be a basic universal emotion because ancestors who had the 'right' level of anger reproduced more than those who didn't. Boris just notes that it could be the case that anger is a byproduct of something else, but doesn't note anything about the likelihood of anger being universal in a world where it is helpful (very high) and the likelihood of anger being universal in a world where it is neutral or unhelpful (very low). We can't rule out anger being spurious, but asking to rule that out is mistaken, I think, because the likelihood ratio is so significant. It doesn't make sense to bet against anger being reproductively useful in the ancestral environment (but I think it makes sense to assign a probability to that bet, even if it's not obvious how one would resolve it).
I have several problems with this line of reasoning. First, I am unsure what it means for a story to be true. It's a story -- it arranges a set of facts in a pattern pleasing to the human brain. Not contradicting any known facts is a very low threshold (see Russell's teapot); to call something "true" I'll need more than that, and if a story makes no testable predictions I am not sure on what basis I should evaluate its truth, or what that would even mean. Second, it seems to me that in such situations the likelihoods and so, necessarily, their ratios are very very fuzzy. My meta uncertainty -- uncertainty about probabilities -- is quite high. I might say "story A is weakly more plausible than story B" but my confidence in my judgment about plausibility is very low. This judgment might not be worth anything. Third, likelihood ratios are good when you know you have a complete set of potential explanations. And you generally don't. For open-ended problems the explanation "something else" frequently looks like the more plausible one, but again, the meta uncertainty is very high -- not only do you not know how uncertain you are, you don't even know what you are uncertain about! Nassim Taleb's black swans are precisely the beasties that appear out of "something else" to bite you in the ass.
Ah, by that I generally mean something like "the causal network N with a particular factorization F is the underlying causal representation of reality," and so a particular experiment measures data and then we calculate "the aforementioned causal network would generate this data with probability P" for various hypothesized causal networks. For situations where you can control at least one of the nodes, it's easy to see how you can generate data useful for this. For situations where you only have observational data (like the history of human evolution, mostly), then it's trickier to determine which causal network(s) is(are) best, but often still possible to learn quite a bit more about the underlying structure than is obvious at first glance. So suppose we have lots of historical lives which are compressed down to two nodes, A which measures "anger" (which is integer-valued and non-negative, say) and C which measures "children" (which is also integer valued and non-negative). The story "anger is spurious" is the network where A and C don't have a link between them, and the story "anger is reproductively useful" is the network where A->C and there is some nonzero value a^* of A which maximizes the expected value of C. If we see a relationship between A and C in the data, it's possible that the relationship was generated by the "anger is spurious" network which said those variables were independent, but we can calculate the likelihoods and determine that it's very very low, especially as we accumulate more and more data. Sure. But even if you're only aware of two hypotheses, it's still useful to use the LR to determine which to prefer; the supremacy of a third hidden hypothesis can't swap the ordering of the two known hypotheses! Yes, reversal effects are always possible, but I think that putting too much weight on this argument leads to Anton-Wilsonism (certainty is necessary but impossible). I think we do often have a good idea of what our meta uncertainty looks
I have only glanced at Pearl's work, not read it carefully, so my understanding of causal networks is very limited. But I don't understand on the basis of which data you will construct the causal network for anger and children (and it's actually more complicated, because there are important society-level effects). In what will you "see a relationship between A and C"? On the basis of what will you be calculating the likelihoods?
Ideally, you would have some record. I'm not an expert in evo psych, so I can't confidently say what sort of evidence they actually rely on. I was hoping more to express how I would interpret a story as a formal hypothesis. I get the impression that a major technique in evolutionary psychology is making use of the selection effect [http://en.wikipedia.org/wiki/Selection_effect] due to natural selection: if you think that A is heritable, and that different values of A have different levels of reproductive usefulness, then in steady state the distribution of A in the population gives you information about the historic relationship between A and reproductive usefulness, without even measuring relationship between A and C in this generation. So you can ask the question "what's the chance of seeing the cluster of human anger that we have if there's not a relationship between A and reproduction?" and get answers that are useful enough to focus most of your attention on the "anger is reproductively useful" hypothesis.
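For concreteness, here is a toy simulation of the two-network comparison sketched a couple of comments up; the data, the inverted-U fertility curve, and the per-level Poisson models are all invented purely for illustration, and a real analysis would also have to penalize the extra parameters of the richer model:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Invented toy data: anger level A and number of children C for 1000 "ancestors".
# Assume, purely for illustration, an inverted-U effect of anger on fertility.
n = 1000
A = rng.integers(0, 11, size=n)                  # anger level 0..10
rate = 2.0 + 1.5 * np.exp(-0.5 * (A - 5) ** 2)   # fertility peaks near A = 5
C = rng.poisson(rate)

# "Anger is spurious": C is independent of A, a single Poisson rate for everyone.
loglik_spurious = poisson.logpmf(C, C.mean()).sum()

# "Anger matters": a separate Poisson rate for each anger level (A -> C).
loglik_linked = sum(
    poisson.logpmf(C[A == a], C[A == a].mean()).sum() for a in np.unique(A)
)

# Log likelihood ratio: how much better the A -> C story explains the data.
print("log LR (linked vs spurious):", loglik_linked - loglik_spurious)
```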
Regarding Bayes, you might like my essay [http://cs.stanford.edu/~jsteinhardt/stats-essay.pdf] on the topic, especially if you have statistical training.
That paper did help crystallize some of my thoughts. At this point I'm more interested in wondering if I should be modifying how I think, as opposed to how to implement AI.
You are not alone in thinking the use of Bayes is overblown. It can't be wrong, of course, but it can be impractical to use, and in many real-life situations we might not have specific enough knowledge to be able to use it. In fact, that's probably one of the biggest criticisms of Less Wrong.

Hi folks, I'm Peter. I read a lot of blogs and saw enough articles on Overcoming Bias a few years ago that I was aware of Yudkowsky and some of his writing. I think I wandered from there to his personal site because I liked the writing and from there to Less Wrong, but it's long enough ago I don't really remember. I've read Yudkowsky's Sequences and found lots of good ideas or interesting new ways to explain things (though I bounced off QM as it assumed a level of knowledge in physics I don't have). They're annoyingly disorganized - I realize they were originally written as an interwoven hypertext, but for long material I prefer reading linear silos, then I can feel confident I've read everything without getting annoyed at seeing some things over and over. Being confused by their organization when nobody else seems to be also contributes to the feeling in my last paragraph below.

I signed up because I had a silly solution to a puzzle, but I've otherwise hesitated to get involved. I feel I've skipped across the surface of LessWrong; I subscribe to a feed that only has a couple posts per week and haven't seen anything better. I'm aware there are pages with voting, but I'm wary of the ... (read more)

I'm also wary of a community so tightly focused around one guy. I have only good things to say about Yudkowsky or his writing, but a site where anyone is far and away the most active and influential writer sets off alarm bells. Despite the warning in the death spiral sequence, this community heavily revolves around him.

Yeah, it's a problem. I'd even go so far as to say that it's a cognitive hazard, not just a PR or recruitment difficulty: if you've got only one person at the clear top of a status hierarchy covering some domain, then halo effects can potentially lead to much worse consequences for that domain than if you have a number of people of relatively equal status who occasionally disagree. Of course there's also less potential for infighting, but that doesn't seem to outweigh the potential risks.

There was a long gap in substantive posts from EY before the epistemology sequence, and I'd hoped that a competitor might emerge from that vacuum. Instead the community seems to have branched; various people's personal blogs have grown in relative significance, but LW has stayed Eliezer's turf in practice. I haven't fully worked out the implications, but they don't seem entirely good, especially since most of the community's modes of social organization are outgrowths of LW.

I think a part of the problem with other people filling the "vacuum" left by Eliezer is that when he was writing the sequences it was a large amount of informal material. Since then we've established a lot of very formal norms for main-level posts; the "blog" is now about discussions with a lot of shared background rather than about trying to use a bunch of words to get some ideas out. That is, most of the point of the sequences is laying out ground rules. There's no vacuum left over for anyone to fill, and LW isn't really a "blog" any more, so much as a community or discussion board. And for me, personally, at least, a lot of the attraction of LW and the sequences is not that Eliezer did a bunch of original creative work, but that he verbalized and worked out a bit more detail on a variety of ideas that were already familiar, and then created a community where people have to accept that and are therefore trustworthy. What this "feels like on the inside" is that the community is here because they share MY ideas about epistemology or whatever, rather than because they share HIS ideas, even if he was the one to write them down. Of course YMMV and none of this is a controlled experiment; I could be making up bad post hoc explanations.
Just to be clear, what you say does not contradict the argument you are responding to. You gave a good explanation for why EY has a big influence on the community. It still isn't clear that this is a good thing.
Yes, I'm not arguing that it is a good thing. I'm simply putting forward an explanation for why no one else has stepped in to "fill the vacuum" as some have hoped in other comments; I don't believe there is a vacuum to fill. Also I meant to endorse the idea that Eliezer is like Pythagoras: someone who wrote down and canonized a set of knowledge already mostly present, which is at least LESS DANGEROUS than a group following a set of personal dogma.
Actually, I think that the sequences have a fair number of original ideas. They were enumerated about a year or so ago by Eliezer and Luke in separate posts.

On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality?

Yes. Very yes. There are several different ways to get at that next conceptual level (matrix mechanics, the behavior of the Schrödinger equation, configuration spaces, Hamiltonian and Lagrangian mechanics, to name ones that I know at least a little about), but qualitative descriptions of the Uncertainty Principle, Schrödinger's Cat, Wave-Particle Duality, and the Measurement Problem do not get you to that level.

Rejoice—the reality of quantum mechanics is way more awesome than you think it is, and you can find out about it!

Let me rephrase: I'm sure there is more to cutting edge QM than that which I understand (or even have heard of). Is any of that necessary to engage with the philosophy-of-science questions raised by the end of the Sequence, such as Science Doesn't Trust Your Rationality [http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/]? From a writing point of view, some scientific controversy needed to be introduced to motivate the later discussion - and Eliezer chose QM. As examples go, it has advantages: (1) QM is cutting edge - you can't just go to Wikipedia to figure out who won. EY could have written a Lamarckian / Darwinian evolution sequence with similar concluding essays, but indisputably knowing who was right would slant how the philosophy-of-science point would be interpreted. (2) A non-expert should recognize that their intuitions are hopelessly misleading when dealing with QM, opening them to serious consideration of the new-to-them philosophy-of-science position EY articulates. But let's not confuse the benefits of the motivating example with arguing that there is philosophy-of-science benefit in writing an understandable description of QM. In other words, if the essays in the sequence after and including The Failures of Eld Science [http://lesswrong.com/lw/q9/the_failures_of_eld_science/] were omitted from the Sequence, it wouldn't belong on LessWrong.

Hi, I'm Denise from Germany. I just turned 19 and study maths at university. Right now, I spend most of my time on that and on caring for my 3-year-old daughter. I have known about LessWrong for almost two years now, but never got around to writing. However, I'm more or less involved with parts of the LessWrong and Effective Altruism communities; most of them originally found me via OkCupid (I stated I was a LessWrongian), and it expanded from there.

I grew up in a small village in the middle of nowhere in Germany, very isolated, without any people to talk to. I skipped a grade and did extremely well at school, but was mostly very unhappy during my childhood/teen years. Though I had free internet access, I had almost no access to education until I was 15 years old (and pregnant, and no, that wasn't unplanned), because I had no idea what to look for. I dropped out of school then and prepared, when I had time (I was mostly busy with my child), for the exams I needed to take to be allowed to attend university. In Germany that's extremely unusual, and most people don't even know you can do it without going to school.

When I was 15, I discovered enviromentalism (during pregnancy, via people who share m... (read more)

As another LW'er with kids in Germany, welcome!
That kind of quotation marks isn't customary in English; “these ones” are usually used in typeset materials, but most people just use "the ones on the keyboard" online.
Hi Denise/Kendra, taking care of a small child on your own is a lot already. If on top of that you are also studying and doing EA and LW meetups, that is really quite a lot. I admire what you are accomplishing. I have linked some material on rational parenting on my homepage that you might want to take a look at: http://lesswrong.com/user/Gunnar_Zarncke [http://lesswrong.com/user/Gunnar_Zarncke] One tip (although you probably know this and just couldn't put it into practice): the synergy effects in child-rearing are considerable. It is considerably easier for two parents to care for two children than for two single parents to each care for one child. The same goes for larger groups (which you usually only see when several families get together). Is there no way for you to make use of that? Feel free to ask me questions any time. Greetings from Hamburg, Gunnar
Welcome Denise! :)

This is not an atheist forum, in much the same way that it is not an a-unicorn-ist forum. Not because we do not hold a consistent position on the existence of unicorns, but because the issue itself is not worth discussing. The data has spoken, and there is no reason to believe in them. Whatever. Let's move on to more important things like anthropics and the meta-ethics of Friendly AI.


[This comment is no longer endorsed by its author]
Welcome! The really valuable times are when you get to say those things to yourself - you're the only person you can force to listen :D

So I'm going to write about a) my arguments in favor or religion, though I don't feel they are sufficient and I want to improve them, and b) why I don't fully accept the LW way of thinking.

I'm still thinking about it, and will be until I post to the Discussion...

I expect this is a bad idea. The post will probably get downvoted, and might additionally provoke another spurt of useless discussion. Lurk for a few more months instead, seeking occasional clarification without actively debating anything.

I regard atheism as a slam-dunk issue, but I wouldn't walk into a Mormon forum and call atheism a settled question. 'Twould be logically rude to them.


I have been lurking around here mostly for (rational) self-help. Some info about me:

Married. Work at the India office of a top-tier tech company. 26 y/o.

Between +2 and +2.5 SD IQ. Crystallized >> fluid. Extremely introspective and self-critical. ADHD / mildly depressed most of my life. Have hated 'work' most of my life.

Zero visual working memory (one to two items with training). Therefore struggling with programming computers and not enjoying it. Can write short programs and solve standard interview-type questions. Can't build big functional pieces of software.

Tried to self-medicate two years back. Overdosed on modafinil + piracetam; ended up in the ER with a 130+ heart rate for 8 hours. It induced a panic disorder. Stimulant use is therefore out of the question as of today.

Familiar with mindfulness meditation and spiritual philosophy.

It's quite clear that I can't build large pieces of software. Unsure what productive use I can be put to with these attributes.


That depends on what your goal is. Making enough money to fund a relaxed and happy life? Making tremendous amounts of money? Job satisfaction? Something else entirely?
In terms of goals, I hadn't formalized things, but my mental calculations generally revolve around a) making a lot of money, and b) not burning out (due to competitive stress, for example) while doing so. These seem highly improbable in my current environment, as I don't have the natural characteristics for this to happen. So either a) I adapt (major, almost miraculous changes needed in conscientiousness / working memory etc.) to succeed at top-tier software product development or some other similar high-pay career track, or b) I settle for low-quality / low-challenge work and low pay (IT services? teaching? government bureaucracy?). Jobs in the b) category pay < 20K USD in India, so it won't be a very relaxed existence financially. I had therefore been trying to get a) to work somehow, with minor successes overall. My working memory and conscientiousness are at least bottom quartile, if not bottom decile, in my peer group. Stuck big time in life, therefore.
You may be able to work as a programmer, given some management so that you only work on small pieces at a time. It seems to me that it is actually quite uncommon to be able to comprehend projects of significant size, in programming or elsewhere. Also, maybe you're not that different from other high-IQ individuals. I've always suspected that top scientists, programmers, etc. are at (just an illustrative example) 1 in 1000 on [the metric most directly measured by IQ and similar tests] and 1 in 1000 on a combination of things like integration of knowledge/memory, working space, etc., whereas high-IQ individuals in general aren't very far from average on those other factors and can't usefully access a massive body of knowledge, for example.
The only trouble is that one is expected to mature and tackle larger and larger problems, or alternatively manage a large (and always increasing) business scope, with years under the belt. Both of those capacities are constrained significantly by conscientiousness / working memory / attention deficits.
That's fairly interesting. It seems to be often under-appreciated that IQ (and similar tests) fail to evaluate important aspects of cognition.
Yes. Cognitive ability is quite varied, and I am highly stunted in the visuospatial area. I could never read fiction (no character visuals in my head). The lack of this faculty is also a major bottleneck in comprehension of technical material. I like syntax / discrete math / logic etc., things which depend more on verbal facility.
Welcome! What was your dosage?
Immediate dose: 200 mg modafinil + 800 mg piracetam around 10 am. OD symptoms within 2-3 hours. There was probably significant drug buildup of modafinil over the prior week, I guess; I was taking mostly 200 mg (once 400 mg) a day the preceding week, so I am guessing 300-500 mg built up, making it effectively 500-700 mg modafinil + 800 mg piracetam. Resulted in 170/90 BP + 130-150 HR + severe anxiety for around 8-9 hours. ER docs didn't know what to do. I refused to get admitted to the ICU. It subsided by 10 pm that night. It instigated a panic disorder and a drug phobia, cured by 25 mg sertraline for 6 months. Panic-free (more or less) since. It has left me vigilant about drug interactions and adverse drug effects.
My experience could be useful to LWers experimenting with nootropics as a warning about the dangers of a) drug interactions: there is a need to be very careful while titrating doses up, especially when drugs are taken in combination; your body may manifest novel problems not seen by anyone else; and b) drug buildup: you need to be very careful when estimating effective doses to take drug buildup into account. Even though superficially I was ingesting 200 mg of modafinil, I was effectively on 500+ mg of the drug.
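For anyone wanting to sanity-check that kind of buildup estimate, here is a very crude one-compartment sketch; the ~13-hour half-life and a strict 200 mg every 24 hours are assumptions chosen just for the example (absorption, individual clearance, and the one 400 mg day are all ignored), and none of this is medical advice:

```python
import math

half_life_h = 13.0   # assumed elimination half-life of modafinil
dose_mg = 200.0      # assumed once-daily dose
interval_h = 24.0

k = math.log(2) / half_life_h          # elimination rate constant
residual = math.exp(-k * interval_h)   # fraction remaining when the next dose is taken

amount = 0.0
for day in range(1, 8):                # a week of daily dosing
    amount = amount * residual + dose_mg
    print(f"day {day}: ~{amount:.0f} mg in the body right after dosing")
```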

Hello, Less Wrong; I'm so glad I found you.

A few years ago a particularly fruitful wikiwalk got me to a list of cognitive biases (also fallacies). I read it voraciously, then followed the sources, found out about Kahneman and Tversky and all the research that followed. The world has never quite been the same.

Last week Twitter got me to this sad knee-jerk post on Slate, which in a few message-board-quality paragraphs completely missed the point of this thought experiment by Steve Landsburg, dealing with the interesting question of crimes in which the only harm to the victims is the pain from knowing that they happened. The discussion there, however, was refreshingly above average, and I'll be forever grateful to LessWronger "Henry", who posted a link to the worst argument in the world - which turned out to be a practical approach to a problem I had been thinking about and trying to condense into something useful in a discussion (I was going toward something like "'X-is-horrible-and-is-called-racism' turning into 'We-call-Y-racism-therefore-it's-horrible'").

Since then I've been looking around and it feels... feels like I've finally found my species after a lifet

... (read more)
Know that feeling. I wonder how common a reaction it is, actually ...
Maybe it's just that EY is very persuasive! I'm reminded of what was said about some other polymath (Arthur Koestler I think) that the critics were agreed that he was right on almost everything - except, of course, for the topic that the critic concerned was expert in, where he was completely wrong! So my problem is, whether to just read the sequences, or to skim through all the responses as well. The latter takes an awful lot longer, but from what I've seen so far there's often a response from some expert in the field concerned that, at the least, puts the post into a whole different perspective.
After looking around a little more, I should clarify what I meant perhaps. The part about agreeing with EY (so far) was about psychology, ethics, morality, epistemology, even the little of politics I saw. The "so far" is doing heavy work there, I've only been around for a week, and focusing first on the topics most immediately relevant to my work and studies. More importantly, I haven't touched the physics yet (which from what I've seen in this page is something I should have mentioned), and I'm not qualified to "take sides" if I had. The paragraph was not prompted (only) by EY, but by my marvel at the quality of discussions here. No caveats there, this community has really impressed me. The way it works, not the conclusions, although they're certainly correlated. I'm used to having to defend rationality in a very relevant portion of the discussions I have, before it's possible to move on to anything productive (of course, those tend not to move on at all). This is a breath of fresh air.
Hi, I've been trying to bring together Brazilians capable of thinking for some time now. I run www.ierfh.org and have already spent a month looking into the MIRI side of what the people on this site do. If you find the content of the site's FAQ interesting, send a message to IERFH; there's a Facebook community too, etc...

I don't feel [my arguments in favor of religion] are sufficient and I want to improve them

I know you've heard this from several other people in this thread, but I feel it's important to reiterate: this seems to be a really obvious case of putting the cart before the horse. It just doesn't make sense to us that you are interested only in finding arguments that bolster a particular belief, rather than looking for the best arguments available in general, for all the beliefs you might choose among.

I'm not asking you to respond to this right now, but please keep it firmly in mind for your Discussion post, as it's probably going to be the #1 source of disagreement.

I'm a college student studying music composition and computer science. You can hear some of my compositions on my SoundCloud page (it's only a small subset of my music, but I made sure to put a few that I consider my best at the top of the page). In the computer science realm, I'm into game development, so I'm participating in this thing called One Game A Month whose name should be fairly self-explanatory (my February submission is the one that's most worth checking out - the other 2 are kind of lame...).

For pretty much as long as I can remember, I've enjoyed pondering difficult/philosophical/confusing questions and not running away from them, which, along with having parents well-versed in math and science, led me to gradually hone my rationality skills over a long period of time without really having a particular moment of "Aha, now I'm a rationalist!". I suppose the closest thing to such a moment would be about a year ago when I discovered HPMoR (and, shortly thereafter, this site). I've found LW to be pretty much the only place where I am consistently less confused after reading articles about difficult/philosophical/confusing questions than I am before.

Welcome! Have you done any algorithmic composition?
I did this [https://www.dropbox.com/s/vx1g98r2zz78vmq/rainbow%20dream.zip] and I might try doing a few more pieces like it. You have to click somewhere on the screen to start/stop it.
Fascinating, thanks! A project that's been kicking around in the back of my head for a while is emotional engineering through algorithmic music; it would be great to have a way to generate somewhat novel happy high-energy music during coding that won't sap any attention (I'm sort of reluctant to talk to musicians about it, though, because it feels like telling a chef you'd like a way to replace them with a machine that dispenses a constant stream of sugar :P).
I would also love this. I'm in constant deficit of high-energy music for coding or other similar activities, and often it can take more work finding good music for it than all the coding work I want to do while listening to it (or, conversely, it can take much longer to find good music than the music lasts).
One thing I think would be cool would be some sort of audio-generating device/software/thing that allows arbitrary levels of specificity. So, on one extreme, you could completely specify a fully deterministic stream of sound, and, on the other extreme, you could specify nothing and just say "make some sound". Or you could go somewhere in between and specify something along the lines of "play music for X minutes, in a manner evoking emotion Y, using melody Z as the main theme of the piece".
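Sketching that "dial between fully specified and fully random" idea very loosely in code (every name and design choice here is made up for illustration, not an existing library):

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI note numbers

def generate_melody(length, theme=None, max_step=2, seed=None):
    """Fill `length` time slots: wherever `theme` pins down a note, use it;
    everywhere else, take a small random step within the scale."""
    rng = random.Random(seed)
    theme = theme or {}
    melody = []
    idx = 0
    for t in range(length):
        if t in theme:                         # fully specified slot
            note = theme[t]
            if note in SCALE:
                idx = SCALE.index(note)
        else:                                  # unspecified slot: constrained random walk
            idx = min(max(idx + rng.randint(-max_step, max_step), 0), len(SCALE) - 1)
            note = SCALE[idx]
        melody.append(note)
    return melody

# One extreme: pin every slot (fully deterministic). Other extreme: empty theme
# ("just make some sound"). In between: pin only the recurring theme notes.
print(generate_melody(16, theme={0: 60, 4: 67, 8: 72, 12: 67}, seed=1))
```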
Now that you mention this, I do remember reading some years ago about a machine-learning composition project that had the algorithm generate random streams and learn what music people liked by crowd-sourcing feedback. I think what you've described is a great idea, and I would pay for it. Ideally, it would let me have different-styled streams dependent on what I want to do with the music / what activity I'm doing while listening. Triple bonus points if it can consume an existing piece of music to learn more about some particular style of stream that I want.
There have been a lot of such projects. I like some of the tracks produced by DarwinTunes [http://darwintunes.org/].
Welcome, fellow new person! You've got some wonderful music. Any particular things that interest you in the "confusing question" genre?

Hi, I am Olga, female, 40, programmer, mother of two. Got here from HPMoR. I cannot as yet define myself as a rationalist, but I am working on it. Some rationality questions, used in real-life conversations, have helped me tackle some personal and even family issues. It felt great. In my "grown-up" role I am deeply concerned with bringing up my kids with their thought processes as undamaged as I possibly can manage, and maybe even with balancing out some system-taught stupidity. I am at the start of my reading list on the matter, including the LW sequences.

Welcome! Many people here call themselves aspiring rationalists.

Hello, my name is Lisa. I found this site through HPMOR.

I'm a Georgia Tech student double majoring in Industrial Engineering and Psychology. I know I want to further my education after graduation, probably through a PhD. However, I'm not entirely sure what field I would want to focus on.

I've been lurking for a while and am slowly making my way through the sequences, though I'm currently studying abroad so I'm not reading particularly quickly. I'm particularly interested in behavioral economics, statistics, evolutionary psychology, and education policy, especially in higher education.

Fun fact: my high level of interest in education policy quickly evaporated as soon as I was no longer going to school.

Hello everyone!

I've read occasional OB and LW articles and other Yudkowsky writings for many years, but never got into it in a big way until now.

My goal at the moment is to read the Quantum Physics sequence, since quantum physics has always seemed mysterious to me and I want to find out if its treatment here will dispel some of my confusion. I've spent the last few days absorbing the preliminaries and digressing into many, many prior articles. Now the tabs are finally dwindling and I am almost up to the start of the sequence!

Anyway, I have a question I didn't see in the FAQ. Given that I went on a long, long, long wiki walk and still haven't read very much of the core material, how big is Less Wrong? Has anyone done word counts on the sequences, or anything like that?

The sequences come close to a million words. [http://lesswrong.com/lw/555/96_bad_links_in_the_sequences/3ydl]

Hello there, everyone! I am Osiris, and I came here at the request of a friend of mine. I am familiar with Harry Potter and the Methods of Rationality, and spent some time reading through the articles here. Everythin' here is so interesting! I studied to become a Russian Orthodox Priest in the early nineties, and moved to the USA from the Russian Federation at the beginning of the W. Bush Administration. The change of scenery inspired me, and within the first year, I had become an atheist and learned everything I could about biology, physics, and modern philosophy. Today, I am a philosophy/psychology major at a local college, and work to change the world one little bit at a time.

Though I tend to be a bit of a poet, I hope I can find a place here. In particular, I am interested in thinking of morality and the uses of mythology in daily life.

I value maintaining and increasing diversity, and plan on posting a few things which relate to this as soon as possible. I am curious to see how everyone will react to my style of presentation and beliefs.

Diversity of what, exactly?
Hi Osiris, and welcome! If you're looking for awesome things that a poet can offer Less Wrong, there are people looking to create meaningful rationalist holidays with a sense of ritual to them [http://lesswrong.com/lw/9aw/designing_ritual/].

Hi everyone,

I'm a humanities PhD who's been reading Eliezer for a few years, and who's been checking out LessWrong for a few months. I'm well-versed in the rhetorical dark arts, due to my current education, but I also have a BA in Economics (yet math is still my weakest suit). The point is, I like facts despite the deconstructivist tendency of humanities since the eighties. Now is a good time for hard-data approaches to the humanities. I want to join that party. My heart's desire is to workshop research methods with the LW community.

It may break protocol, but I'd like to offer a preview of my project in this introduction. I'm interested in associating the details of print production with an unnamed aesthetic object, which we'll presently call the Big Book, and which is the source of all of our evidence. The Big Book had multiple unknown sites of production, which we'll call Print Shop(s) [1-n]. I'm interested in pinning down which parts of the Big Book were made in which Print Shop. Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book. Likewise with Print Shop 2 and their Tools (2). Unfortunately, people in the present don't know which Print Shop... (read more)

I'm interested in associating the details of print production with an unnamed aesthetic object, which we'll presently call the Big Book, and which is the source of all of our evidence.

It's the Bible, isn't it.

Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book. Likewise with Print Shop 2 and their Tools (2). Unfortunately, people in the present don't know which Print Shop had which Tools. Even worse, multiple sets of Tools can leave similar Marks.

How can you possibly get off the ground if you have no information about any of the Print Shops, much less how many there are? GIGO.

I'm far from an expert in Bayesian methods, but it seems already that there's something missing here.

Have you considered googling for previous work? 'Bayesian inference in phylogeny' and 'Bayesian stylometry' both seem like reasonable starting points.

Not quite. You can get quite a bit of insight out of unsupervised clustering.
'No free lunches', right? If you're getting anything out of your unsupervised methods, that just means they're making some sort of assumptions and proceeding based on those.
Right, but this isn't a free lunch so much as "you can see a lot by looking."
Sorry to interrupt a perfectly lovely conversation. I just have a few things to add:

* I may have overstated the case in my first post. We have some information about print shops. Specifically, we can assign very small books to print shops with a high degree of confidence. (The catch is that small books don't tend to survive very well. The remaining population is rare and intermittent in terms of production date.)
* There are some hypotheses that could be treated as priors, but they're very rarely quantified (projects like this are rare in today's humanities).
Interesting feedback. Ha, I wish. No, it's more specific to literature. We have minimal information about Print Shops. I wouldn't say the existing data are garbage, just mostly unquantified. Yes, but thanks to you I know the shibboleth of "Bayesian stylometry." Makes sense, and I've already read some books in a similar vein, but there are some problems. Most fundamentally, I have trouble translating the methods to a different type of data: from textual data like word length to the aforementioned Marks. Otherwise, my understanding of most stylometric analysis was that it favors frequentist methods. Can you clear any of this up? EDIT: I have a follow-up question regarding GIGO: How can you tell what data are garbage? Are the degrees of certainty based on significant digits of measurement, or what?
Have to define your features somehow. Really? I was under the opposite impression, that stylometry was, since the '60s or so with the Bayesian investigation of Mosteller & Wallace into the Federalist papers, one of the areas of triumph for Bayesianism. No, not really. I think I would describe GIGO in this context as 'data which is equally consistent with all theories'.
This is a problem that machine learning can tackle. Feel free to contact me by PM for technical help. To make sure I understand your problem:

* We have many copies of the Big Book. Each copy is a collection of many sheets.
* Each sheet was produced by a single tool, but each tool produces many sheets.
* Each shop contains many tools, but each tool is owned by only one shop.
* Each sheet has information in the form of marks. Sheets created by the same tool at similar times have similar marks.
* It may be the case that the marks monotonically increase until the tool is repaired.

Right now, we have enough to take a database of marks on sheets and figure out how many tools we think there were, how likely it is each sheet came from each potential tool, and to cluster tools into likely shops. (Note that a 'tool' here is probably only one repair cycle of an actual tool, if they are able to repair it all the way to freshness.) We can either do this unsupervised, and then compare to whatever other information we can find (if we have a subcollection of sheets with known origins, we can see how well the estimated probabilities did), or we can try to include that information for supervised learning.
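To make the unsupervised route concrete, here is a minimal sketch in Python; the binary coding of the Marks, the choice of k-means, and the data are all assumptions for illustration, not anything from the actual project:

```python
# Minimal sketch: cluster sheets by their Marks to guess at tools.
# Assumes each sheet has already been coded as a binary vector
# (1 = mark present, 0 = absent); the data below are invented.
import numpy as np
from sklearn.cluster import KMeans

sheets = np.array([          # rows = sheets, columns = marks A..E
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
])

# Guess at the number of tools; in practice you would compare several
# values of k with a criterion such as the silhouette score.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sheets)
print(kmeans.labels_)        # tentative tool assignment per sheet
```

The same matrix could later feed a supervised classifier, once securely attributed sheets are available as labels.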

That's a hell of a summary, thanks!

I'm glad you mentioned the repair cycle of tools. There are some tools that are regularly repaired (let's just call them "Big Tools") and some that aren't ("Little Tools"). Both are expensive to acquire and to repair, but it seems the Print Shops chose to repair Big Tools because they were subject to breakage that significantly reduced performance.

I should add another twist since you mentioned sheets of known origins: Assume that we can only decisively assign origins to single sheets. There are two problems stemming from this assumption: first, not all relevant Marks are left on such sheets; second, very few single sheet publications survive. Collations greater than one sheet are subject to all of the problems of the Big Book.

I'm most interested in the distinction between unsupervised and supervised learning. And I will very likely PM you to learn more about machine learning. Again, thanks for your help!

EDIT: I just noticed a mistake in your summary. Each sheet is produced by a set of tools, not a single tool. Each mark is produced by a single tool.

Okay. Are the classes of marks distinct by tool type - that is, if I see a mark on a sheet, I know whether it came from tool type X or tool type Y - or do we need to try and discover what sort of marks the various tools can leave?
Fortunately, we know which tool types leave which marks. We also have a very strong understanding of the ways in which tools break and leave marks. Thanks again for entertaining this line of inquiry.
Good point! Also yay combining multiple fields of knowledge and expertise! *applause* Seriously though, the world does need more of it, and I felt the need to explicitly reward and encourage this.
Any time you are doing statistical analysis, you always want a sample of data that you don't use to tune the model and where you know the right answer (a 'holdout' sample). In this case, you should have several books related to the various print shops that you don't feed into your Bayesian algorithm. You can then assess the algorithm by seeing if it gets these books correct. To account for the decay of the books, you need books that you know not only came from print shop x, y, or z, but also you'd need to know how old the tools were that made those books. Either that, or you'd have to have some understanding of how the tools decay from a theoretical model.
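As a rough illustration of the holdout idea (the features, labels, and model below are invented; a real analysis would hold out some of the securely attributed sheets instead):

```python
# Sketch of a holdout evaluation with made-up data: fit on one part of
# the attributed sheets, score on the part never used for tuning.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 12))   # fake binary Mark features
y = rng.integers(0, 3, size=200)         # fake Print Shop labels 0..2

X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = BernoulliNB().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_hold, y_hold))
```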
Very helpful points, thanks. The scholarly community already has a pretty good working knowledge of the Tools, and thus the theoretical model of Tool breakage ("breakage" may be more accurate than "decay," since the decay is non-incremental and stochastic). We know the order in which parts of the Tools break, and we have some hypotheses correlating breakage to gross usage. The twist is that we don't know when any Print Shops produced the Big Book, so we can only extrapolate a timeline based on Tool breakage. Can you say more about the holdout sample? Should the holdout sample be a randomly selected sample of data, or something suspected to be associated with Print Shops [x, y, z]? Print Shops [a, b, c]?
If you assume that the marks result from defects in the tool that accumulate, it should be relatively easy to build (and test) a monotonic model. Suppose we have an unordered collection of sheets, with some variable number of defects per sheet. If the defects are repeated (i.e. we can recognize defect A whenever we see it, as well as B, and so on), then we can piece together paths: all of the sheets without defects pointing towards all of the sheets with just defect A, then defect A and B, and so on. There should be divergence: if we never see sheets with both defect A and C, then we can conclude the 0-A-B path is one tool (with only some of the 0-defect sheets coming from that tool, obviously), the 0-C-D-E path is another tool, and the 0-F-G path is a third tool. (Noting that here 'tool' refers to one repair cycle, not the entire lifecycle.)
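A toy version of that path-building logic, with invented defect data (it only illustrates the subset/divergence idea, not a real attribution method):

```python
# Each sheet is the set of recognizable defects visible on it (invented).
# Two defect sets lie on the same path (same tool / repair cycle) only if
# one is a subset of the other; divergence marks a different tool.
sheets = [set(), {"A"}, {"A", "B"}, {"C"}, {"C", "D"}, {"C", "D", "E"}, {"F"}]

# Maximal defect sets are the tips of candidate paths.
maximal = [s for s in sheets
           if s and not any(s < other for other in sheets)]

for tip in maximal:
    path = sorted((s for s in sheets if s <= tip), key=len)
    print("candidate tool (one repair cycle):", path)
```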
The first assumption seems bad to me- I would assume defects accumulate only until equipment is reset or repaired, which is why I think you'd want some actual data.
That looks to me like it agrees with my assumption; I suspect my grammar is somehow unclear. (Note the last line of the grandparent.)
How about talking clearly about whatever you are currently hinting at?
I dunno, I find the complexity-hiding capitalized-noun thing strangely attractive. Maybe there should be more capitalized nouns. Why isn't Sheets capitalized? This is probably coming back to my fascination with graph theory, which has similar but even more exotic terminology. "A spider is a subdivision of a star, which is a kind of tree made up only of leaves and a root; a star with three arcs is called a claw."
I was openly warned by a professor (who will likely be on the dissertation committee) not to talk about this project widely. The capitalized nouns are to highlight key terms. I believe the current description is specific enough to describe the situation accurately and without misleading people, but not too specific to break my professor's (correct) advice. Have I broken LW protocol? Obviously, I'm new here.

because I haven't wrapped it up in condescending niceties?

Being nice is important.

If that's still too ambiguous to render an opinion, what isn't?

Kindergarten level insults like "Mormon sort-of-rhymes with Moron" aren't just an expression of opinion. Mormon would be sort-of-rhyming with Moron, even if Mormonism had been true. What you instead expressed is a cutesy and juvenile way of insulting someone: "The mormon is a moron, the mormon is a moron, hahahaha!"


I found HPMOR nearly three years ago. Soon afterward, I finished the core sequences up through the QM sequence, read some of Eliezer's other posts, and other sequences and authors on LW. When I look back, I realize my thinking has been hugely influenced by what I have learned from this community. I cannot even begin to draw boundaries in my mind identifying what exactly came from LW; hopefully this means I have internalized the ideas and that I am actually using what I learned.

There is a story behind why I have now, after three years of lurking, finally created an account. I am currently a sophomore in high school. I have always been driven to learn by my curiosity and desire for truth and knowledge. But I am also a perfectionist and an overachiever. Somehow, in the last two years of high school, I began to latch onto academics as my “goal.” I started obsessing about ridiculous things - getting perfect scores on every assignment and test, guarding my perfect GPA, etc. It wasn't enough anymore that I understood the content without needing to study - I had to devote huge amounts of time and energy to achieve "perfection."

In March, over spring break, I returned to make some ... (read more)

Ooh, good school, I went there, best of luck.

Hi everyone,

I'm a PhD student in artificial intelligence/robotics, though my work is related to computational neuroscience, and I have strong interests in philosophy of mind, meta-ethics and the "meaning of life". Though I feel that I should treat finishing my PhD as a personal priority, I like to think about these things. As such, I've been working on an explanation for consciousness and a blueprint for artificial general intelligence, and trying to conceive of a set of weighted values that can be applied to scientifically observable/measurable/calculable quantities, both of which have some implications for an explanation of the "meaning" of life.

At the center of the value system I'm working on is a broad notion of "information". Though still at preliminary stages, I'm considering a hierarchy of weights for the value of different types of information, and trying to determine how bad this is as a utility function. At the moment, I consider the preservation and creation of all information valuable; at an everyday level I try to translate this into learning and creating new knowledge and searching for unique, meaningful experiences.

I've been aware of Le... (read more)

Greetings, LessWrongers. I call myself Intrism; I'm a serial lurker, and I've been hiding under the cupboards for a few months already. As with many of my favorite online communities, I found this one multiple times, through Eliezer's website, TVTropes, and Methods of Rationality (twice), before it finally stuck. I am a student of computer science, and greatly enjoy the discipline. I've already read many of the sequences. While I can't say I've noticed an increase in rationality since I've started, I have made some significant progress on my akrasia, including recently starting on an interesting but unknown LW-inspired technique which I'll write up once I have a better idea of how well it's performing.

Thank you for introducing me to the term akrasia!

How important are scholarly credentials vs just having that knowledge without a diploma?

I think in almost every field and occupation, having the scholarly credentials is extremely important. Knowledge without the credentials is pretty worthless (unless it's worthwhile in itself, but even then you can't eat it): using that knowledge will generally require that people put trust in your having it, often when they're not in a position to evaluate how much you know (either because they're not experts, or they don't have the time). Credentials are generally therefore the basis of that trust. Since freelance work either requires more trust, or pays very badly and inconsistently, credentials are worth getting.

And that was the point of my previous post: some way or other, you have to earn people's trust that you can do a job worth paying you for. One way to earn that trust is to perform well despite lacking credentials. This will take an enormous amount of time and effort (during which you will not be paid, or at least not well) compared to doing whatever it takes to get as close to a 4.0 as you can. The faster you get people to trust you, the faster you can stop fighting to feed and she... (read more)


I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me I said "but there must be some out there."

This is a very odd epistemic position to be in.

If you expect there to be strong evidence for something, that means you should already strongly believe it. Whether or not you will find such evidence or what it is, is not the interesting question. The interesting question is why do you have that strong belief now? What strong evidence do you already posses that leads you to believe this thing?

If you haven't got any reason to believe a thing, then it's just like all the other things you don't have reason to believe, of which there are very many, and most of them are false. Why is this one different?

The correct response, when you notice that a belief is unsupported, is to say oops and move on. The incorrect response is to go looking specifically for confirming evidence. That is writing the bottom line in the wrong place, and is not a reliable truth-finding procedure.

Also, "debate style" arguments are generally frowned upon around here. Epistemology is between you and God, so to speak. Do your thing, collect your evidence, come to your conclusions. This community is here to help you learn to find the truth, not to debate your beliefs.

That's a very good point. From what I've seen, most Christians who debate atheists end up using all kinds of convoluted philosophical arguments to support their position -- whereas in reality, they don't care about these arguments one way or another, since these are not the arguments that convinced them that their version of Christianity is true. Listening to such arguments would be a waste of my time, IMO.
The same is the case for a lot of atheist arguments. See my comment here [http://lesswrong.com/lw/h3p/welcome_to_less_wrong_5th_thread_march_2013/8zm1].
Yeah, you make a good point when you say that we need "Bayesian evidence", not just the folk kind of "evidence". However, most people don't know what "Bayesian evidence" means, because this is a very specific term that's common on Less Wrong but approximately nowhere else. I don't know a better way to put it, though. That said, my comment wasn't about different kinds of evidence necessarily. What I would like to hear from a Christian debater is a statement like, "This thing right here? This is what caused me to become a Reformed Presbilutheran in the first place." If that thing turns out to be something like, "God spoke to me personally and I never questioned the experience" or "I was raised that way and never gave it a second thought", that's fine. What I don't want to do is sit there listening to some new version of the Kalaam Cosmological Argument (or whatever) for no good reason, when even the person advancing the argument doesn't put any stock in it.
I was raised Roman Catholic. I did give it a second thought; I found, through my life, very little evidence against the existence of God, and some slight evidence for the existence of God. (It doesn't communicate well; it's all anecdotal). I do find, on occasion, that the actions of God are completely mysterious to me. However, an omniscient being would have access to a whole lot of data that I do not have access to; in light of that, I tend to assume that He knows what He is doing. The existence of God also implies that the universe has some purpose, for which it is optimised. I'm not quite sure what that purpose is; the major purpose of the universe may be something that won't happen for the next ten billion years. However, trying to imagine what the purpose could be is an interesting occasional intellectual exercise.

I found, through my life, very little evidence against the existence of God

May I ask what you expected evidence against the existence of God to have looked like?

That is entirely the right question to ask. And the answer is, I don't have the faintest idea. The question there is, what would a universe without God look like? And that question is one that I can't answer. I'd guess that such a universe, if it were possible, would have more-or-less entirely arbitrary and random natural laws; I'd imagine that it would be unlikely to develop intelligent life; and it would be unlikely for said intelligent life, if it developed, to be able to gather any understanding of the random and arbitrary natural laws at all. The trouble is, this line of reasoning promptly falls into the same trouble as any other anthropic argument. The fact that I'm here, thinking about it, means that there is intelligent life in this universe. So a universe without intelligent life is counterfactual, right from the start. I knew that when I started constructing the argument; I can't be sure that I'm not constructing an argument that's somehow flawed. It's very easy, when I'm sure of the answer, to create an argument that's more rationalising than rationality; and it can be hard to tell if I'm doing that.

Doesn't this argument Prove Too Much by also showing that without a Metagod, God should be expected to have arbitrary and random governing principles? The universe is ordered, but trying to explain that by appealing to an ordered God begs the question of what sort of ordered Metagod constructed the first one.

Richard Dawkins does. The universe we see (he says somewhere; this is not a quote) is exactly what a world without God would look like: a world in which, on the whole, to live is to suffer and die for no reason but the pitiless working out of cause and effect, out of which emerged the blind, idiot god of evolution. A billion years of cruelty so vast that mountain ranges are made of the dead [http://en.wikipedia.org/wiki/Limestone]. A world beyond the reach of God [http://lesswrong.com/lw/uk/beyond_the_reach_of_god/].
To be fair, this type of argument only eliminates benevolent and powerful gods. It does not screen out actively malicious gods, indifferent gods, or gods who are powerless to do much of anything.
I don't see what's so bad about mountain ranges being made of dead bodies. The creatures that once used those bodies aren't using them anymore - those mere atoms might as well get recycled to new uses. The problem of death is countered by the solution of the afterlife; an omniscient God would know exactly what the afterlife is like, and an omniscient benevolent God could allow death if the afterlife is a good place. (I don't have any proof of the existence of the afterlife at hand, unfortunately). Suffering, now; suffering is a harder problem to deal with. Which leads around to the question - what is the purpose of the universe? If suffering exists, and God exists, then suffering must have been put into the universe on purpose. For what purpose? A difficult and tricky question. What I suspect, is that suffering is there for its long-term effects on the human psyche. People exposed to suffering often learn a lot from it, about how to handle emotions; people can form long-term bonds of friendship over a shared suffering, can learn wisdom by dealing with suffering. Yes, some people can shortcut the process, figuring out the lessons without undergoing the lesson; but many people can't.

Suffering, now; suffering is a harder problem to deal with. Which leads around to the question - what is the purpose of the universe? If suffering exists, and God exists, then suffering must have been put into the universe on purpose. For what purpose? A difficult and tricky question.

What I suspect, is that suffering is there for

This is using your brain as an outcome pump. Start with a conclusion to be defended, observations that prima facie blow it out of the water, and generate ideas for holding onto the conclusion regardless. You can do it with anything, and it's an interesting exercise in creative thinking to come up with a defence of propositions such as that the earth is flat, that war is good for humanity, or that you're Jesus. (Also known as retconning.) But it is not a way of arriving at the truth of anything.

What your outcome pump has come up with is:

What I suspect, is that suffering is there for its long-term effects on the human psyche.

War really is good for humanity! But what then is the optimal amount of suffering? Just the amount we see? More? Less?

I expect that the answer is that the omniscience and omnibenevolence of God imply that what we see is indeed just... (read more)

What makes suffering any harder a problem than death? Surely the same strategy works equally well in both cases. More precisely... the "solution of the afterlife" is to posit an imperceptible condition that makes the apparent bad thing not so bad after all, despite the evidence we can observe. On that account, sure, it seems like we die, but really (we posit) only our bodies die and there's this other non-body thing, the soul, which is what really matters which isn't affected by that. Applied to suffering, the same solution is something like "sure, it seems like we suffer, but really only our minds suffer and there's this other non-mind thing, the soul, which is what really matters and which isn't affected by that." Personally, I find both of these solutions unconvincing to the point of inanity, but if the former is compelling, I see no reason to not consider the latter equally so. If my soul is unaffected by death, surely it is equally unaffected by (e.g.) a broken arm?
As far as I can tell, most arguments of this kind hinge on that "slight evidence for the existence of God" that you mentioned. Presumably, this is the evidence that overcomes your low prior of God's existence, thus causing you to believe that God is more likely to exist than not. Since the evidence is anecdotal and difficult (if not impossible) to communicate, this means we can't have any kind of a meaningful debate, but I'm personally ok with that.
The problem here is that there is confusion between two senses of the word 'evidence': a) any Bayesian evidence b) evidence that can be easily communicated across an internet forum.

You are fixating on atheism for some reason. Assigning low probability to any particular religion, and only a marginally higher probability to some supernatural creator still actively shaping the universe results naturally from rationally considering the issue and evaluating the probabilities. So do many other conclusions. This reminds me of the creationists picking a fight against evolution, whereas they could have picked a fight against Copernicanism, the way flat earthers do.

Actually, the behavior Risto_Saarelma described fits the standard pattern. People who cannot be helped are ignored or rejected. Take any stable community, online or offline, and that's what you see.

For example, if someone comes to, say, the freenode ##physics IRC channel and starts questioning Relativity, they will be shown where their beliefs are mistaken, offered learning resources and have their basic questions answered. If they persist in their folly and keep pushing crackpot ideas, they will be asked to leave or take it to the satellite off-topic channel. If this doesn't help, they get banned.

Again, this pattern appears in every case where a community (or even a living organism) is viable enough to survive.

Saluton! I'm an ex-mormon atheist, a postgenderist, a conlanging dabbler, and a chronic three-day monk.

Looking at the above posts (and a bunch of other places on the net), I think ex-mormons seem to be more common than I thought they would be. Weird.

I'm a first-year college student studying only core/LCD classes so far because every major's terrible and choosing is scary. Also, the college system is madness. I've read lots of posts on the subject of higher education on LessWrong already, and my experience with college seems to be pretty common.

I discovered LessWrong a few months ago via a link on a self-help blog, and quickly fell in love with it. The sequences pretty much completely matched up with what I had come up with on my own, and before reading LW I had never encountered anyone other than myself who regularly tabooed words and rejected the "death gives meaning to life" argument et cetera. It was nice to find out that I'm not the only sane person in the world. Of course, the less happy side of the story is that now I'm not the sanest person in my universe anymore. I'm not sure what I think about that. (Yes, having access to people that are smarter than me ... (read more)

What will you do now that you can't form a movement of rationalists? Take over world? Become a superhero? Invent the best recipe for cookies? MAINTAIN AND INCREASE DIVERSITY? For example, I am going to post a recipe for a bacon trilobite and my experiences and thoughts about paperclipping among humans. Any interesting things you be thinkin' of postin'? ^^

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions. I am also suspicious that any recounting whatsoever of what went wrong will be greeted by, "But that's not exactly what the most sophisticated theologians say, even if it's what you remember perfectly well being taught in school!"

This obviously won't be true in my own case since Orthodox Jews who stay Orthodox will put huge amounts of cumulative effort into learning their religion's game manual over time. But by the same logic, I'm pretty sure I'm talking about a very standard element of the religion when I talk about later religious authorities being presumed to have immensely less theological knowledge than earlier authorities and hence no ability to declare earlier authorities wrong. As ever, you do not need a doctorate in invisible sky wizard to conclude that there is no invisible sky wizard, and you also don't need to know all the sophisticated excuses for why the invisible sky wizard you were told about is not exactly what the most sophisticated dupes believe they believe in (even as they go on telling children abo... (read more)

The trouble with this heuristic is it fails when you aren't right to start with. See also: creationists. That said, you do, in fact, seem to understand the claims theologians make pretty well, so I'm not sure why you're defending this position in the first place. Arguments are soldiers?

Well, I probably know even less about your former religion than you do, but I'm guessing - and some quick google-fu seems to confirm - that while you are of course correct about what you were taught, the majority of Jews would not subscribe to this claim. You hail from Orthodox Judaism, a sect that contains mostly those who didn't reject the more easily-disproved elements [http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/] of Judaism (and indeed seems to have developed new beliefs guarding against such changes, such as the concept of a "written and oral Talmud" that includes the teachings of earlier authorities). Most Jews (very roughly 80%) belong to less extreme traditions, and thus, presumably, are less likely to discover flaws in them. Much like the OP belonging to a subset of Mormons who believe in secret polar Israelites.

Again, imagine a creationist claiming that they were taught in school that a frog turned into a monkey, dammit, and you're just trying to disguise the lies you're feeding people by telling them they didn't understand properly! If a claim is true, it doesn't matter if a false version is being taught to schoolchildren (except insofar as we should probably stop that). That said, disproving popular misconceptions is still bringing you closer to the truth - whatever it is - and you, personally, seem to have a fair idea of what the most sophisticated theologians are claiming in any case, and address their arguments too (although naturally I don't think you always succeed, I'm not stupid enough to try and prove that here).
I believe the result [http://www.pewforum.org/U-S-Religious-Knowledge-Survey.aspx] is that atheists have an above-average knowledge of world religions, similar to Jews (and Mormons), but I don't know of results that show they have an above-average knowledge of their previous religion. Assuming most of them were Christians, then the answer is possibly. In this particular case I happen to know precisely what is in all of the official church material; I will admit to having no idea where his teachers may have deviated from church publications, hence me wondering where he got those beliefs. I suppose I can't comment on what the average believer of various other sects knows of their sect's beliefs, only on what I know of those sects' beliefs. Which leaves the question of plausibility that I know more than the average believer of, say, Catholicism or Evangelical Christianity or other groups not my own. [edit] Eliezer, I am not exactly new to this site and have previously responded in detail to what you have written here. Doing so again would get the same result as last time.

Alright. Hi. I'm a senior in high school and thinking about majoring in Computer Science. Unlike most other people my age, this is probably my first post on any chat forum/ wiki/ blog. I also don't normaly type things without a spell checker and would like to get better. Any coments about my spelling or anything else would be appriciated.

My brother showed me this site a while back and also HP:MoR. Spicificly, I saw the Sequences. And they were long. Some of them were some-what interesting but mostly they were just long. In addition to that, I had just been introduced to the Methods of Rationality which, dispite being long, was realy interisting (actualy my favorite story that I have ever read), and there was some other things, so yeah . . . I still haven't read them. But anyway, that was about a year ago and at this point I have read through MoR at least three times. I feel that I am starting to think sort of rationaly and would like to improve on that.

In addition to that, I have this friend that I talk to at lunch. Normaly we talk about things that we probably don't have any ideas about that actualy reflect reality, like the origins of the universe, time travel, artificial intel... (read more)

Since you asked... "comments", "appreciated". Welcome to LessWrong!
Welcome! I should probably write a post, "Why not to major in computer science." My advice is to be aware that there is almost no money in the world budgeted to computer science research, that most people can't even conceive of or believe in the concept of computer science research, and that a degree in computer science leads only to jobs as a computer programmer unless it is from a top-five school.

jobs as a computer programmer

You say that like it's a bad thing.

Hi everyone, I'm labachevskij. I'm a long time lurker on this site, attracted by (IIRC) Bayesian Decision Theory. I'm completing my PhD studies in Maths, but I have also been caught by HPMOR, which is proving a huge source of procrastination (I'm reading it again for the third time). I'm also on my way with the reading of the sequences.

Welcome labachevskij! What part of Math are you focusing on?

carefully evaluating both sides of an issue

Are we ever allowed to say "okay, we have evaluated this issue thoroughly, and this is our conclusion; let's end this debate for now"? Are we allowed to do it even if some other people disagree with the conclusion? Or do we have to continue the debate forever (of course, unless we reach the one very specific predetermined answer)?

Sometimes we probably should doubt even whether 2+2=4. But not all the time! Not even once in a month. Once or twice in a (pre-Singularity) lifetime is probably more than necessary. -- Well, it's very similar for the religion.

There are thousands of issues worth thinking about. Why waste the limited resources on this specific topic? Why not something useful... such as curing cancer, or even inventing a better mousetrap?

Most of us have evaluated both sides of this issue. Some of us did it for years. We did it. It's done. It's over. -- Of course, unless there is something really new and really unexpected and really convincing... but so far, there isn't anything. Why debate it forever? Just because some other people are obsessed?

So, I basically agree with you, but I choose to point out the irony of this as a response to a thread gone quiet for months.
LOL I guess instead of the purple boxes of unread comments, we should have two colors for unread new comments and unread old comments. (Or I should learn to look at the dates, but that seems less effective.)
(blinks) Oh, is THAT what those purple boxes are!?! *learns a thing*
Wait, what purple boxes? Am I missing something?
As I respond to this, your comment is outlined in a wide purple border. When I submit this response, I expect that your comment will no longer be outlined, but my comment will. If I refresh the screen, I expect neither of ours will. This has been true since I started reading LW again recently, and I have mostly been paying no attention to it, figuring it was some kind of "current selection" indicator that wasn't working very well. But if it's an "unread comment" indicator, then it works a lot better. Edit - I was close. When I submit, your comment is still purple, and mine isn't. If I refresh once, yours isn't and mine is. If I refresh again, neither is.
Oh now I see. Both of our comments are purple-boxed. Let's see what happens when I comment and refresh.

Hello! I’m a 15 year old sophomore in high school, living in the San Francisco Bay Area. I was introduced to rationality and Less Wrong while interning at Leverage Research, which was about a month ago.

I was given a free copy of Chapters 1-17 of HPMOR during my stay. I was hooked. I finished the whole series in two weeks and made up my mind to try and learn what it would be like being Harry.

I decided to learn rationality by reading and implementing The Sequences in my daily life. The only problem was, I discovered the length of Eliezer's posts from 2006-2010 was around 10 Harry Potter books. I was told it would take months to read, and some people got lost along the way due to all the dependencies.

Luckily I am very interested in self improvement, so I decided that I should learn speed reading to avoid spending months dedicated solely to reading The Sequences. After several hours of training, I increased my reading speed (with high comprehension) five times, from around 150 words per minute to 700 words per minute. At that speed, it will take me 33.3 hours to read The Sequences.
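(For what it's worth, those figures hang together; a quick check using only the numbers quoted above:)

```python
# Sanity check on the figures above (nothing here beyond their arithmetic).
wpm = 700                         # claimed reading speed
hours = 33.3                      # claimed total time for The Sequences
words = wpm * 60 * hours
print(f"implied length: {words:,.0f} words")            # ~1.4 million
print(f"at 150 wpm: {words / (150 * 60):.0f} hours")    # ~155 hours
```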

It seems like most people advise reading The Sequences in chronological order in ebook form. I... (read more)

If I could spend 5 seconds to a minute after each blog post doing anything, what should I do?

Figure out how you would explain the main idea of the post to a smart friend.

Thanks! Just curious, how come you chose that over simply taking short 10 second notes allowing me to memorize all the main ideas?
IIRC notetaking is supposed to work less well than explaining something to others. I don't know about imagining how to explain something to others.
I would imagine that actually explaining it out loud to a rubber duck is better than imagining explaining it to a friend, for the same reasons that it is a common debugging practice [https://en.wikipedia.org/wiki/Rubber_duck_debugging]. Actually putting something into words makes weak spots in understanding obvious in a way that imagination can glide over.
Perhaps note taking works less well for understanding, but explaining it out loud without recording it or writing my explanation down will do very little for long-term recall. What good will it do if I forget everything I read, after spending many hours reading it?
At first, I think I will try explaining ideas out loud as I read to save time, then write ultrashort notes on main ideas for long term memory. Thanks for everyone's help!
Both would work but my idea is less obvious so perhaps more helpful.
That's an interesting idea. I suppose it might help with better understanding the concept, but it might not work for long term memorization. Should I write the explanations down?
That would probably help if you have the time.
Welcome! As you're interested in applying the Sequences to your daily life, I suggest checking out the Center for Applied Rationality [http://rationality.org/]. (Maybe you overlapped with them at Leverage?) As part of their curriculum development process, they offer free classes at their Berkeley office sometimes. If you sign up here [https://docs.google.com/spreadsheet/embeddedform?formkey=dDhnS1RaNWc0NGFiZEY2ZVV2NjgyRHc6MA] you'll be put on a mailing list where they announce these sessions, usually a day or so in advance.
Thanks, I just signed up. Do you think taking a full CFAR workshop would be a good next step after The Sequences? I'll be done in about 4 days at current reading speed (no planning fallacy adjustments), so I should probably plan ahead now.
It would definitely be a good next step. I don't know if they have a minimum age for workshops, but it doesn't hurt to apply.
I don't believe they have age constraints, the issue is the monetary constraints :p Thanks for your help!
They offer financial aid, too.
Since I have a total of $23, I must get my parents to pay and allow me to go for a week; that will be the tricky part.
People might not like my response, but I'd say that if you're in a situation where you believe something might be beneficial to you but it consumes a substantial portion of your resources, you should heavily lean towards not going. This applies as much to a rationality workshop attended by someone with a tiny budget as it applies to playing the stock market. Making large expenditures for an uncertain return is generally a bad bet even if the expected gain looks positive, if failure has a very negative consequence. And human beings are notoriously bad at assessing the expected value in such situations. You also need to be very confident in your ability to evaluate arguments [http://squid314.livejournal.com/350090.html] if you don't want to end up worse than before. Obviously, this doesn't apply if you're absolutely certain that going gives you more benefit than you forego in money, time, and parental willingness to give in (which may, in fact, be in limited supply) so there is no risk of loss, but not too many people are really that certain.
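One way to make that intuition concrete (a toy sketch; the numbers and the log-utility assumption are mine, not the parent's):

```python
# A bet with positive expected *money* can still have negative expected
# *utility* for someone with a small bankroll; all numbers are invented.
import math

bankroll = 1000.0      # total resources
cost = 800.0           # price of the workshop / bet
p_win = 0.5            # chance it pays off
payoff = 2000.0        # gross benefit if it does

ev_money = p_win * (payoff - cost) + (1 - p_win) * (-cost)   # +200

def u(x):              # concave (risk-averse) utility
    return math.log(x)

eu_go = p_win * u(bankroll - cost + payoff) + (1 - p_win) * u(bankroll - cost)
eu_stay = u(bankroll)

print(ev_money)            # positive expected money...
print(eu_go - eu_stay)     # ...but negative expected utility: stay home
```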
But surely going to a rationality workshop is the best way to learn to evaluate whether to go to a rationality workshop. And whether it succeeds or not, you can be convinced it was a good idea!

Hello, Less Wrong, I'm Anna Zhang, a high school student. I found this site about half a month ago, after reading Harry Potter and the Methods of Rationality. On Mr. Yudkowsky's Wikipedia page, I found a link to his site, where I found a link to this site. I've been reading the sequence How to Actually Change Your Mind, as Mr. Yudkowsky recommended, and I've learned a lot from it (though I still have a lot to learn...)

Welcome! If you want to meet other high schoolers, this [http://lesswrong.com/lw/8l7/welcome_to_lesswrong_for_highschoolers/] looks like a good place to start.

I'm going to unify a couple comment threads here.

Perhaps it's not fair of me to ask for your evidence without providing any of my own. However I really don't want to just become the irrational believer hopelessly trying to convince everyone else.

Honestly, I think you'd be coming across as much more reasonable if you were actually willing to discuss the evidence than you do by skirting around it. There are people here who wouldn't positively receive comments standing behind evidence that they think is weak, but at least some people would respect your willingness to engage in a potentially productive conversation. I don't think anyone here is going to react positively to "There's some really strong evidence, and I'm not going to talk about it, but you really ought to have come up with it already yourself."

Will Newsome gets like that sometimes, and when he does, his karma tends to plummet even faster than yours has, and he's built up a lot of it to begin with.

If you want to judge whether our inability to provide "good" arguments really is due to our lack of familiarity with the position we're rejecting, then there isn't really a better way than to expose us to ... (read more)

I second this recommendation. Ibidem, it seems that you don't want to be put in the position of defending your beliefs among people who might consider them weird, or stupid, or even harmful. I empathize a lot with that; I've been in the same situation enough times to know how nasty and unfun it can get. But unfortunately, I don't think there's another way the conversation can continue. You've said a few times that you expected us to know of some good arguments for theism, and that you're disappointed that we don't have any. Well, what can anyone say in response to that but "Okay, please show us what we're missing"? I think you can at least trust the community here to take what you say seriously, and not just dismiss you out of hand or use it as an opportunity to score tribal points and virtual high-fives. We're at least self-aware enough to avoid those discussion traps most of the time.

Discovered this site while researching the global effects of a Pak-Indo nuclear exchange. Once here I began to dig further and found it appealing. I am a simple soldier pushing myself into a Masters in biology. Am I a rationalist? I am not sure, to be honest. If I am, I know the exact date and time when I started to become one. In Nov 2004 I was part of the battle of Fallujah; during an exchange of gunfire a child was injured. I will never know if it was one of my rounds that caused her head injury, but my lips worked to bring her life again. It was a futile attempt; she passed, and while clouded with this damn experience I myself was wounded. At that very moment I lost my faith in any loving deity. My endless pursuit of knowledge, including academics provided by a brick and mortar school, has helped me recover from the loss of a limb. I still have the leg, however it does not function well. I like to think, and philosophy fascinates me, and this site fascinates me. :)

Political ideology: fiscally conservative. Religion: possibilian. Rather progressive on issues like gay marriage and abortion. Abortion is actually an act I despise, but as a man I feel somehow that I haven't the organs to complain.

To sum me up, I suppose I am a crippled, tobacco-chewing, gun-toting member of the Sierra Club with a future as a freshwater biologist and memories I would like to replace with Bayes. LoL. Well, I just spilled that mess out, might as well hit post. Please feel free to ask anything you like, I am not sensitive. Open honesty to those that are curious is good medicine.

Welcome. Hope you find what you are looking for, and maybe find some of it here.

This is where you are confused. Almost certainly it is not the only confusion. But here is one:

Values are not claims. Goals are not propositions. Dynamics are not beliefs.

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will. Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

If you don't believe me, I can only suggest you study AI (Thrun & Norvig) and/or the metaethics sequence until you do. (I mean really study. As if you were learning particle physics. It seems the usual metaethical confusions are quite resilient; in most peoples' cases I wouldn't expect them to vanish without actually thinking carefully about the data presented.) And, well, don't expect to learn too much from off-the-cuff comments here.

Designating PrawnOfFate a probable troll or sockpuppet. Suggest terminating discussion.

Request accepted, I'm not sure if he's being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it's unlikely to be a productive use of my time.

Hi. I'm a computer science student in Oulu University (Finland).

I don't remember exactly how I got here, but I guess some of the first posts I read were about counterarguments to religious denial of evolution.

I had been interested in rationality (along with science and technology) for a long time before I found Less Wrong, but back then my view of rationality was mostly that it was the opposite of emotion. I still dislike emotions - I guess that it's because they are so often "immune to reflection" (i.e. persistently "out of sync" with what I know to be the right thing to do). However, I'm aware that emotions do have some information value (worse than optimal, but better than nothing), and simply removing emotions from human neuroarchitecture without other changes might result in something functionally closer to a rock than a superhuman...

I'm an atheist and don't believe in non-physical entities like souls, but I still believe in eternal life. This unorthodox view is because 1) I'm a (sort of) "modal realist": I believe that every logically possible world actually physically exists (it's the simplest answer I've found to the question "Why does anything... (read more)

Have you read Brain Lock [http://www.amazon.com/Brain-Lock-Yourself-Obsessive-Compulsive-Behavior/dp/0060987111]?

Hey Lesswrong.

This is a sockpuppet account I made for the purpose of making a post to Discussion and possibly Main, while obscuring my identity, which is important due to some NDAs I've signed with regards to the content of the post.

I am explicitly asking for +2 karma so that I can make the post.

Yo. I've been around a couple years, posted a few times as "ZoneSeek," re-registered this year under my real name as part of a Radical Honesty thing.

Nobody can recruit Grigori Perelman for IMO, either.

Perelman is an IMO gold medalist.

Hello LW. My pseudonym is DiscyD3rp, and this introduction is long overdue. I am 17, male, and currently enrolled in high school. I discovered this site over a year ago, via HPMoR, and have read a good percentage of the main sequences in a kinda correct order. However, I was experiencing significant angst from what I call Dungeon Crawl Anxiety (the same reason that when exploring RPG dungeons I double back and explore even AFTER discovering the correct path). I am now (re-)reading the entirety of Eliezer's posts in the ebook version of the sequences. I have found the re-read articles still useful after having gotten a basic handle on Bayesian thought, and look forward to completing my enlightenment.

As far as personality, I was (am) incredibly arrogant, and future goals involve MIRI and/or teaching rationality myself (one episode involved an email to Eliezer claiming the ability to save the world, and subsequently learning that decision theory is HARD). I am not particularly talented in quickly absorbing technical fields of knowledge, but plan on developing that skill. My existing talent seems to be manipulating ideas and concepts easily and creatively once they are well understood. I'm great at reading the map, but suffer difficulty in writing it. (In very mathy fields)

I'm a born Christian, with a moderate upbringing, but likely saved from extremism by the internet just in time. Now a skeptic and an atheist.

I hope you will forgive the impertinence of offering unsolicited advice: if you haven't already, you might consider teaching yourself several programming languages in your free time. It's a very marketable skill, important to MIRI's work, and in many ways suffices for a basic education in logic. The mathy stuff is probably not optional given your ambitions, and much of the same discipline and attention to detail necessary for programming can be applied to learning serious math. Arrogance will be a terrible burden if unaccompanied by usefulness and skill.
I am currently teaching myself Haskell and have a functional programming textbook on my device. While unsolicited, I appreciate ALL advice. Any other tips?
Nope, that's all I got. Wait, one more thing. I learned in a painful way that scholarly credentials are most cheaply won (time and effort wise) in high school, and then it gets exponentially more difficult as you age. Every hour you spend making sure you get perfect grades now is worth ten or a hundred hours in your early-mid twenties. Looking back, getting anything less than perfect grades, given how easy that is in high school, seems utterly foolish. Maybe you already know that. Good luck!
Given your ambition I suggest changing your name to something respectable before you have spent time establishing a name for yourself. DiscyD3rp will make establishing credibility more difficult for you.

Everyone here is expecting me to provide good arguments. I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me I said "but there must be some out there."

Wait a minute.

You came here without any good reasons to believe in the truth of religion, and then were surprised when we, a group of (mostly) atheists, told you that we hadn't heard of any good reasons to believe in religion either?

I am honestly curious: what makes you think such good reasons exist? Why must there be some good arguments for religion out there? You, a religious person, have none, and you are (apparently?) still religious despite this.

P.S. For what it's worth, I hope you continue to participate in the discussion here, and I look forward to hearing your thoughts, and how your views have evolved.

See my distinction here [http://lesswrong.com/lw/h3p/welcome_to_less_wrong_5th_thread_march_2013/8zm1].

Then you must believe the same with respect to homeopathic remedies, the flat earth society, and those who believe they can use their spiritual energy in the martial arts. Give us some good arguments for those.

There's a lot of stuff out there for which it seems to me there is no good argument. I mean really, let's try to maintain some sense of perspective here. The belief that everyone has a decent argument is, I think, pretty much demonstrably false. You presumably want us to believe that you're in the same category as people who ought to be taken seriously, but I don't really see how a belief in God is any more worthy of that than a belief in homeopathic remedies. At least, not based on your argument that all positions ought to be considered to have good arguments. If you're trying to make a general argument, you're going to get lumped in with them.

But you haven't shown much willingness so far to discuss your reasons for your beliefs about which way the evidence falls, or ours.

I can understand not wanting to discuss a settled question with people who're too biased to analyze it reasonably, but if you're going to avoid discussing the matter here in the first place, it suggests to me that rather than concluding from your experience with us that we're rigid and closed-minded on the matter, you've taken it as a premise to begin with, otherwise where's the harm in discussing the evidence?

I consider the matter of religion to be a settled question because I've studied the matter well beyond the point of diminishing returns for interesting evidence or arguments. Are you familiar enough with the evidence that we're prepared to bring to the table that you think you could argue it yourself?

Just as I've been told repeatedly that your atheism is a foregone conclusion.

Can you point to where you've been told that?

What I think most of us would agree on, and what it seems to me that people here have told you, is that they consider atheism to be a settled question, which is not at all the same thing.

I never said that I considered people different than me to not be good. What I said in earlier comments is that I liked The God Delusion because it introduced me to the concept that you can be "a good, healthy, happy person without believing in God". I believed that those who did not have faith in God would be more likely to be immoral, would be more likely to be unhealthy, and would definitely be more unhappy than if they did believe in God. The book presented to me a case for how atheists can be just as moral, just as healthy, just as happy as theists, an argument I had never seen articulated before. I apologize that I had never conjured this idea up before reading The God Delusion, it just seemed obvious to me based on my study of the Gospel that they couldn't be.

What passages in the scriptures tell you that you can be moral, healthy, and happy without faith in God? It seems pretty consistent to me that in the scriptures they say you can only have those qualities in your life if you believe in God and follow his commandments.

I fail to see how blood atonement, Adam-God, racist theology, and polygamist theology gave you the slightest impression that the Journal of Disc

... (read more)

My $0.02: the most valuable piece of information I get from open-ended introductions is typically what people choose to talk about, which I interpret as a reflection of what they consider important. For example, I interpret the way you describe yourself here as reflecting a substantial interest in how other people judge you.

Found helpful. Your conclusion is true, but not something I'd think to mention. Now I can construct an introduction template: "I'm Alrenous, and I find X important." It won't be complete, but at least it also won't be inaccurate.

Selectivity, in the relevant sense, is more than just a question of how many people are granted something.

How many people are not on that site, but could rank highly if they chose to try? I'm guessing it's far more than the number of people who have never taken part in the IMO, but who could get a gold medal if they did.

(The IMO is more prestigious among mathematicians than topcoder is among programmers. And countries actively recruit their best mathematicians for the IMO. Nobody in the Finnish government thought it would be a good idea to convince and train Linus Torvalds to take part in an internet programming competition, so I doubt Linus Torvalds is on topcoder.)

There certainly are things as selective or more than the IMO (for example, the Fields medal), but I don't think topcoder is one of them, and I'm not convinced about "plenty". (Plenty for what purpose?)

I've tried to compare it more accurately. It's very hard to evaluate selectivity; it's not just the raw number of people participating. It seems that a large majority of serious ACM ICPC participants (both contestants and their coaches) are practising on Topcoder, and for the ICPC the best college CS students are recruited much the same as the best high-school math students are for the IMO. I don't know if Linus Torvalds would necessarily do great on this sort of thing - his talents are primarily within software design, and his persistence as the unifying force behind Linux. (And are you sure you'd recruit a 22-year-old Linus Torvalds who had just started writing a Unix clone?) It's also the case that 'programming contest' is a bit of a misnomer - the winning is primarily about applied mathematics - just as 'computer science' is a misnomer.

In any case, it's highly dubious that understanding of the QM sequence is as selective as any contest. I get it fully that Copenhagen is clunky whereas MWI doesn't have the collapse, and that collapse fits in very badly. That's not at all the issue. However badly something fits, you can only throw it away when you have figured out how to do without it. Also, commonly, the wavefunction, the collapse, and other internals are seen as mechanisms of prediction which may, or may not, have anything to do with "how the universe does it" (even if the question of "how the universe does it" is meaningful, it may still be the case that the internals of the theory have nothing to do with that, as the internals are massively based upon our convenience). And worse still, MWI is in many very important ways lacking.

I made an account seven months ago, but I wasn't aware of the last welcome thread, so I guess I'll post on this one.

I'm not sure when exactly I "joined". My first contact with this community was a passing familiarity with "Overcoming Bias" as one of the blogs that sometimes got linked in the blogosphere I frequented in high school. As was typical of my surfing habits in those days, I spent one or two sessions reading it for hours and then promptly forgot all about it. Second contact was a recommendation from another user on reddit to check out Less Wrong. Third contact was a few months later, when my roommate recommended I read hpmor. I lurked for a short time, made an account, and went to my first few meetups about two months ago. Meetups are fun, you meet lots of smart people, and I highly recommend them.

First impressions? I think this is the (for lack of a better word) most intellectual internet community that I am familiar with. Almost every post or comment is worth reading, and the site has got an addictive reddit-ish feel about it (which hampers my productivity somewhat, but que sera, sera.)

I've noticed that most of the opinions here tend to align precisely with my own... (read more)

I noticed this as well while first reading the sequences. I flew through the blog posts, taking it all in, since it all either matched my own thoughts or was so similar that it hardly took effort to comprehend. But I struggled to find anything original to say, which was part of why I initially didn't bother making an account - I didn't want to simply express agreement every time. (And now I notice that my second comment is precisely that.) Still, that's one of the things I've frequently benefited from in my thinking: I have found that the concepts behind keywords like dissolving the question, mysterious answers, map and territory, and the teacher's password can be applied in so many areas, and that having that arsenal makes it much easier to think clearly about otherwise elusive concepts.

Hello, my name is Cam :]

My goals in life are:

  1. To build a self-sufficient farm, with renewable alternative energy and everything.
  2. To acquire financial assets to support the building of my farm and the other hobbies and activities I pursue.
  3. To further my fitness and health and maintain it.
  4. Love and Romance.

That's pretty much it, hahaha. I want to learn the ways of a Rationalist to make the best decisions and solutions for problems I might encounter in pursuing these goals! I have an immature or childlike air around me, people tend to say, which is why I am ... (read more)

Have you already built something? Do you have specific plans?

Hello Less Wrong community members,

My name is Zoe, I'm a philosophy student, and I'm increasingly discombobulated by the inadequacy of my field of study when it comes to teaching me how to Actually Do Things. I discovered Less Wrong 18 months ago, thanks to the story Harry Potter and the Methods of Rationality. I've read a number of articles and discussions since then, mostly whenever I felt like reading something both intelligent and relevant, but I have not systematically read through any sequence or topic.

I have recently formed the goal to develop the skills necessary to 'ra... (read more)

Welcome! Let me know if you figure something out. So far I haven't been able to do it without coming across as weird.

Hello, my name is Watson. The username comes from my initials and a Left 4 Dead player attempting to pronounce them. I am a math student at UC Berkeley and a longtime lurker. I've got a post on rational investing, based on the conclusions of years of research by academic economists, but despite lurking I never realized there is a karma threshold for posting in Discussion. I'm interested in just about everything, a dangerous phenomenon.

Hello to the Less Wrong community. My name is Leslie Cuthbert and I'm a lawyer based in the United Kingdom. I look forward to reading the various sequences and posts here.

There are many other intelligent and thoughtful people who disagree. Why -- epistemically, not historically -- do you place particular weight on your parents' beliefs? How did they come by those beliefs?

A sufficiently intelligent mind (and I think I can assume that if God exists, then He is sufficiently intelligent) can impose self-consistency and order on itself.

This begs Eliezer's question, I think. Intelligence itself is highly non-arbitrary and rule-governed, so by positing that God is sufficiently intelligent (and the bar for sufficiency here is pretty high), you're already sneaking in a bunch of unexplained orderliness. So in this particular case, no, I don't think you can assume that if God exists, then He is sufficiently intelligent, just like I can't respond to your original point by assuming that if the universe exists, then it is orderly.

I've now had an overwhelming request to hear my supposed strong arguments. It would be awfully lame of me to drop out now.

Just say "Oops" and move on. My point is that you almost certainly don't have good arguments, which is why your post won't be well-received. If it is so, it's better to notice that it is so in advance and act accordingly.

A rationalist ought to have heard arguments and evidence that challenged his (dis)beliefs, and have come out stronger because of it.

A rationalist

You keep using that word...

In Avoiding Your Belief's Real Weak Points, Eliezer says:

There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend.

In Modern Orthodox Judaism I have not heard much emphasis of the virtues of blind faith. You're allowed to doubt. You're just not allowed to successfully doubt.

The point being t... (read more)

Hi! I'm a 24-year-old woman starting grad school this fall to study mathematics. Specifically, I'm interested in mathematically modelling organizational decision making.

My parents raised me on Carl Sagan and Michael Shermer, so there was never really a point when I didn't identify as a rationalist. I discovered Less Wrong long enough ago that I don't actually remember how I found it. I've been lurking here for several years. I finally registered after doing the last survey, though I didn't make another post until the last few days.

Oh, and I have a talking c... (read more)

What I am wondering about is why atheists so often seem to hold complete caricatures of their previous theist beliefs.

Suppose there is diversity within a religion, on how much the sensible and silly beliefs are emphasized. If the likelihood of a person rejecting a religion is positively correlated with the religion recommending silly beliefs, then we should expect that the population of atheist converts should have a larger representation of people raised in homes where silly beliefs dominated than the population of theists. That is, standard evaporative c... (read more)
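
A toy simulation may make that selection effect concrete. This is an illustrative sketch only, not something from the comment above; the probabilities are made-up assumptions chosen purely to show the direction of the effect.

```python
# Illustrative sketch: a toy model of the selection effect described above.
# Each person is raised in a home that emphasizes "silly" beliefs with some
# probability, and a silly upbringing makes deconversion more likely. We then
# compare how common silly upbringings are among those who leave versus stay.
import random

random.seed(0)

P_SILLY_HOME = 0.3          # assumed share of homes emphasizing silly beliefs
P_LEAVE_IF_SILLY = 0.4      # assumed deconversion probability, silly upbringing
P_LEAVE_IF_SENSIBLE = 0.1   # assumed deconversion probability, sensible upbringing

converts_from_silly = stayers_from_silly = 0
converts_total = stayers_total = 0

for _ in range(100_000):
    silly_home = random.random() < P_SILLY_HOME
    p_leave = P_LEAVE_IF_SILLY if silly_home else P_LEAVE_IF_SENSIBLE
    left = random.random() < p_leave
    if left:
        converts_total += 1
        converts_from_silly += silly_home
    else:
        stayers_total += 1
        stayers_from_silly += silly_home

print("silly upbringing among converts:", converts_from_silly / converts_total)
print("silly upbringing among stayers: ", stayers_from_silly / stayers_total)
```

The point is just that conditioning on "left the religion" shifts the distribution of upbringings toward the silly end, even though nobody's upbringing changed.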

I've been browsing the site for at least a year. Found it through HP:MoR, which is absolutely amazing. I've been coming to the LessWrong study hall for a couple weeks now and have found it highly effective.

For the most part, I haven't really applied this at all. I ended up making a final break with Christianity, but the only significant difference is that I now say "Yay humanism!" instead of "Yay God!" I've used a few tricks here and there, like the Sunk Cost Fallacy and the Planning Fallacy, but I still spend the majority of my time n... (read more)

Well, hello. I'm a first-year physics PhD student in India. Found this place through Yvain's blog, which I found when I was linked there from a feminist blog. It's great fun, and I'm happy I found a place where I can discuss stuff with people without anyone regularly playing with words (or, more accurately, where it's acceptable to stop and define your words properly). So, one of my favourite things about this place is the fact that it's based on the map to territory idea of truth and beliefs; I've been using it to insult people ever since I read it.

The po... (read more)


I'm a philosopher (postdoc) at the London School of Economics who recently discovered Less Wrong. I am now reading through lots of old posts, especially Yudkowsky's and lukeprog's philosophy-related material, which I find very interesting.

I think lukeprog is right when he points out that the general thrust of Yudkowsky's philosophy belongs to a naturalistic tradition often associated with Quine's name. In general, I think it would be useful to situate Yudkowsky's ideas vis-à-vis the philosophical tradition. I hope to be able to contribute something here ... (read more)

Hi. I've been a distant LW lurker for a while now; I first encountered the Sequences sometime around 2009, and have been an avid HP:MOR fan since mid-2011.

I work in computer security with a fair bit of software verification as flavoring, so the AI confinement problem is of interest to me, particularly in light of recent stunts like arbitrary computation in zero CPU instructions via creative abuse of the MMU trap handler. I'm also interested in applying instrumental rationality to improve the quality and utility of my research in general. I flirt with some ... (read more)

Hello, I am a 46 yr old software developer from Australia with a keen interest in Artificial Intelligence.

I don’t have any formal qualifications, which is a shame, as my ideal life would be doing full-time research in AI. Without a PhD I realise this won’t happen, so I am learning as much as I can through books, practice, and various online courses.

I came across this site today from a link via MIRI and feel like I have struck gold - the articles, sequences and discussions here are very well written, interesting and thoughtful.

My current goals are to build a... (read more)

Hi, I'm Brayden, from Melbourne Australia. I attended the May 2013 CfAR workshop in Berkeley about 1 year after finding Less Wrong, and 2 years after finding HPMOR. My trip to The States was phenomenal, and I highly recommend the CfAR workshops.

My life is significantly better now than it was before, and I think I am on track with the planning process for eventually working on the highest impact causes that might help save the world.

Hello Less Wrong! I am Scott Garrabrant, a 23-year-old math PhD student at UCLA, studying combinatorics. I discovered Less Wrong about 4 months ago. After reading MoR and a few sequences, I decided to go back and read every blog post. (I just finished all of Eliezer's OB posts.) I was going to wait and start posting after I got completely caught up, but then I started attending weekly meetups 2 months ago, and now I need to earn enough karma to make meetup announcements.

I have been interested in meta-thinking for a long time. I have spent a lot of time thinkin... (read more)

As a new member of this community, I am having a bit of difficulty with the numerous abbreviations that people use in their writing on this site. For example, I have come across a number of these that are not listed on the Jargon page (e.g. EY, PC, NPC, MWI...). I realize that as a new member I will eventually understand many of these; however, it is very frustrating to try to read something and be continually distracted by having to look up some of these obscure terms. This is especially a problem on the Welcome Thread, where a potential new member could ... (read more)

I added the acronyms you mentioned to the Jargon page [http://wiki.lesswrong.com/wiki/Jargon]. Tell me if you come across any more. You can also edit the page to add them yourself as you learn them if you like.

Hi, my name is Danon. I just joined Less Wrong after reading a wonderful post by Swimmer963: http://lesswrong.com/lw/9j1/how_i_ended_up_nonambitious/ on her reasoning for why she ended up without ambition (actually, I felt she had a lot of ambition). I got to her post while trying to figure out why I am lazy; I was wondering if it was because I had no (or little, if any) ambition. Her post got me asking the right questions, and I have finally been able to save a private draft on LW laying out the reasoning behind my laziness. It really is refreshing to read the posts here at LW. Thank you for having me.

I want to know what everyone thinks of my [response] to EY

I think it's confused.

If I were part of a forum that self-identified as Modern Orthodox Jewish, and a Christian came along and said "you should identify yourselves as Jewish and anti-Jesus, not just Jewish, since you reject the divinity of Jesus", that would be confused. While some Orthodox Jews no doubt reject the divinity of Jesus a priori, others simply embrace a religious tradition that, on analysis, turns out to entail the belief that Jesus was not divine.

Similarly, we are a for... (read more)

I guess the core of the confusion is treating atheism like an axiom of some kind: modelling an atheist as someone who just somehow randomly decided that there are no gods, and is no longer thinking about the correctness of this belief, only about its consequences. At least this is how I decode the various "atheism is just another religion" statements. As if, in our belief graphs, the "atheism" node only has outputs, no inputs. I am willing to admit that for some atheists it probably is exactly like this. But that is not the only way it can be, and it is probably not very frequent at LW.

The ideas really subversive to theism are reductionism, and the distinction between the map and the territory (specifically, that the "mystery" exists only in the map; that it is how an ignorant or a confused mind feels from inside). At first there is nothing suspicious about them, but unless stopped by compartmentalization, they quickly grow into materialism and atheism.

It's not that I a priori deny the existence of spiritual beings or whatever. I am okay with using this label for starters; I just want an explanation of how they interact with ordinary matter, what parts they consist of, how those parts interact with each other, et cetera. I want a model that makes sense. And suddenly there are no meaningful answers, and the few courageous attempts are obviously wrong. And then I'm like: okay guys, the problem is not that I don't believe you; the problem is that I don't even know what you want me to believe, because obviously you don't know it either. You just want me to repeat your passwords and become a member of your tribe, and to stop reflecting on this whole process. Thanks, but no; I value my sanity more than membership in your tribe (although if I lived a few centuries ago or in some unfortunate country, my self-preservation instinct would probably make me choose otherwise).

An always open mind never closes on anything. There is a time to confess your ignorance and a time to relinquish your ignorance and all that...

Are you saying it's more rational not ever to consider some ways of thinking?

Yes. Rationality isn't necessarily about having accurate beliefs. It just tends that way because they seem to be useful. Rationality is about achieving your aims in the most efficient way possible.

Oh, someone may have to look into some ways of thinking, if people who use them start showing signs of being unusually effective at achieving relevant ends in some way. Those people would become super-dominant, and it would be obvious that their way of thinking was superior. However, ther... (read more)


I tend to focus on the current authorized messengers from God and the Holy Spirit as I feel that is what I have been instructed to do.

Who authorizes messengers from God? It's not like He has a public key, after all...
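
For what it's worth, the public-key quip points at a real mechanism: if a sender publishes a verification key, anyone can check that a message was really signed by the holder of the matching private key. A minimal sketch, assuming the third-party Python `cryptography` package (the message text and variable names here are purely illustrative):

```python
# Minimal sketch of the analogy: a published public key lets anyone verify
# that a message was signed by the holder of the matching private key.
# Uses the third-party `cryptography` package; everything here is illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held only by the sender
public_key = private_key.public_key()        # published for everyone to see

message = b"an authorized message"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises if the signature is bad
    print("signature checks out")
except InvalidSignature:
    print("not from the keyholder")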

Apparently I have just registered.

So, I have a question. What's an introduction do? What is it supposed to do? How would I be able to tell that I've introduced myself if I somehow accidentally willed myself to forget?

Well... I'm an engineering student who intends to graduate in electronics. I became interested in AI when I started learning programming at the age of 12. I became fascinated with what I could make the computer do. And rather naively I tried for months and months to program something that was "intelligent" (and failed horribly of course). I set that project aside temporarily but never stopped thinking about it. Years later I discovered HPMoR and through it LessWrong and suddenly found a whole community of people interested in AI and similar thing... (read more)

Please consider whether this exchange is worth your while. Certainly wasn't worth mine.

I know Mitchell Porter is likewise a physicist and he's not convinced at all either.

Mitchell Porter also advocates Quantum Monadology and various things about fundamental qualia. The difference in assumptions about how physics (and rational thought) works between Eliezer (and most of Eliezer's target audience) and Mitchell Porter is probably insurmountable.

Hello everyone, I'm Franz. I don't actually remember how I happened upon this site, but I do know it was rotting in my unsorted bookmark folder for over a year before I actually decided to read any post. This I do regret.

Because of circumstances I am currently in Brazil, and due to a lack of internet infrastructure I have to read the downloadable versions of the sequences and won't be able to comment often. I do enjoy reading your insightful thoughts!

I was wondering if anyone has directly applied EY's methods to their own life? For what reason, and what... (read more)

Welcome! I have. Specifically, the How to Actually Change Your Mind [http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind] sequence was very helpful to me in real life. However, in spite of how some people feel about this site, for me it is not about [only] EY. Lots of things from Less Wrong have affected my life outside of Less Wrong, specifically (quoting from an older draft of this comment, now, so that is why the flow may be weird here):

One of the most helpful posts I came upon here was "The Power of Pomodoros" [http://lesswrong.com/lw/gp4/the_power_of_pomodoros/], which introduced me to the Pomodoro technique. See this PDF [http://www.pomodorotechnique.com/download/pdf/ThePomodoroTechnique_v1-3.pdf] from the official website for a more detailed guide.

Another helpful thing I discovered via Less Wrong is the Less Wrong Study Hall. See "Co-Working Collaboration to Combat Akrasia" [http://lesswrong.com/lw/gwo/coworking_collaboration_to_combat_akrasia/] and "Programming the LW Study Hall" [http://lesswrong.com/lw/gzm/programming_the_lw_study_hall/]. This [http://tinychat.com/lesswrong] is the current study hall (on Tinychat), but I think it will eventually be moved to somewhere else.

Less Wrong taught me about existential risk [http://lesswrong.com/lw/8f0/existential_risk/] and efficient charity [http://lesswrong.com/lw/37f/efficient_charity/]. This has produced a tangible change in what I do with my money.

lukeprog's The Science of Winning at Life [http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life] sequence was also very helpful to me.

I could write more, but I've already spent too much time on this comment. Enjoy Less Wrong!

Hi Everyone! I'm AABoyles (that's true most places on the internet besides LW).

I first found LW when a colleague mentioned That Alien Message over lunch. I said something to the effect of "That sounds like an Arthur C. Clarke short story. Who is the author?" "Eliezer Yudkowsky," he said, and sent me the link. I read it, and promptly forgot about it. Fast forward a year, and another friend posts the link to HPMOR on Facebook. The author's name sounded very familiar. I read it voraciously. I subscribed to the Main RSS feed and lurked for ... (read more)

From the book's website:

Are physicists and biologists willing to believe in anything so long as it is not religious thought? Close enough.

Is there a narrow and oppressive orthodoxy of thought and opinion within the sciences? Close enough.

Does anything in the sciences or in their philosophy justify the claim that religious belief is irrational? Not even ballpark.

I guess there is some tension between "narrow and oppressive orthodoxy of thought and opinion" and "willing to believe in anything"...

Redundancy isn't a design failure or a 'patch'.

I'm a Swiss medical student. I've read HPMoR and a large part of the core sequences. I've attended LW meetups in several US cities and met quite a few of you in the Bay Area and/or at the Effective Altruism Summit. I've interned for Leverage Research. I co-founded giordano-bruno-stiftung.ch (outreach organisation with German translations of some LessWrong blog posts, and other posts about rationality). Looking forward to participating in the comment section more often.

this is a test

Hi everyone,

I have been lurking LessWrong on and off for quite a while. I originally found this place through HPMoR; I thought the 'LessWrong' authorname was clever and it was nice to find out there was a whole community based around aiming to be less wrong! My tendency to overthink whatever I write has gotten in the way of actually taking part in the community so far though. Maybe now that I have gotten the introduction out of the way I'll be more likely to post.

A bit more about myself: I'm a student from the Netherlands, doing a masters in Artificial In... (read more)

Hello, Less Wrong! I'm Michael Odintsov from Ukraine, so sorry for my not-nearly-perfect :) English. Just like many here, I found this site through Yudkowsky's link while reading his "Harry Potter and the Methods of Rationality". I am a 27-year-old programmer, fond of science in general and mostly math of all kinds.

I have worked a bit in the fields of AI and machine learning and am looking forward to new opportunities. Well... that's almost all I can tell about myself right now - I've never been a great talker :) If anyone has questions or needs some help with CS-related topics, just ask - I'm always ready to help.

I don't believe that rationality in general is incompatible with religious belief, but if this community thinks that their particular brand of rationality is, people like me would love to know that.

Might we not, instead, disagree with you about rationality in general being compatible with religious belief, rather than asserting that we have some special incompatible brand of rationality?

I think that most of your problems with theists would go away if you clarified LW's actual position.

Do we really have "problems with theists"...?

I don't. I just consider the debates about theism boring if they don't bring any new information.

Yes, but what I expected was...um...atheists who were better than most, who had arrived at atheism through two-sided discourse.

Bob Altemeyer asked college students about this, some of whom had a strong allegiance to 'traditional' authority and some less so:

Interestingly, virtually everyone said she had questioned the existence of God at some time in her life. What did the authoritarian students do when this question arose? Most of all, they prayed for enlightenment. Secondly, they talked to their friends who believed in God. Or they talked with their

... (read more)