If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
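For instance, a comment written like this (a minimal illustration of common Markdown forms only; the Help primer covers the full syntax, and the URL below is just a placeholder):

```markdown
Here is a [link](http://example.com), some *italics*, some **bold**,
and a quoted line from the comment you're replying to:

> The text being quoted.
```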

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here and checking the responses, you'll probably get a good read on what, if anything, has already been said on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.

Welcome to Less Wrong! (5th thread, March 2013)

Hello! I call myself Atomliner. I'm a 23 year old male Political Science major at Utah Valley University.

From 2009 to 2011, I was a missionary for the Mormon Church in northeastern Brazil. In the last month I was there, I was living with another missionary who I discovered to be a closet atheist. While I was trying to help him rediscover his faith, he had me read The God Delusion, which obliterated my own. I can't say that book was the only thing that enabled me to leave behind my irrational worldview, as I've always been very intellectually curious and resistant to authority. My mind had already been a powder keg long before Richard Dawkins arrived with the spark to light it.

Needless to say, I quickly embraced atheism and began to read everything I could about living without belief in God. I'm playing catch-up, trying to expand my mind as fast as I can to make up for the lost years I spent blinded by religious dogma. Just two years ago, for example, I believed homosexuality was an evil that threatened to destroy civilization, that humans came from another planet, and that the Lost Ten Tribes were living somewhere underground beneath the Arctic. Needless to say, my re-education process has ... (read more)

Welcome to LW! Don't worry about some of the replies you're getting; polls show we're overwhelmingly atheist around here.

MugaSofer
This^ That said, my hypothetical atheist counterpart would have made the exact same comment. I can't speak for JohnH, but I can see someone with experience of Mormons not holding those beliefs being curious regardless of affiliation. And, of course, the other two - well, three now - comments are from professed atheists. So far nobody seems willing to try and reconvert him or anything.
MugaSofer
Welcome to LessWrong! Good for you! You might want to watch out for assuming that everyone had a similar experience with religion; many theists will find this very annoying, and this seems to be a common mistake among people with your background-type. Huh. I must say, I found the GD pretty terrible (despite reading it multiple times to be sure), although I suppose that powder-keg aspect probably accounts for most of your conversion (deconversion?). I'm curious, could you expand on what you found so convincing in The God Delusion? I think we can all say that :)

Welcome to LessWrong!

Thank you! :)

Good for you! You might want to watch out for assuming that everyone had a similar experience with religion; many theists will find this very annoying, and this seems to be a common mistake among people with your background-type.

I apologize. I had no idea I was making this false assumption, but I was. I'm embarrassed.

I'm curious, could you expand on what you found so convincing in The God Delusion?

I replied to JohnH about this. I don't know if I could go into a lot of detail on why it was convincing; it was almost two years ago that I read it. But what really convinced me to start doubting my religion was when I prayed to God very passionately, asking him whether or not The God Delusion was true, and afterward I felt this tingly warm sensation telling me it was. I had done the same thing with The Book of Mormon multiple times and felt this same sensation, and I was told in church that this was the Holy Spirit telling me that it was true. I had been taught I could pray about anything and the Spirit would tell me whether or not it was true. After being told by the Spirit that The God Delusion was true, I decided that the only explanation was that what I thought of as the Spirit was just happening in my head and that it wasn't a sure way of finding knowledge. It was a very dramatic experience for me.

Kawoomba
What kind of theist are you, personal or more of the general theism (which includes deism) variety? Any holy text you believe has been divinely inspired?
MugaSofer
About as Deist as you can be while still being technically Christian. I'd be inclined to say there's something in all major religions, simply for selection reasons, but the only thing I'd endorse as "divinely inspired" as such would be the New Testament? I guess? Even that is filtered by cultural context and such, obviously.
TheOtherDave
If you can readily articulate your reasons for evaluating the New Testament differently from other scriptures, I'm interested. (It's possible that you've already done so, perhaps even in response to this question from me; feel free to point me at writeups elsewhere if you wish.)
Kawoomba
How many of your younger Mormon peers and friends do you think are secretly atheists?
atomliner
I've only had two of my Mormon peers/friends/relatives reveal to me after knowing them for a substantial amount of time that they are atheists. Based on that, I would guess the percentage of active Latter-day Saints that are closet atheists is pretty low, around 1%-3%?
CCC
That implies that you have more-or-less a hundred close friends/peers/relatives, whom you have known for a substantial amount of time and would expect to tell you if they were closet atheists.
Eliezer Yudkowsky
Mormons have lots of friends, and lots of relatives.
atomliner
Over twenty-three years the numbers add up. I think I could easily find more than a hundred active Latter-day Saints just counting members of my extended family that I routinely encounter every year.
JohnH
I am Mormon, so I am curious where you got the beliefs that homosexuality would destroy civilization, that humans came from another planet, and that the Ten Tribes live underground beneath the Arctic. Those are not standard beliefs of Mormons (see, for instance, the LDS Church's mormonsandgays.org), and only one of those have I ever even encountered before (Ten Tribes beneath the Arctic), but I couldn't figure out where that belief comes from or why anyone would feel the need to believe it. I also have to ask, the same as MugaSofer: could you explain how The God Delusion obliterated your faith? It seemed largely irrelevant to me.
atomliner
I have visited mormonsandgays.org. That came out very recently. It seems that the LDS Church is now backing off of their crusade against homosexuality and same-sex marriage. In the middle of the last decade, though, I can assure you what I was taught in church and in my family was that civilizations owed their stability to the prevalence of traditional marriages. I was told that Sodom and Gomorrah were destroyed because homosexuality was not being penalized and because of the same crime the Roman Empire collapsed. It is possible that these teachings, while not official doctrine, were inspired by the last two paragraphs of the LDS Church's 1995 proclamation The Family. In the second to last paragraph it says: I have a strong feeling my interpretation of this doctrine is also held by most active believing American Mormons, having lived among them my entire life. I don't think that most Mormons believe that mankind came from another planet, but I started believing this after I read something from the Journal of Discourses, in which Brigham Young stated: This doctrine has for good reason been de-emphasized by the LDS Church, but never repudiated. I read this and other statements made by Brigham Young and believed it. I did believe he was a prophet of God, after all. I began to believe that the Ten Tribes were living underneath the Arctic after reading The Final Countdown by Clay McConkie which details the signs that will precede the Second Coming. In the survey he apparently conducted of active Latter-day Saints, around 15% believed the Ten Tribes were living somewhere underground in the north. This belief is apparently drawn from an interpretation of Doctrine & Covenants 133:26-27, which states: I liked the interpretation that this meant there was a subterranean civilization of Israelites and believed it was true. I apologize that I gave examples of these extraordinary former beliefs right after I wrote "I'm playing catch-up, trying to expand my mind as fast as I

Aloha.

My name is Sandy and despite being a long time lurker, meetup organizer and CFAR minicamp alumnus, I've got a giant ugh field around getting involved in the online community. Frankly it's pretty intimidating and seems like a big barrier to entry - but this welcome thread is definitely a good start :)

IIRC, I was linked to Overcoming Bias through a programming pattern blog in the few months before LW came into existence, and subsequently spent the next three months of my life doing little else than reading the sequences. While it was highly fascinating and seemed good for my cognitive health, I never thought about applying it to /real life/.

Somehow I ended up at CFAR's January minicamp, and my life literally changed. After so many years, CFAR helped me finally internalize the idea that /rationalists should win/. I fully expect the workshop to be the most pivotal event in my entire life, and would wholeheartedly recommend it to absolutely anyone and everyone.

So here's to a new chapter. I'm going to get involved in this community or die trying.

PS: If anyone is in the Kitchener/Waterloo area, they should definitely come out to UW's SLC tonight at 8pm for our LW meetup. I can guarantee you won't be disappointed!

Hello, Less Wrong; I'm Laplante. I found this site through a TV Tropes link to Harry Potter and the Methods of Rationality about this time last year. After I'd read through that as far as it had been updated (chapter 77?), I followed Yudkowsky's advice to check out the real science behind the story and ended up here. I mucked about for a few days before finding a link to yudkowsky.net, where I spent about a week trying learn what exactly Bayes was all about. I'm currently working my way through the sequences, just getting into the quantum physics sequence now.

I'm currently in the dangerous position of having withdrawn from college, and my productive time is spent between a part-time job and this site. I have no real desire to return to school, but I realize that entry into any sort of psychology/neuroscience/cognitive science field without a Bachelor's degree - preferably more - is near impossible.

I'm aware that Yudkowsky is doing quite well without a formal education, but I'd rather not use that as a general excuse to leave my studies behind entirely.

My goals for the future are to make my way through MIRI's recommended course list, and the dream is to do my own research in a related field. We'll see how it all pans out.

my productive time is spent between a part-time job and this site.

Perhaps I'm reading a bit much into a throwaway phrase, but I suggest that time spent reading LessWrong (or any self-improvement blog, or any blog) is not, in fact, productive. Beware the superstimulus of insight porn! Unless you are actually using the insights gained here in a measurable way, I very strongly suggest you count LessWrong reading as faffing about, not as production. (And even if you do become more productive, observe that this is probably a one-time effect: continued visits are unlikely to yield continual improvement, else gwern and Alicorn would long since have taken over the world.) By all means be inspired to do more work and smarter work, but do not allow the feeling of "I learned something today" to substitute for Actually Doing Things.

All that aside, welcome to LessWrong! We will make your faffing-about time much more interesting. BWAH-HAH-HAH!

John_Maxwell
Learning stuff can be pretty useful. Especially stuff extremely general in its application that isn't easy to just look up when you need it, like rationality. If the process of learning is enjoyable, so much the better.
Dentin
I think you may have misinterpreted a critical part of the sentence: 'do not allow the FEELING of "I learned something today" to substitute for Actually Doing Things.' Insight porn, so to speak, is that way because it makes you feel good, like you can Actually Do Things and like you have the tools to now Actually Do Things. But if you don't get up and Actually Do Things, you have only learned how to feel like you can Actually Do Things, which isn't nearly as useful as it sounds.
John_Maxwell
Sure, I agree. IMO, any self-improvement effort should be intermixed with lots of attempts to accomplish object-level goals so you can get empirical feedback on what's working and what isn't.
Shmi

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading. Or at least stop where the many worlds musings start. The whole thing is way too verbose and controversial for the number of useful points it makes. Your time is much better spent reading about cognitive biases. If you want epistemology, try the new sequence.

Eliezer Yudkowsky
Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence. Given those particular circumstances, can I ask that you stop with that particular bit of helpful advice?

Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence.

Do you have a solid idea of how many technical readers get here via HPMOR but lose interest in working for MIRI after reading the QM sequence? If not, isn't this potentially just a selection effect?

Kawoomba
EY can rationally prefer the certain evidence of some Mihaly-Barasz-caliber researchers joining when exposed to the QM sequence over speculation about whether the loss of Mihaly Barasz (had he not read the QM sequence) would be outweighed by even more / better technical readers becoming interested in joining MIRI, taking into account the selection effect. Personally, I'd go with what has been proven/demonstrated to work as a high-quality attractor.
Eliezer Yudkowsky
Yep. I also tend to ignore nontechnical folks along the lines of RationalWiki getting offended by my thinking that I know something they don't about MWI. Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people. Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of the QM sequence. Mihaly said the rest of the Sequences seemed interesting but lacked sufficient visible I-wouldn't-have-thought-of-that nature. This is very plausible to me - after all, the Sequences do indeed seem to me like the sort of thing somebody might just think up. I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW. It's a pity I'll probably never have time to write up TDT.

I have a PhD in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing. You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of Copenhagen implies many worlds. This ignores a vast array of other interpretations.

It's also the sort of argument that seems very likely to sway someone with an intro class in college (one or two semesters of a Copenhagen-based shut-up-and-calculate approach), precisely because, having seen Copenhagen and nothing else, they 'know just enough to be dangerous', as it were.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read. If I have issues with the area I know the most about, how much should I trust the rest? Others' mileage may vary.

Shmi

I have a PhD in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing.

Actually, attempting to steelman the QM Sequence made me realize that the objective collapse models are almost certainly wrong, due to the way they deal with the EPR correlations. So the sequence has been quite useful to me.

On the other hand, it also made me realize that the naive MWI is also almost certainly wrong, as it requires uncountable worlds created in any finite instance of time (unless I totally misunderstand the MWI version of radioactive decay, or any emission process for that matter). It has other issues, as well. Hence my current leanings toward some version of RQM, which EY seems to dislike almost as much as his straw Copenhagen, though for different reasons.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read.

Right, I've had a similar experience, and I heard it voiced by others.

As a result of re-examining EY's take on epistemology of truth, I ended up drifting from the realist position (map vs territory) to an instrumentalist position (models vs inputs&outputs... (read more)

Plasmon
How is that any more problematic than doing physics with real or complex numbers in the first place?
Vaniver
I defected from physics during my Master's, but this is basically the impression I had of the QM sequence as well.

Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people.

That sounds like reasonable evidence against the selection effect.

Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of it.

I strongly recommend against both the "advises newcomers to skip the QM sequence -> can't grasp technical argument for MWI" and "disagrees with MWI argument -> poor technical skill" inferences.

[anonymous]

I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW.

I'm no IMO gold medalist (which really just means I'm giving you explicit permission to ignore the rest of my comment) but it seems to me that a standard understanding of QM is necessary to get anything out of the QM sequence.

It's a pity I'll probably never have time to write up TDT.

Revealed preferences are rarely attractive.

Revealed preferences are rarely attractive.

Adds to "Things I won't actually get put on a T-shirt but sort of feel I ought to" list.

Shmi
As others noted, you seem to be falling prey to the selection bias. Do you have an estimate of how many "IMO gold medalists" gave up on MIRI because its founder, in defiance of everything he wrote before, confidently picks one untestable from a bunch and proclaims it to be the truth (with 100% certainty, no less, Bayes be damned), despite (or maybe due to) not even being an expert in the subject matter? EDIT: My initial inclination was to simply comply with your request, probably because I grew up being taught deference to and respect for authority. Then it struck me as one of the most cultish things one could do.

with 100% certainty, no less, Bayes be damned

Is this an April Fool's joke? He says nothing of the kind. The post which comes closest to this explicitly says that it could be wrong, but "the rational probability is pretty damned small." And counting the discovery of time-turners, he's named at least two conceivable pieces of evidence that could change that number.

What do you mean when you say you "just don't put nearly as much confidence in it as you do"?

philh
The number of IMO gold medalists is sufficiently low, and the probability of any one of them having read the QM sequence is sufficiently small, that my own estimate would be less than one regardless of X. (I don't have a good model of how much more likely an IMO gold medalist would be to have read the QM sequence than any other reference class, so I'm not massively confident.)
Eliezer Yudkowsky
Well, I'm sorry to say this, but part of what makes authority Authority is that your respect is not always required. Frankly, in this case Authority is going to start deleting your comments if you keep on telling newcomers who post in the Welcome thread not to read the QM sequence, which you've done quite a few times at this point unless my memory is failing me. You disagree with MWI. Okay. I get it. We all get it. I still want the next Mihaly to read the QM Sequence and I don't want to have this conversation every time, nor is it an appropriate greeting for every newcomer.
Shmi

Sure, your site, your rules.

Just to correct a few inaccuracies in your comment:

You disagree with MWI.

I don't, I just don't put nearly as much confidence in it as you do. It is also unfortunately abused on this site quite a bit.

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer", only those who appear to be stuck on it. Surely Mihaly had no difficulties with it, so none of my warnings would interfere with "still want the next Mihaly to read the QM Sequence".

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer"

The claim you made that prompted the reply was:

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading.

It is rather disingenuous to then express exaggerated 'let alone' rejections of the reply "nor is it an appropriate greeting for every newcomer".

MugaSofer
Uhuh. That said, kudos to you for remaining calm and reasonable.
Shmi
You have a point, it's easy to read my first comment rather uncharitably. I should have been more precise: "My standard advice to all newcomers [who mention difficulties with the QM sequence]..." which is much closer to what actually happens. I don't bring it up out of the blue every time I greet someone.
Shmi
Hmm, the above got a lot of upvotes... I have no idea why.

Hmm, the above got a lot of upvotes... I have no idea why.

Egalitarian instinct. Eliezer is using power against you, which drastically raises the standards of behavior expected from him while doing so, including less tolerance of him getting things wrong.

Your reply used the form 'graceful' in a context where you would have been given a lot of leeway even to be (overtly) rude. The corrections were portrayed as gentle and patient. Whether the corrections happen to be accurate or reasonable is usually almost irrelevant for the purpose of determining people's voting behavior this far down into a charged thread.

Note that even though I approve of Eliezer's decision to delete comments of yours disparaging the QM sequence to newcomers, I still endorse your decision to force Eliezer to use his power instead of deferring to his judgement simply because he has the power. It was the right decision for you to make from your perspective and is also a much more desirable precedent.

I deliberately invoke this tactic on occasion in arguments on other people's turf, particularly where the rules are unevenly applied. I was once accused by an acquaintance who witnessed it of being unreasonably reasonable.

It's particularly useful when moderators routinely take sides in debates. It makes it dangerous for them to use their power to shut down dissent.

6VCavallo
Nailed it on the head. As my cursor instinctively began to hover over the "upvote" button on shminux's comment, I caught myself and thought: why am I doing this? And while I didn't come to your exact conclusion, I realized my instinct had something to do with EY's "use of power" and shminux's gentle reply. Some sort of underdog quality that I hadn't yet taken the time to assess, but that my mouse-using hand wanted badly to blindly reward. I'm glad you pieced out the exact reasoning behind the scenes here. Stopping and taking a moment to understand behavior, and then correcting based on that understanding, is why I am here. That said, I really should think for a long time about your explanation before voting you up, too!
1Shmi
If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down. It doesn't appear to be happening.

If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedifid's explanation to go back and change their vote, resulting in %positive going sharply down.

A quirk (and often a bias) humans have is that we tend to assume that just because a social behavior or human instinct can be explained, it must thereby be invalidated. Yet everything can (in principle) be explained, and there are still things that are, in fact, noble. My parents' love for me and my siblings is no less real because I am capable of reasoning about the inclusive fitness of those peers of my ancestors who happened to love their children less.

In this case the explanation given was, roughly speaking "egalitarian instinct + politeness". And personally I have to say that the egalitarian instinct is one of my favorite parts of humanity and one of the traits that I most value in those I prefer to surround myself with (Rah foragers!).

All else being equal the explanation in terms of egalitarian instinct and precedent setting regarding authority use describes (what I consider to be) a positive picture and in itself is no reason to downvote. (The comment deserves to... (read more)

8Kaj_Sotala
I believe that I already knew I was acting on egalitarian instinct when I upvoted your comment.
5VCavallo
They could just be a weird sort of lazy, whereby they don't scroll back up and change anything. Or maybe they never see his post. Or something else. I don't think the %positive not going down yet is any indication that wedrifid's comment is not right.
5satt
This is the second time you've mentioned shminux having talked about QM for years. But I can't find any comments or posts he made before July 2011. Does he have a dupe account or something else I don't know about?
4Shmi
Since you are asking... July 2011 is right for the join date, and it was some time later that I voiced any opinion related to the QM sequence and MWI (I did read through it once and have browsed now and again since). No, I did not have another account before that. As a long-term freenode ##physics IRC channel moderator, I dislike being confused about users' previous identities, so I don't do it myself (hence the silly nick chosen a decade or so ago, which has lost all relevance by now). On the other hand, I don't mind people wanting a clean slate with a new nick, just not using socks to express a controversial or karma-draining opinion they are too chicken to have linked to their main account.

I also encourage you to take whatever wedrifid writes about me with a grain of salt. While I read what he writes and often upvote when I find it warranted, I quite publicly announced here about a year ago that I will not be replying to any of his comments, given how counterproductive it had been for me. (There are currently about 4 or 5 people on my LW "do-not-reply" list.) I have also warned other users once or twice, after I noticed them in a similarly futile discussion with wedrifid. I would be really surprised if this did not color his perception and attitude. It certainly would for me, were the roles reversed.
2Kawoomba
I'm also interested in this. Hopefully it's not an overt lie or something.
1wedrifid
I don't keep an exact mental record of the join dates. My guess from intuitive feel was "2 years". It's April 2013. It was July 2011 when the account joined. If anything, you have prompted me to slightly increase my confidence in the calibration of my account-joining estimator.

If the subject of how long user:shminux has been complaining about the QM sequence ever becomes relevant again, I'll be sure to use Wei Dai's script, search the text and provide a link to the exact first mention. In this case, however, the difference hardly seems significant or important.

I doubt it. If so, I praise him for his flawless character separation.
2satt
Thanks for clarifying. I asked not because the exact timing is important but because the overstatement seemed uncharacteristic (albeit modest), and I wasn't sure whether it was just offhand pique or something else. (Also, if something funny had been going on, it might've explained the weird rancour/sloppiness/mindkilledness in the broader thread.)
1wedrifid
Just an error. Note that in the context there was no particular pique. I intended acknowledgement of established disrespect, not conveyance of additional disrespect. The point was that I was instinctively (as well as rationally) motivated to support shminux despite also approving of Eliezer's declared intent, which illustrates the strength of the effect. Fortunately nothing is lost if I simply remove the phrase you quote entirely. The point remains clear even if I remove the detail of why I approve of Eliezer's declaration. The main explanation there is just that incarnations of this same argument have been cropping up with slight variations for (what seems like) a long time. As with several other subjects there are rather clear battle lines drawn and no particular chance of anyone learning anything. The quality of the discussion tends to be abysmal, riddled with status games and full of arguments that are sloppy in the extreme. As well as the problem of persuasion through raw persistence.
2[anonymous]
Bluntly, IMO gold medalists who can conceive of working on something 'crazy' like FAI would be expected to better understand the QM sequence than that. Even more so they would be expected to understand the core arguments better than to get offended by my having come to a conclusion. I haven't heard from the opposite side at all, and while the probability of my hearing about it might conceivably be low, my priors on it existing are rather lower than yours, and the fact that I have heard nothing is also evidence. Carl, who often hears (and anonymizes) complaints from the outside x-risk community, has not reported to me anyone being offended by my QM sequence. Smart people want to be told something smart that they haven't already heard from other smart people and that doesn't seem 'obvious'. The QM sequence is demonstrably not dispensable for this purpose - Mihaly said the rest of LW seemed interesting but insufficiently I-wouldn't-have-thought-of-that. Frankly I worry that QM isn't enough but given how long it's taking me to write up the Lob problem, I don't think I can realistically try to take on TDT.
2Shmi
Again, you seem to be generalizing from a single example, unless you have more data points than just Mihaly.
2TheOtherDave
Note that the original text was "gold," not "good". I assume IMO is the International Mathematical Olympiad(1). Not that this in any way addresses or mitigates your point; just figured I'd point it out. (1) If I've understood the wiki article, ~35 IMO gold medals are awarded every year.
1Shmi
Thanks, I fixed the typo.
2TimS
The QM Sequence is two parts: (1) QM for beginners, and (2) philosophy of science on believing things when evidence is in equipoise (or absent): pick the simpler hypothesis. I got part (1) from reading The Dancing Wu Li Masters, but I can clearly see the value to readers without that background. But teaching foundational science is separate from teaching Bayesian rationalism. The philosophy of the second part is incredibly controversial, much more so than you acknowledge in the essays, or acknowledge now. Treating the other side of any unresolved philosophical controversy as if it is stupid, not merely wrong, is excessive and unjustified. In short, the QM sequence would seriously benefit from the sort of philosophical background stuff that is included in your more recent essays, including some more technical discussion of the opposing position.

If you learned quantum mechanics from that book, you may have seriously mislearned it. It's actually pretty decent at describing everything up to, but excluding, quantum physics. When it comes to QM itself, however, the author sacrifices useful understanding in favor of mysticism.

8Michelle_Z
If you want to learn things/explore what you want to do with your life, take a few varied courses at Coursera.
3beoShaffer
Hi, Laplante. Why do you want to enter psychology/neuroscience/cognitive science? I ask this as someone who is about to graduate with a double major in psychology/computer science and is almost certain to go into computer science as my career.

It's a forum where taking atheism for granted is widespread, and the 10% of non-atheists have some idea of what the 90% are thinking. Being atheist isn't part of the official charter, but you can make a function call to atheism without being questioned by either the 10% or the 90%, because everyone knows where you're coming from. If I were on a 90% Mormon forum which theoretically wasn't about Mormonism but occasionally contained posters making function calls to Mormon theology without further justification, I would not walk in and expect to be able to make atheist function calls without being questioned on it. If I did, I wouldn't be surprised to be downvoted to oblivion if that forum had a downvoting function. This isn't groupthink; it's standard logical courtesy. When you know perfectly well that a supermajority of the people around you believe X, it's not just silly but logically rude to ask them to take Y as a premise without defending it. I would owe this hypothetical 90%-Mormon forum more acknowledgement of their prior beliefs than that.

I regard all of this as common sense.

As part of said minority, I fully endorse this comment.

DSimon130

I like your use of "function calls" as an analogy here, but I don't think it's a good idea; you could just as easily say "use concepts from" without alienating non-programmer readers.

2[anonymous]
I understand it now knowing that it's a programming reference (I program), but I wouldn't have recognized it otherwise. Thanks for the clarification.
7[anonymous]
Since I'm momentarily feeling remarkably empowered about my own life, I'm going to take this chance to officially bow out for a few weeks. We all knew it was coming—it's the typical reaction for an overwhelmed newbie like me, I know, and I'm always very determined not to give up, but I really think I had better take a break. My last week has hardly involved anything except LW and related sites, and we all know that having one's mind blown is a very strenuous task. I've learned a lot, and I will definitely be back after four weeks or so.

I've decided I'm not going to let myself be pressured into expressly arguing in favor of religion. I've said several times that I'm not interested in that, and that I don't have these supposed strong arguments in favor of religion. If you guys want a good theist, check out William Lane Craig. When I come back I will, however, explain my own beliefs and why I can't fully accept the LW way of thinking. Please don't misunderstand what I'm saying: I think you guys are right, more so than any group of people I've ever met. But for now I'm going to shelve philosophy and take advantage of my situation. In the next four weeks I'm going to a) learn lambda calculus and b) study Arabic intensively. May the Force be with you 'til we meet again.

For the record, I once challenged Craig to a Bloggingheads but he refused.

I'm a male senior in high school. I found this site in November or so, and started reading the sequences voraciously.

I feel like I might be a somewhat atypical LessWrong reader. For one, I'm on the young side. Also, if you saw me and talked to me, you would probably not guess from the way I act/dress that I was a "rationalist", but, I don't know, perhaps you might. When I first found this website, I was pretty sure I wanted to be an art major; now I'm pretty sure I want to be an art/comp sci double major and go into indie game development (correlation may or may not imply causation). I also love rap music (and not the "good" kind like Talib Kweli), and I read most of the sequences while listening to Lil Wayne, Lil B, Gucci Mane, Future, Young Jeezy, etc. I occasionally record my own terrible rap songs with my friends in my friend's basement. Before finding this site, the word "rational" had powerful negative affect around it. Science was far and away my least favorite subject in school. I have absolutely no interest at the moment in learning any science or anything about science, except for maybe neuroscience, and maybe metaphysics. I've always found t... (read more)

8[anonymous]
lulz. You have my attention. You sound like quite an intelligent and awesome person. (Bad rap, art, rationality. Only an interesting person could have such a nonstandard combination of interests. Boring people come prepackaged...) Glad to have you around. It's only a matter of time ;) I remember that feeling. I'm more skeptical now, but I can't help but notice more awesomeness in my life due to LW. It really is quite cool, isn't it? This is the part that's been elusive to me. What kind of things are you doing? How do you know you are actually getting benefits, and not just producing that "this is awesome" feeling which unfortunately often gets detached from reality? Keep your identity small. Where do you live? Do you attend meetups?
5gothgirl420666
Thank you :) I guess essentially what I do is try to read self-help stuff. I try to spend half my "work time", so to speak, doing this, and half working on creative projects. I've read both books and assorted stuff on the internet. My goal for April is to read a predetermined list of six self-help books. I'm currently on track for this goal. So far I've read:

* part of the massive tome that is Psychological Self Help by Clayton Tucker-Ladd
* Success - How We Can Reach Our Goals by Heidi Halverson
* How to Talk to Anyone by Leil Lowndes
* 59 Seconds by Richard Wiseman
* Thinking Things Done by PJ Eby
* the first 300 pages of Feeling Good by David Burns (the last 200 seem to be mostly about the chemical nature of depression and have little practical value, so I'm saving them for later)

If meditation books count:

* Mindfulness in Plain English by Henepola Gunaratana
* most of Mastering the Core Teachings of the Buddha by Daniel Ingram

I also have been keeping a diary, which is something I've wanted to get in the habit of all my life but have never been able to do. Every day, in addition to summarizing the day's events, I rate my happiness out of ten, my productivity out of ten, and speculate on how I can do better. I've only been keeping the diary a month, which is too small a sample size. However, during this time, I had three weeks off for spring break, and I told myself that I would work as much as I could on self-improvement and personal projects. I ended up not really getting that much done, unfortunately. However, I managed to put in a median of... probably about five hours every day, and more importantly, I was in a fantastic mood the whole break. It might even have been the best mood I've been in for an extended time in the last few years. In the past, every time I have had a break from school, I ended up in a depressed, lonely, lethargic state, where I surfed the internet for hours on end, in which I paradoxically want to go back to school knowi
1[anonymous]
I think you need to talk to daenerys, IIRC, she runs the Ohio stuff. Actually doing, for one, though it sounds like you're doing that too. yet. Some day you will want to take over the world, and then you will need to talk to big winners. I've had this problem, too (I've got so much free time, why is it all getting pissed away?). Have you tried beeminder? I cannot overstate how much that site is just conscientiousness in a can, so to speak. Thanks for the list. A variety of evidence is making me want to check out the self-help community more closely.
1gothgirl420666
I have yet to read a self-help book that doesn't emphatically state "If you do not take care to apply these principles as much as you can in your daily life, you will not gain anything from reading this book." So, yeah, I agree, and by "reading self-help" I mean "reading self-help and applying the knowledge". I've seen it, and checked it out a little, but I can't think of any way to quantify the stuff that I have problems getting done. Also I wish there was an option to donate money to charity, but I guess they have to make money somehow.
3someonewrongonthenet
I have yet to see this. Which major LW contributor is advocating racism, and where can I read about it?
8gothgirl420666
I'm sorry, I can't really remember any specific links to discussions, and I don't really know exactly who believes in which ideas, but I feel like there are a lot of people here, especially among those who show up in the comments, who believe that certain races are inherently more or less intelligent/violent/whatever on average than others. I specifically remember nyan_sandwich saying that he believes this, calling himself a "proto-racist", but that's the only example I can recall. The "reactionary" philosophy is discussed a lot here too, and I feel like most people who subscribe to it are racist. Mencius Moldbug is the biggest name in this, I believe. Also, I've seen a lot of links to this site http://isteve.blogspot.com/ which seems to basically argue in favor of racism. This blog post http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/ contains a discussion of these issues.

The one basically follows from the other, I think. This isn't a reactionary site by any means; the last poll showed single-digit support for the philosophy here, if it's fair to consider it a political philosophy exclusive with liberalism, libertarianism, and/or conservatism. However, neoreaction/Moldbuggery gets a less hostile reception here than it does on most non-reactionary sites, probably because it's an intensely contrarian philosophy and LW seems to have a cultural fondness for clever contrarians, and we do have several vocal reactionaries among our commentariat. Among them, perhaps unfortunately, are most of the people talking about race.

It's also pretty hard to dissociate neoreaction from... let's say "certain hypotheses concerning race", since "racism" is too slippery and value-laden a term and most of the alternatives are too euphemistic. The reasons for this seem somewhat complicated, but I think we can trace a good chunk of them to just how much of a taboo race is among what Moldbug calls the Cathedral; if your basic theory is that there's this vast formless cultural force shaping what everyone can and can't talk about without being brande... (read more)

5Kawoomba
If someone were to correctly point out genetic differences between groups (let's assume correctness as a hypothetical), would that be - in your opinion - 1) racist and reprehensible, 2) racist but not reprehensible, or (in the hypothetical) 3) not racist? Would your opinion differ if those genetic differences were relating to a) IQ, or b) lactose intolerance?
8gothgirl420666
Yes to the second question, in that I would give the answer of 2 for A and 3 for B. Racism has at least three colloquial definitions that I can think of:

* 1: A belief that there is a meaningful way to categorize human beings into races, and that certain races have more or less desirable characteristics than others. This is the definition that Wikipedia uses. Not that many educated people are racist according to this definition, I think.
* 2: The tendency to jump to conclusions about people based on their skin color, which can manifest as a consequence of racism-1, or of unconsciously believing in racism-1. Pretty much everyone is racist to some extent according to this definition.
* 3: Contempt or dislike of people based on their skin color, i.e. "I hate Asians". You could further divide this into consciously and unconsciously harboring these beliefs if you wanted.

In the sexism debate, these three definitions are given separate names: "belief in differences between the sexes", "sexism", and "misogyny" respectively. Racism-3 seems to be pretty clearly evil, and racism-2 causes lots of suffering, but racism-1 basically by definition cannot be evil if it is a true belief and you abide by the Litany of Tarski or whatever. But because they have the same name, it gets confusing. Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is. Own up to your beliefs. (I am not racist-1, for the record.)

Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is.

"What it fucking is" is a straw man. ie. "and that certain races have more or less desirable characteristics than others" is not what the people you are disparaging are likely to say, for all that it is vaguely related.

Own up to your beliefs.

Seeing this exhortation used to try to shame people into accepting your caricature as their own position fills me with the same sort of disgust and contempt that you have for racism. Failure to "own up" and profess their actual beliefs is approximately the opposite of the failure mode they are engaging in (that of not keeping their mouth shut when socially expedient). In much the same way suicide bombers are not cowards.

1gothgirl420666
According to Wikipedia, "racism is usually defined as views, practices and actions reflecting the belief that humanity is divided into distinct biological groups called races and that members of a certain race share certain attributes which make that group as a whole less desirable, more desirable, inferior or superior." This definition appears to exactly match the beliefs of the people I am talking about.

I guess it's all in how you define superior, inferior, more desirable, etc. But most of the discourse revolves around intelligence, which is a pretty important trait, and I don't think these people believe that black people, for example, have traits that make up for their supposed lack of intelligence, or that Asians have flaws that make up for their supposed above-average intelligence (and no, dick size doesn't count). In particular, these people seem to believe that an innate lack of intelligence is to blame for the fact that so many African countries are in total chaos, and unless you believe in a soul or something, it's hard to imagine that a race physically incapable of sustaining civilization is not in some meaningful way "inferior".

If you hold a belief that is described with a name that has negative connotations, you have two options. You can either hide behind some sort of euphemism, or you can just come out and say "yes I do believe that, and I am proud of it". I think the second choice is much more noble, and if I were to adopt these beliefs, I would just go ahead and describe myself as a racist.

It's not really a major issue though, and I probably shouldn't have used the word "fucking" in my previous post. But anyway, since the term is completely accurate, the only reason I can think of to not call the people I'm describing racists is because it might offend them, which is deeply ironic.
9Viliam_Bur
There is also a third option: keep your identity small and pick your battles. Just because the society happens to disagree with you on one specific topic, that is no reason to make that one topic central to your life, and to let all other people define you by that one topic regardless of what other traits or abilities you have -- which will probably happen if you are open about that disagreement.

Imagine that you live in a society where people believe that 2+2=5, and they also believe that anyone who says 2+2=4 is an evil person and must be killed. (There seems to be a good reason for that. A hundred years ago there was an evil robot who destroyed half of the planet, and it is known that the robot believed that 2+2=4. Because this is the most known fact about the robot, people concluded that believing that 2+2=4 must be the source of all evil, and needs to be eradicated from the society. We don't want any more planetary destruction, do we?)

What are your choices? You could say that 2+2=4 and get killed. Or you could say that 2+2=4.999, avoid being killed, only get a few suspicious looks and be rejected at a few job interviews; and hope that if people keep doing that long enough, at some point it will become acceptable to say that 2+2=4.9, or even 4.5, and perhaps one day no one will be killed for saying that it equals 4. The third option is to enjoy food and wine, and refuse to comment publicly on how much 2+2 is. Perhaps have a few trusted friends you can discuss maths with.
3gothgirl420666
Okay, but all I'm saying is that if you do decide to talk about your beliefs, you should use a more honest term for your belief system. I definitely agree with you that racists should not go around talking publicly about their beliefs! You seem to have inferred something from my post that I didn't mean, sorry about that.
4A1987dM
I think that “group as a whole” is the key phrase. Men are taller than women on average, and being tall is usually considered desirable; is pointing that out sexist? I'd say that until you treat that fact as a reason to consider a gender “as a whole” more desirable than another, it isn't.
1Kawoomba
Most people do consider a gender as a whole more desirable than another ... (and can also supply some "facts" on which that preference is based).
2Document
Possibly related: Overcoming Bias : Mate Racism.
1A1987dM
Doesn't contradict what I said, because I never claimed that most people aren't sexist. (And BTW, I'm not sure whether what you mean by “desirable” is what was meant in WP's definition of racism. I'm not usually sexually attracted to males or Asians, but I consider this a fact about me, not about males or Asians, and I don't consider myself sexist or racist for that.) (EDIT: to be more pedantic, one could say that the fact that I'm normally only attracted to people with characteristics X, Y, and Z is a fact about me and that the fact that males/Asians seldom have characteristics X, Y and Z is a fact about them, though.)
3khafra
If they believed you, consistency bias might make them lean more toward racist-2 and racist-3. Or it might shame them into lowering their belief in the entire reactionary memeplex, which would be epistemically sub-optimal. It might lower their status, or even their earning ability if justified accusations of racism became associated with their offline identities. There's many ways leveraging emotionally loaded terms can have negative effects.
7[anonymous]
Why not?
7CCC
As far as racism-1 goes, I am told that high levels of melanin in the skin lead to an immunity to sunburn. So black people can't get sunburnt - that's a desirable characteristic, to my mind. (There's still negative effects - such as a headache - from being in the sun too long. Just not sunburn).
5Zaine
Science:
2MugaSofer
Well, if you think races are a real thing, then calling this belief race realism seems fairly clear, and helps distinguish your belief from type-3 racism. Human biodiversity implies something more like support for eugenics, to me, since you're saying that humans are diverse, not that race is a functional Schelling point.
6Nornagest
Stripped of connotations, "race realism" to me implies the belief that empirical clusters exist within the space of human diversity and that they map to the traditional racial classifications, but not necessarily that those clusters affect intellectual or ethical dimensions to any significant degree. I'm not sure if there's an non-euphemistic value-neutral term for racism-1 in the ancestor's typology, but that isn't it. (The first thing that comes to mind is "scientific racism", which I'd happily use for ideas like this in a 19th- or early 20th-century context, but I have qualms about using it in a present-day context.)
1MugaSofer
Ah, good point.
1TheOtherDave
If it helps, the LW user I most consistently associate with the "certain races are inherently more or less intelligent/violent/whatever on average than others" position (as gothgirl420666 says below) is Eugine Nier. A quick Google search ("site:http://lesswrong.com Eugine_Nier rac intelligence") turns up claims like "just about any proxy measure of intelligence, from SAT scores, to results of IQ tests, to crime rates, will correlate with race", for example. That said, were someone to describe Eugine Nier or their positions as "racist," I suspect they would respond that "racist" means lots of different things to different people and is not a useful descriptor.
2Nisan
Welcome! I'm unable to read while listening to music with words in it. I wonder how universal that is.
2MalcolmOcean
I know of at least three possible minds for this. Pretty sure we all assumed we were typical until talking about it.

* One friend of mine is like you, and finds music horribly distracting to reading.
* Another friend becomes practically deaf while reading, so music is just irrelevant.
* I, on the third hand, can sing along to songs I know while reading. I can possibly even do this for simple songs I don't know. I would suspect this is not optimal reading from a comprehension or speed perspective, but it's a lot of fun.
1Shmi
Pretty much the same here. I can only read when I tune out the lyrics. Well, not quite true, I can certainly read, but the content just doesn't register.
aime15310

Hello, I'm E. I'll be entering university in September planning to study some subset of {math, computer science, economics}. I found Less Wrong in April 2012 through HPMoR and started seriously reading here after attending SPARC. I haven't posted because I don't think I can add too much to discussions, but reading here is certainly illuminating.

I'm interested in self-improvement. Right now, I'm trying to develop better social skills, writing skills, and work ethic. I'm also collecting some simple data from my day-to-day activities with the belief that having data will help me later. Some concrete actions I am currently taking:

  • Conditioning myself (focusing on smiling and positive thoughts) to enjoy social interaction. I don't dislike social interaction, but I'm definitely averse to talking to strangers. This aversion seems like it will hurt me long-term, so I'm trying to get rid of it.
  • Writing in a journal every night. Usually this is 200-300 words of my thoughts and summaries of the more important events that happened. I started this after noticing that I repeatedly tried and failed to recall my thoughts from a few months or years ago.
  • Setting daily schedules for myself. When I
... (read more)
7ModusPonies
Welcome! You sound remarkably driven. Math and CS are foundational fields which can be used for nearly anything, while economics past the intro level is much more specialized. I'd suggest putting the least focus on economics unless/until you're sure you want to do something with it. (Warning: I am a programmer with an econ degree. I may be projecting, here.) Subjective happiness, maybe? The old "how good do you feel right now on a scale of 1-10" could be one way to quantify this. They are the worst thing.

Hi everyone. I have been lurking on this site for a long time, and somewhat recently have made an account, but I still feel pretty new here. I've read most of the sequences by now, and I feel that I've learned a lot from them. I have changed myself in some small ways as a result, most notably by donating small amounts to whatever charity I feel is most effective at doing good, with the intention that I will donate much more once I am capable of doing so.

I'm currently working on a Ph.D. in mathematics, and I am hoping to steer my research activities towards things that will do good. Still not sure exactly how to do this, though.

I also had the opportunity to attend my local Less Wrong meetup, and I have to say it was quite enjoyable! I am looking forward to future interactions with my local community.

7Pablo
Hi Adele. Given what you write in your introduction, it's likely that you have already heard of this organization, but if not: you may want to check out 80,000 Hours. They provide evidence-based career advice for people who want to make a difference.
3Nisan
Welcome! I like your username. EDIT: I know several people in this community who dropped out of math grad school, and most of them were happy with the decision. I'm choosing to graduate with a PhD in a useless field because I find myself in a situation where I can get one in exchange for a few months of work. I know someone who switched to algebraic statistics, which is a surprisingly useful field that involves algebraic geometry.
3John_Maxwell
I haven't looked at this issue in detail, but I seem to recall that not getting more education was one of the more common regrets among "Terman's geniuses", whoever those are. Link.
2Adele_L
What is their reasoning?
7Nisan
I can't speak for them, but I expect it's something like this: One can make more money, do more good, have a more fun career, and have more freedom in where one lives by dropping out than by going into academia. And having a PhD when hunting for non-academic jobs is not worth spending several years as a grad student doing what one feels is non-valuable work for little pay. You'd have to speak to someone who successfully dropped out to get more details; and of course even if all their judgments are correct, they may not be correct for you.
2magfrump
There are several people on LW (myself included) who continue to be in graduate school in mathematics. If you're interested in just talking math, there'll be an audience for that. I would personally be interested in more academic networking happening here--even if most people on LW will end up leaving mathematics as such.

Hello!

I'm Jennifer; I'm currently a graduate student in medieval literature and a working actor. Thanks to homeschooling, though, I do have a solid background and abiding interest in quantum physics/pure mathematics/statistics/etc., and 'aspiring rationalist' is probably the best description I can provide! I found the site through HPMoR.

Current personal projects: learning German and Mandarin, since I already have French/Latin/Spanish/Old English/Old Norse taken care of, and much as I personally enjoy studying historical linguistics and old dead languages, knowing Mandarin would be much more practical (in terms of being able to communicate with the greatest number of people when travelling, doing business, reading articles, etc.)

3Adele_L
Hey, another homeschooled person! There seem to be a lot of us here. How was your experience? Mine was the crazy religious type, but I still consider it to have been an overall good thing for my development relative to other feasible options.
2lavalamp
Me three-- I thought I was the only one, where are we all hiding? :)
2Jennifer_H
My experience was, overall, excellent - although my parents are definitely highly religious. (To be more precise, my father is a pastor, so biology class certainly contained some outdated ideas!) However, I'm in complete agreement - relative to any other possible options, I don't think I could have gotten a better education (or preparation for postsecondary/graduate studies) any other way.
2Adele_L
Yeah, I got taught young earth creationism instead of evolution. But despite this, I think I was better prepared academically than most of my peers.
0komponisto
Your self-description is one of the best arguments for homeschooling I have ever seen or could imagine being made. (See also: Lillian Pierce.) Welcome to LW, and please keep existing.
0Shmi
Impressive! How do you plan to learn Mandarin? Immersion? Rosetta Stone?
6Jennifer_H
Combination of methods based on what has worked for me in the past with other languages! I've used Rosetta Stone before, for French & Spanish, and while it definitely has advantages, I (personally - I also know people who love it!) found it very time-consuming for very little actual learning, and it's also expensive for what it is. Basically:
a) I have enough friends who are either native or fluent speakers of Mandarin that once I'm a little more confident with the basics, I will draft them to help me practice conversation skills :)
b) My university offers inexpensive part-time courses to current students.
c) Lots of reading, textbook exercises, watching films, listening to music, translating/reading newspapers, etc. in the language.
d) I'm planning to go to China to teach English in the not-too-distant future, so while I'd like to have basic communication skills down before I go, immersion will definitely help!

Hi!

I’ve been interested in how to think well since early childhood. When I was about ten, I read a book about cybernetics. (This was in the Oligocene, when “cybernetics” had only recently gone extinct.) It gave simple introductions to probability theory, game theory, information theory, boolean switching logic, control theory, and neural networks. This was definitely the coolest stuff ever.

I went on to MIT, and got an undergraduate degree in math, specializing in mathematical logic and the theory of computation—fields that grew out of philosophical investigations of rationality.

Then I did a PhD at the MIT AI Lab, continuing my interest in what thinking is. My work there seems to have been turned into a surrealistic novel by Ken Wilber, a woo-ish pop philosopher. Along the way, I studied a variety of other fields that give diverse insights into thinking, ranging from developmental psychology to ethnomethodology to existential phenomenology.

I became aware of LW gradually over the past few years, mainly through mentions by people I follow on Twitter. As a lurker, there’s a lot about the LW community I’ve loved. On the other hand, I think some fundamental, generally-accepted ideas her... (read more)

[-]lll230

Hey everyone!

I'm lll; my real name is Lukas. I am a student at a technical university in the US and a hobbyist FOSS programmer.

I discovered Harry Potter and the Methods of Rationality accidentally one night, and since then I've been completely hooked on it. After I caught up, I decided to check out the Less Wrong community. I've been lurking since then, reading the essays and comments and hanging out in the IRC channel.

1EvelynM
Welcome to Less Wrong III!
1Kindly
It's not III, it's lll.
5Manfred
We can just call him CL for short, to distinguish him from IIV.
3A1987dM
Damn sans-serif fonts...
1EvelynM
If I were reading this in inconsolata, I'd have known that. Thanks.
1lll
It seems like my username is already sparking some controversies. It's three lowercase L letters. My initial is LL, but I can't have a two letter username, so LLL, but I thought uppercase would be too much, so lll it is.
1lll
Thank you! I am definitely enjoying this community. I am a recent Reddit expat, too, so I will focus my internet browsing time here. I don't think I will miss Reddit at all.
1VCavallo
If your Reddit time commitment was anything like that of other people I know, you should be able to blow through all the sequences in about a day or two : )

Hey, my name is Roman. You can read my detailed bio here, as well as some research papers I published on the topics of AI and security. I decided to attend a local LW meet up and it made sense to at least register on the site. My short term goal is to find some people in my geographic area (Louisville, KY, USA) to befriend.

4Shmi
Nice to see more AI experts here.
1Wei Dai
Hi Roman. Would you mind answering a few more questions that I have after reading your interview with Luke? Carl Shulman and Nick Bostrom have a paper coming out arguing that embryo selection can eventually (or maybe even quickly) lead to IQ gains of 100 points or more. Do you think Friendly AI will still be an unsolvable problem for IQ 250 humans? More generally, do you see any viable path to a future better than technological stagnation short of autonomous AGI? What about, for example, mind uploading followed by careful recursive upgrading of intelligence?
3Roman_Yampolskiy
Hey Wei, great question! Agents (augmented humans) with an IQ of 250 would be superintelligent with respect to our current position on the intelligence curve and would be just as dangerous to us, unaugmented humans, as any sort of artificial superintelligence. They would not be guaranteed to be Friendly by design and would be as foreign to us in their desires as most of us are from severely mentally retarded persons. For most of us (sadly?) such people are something to try and fix via science, not something for whom we want to fulfill their wishes. In other words, I don't think you can rely on an unverified (for safety) agent (even one with higher intelligence) to make sure that other agents with higher intelligence are designed to be human-safe. All the examples you give start by replacing humanity with something not-human (uploads, augments) and proceed to ask the question of how to save humanity. At that point you have already lost humanity by definition. I am not saying that is not going to happen; it probably will. Most likely we will see something predicted by Kurzweil (a merger of machines and people).
6Wei Dai
I think if I became an upload (assuming it's a high fidelity emulation) I'd still want roughly the same things that I want now. Someone who is currently altruistic towards humanity should probably still be altruistic towards humanity after becoming an upload. I don't understand why you say "At that point you already lost humanity by definition".
5Dr_Manhattan
Wei, the question here is would rather than should, no? It's quite possible that the altruism I endorse as a part of me is related to my brain's empathy module, much of which might break if I can no longer relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang's "Understand" - http://www.infinityplus.co.uk/stories/under.htm and, ahem, Watchmen's Dr. Manhattan.
4Eliezer Yudkowsky
Logical fallacy: Generalization from fictional evidence. A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change. If you start doing code modification, of course, some but not all bets are off.
6Dr_Manhattan
Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities, you cannot derive evidence from it of course. I agree on the first-minute point, but do not see why it's relevant, because there is the 999999th minute by which value drift will take over (if altruism is strongly related to empathy). I guess upon waking up I'd make value preservation my first order of business, but since an upload is still evolution's spaghetti code it might be a race against time.
1MugaSofer
Perhaps the idea is that the sensory experience of no longer falling into the category of "human" would cause the brain to behave in unexpected ways? I don't find that especially likely, mind, although I suppose long-term there might arise a self-serving "em supremacy" meme.
1Bugmaster
+1 for linking to Understand ; I remembered reading the story long ago, but I forgot the link. Thanks for reminding me !
3Roman_Yampolskiy
We can talk about what high fidelity emulation includes. Will it be just your mind? Or will it be Mind + Body + Environment? In the most common case (with an absent body) the most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved, creating a new type of agent. People are mostly defined by their physiological needs (think of Maslow's pyramid). An entity with no such needs (or with such needs satisfied by virtual/simulated abundant resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or relatively limited intelligence may lose all allegiance to humanity, since they would no longer be a part of it. So I guess I define "humanity" as comprised of standard/unaltered humans. Anything superior is no longer a human to me, just as we consider ourselves homo sapiens first and foremost, not Neanderthals.
1Nornagest
Insofar as Maslow's pyramid accurately models human psychology (a point of which I have my doubts), I don't think the majority of people you're likely to be speaking to on the Internet are defined in terms of their low-level physiological needs. Food, shelter, physical security -- you might have fears of being deprived of these, or even might have experienced temporary deprivation of one or more (say, if you've experienced domestic violence, or fought in a war) but in the long run they're not likely to dominate your goals in the way they might for, say, a Clovis-era Alaskan hunter. We treat cases where they do as abnormal, and put a lot of money into therapy for them. If we treat a modern, first-world, middle-class college student with no history of domestic or environmental violence as psychologically human, then, I don't see any reason why we shouldn't extend the same courtesy to an otherwise humanlike emulation whose simulated physiological needs are satisfied as a function of the emulation process.
3Roman_Yampolskiy
I don't know about you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, Drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of me and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don't have much in common as far as being from the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.
3Nornagest
It seems to me that there are a couple of things going on here. I spend a reasonable amount of time (probably a couple of hours of conscious effort each day; I'm not sure how significant I want to call sleep) meeting immediate physical needs, but those don't factor much into my self-image or my long-term goals; I might spend an hour each day making and eating meals, but ensuring this isn't a matter of long-term planning nor a cherished marker of personhood for me. Looked at another way, there are people that can't eat or excrete normally because of one medical condition or another, but I don't see them as proportionally less human. I do spend a lot of time gaining access to abstract resources that ultimately secure my physiological satisfaction, on the other hand, and that is tied closely into my self-image, but it's so far removed from its ultimate goal that I don't feel that cutting out, say, apartment rental and replacing it with a proportional bill for Amazon AWS cycles would have much effect on my thoughts or actions further up the chain, assuming my mental and emotional machinery remains otherwise constant. I simply don't think about the low-level logistics that much; it's not my job. And I'm a financially independent adult; I'd expect the college student in the grandparent to be thinking about them in the most abstract possible way, if at all.
0TheOtherDave
Well, yes, a lot depends on what we assume the upload includes, and how important the missing stuff is. If Dave!upload doesn't include X1, and X2 defines Dave!original's humanity, and X1 contains X2, then Dave!upload isn't human... more or less tautologically. We can certainly argue about whether our experiences of hunger, thirst, fatigue, etc. qualify as X1, X2, or both... or, more generally, whether anything does. I'm not nearly as confident as you sound about either of those things. But I'm not sure that matters. Let's posit for the sake of comity that there exists some set of experiences that qualify for X2. Maybe it's hunger, thirst, fatigue, etc. as you suggest. Maybe it's curiosity. Maybe it's boredom. Maybe human value is complex and X2 actually includes a carefully balanced brew of a thousand different things, many of which we don't have words for. Whatever it is, if it's important to us that uploads be human, then we should design our uploads so that they have X2. Right? But you seem to be taking it for granted that whatever X2 turns out to be, uploads won't experience X2. Why?
0Roman_Yampolskiy
Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (ex. Humans) requires you to be one. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can experience simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise a piece of software is by definition not a human being, it is at best a simulation of one.
0TheOtherDave
Ah. So when you say "most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent" you're making a definitional claim that whatever the new agent experiences, it won't be a human feeling, because (being software) the agent definitionally won't be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does they won't be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc. Yes? Do I understand you now? FWIW, I agree that there are definitions of "human being" and "software" by which a piece of software is definitionally not a human being, though I don't think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I'm willing to use your definitions when talking to you. You go on to say that this agent, not being human, will not want the same things as a human. Well, OK; that follows from your definitions. One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants? Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn't reliably simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2? Because if it's important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?
2Bugmaster
Have you ever had the unfortunate experience of hanging out with really boring people; say, at a party ? The kind of people whose conversations are so vapid and repetitive that you can practically predict them verbatim in your head ? Were you ever tempted to make your excuses and duck out early ? Now imagine that it's not a party, but the entire world; and you can't leave, because it's everywhere. Would you still "feel altruistic toward humanity" at that point ?
1TheOtherDave
It's easy to conflate uploads and augments, here, so let me try to be specific (though I am not Wei Dai and do not in any way speak for them). I experience myself as preferring that people not suffer, for example, even if they are really boring people or otherwise not my cup of tea to socialize with. I can't see why that experience would change upon a substrate change, such as uploading. Basically the same thing goes for the other values/preferences I experience. OTOH, I don't expect the values/preferences I experience to remain constant under intelligence augmentation, whatever the mechanism. But that's kind of true across the board. If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences. If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.
0[anonymous]
I'm going to throw out some more questions. You are by no means obligated to answer. In your AI Safety Engineering paper you say, "We propose that AI research review boards are set up, similar to those employed in review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide if the proposal falls under the standard AI – limited domain system or may potentially lead to the development of a full blown AGI." But would we really want to do this today? I mean, in the near future--say the next five years--AGI seems pretty hard to imagine. So might this be unnecessary? Or, what if later on when AGI could happen, some random country throws the rules out? Do you think that promoting global cooperation now is a useful way to address this problem, as I assert in this shamelessly self-promoted blog post? The general question I am after is, How do we balance the risks and benefits of AI research? Finally you say in your interview, "Conceivable yes, desirable NO" on the question of relinquishment. But are you not essentially proposing relinquishment/prevention?
1Roman_Yampolskiy
Just because you can't imagine AGI in the next 5 years doesn't mean that in four years someone won't propose a perfectly workable algorithm for achieving it. So yes, it is necessary. Once everyone sees how obvious AGI design is, it will be too late. Random countries don't develop cutting-edge technology; it is always done by the same superpowers (USA, Russia, etc.). I didn't read your blog post, so I can't comment on "global cooperation". As to the general question you are asking: you can get most conceivable benefits from domain expert AI without any need for AGI. Finally, I do think that relinquishment/delaying is a desirable thing, but I don't think it is implementable in practice.
1TheOtherDave
Is there a short form of where you see the line between these two types of systems? For example, what is the most "AGI-like" AI you can conceive of that is still "really a domain-expert AI" (and therefore putatively safe to develop), or vice-versa? My usual sense is that these are fuzzy terms people toss around to point to very broad concept-clusters, which is perfectly fine for most uses, but if we're really getting to the point of trying to propose policy based on these categories, it's probably good to have a clearer shared understanding of what we mean by the terms. That said, I haven't read your paper; if this distinction is explained further there, that's fine too.
0Roman_Yampolskiy
Great question. To me a system is domain specific if it can’t be switched to a different domain without re-designing it. I can’t take Deep Blue and use it to sort mail instead. I can’t take Watson and use it to drive cars. An AGI (for which I have no examples) would be capable of switching domains. If we take humans as an example of general intelligence, you can take an average person and make them work as a cook, driver, babysitter, etc, without any need for re-designing them. You might need to spend some time teaching that person a new skill, but they can learn efficiently and perhaps just by looking at how it should be done. I can’t do this with domain expert AI. Deep Blue will not learn to sort mail regardless of how many times I demonstrate that process.
0TheOtherDave
(nods) That's fair. Thanks for clarifying.
0Moss_Piglet
I've heard repeatedly that the correlation between IQ and achievement after about 120 (z = 1.33) is pretty weak, possibly even with diminishing returns up at the very top. Is moving to 250 (z = 10) passing a sort of threshold of intelligence at some point where this trend reverses? Or is the idea that IQ stops strongly predicting achievement above 120 wrong? This is something I've been curious about for a while, so I would really appreciate your help clearing the issue up a bit.
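(For concreteness, the z-scores in the comment above come from the usual deviation-IQ convention of mean 100 and SD 15. A quick sketch in Python shows how rare those scores would be under a normal model - bearing in mind, as others note in this thread, that real tests aren't calibrated anywhere near that far into the tail:)

```python
from math import erfc, sqrt

IQ_MEAN, IQ_SD = 100.0, 15.0  # the standard deviation-IQ convention

def iq_to_z(iq):
    """Convert a deviation IQ score to a standard (z) score."""
    return (iq - IQ_MEAN) / IQ_SD

def rarity(z):
    """Upper-tail probability P(Z > z) for a standard normal.

    Uses erfc directly so the result stays accurate far out in the
    tail, where 1 - cdf(z) would round to zero.
    """
    return 0.5 * erfc(z / sqrt(2))

for iq in (120, 160, 250):
    z = iq_to_z(iq)
    print(f"IQ {iq}: z = {z:.2f}, roughly 1 in {1 / rarity(z):,.0f}")
```

IQ 120 works out to z ≈ 1.33 (roughly 1 person in 11), while IQ 250 is z = 10, a rarity so extreme that no norming sample could ever contain such a person - which is why "IQ 250" is better read as "qualitatively beyond the scale" than as a measurable score.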
9ESRogs
In agreement with Vaniver's comment, there is evidence that differences in IQ well above 120 are predictive of success, especially in science. For example:
  • IQs of a sample of eminent scientists were much higher than the average for science PhDs (~160 vs ~130)
  • Among those who take the SAT at age 13, scorers in the top 0.1% end up outperforming the top 1% in terms of patents and scientific publications produced as adults
I don't think I have good information on whether these returns are diminishing, but we can at least say that they are not vanishing. There doesn't seem to be any point beyond which the correlation disappears.
1Moss_Piglet
I just read the "IQ's of eminent scientists" and realized I really need to get my IQ tested. I've been relying on my younger brother's test (with the knowledge that older brothers tend to do slightly better but usually within an sd) to guesstimate my own IQ but a) it was probably a capped score like Feynman's since he took it in middle school and b) I have to know if there's a 95% chance of failure going into my field. I'd like to think I'm smart enough to be prominent, but it's irrational not to check first. Thanks for the information; you might have just saved me a lot of trouble down the line, one way or the other.
5EHeller
I'd be very careful generalizing from that study to the practice of science today. Science in the 1950s was VERY different: the length of time to the PhD was shorter, postdocs were very rare, and almost everyone stepped into a research faculty position almost immediately. In today's world, staying in science is much harder - there are lots of grad students competing for many postdocs competing for few permanent science positions. Things like conscientiousness, organizational skills, etc. (grant writing is now a huge part of the job) play a much larger role in eventually landing a job than in the past, and luck is a much bigger driver (whether a given avenue of exploration pays off requires a lot of luck; selecting people whose experiments ALWAYS work is just grabbing people who have been both good AND lucky). It would surprise me if the worsening science career hasn't changed the makeup of an 'eminent scientist'.
0Moss_Piglet
At the same time, all of those points except the luck one could be presented as evidence that the IQ required to be eminent has increased rather than the converse. Grant writing and schmoozing are at least partially a function of verbal IQ, IQ in general strongly predicts academic success in grad school, and competition tends to winnow out the poor performers a lot more than the strong. Not that I really disagree; I just don't see it as particularly persuasive. That's just one of the unavoidable frustrations of human nature, though: an experiment which disconfirms its hypothesis worked perfectly, it just isn't human nature to notice negatives.
2EHeller
I disagree for several reasons. Mostly, conscientiousness, conformity, etc. are personality traits that aren't strongly correlated with IQ (conscientiousness may even be slightly negatively correlated). Would it surprise you to know that the most highly regarded grad students in my physics program all left physics? They had a great deal of success before and in grad school (I went to a top 5 program), but left because they didn't want to deal with the administrative/grant stuff, and because they didn't want to spend years at low pay. I'd argue that a successful career in science selects for some threshold IQ and then much more strongly for a personality type.
0Kawoomba
No kidding.
0ESRogs
Are you American? If you've taken the SAT, you can get a pretty good estimate of your IQ here.
0Moss_Piglet
Mensa apparently doesn't consider the SAT to have a high enough g loading to be useful as an intelligence test after 1994. Although the website's figures are certainly encouraging, it's probably best to take them with a bit of salt.
0ESRogs
True, but note that, in contrast with Mensa, the Triple Nine Society continued to accept scores on tests taken up through 2005, though with a higher cutoff (of 1520) than on pre-1995 tests (1450). Also, SAT scores in 2004 were found to have a correlation of about .8 with a battery of IQ tests, which I believe is on par with the correlations IQ tests have with each other. So the SAT really does seem to be an IQ test (and an extremely well-normed one at that if you consider their sample size, though perhaps not as highly g-loaded as the best, like Raven's). But yeah, if you want to have high confidence in a score, probably taking additional tests would be the best bet. Here's a list of high-ceiling tests, though I don't know if any of them are particularly well-normed or validated.
3wedrifid
Is this what you intended to say? "Diminishing returns" seems to apply at the bottom the scale you mention. You've already selected the part where returns have started diminishing. Sometimes it is claimed that that at the extreme top the returns are negative. Is that what you mean?
0Moss_Piglet
Yeah, that's just me trying to do everything in one draft. Editing really is the better part of clear writing. I meant something along the lines of "I've heard it has diminishing returns and potentially [, probably due to how it affects metabolic needs and rate of maturation] even negative returns at the high end."
3Vaniver
Most IQ tests are not very well calibrated above 120ish, because the number of people in the reference sample that scored much higher is rather low. It's also the case that achievement is a function of several different factors, which will probably become the limiting factor for most people at IQs higher than 120. That said, it does seem that in physics, first-tier physicists score better on cognitive tests than second-tier physicists, which suggests that additional IQ is still useful for achievement in the most cognitively demanding fields. It seems likely that augmented humans who do several times better than current humans on cognitive tests will also be able to achieve several times as much in cognitively demanding fields.
1Lumifer
First, IQ tests don't go to 250 :-) Generally speaking standard IQ tests have poor resolution in the tails -- they cannot reliably identify whether you have the IQ of, say, 170 or 190. At some point all you can say is something along the lines of "this person is in the top 0.1% of people we have tested" and leave it at that. Second, "achievement" is a very fuzzy word. People mean very different things by it. And other than by money it's hard to measure.
0Shmi
I wonder how they propose to avoid the standard single-trait selective breeding issues, like accumulation of undesirable traits. For example, those geniuses might end up being sickly and psychotic.
0arundelo
It seems to me that this would not be a problem with iterated embryo selection, but I might be wrong. See also Yvain's "modal human" post.
0[anonymous]
Would it matter? Cf. goldmage.
1lukeprog
Note also that Roman co-authored 3 of the papers on MIRI's publications page.
0Shmi
His paper http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf seriously discusses ways to confine a potentially hostile superintelligence, a feat MIRI seems to consider hopeless. Did you guys have a good chat about it?

I think most everyone at MIRI and FHI thinks boxing is a good thing, even if many would say it is not enough on its own. I don't think you will find many who think that open internet connections are a matter of indifference for AI developers working with powerful AGI.

High-grade common sense (the sort you'd get by asking any specialist in computer security) says that you should design an AI which you would trust with an open Internet connection, then put it in the box you would use on an untrusted AI during development. (No, the AI will not be angered by this lack of trust and resent you. Thank you for asking.) I think it's safe to say that for basically everything in FAI strategy (I can't think of an exception right now) you can identify at least two things supporting any key point, such that either alone was designed to be sufficient independently of the other's failing, including things like "indirect normativity works" (you try to build in at least some human checks around this which would shut down any scary AI independently of your theory of indirect normativity being remotely correct, while also not trusting the humans to steer the AI because then the humans are your single point of failure).

5lukeprog
See my interview with Roman here.
0Shmi
Thanks. Pretty depressing, though.

Hi everyone, my name is Sara!

I am 21, live in Switzerland and study psychology. I am fascinated by the field of rationality and therefore wrote my Bachelor thesis on why and how critical thinking should be taught in schools. I started out with the plan to get my degree in clinical and neuropsychology but will now change to developmental psychology, for I was able to fascinate my supervising tutor and secure his full support. This will allow me to base my Master project on the development and enhancement of critical thinking and rationality, too. Do you have any recommendations?

After my Master's degree I still intend to train as a therapist (for money reasons) or go into research (pushing the experimental research on rationality), and to give a lot of money to the most effective charities around. I wonder whether as a therapist it would be smarter to concentrate on children or adults; both fields will be open to me after my university education (which will take me about 2.5-3 more years). I speak German, Swiss German, Italian, French and English (and understand some more languages), which will give me some freedom in choosing where to actually work in the future.

...but I'm not ... (read more)

1Tenoke
Hello, Sara. Do you have any specific ideas for this? Are you aiming at enhancing rationality in adults or children? I don't have specific recommendations except perhaps people whose work is relevant; however, you would have encountered those around the site. P.S. I am mainly commenting here because this is the second time I've seen you on the internet within the last 4 hours.
1Serendipity
Hello, Tenoke. I am aiming at enhancing rationality in children but have indeed often had to fall back on research with older people. Until now I've been concentrating on the work of Stanovich, Facione, van Gelder and Twardy. Whose work do you think would also be relevant? Thank you for your answer!
1Tenoke
Well, Kahneman (and Tversky) would be the most obvious examples of those not mentioned. Otherwise Dennett, Gilovich, Slovic, Pinker, Taleb and Thaler would be some examples of people whose work has varying degrees of relevance to the subject. Those are the people I can think of off the top of my head, but the best way to systematically find researchers of interest would be to look at the reverse citations of Kahneman and Tversky's work, or something of the sort.
0Serendipity
Ah, how could I forget them! Biases and heuristics play a big role in my interests for critical thinking of course. I'm a bit surprised: how come you included Dennett and Pinker? I know these two for work that's (very interesting but) mostly unrelated to my addressed topic. I'm curious, seems like I missed something important.
1Tenoke
I was writing on auto-pilot, you are right that their work is significantly less relevant to the topic than the others'.

If people have a problem with it, that's not my fault.

It might or it might not be. As a general rule, if two people think that a single issue of fact is a settled question, in different directions, then either they have access to different information, or one or both of them is incorrect.

If the former is the case, then they can share their information, after which either they will agree, or one or both will be incorrect.

If we're incorrect about religion being a settled question, we want to know that, so we can change our minds. If Mormonism is incorrect, do you want to know that?

Hi,

I'm a final-year Mathematics student at Cambridge coming from an IOI/IMO background. I've written software for a machine learning startup, a game dev startup and Google. I've recently become interested in programming language theory, especially probabilistic and logic programming (some experiments here: http://peteriserins.tumblr.com/archive).

I'm interested in many aspects of startups (including design) and hope to move into product management, management consulting or venture capital. I love trying to think rationally about business processes and have started to write about it at http://medium.com/@p_e .

I found out about LW from a friend and have since started reading the sequences. I hope to learn more about practical instrumental rationality; I am less interested in philosophy and the meta theory. So far I've learned more about the practical application of mathematics from data science and consulting, but I expect rationality to take it further and with more rigor.

Great meeting y'all

6Nisan
Welcome! You may want to consider participating in a CFAR workshop. I think it's 1000% as effective for learning instrumental rationality as reading Less Wrong. They're optimized for teaching practical skills, and they tend to attract entrepreneurs. Also, I think you'd be a valuable addition to the community around CFAR, in addition to the online community around the Less Wrong website.
2beoShaffer
As someone who has done a CFAR workshop and a lot of online rationality stuff (including, but not limited to, reading ~90% of the sequences), I second this. I'll also add that I do think having a strong theoretical background going in enhances the practical training.

Lumifer, please update that at this moment you don't grok the difference between "A => B (p=0.05)" and "B => A (p = 0.05)", which is why you don't understand what p-value really means, which is why you don't understand the difference between selection bias and base rate neglect, which is probably why the emphasis on using Bayes theorem in scientific process does not make sense to you. You made a mistake, that happens to all of us. Just stop it already, please.

And don't feel bad about it. Until recently I didn't understand it either, and I had a gold medal from the International Mathematical Olympiad. Somehow it is not explained correctly at most schools, perhaps because the teachers don't get it themselves, or maybe they just underestimate the difficulty of proper understanding and the high chance of getting it wrong. So please don't contribute to the confusion.

Imagine that there are 1000 possible hypotheses, among which 999 are wrong, and 1 is correct. (That's just a random example to illustrate the concept. The numbers in real life can be different.) You have an experiment that says "yes" to 5% of the wrong hypotheses (this is what p=0.05 means), and a... (read more)
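To make that arithmetic concrete, here is a minimal sketch. The numbers are the illustrative ones from the comment above, plus one added assumption of mine: an 80% chance ("power") that the experiment actually detects the one correct hypothesis.

```python
# Hypothetical numbers: 1000 hypotheses, of which only 1 is correct.
# The experiment says "yes" to 5% of the wrong hypotheses (p = 0.05),
# and we assume it detects the one correct hypothesis 80% of the time.
n_wrong, n_correct = 999, 1
false_positive_rate = 0.05
assumed_power = 0.8

expected_false_yes = n_wrong * false_positive_rate    # ~50 wrong hypotheses pass
expected_true_yes = n_correct * assumed_power         # ~0.8 correct hypotheses pass

# Probability a hypothesis is correct, given that the experiment said "yes"
p_correct_given_yes = expected_true_yes / (expected_true_yes + expected_false_yes)
print(round(p_correct_given_yes, 3))  # 0.016
```

So under these made-up numbers, an experiment "successful" at p = 0.05 leaves the hypothesis with under a 2% chance of being correct — which is exactly the base-rate point.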

2Lumifer
LOL. Yeah, yeah, mea culpa, I had a brain fart and expressed myself very poorly. I do understand what p-value really means. The issue was that I had in mind a specific scenario (where in effect you're trying to see if the difference in means between two groups is significant) but neglected to mention it in the post :-)
0Vaniver
I feel like this could use a bit longer explanation, especially since I think you're not hearing Lumifer's point, so let me give it a shot. (I'm not sure I see a meaningful difference between base rate neglect and selection bias in this circumstance.) The word "grok" in Viliam_Bur's comment is really important. This part of the grandparent is true: But it's like saying "well, assume the diagnosis is correct. Then the treatment will make the patient better with high probability." While true, it's totally out of touch with reality -- we can't assume the diagnosis is correct, and a huge part of being a doctor is responding correctly to that uncertainty. Earlier, Lumifer said this, which is an almost correct explanation of using Bayes in this situation: The part that makes it the "almost" is the "5% of the times, more or less." This implies that it's centered around 5%, with random chance determining what this instance is. But selection bias means it will almost certainly be more, and generally much more. In fields that study phenomena that don't exist, 100% of the papers published will be of false results that were significant by chance. In many real fields, rates of failure to replicate are around 30%. Describing 30% as "5%, more or less" seems odd, to say the least. But the proposal to reduce the p value doesn't solve the underlying problem (which was Lumifer's response). If we set the p value threshold lower, at .01 or .001 or wherever, we reduce the risk of false positives at the cost of increasing the risk of false negatives. A study design which needs to determine an effect at the .001 level is much more expensive than a study design which needs to determine an effect at the .05 level, and so we will have many fewer studies attempted, and far fewer published studies. Better to drop p entirely. Notice that stricter p thresholds go in the opposite direction as the publication of negative results, which is the real solution to the problem of selection bias.
0Lumifer
My grandparent post was stupid, but what I had in mind was basically a stage-2 (or -3) drug trial situation. You have declared (at least to the FDA) that you're running a trial, so selection bias does not apply at this stage. You have two groups: one receives the experimental drug, one receives a placebo. Assume a double-blind randomized scenario and assume there is a measurable metric of improvement at the end of the trial. After the trial you have two groups with two empirical distributions of the metric of choice. The question is how confident you are that these two distributions are different. Well, as usual, it's complicated. Yes, the p-test is suboptimal in most situations where it's used in reality. However, it fulfils a need, and if you drop the test entirely you need a replacement, for the need won't go away.
19aphyer

Hi, I'm Andrew, a college undergrad in computer science. I found this site through HPMOR a few years ago.

Hi everyone, I'm Chris. I'm a physics PhD student from Melbourne, Australia. I came to rationalism slowly over the years through excellent conversations with like-minded friends. I was raised a Catholic and fully bought into the faith, but became an atheist in early high school when I realised that scientific explanations made more sense.

About a year ago I had a huge problem with the collapse postulate of quantum mechanics. It just didn't make sense, and neither did anything anyone was telling me about it. This led me to discover that many worlds wasn't as crazy as it had been made out to be, and led me to this very community. My growth as a rationalist has made me distrust the consensus opinions of more and more groups, and realising that physicists could get something so wrong was the final nail in the coffin for my trust in the scientific establishment. Of course science is still the best way to figure things out, but as soon as opinions become politicised or tied to job prospects, I don't trust scientists as far as I can throw them. Related to this is my skepticism that climate change is a big deal.

I am frustrated more by the extent of unreason in educated circles than I am in... (read more)

I'm pretty social and would love to meet more rationalist friends, but I have the perception that if I went to a meetup most people would be less extroverted than me, and it might not be much fun for me.

My experience at meetups has been pretty social. After all, meetups select for people outgoing enough to go out of the house in the first place. I'd encourage you to go once, if there's a convenient meetup around. The value of information is high; if the meetup sucks, that costs one afternoon, but if it's good, you gain a new group of friends.

1nonplussed
Excellent point, I know that effect makes a huge difference in other contexts, so that resonates with me. Ok, well I'll give it a shot. There are no meetups near where I am in Germany at the moment, but I'll be back in Melbourne later in the year where there seems to be some regular stuff going on.
1Nisan
Welcome! What do you think of the Born probabilities?
4nonplussed
I haven't gone through any of the supposed derivations, but I'm led to believe that the Born rule is convincingly derivable within many worlds. I have a book called "Many Worlds? Everett, quantum theory and reality", which contains such a derivation; I've been meaning to read it for a while and will get around to it some day. It claims: Which I think is a nice angle to view it from. At any rate, the Born rule is a fairly natural result to have, since the probabilities are simply the inner product of the wavefunction with itself, which is how you normally define the sizes of vectors in vector spaces. So I'm expecting the argument in the book to be related to the criteria that mathematicians use to define inner products, and how those criteria map to assumptions about the universe (i.e. no preferred spatial direction, that sort of thing). Maybe if I understand it I'll post something here about it for those who are interested — I'm yet to see a blog-style summary of where the Born rule comes from. At any rate it doesn't come from anywhere in the way we're taught quantum mechanics at uni; it's simply an axiom that one doesn't question. So any derivation, however assumption-laden and weak, would be an improvement over standard Copenhagen.
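For anyone who hasn't seen the rule written out as code: it really is that small. This is just a toy sketch with a made-up two-component state vector, not anything from the book mentioned above.

```python
import numpy as np

# Toy normalized state vector (made-up amplitudes, one of them complex)
amplitudes = np.array([0.6, 0.8j])

# Born rule: outcome probabilities are the squared magnitudes of the
# amplitudes, i.e. the components of the inner product of the
# wavefunction with itself
probabilities = np.abs(amplitudes) ** 2

print(probabilities.round(2))          # [0.36 0.64]
print(round(probabilities.sum(), 10))  # 1.0 -- normalization <psi|psi> = 1
```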

Greetings.

I'm a long-time singularitarian and (intermediate) rationalist looking to be a part of the conversation again. By day I am an English teacher in a suburban American high school. My students have been known to Google me. Rather than self-censor, I am using a pseudonym so that I will feel free to share my (anonymized) experiences as a rationalist high school teacher.

I internet-know a number of you in this community from the early years of the Singularity Institute. I fleetingly met a few of you in person once, perhaps. I used to write on singularity-related issues, and was a proud "sniper" of the SL4 mailing list for a time. For the last 6-7 years I've mostly dropped off the radar by letting "life" issues consume me, though I have continued to follow the work of the key actors from afar with interest. I allow myself some pride for any small positive impact I might have once had during a time of great leverage for donors and activists, while recognizing that far too much remains undone. (If you would like to confirm your suspicions of my identity, I would love to hear from you with a PM. I just don't want Google searches of my real name pulling up my LW acti... (read more)

0Said Achmiz
Welcome to Less Wrong! Is your user name a reference to "Darmok"?
0tanagrabeast
Yes. It's amazing how memorable people find that one episode. Props to the writers.
18Axion

Hi Less Wrong. I found a link to this site a year or so ago and have been lurking off and on since. However, I've self identified as a rationalist since around junior high school. My parents weren't religious and I was good at math and science, so it was natural to me to look to science and logic to solve everything. Many years later I realize that this is harder than I hoped.

Anyway, I've read many of the sequences and posts, generally agreeing and finding many interesting thoughts. It's fun reading about zombies and Newcomb's problem and the like.

I guess this sounds heretical, but I don't understand why Bayes theorem is placed on such a pedestal here. I understand Bayesian statistics, intuitively and also technically. Bayesian statistics is great for a lot of problems, but I don't see it as always superior to thinking inspired by the traditional scientific method. More specifically, I would say that coming up with a prior distribution and updating can easily be harder than the problem at hand.

I assume the point is that there is more to what is considered Bayesian thinking than Bayes theorem and Bayesian statistics, and I've reread some of the articles with the idea of trying to pin that down, but I've found that difficult. The closest I've come is that examining what your priors are helps you to keep an open mind.

9Viliam_Bur
Bayes' theorem is just one of many mathematical equations, like, for example, the Pythagorean theorem. There is inherently nothing magical about it. It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes seems like this: "I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!" (I guess I am exaggerating a bit here, but many people 'doing science' would not understand immediately what is wrong with this. And that would be those who even bother to calculate the p-value. Not everyone who is employed as a scientist is necessarily good at math. Many people get paid for doing bad science.) This kind of thinking has the following problem: even if you invent a hundred completely stupid hypotheses, and you design experiments that would each prove a false hypothesis only with p = 0.05, then on average five of them will be proved by the experiments. If you show someone else all hundred experiments together, they may understand what is wrong. But you are more likely to send only the five successful ones to the journal, aren't you? -- But how exactly is the journal supposed to react to this? Should they ask: "Did you do many other experiments, even ones completely irrelevant to this specific hypothesis? Because, you know, that somehow undermines the credibility of this one." The current scientific publishing process has a bias. Bayes' theorem explains it. We care about science, and we care about science being done correctly.
0Lumifer
That's not neglecting base rates, that's called selection bias combined with incentives to publish. Bayes theorem isn't going to help you with this. http://xkcd.com/882/
1Viliam_Bur
Uhm, it's similar, but not the same. If I understand it correctly, selection bias is when 20 researchers run an experiment with green jelly beans, 19 of them don't find a significant correlation, 1 of them finds it... and only the 1 publishes, and the 19 don't. The essence is that we had 19 pieces of evidence against the green jelly beans and only 1 piece of evidence for them, but we don't see those 19 pieces, because they are not published. Selection = "there is X and Y, but we don't see Y, because it was filtered out by the process that gives us information". But imagine that you are the first researcher ever to have researched jelly beans. And you did only one experiment. And it happened to succeed. Where is the selection here? (Perhaps selection across Everett branches or Tegmark universes. But we can't blame the scientific publishing process for not giving us information from parallel universes, can we?) In this case, base rate neglect means ignoring the fact that "if you take a random thing, the probability that this specific thing causes acne is very low". Therefore, even if the experiment shows a connection with p = 0.05, it's still more likely that the result just happened randomly. The proper reasoning could be something like this (all numbers pulled out of a hat) -- we already have pretty strong evidence that acne is caused by food; let's say there is a 50% probability for this. With enough specificity (giving each fruit a different category, etc.), there are maybe 2000 categories of food. It is possible that more than one of them causes acne, and our probability distribution for that is... something. Considering all this information, we estimate a prior probability of, let's say, 0.0004 that a random food causes acne. -- Which means that if the correlation is significant at level p = 0.05, that per se means almost nothing. (Here one could use Bayes' theorem to calculate that the p = 0.05 successful experiment shows the true cause o
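Filling in the arithmetic with the same made-up numbers (prior 0.0004, threshold p = 0.05), plus one extra assumption not in the comment above — an 80% chance that the experiment detects a real cause:

```python
prior = 0.0004   # hypothetical P(this food causes acne), from the estimate above
alpha = 0.05     # P(significant result | food does nothing) -- the p threshold
power = 0.8      # assumed P(significant result | food really causes acne)

# Bayes' theorem: P(true cause | significant result)
posterior = (prior * power) / (prior * power + (1 - prior) * alpha)
print(round(posterior, 4))  # 0.0064
```

So even a "successful" p = 0.05 experiment would only move the probability that this particular food causes acne from 0.04% to about 0.6%.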
0Lumifer
That's a different case -- you have no selection bias here, but your conclusions are still uncertain -- if you pick p=0.05 as your threshold, you're clearly accepting that there is a 5% chance of a Type I error: the green jelly beans did nothing, but the noise happened to be such that you interpreted it as conclusive evidence in favor of your hypothesis. But that all is fine -- the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less (not exactly because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis while you really want P(H|D), the probability of the hypothesis given the data). People rarely take entirely random things and test them for causal connection to acne. Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate). As an exercise, try to be specific. For example, let's say I want to check if the tincture made from the bark of a certain tree helps with acne. How would I go about calculating my base rate / prior? Can you walk me through an estimation which will end with a specific number?
6Viliam_Bur
And this is the base rate neglect. It's not "results significant to p=0.05 will be wrong about 5% of time". It's "wrong results will be significant to p=0.05 about 5% of time". And most people will confuse these two things. It's like when people confuse "A => B" with "B => A", only this time it is "A => B (p=0.05)" with "B => A (p=0.05)". It is "if wrong, then in 5% significant". It is not "if significant, then in 5% wrong". Yes, you are right. Establishing the prior is pretty difficult, perhaps impossible. (But that does not make "A => B" equal to "B => A".) Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.
0Lumifer
Um, what "strict limits" are you talking about, what will they look like, and who will be doing the imposing? To get back to my example, let's say I'm running experiments to check if the tincture made from the bark of a certain tree helps with acne -- what strict limits would you like?
0Viliam_Bur
p = 0.001, and if at the end of the year too many results fail to replicate, keep decreasing. (Let's say that "fail to replicate" in this context means that the replication attempt cannot prove it even with p = 0.05 -- we don't want to make replications too expensive, just a simple sanity check.) A long answer would involve a lot of handwaving again (it depends on why you believe the bark is helpful; in other words, what other evidence you already have). A short answer: for example, p = 0.001.
7Vaniver
I know a few answers to this question, and I'm sure there are others. (As an aside, these foundational questions are, in my opinion, really important to ask and answer.)

1. What separates scientific thought and mysticism is that scientists are okay with mystery. If you can stand to not know what something is, to be confused, then after careful observation and thought you might have a better idea of what it is and have a bit more clarity. Bayes is the quantitative heart of the qualitative approach of tracking many hypotheses and checking how concordant they are with reality, and thus should feature heavily in a modern epistemic approach. The more precisely and accurately you can deal with uncertainty, the better off you are in an uncertain world.

2. What separates Bayes and the "traditional scientific method" (using scare quotes to signify that I'm highlighting a negative impression of it) is that the TSM is a method for avoiding bad beliefs but Bayes is a method for finding the best available beliefs. In many uncertain situations, you can use Bayes but you can't use the TSM (or it would be too costly to do so), and the TSM doesn't give any predictions in those cases!

3. Use of Bayes focuses attention on base rates, alternate hypotheses, and likelihood ratios, which people often ignore (replacing the first with maxent, the second with yes/no thinking, and the third with likelihoods).

4. I honestly don't think the quantitative aspect of priors and updating is that important, compared to the search for a 'complete' hypothesis set and the search for cheap experiments that have high likelihood ratios (little bets). I think that the qualitative side of Bayes is super important but don't think we've found a good way to communicate it yet.

That's an active area of research, though, and in particular I'd love to hear your thoughts on those four answers.
0Lumifer
What is the qualitative side of Bayes?
0Vaniver
Unfortunately, the end of that sentence is still true: I think that What Bayesianism Taught Me is a good discussion on the subject, and my comment there explains some of the components I think are part of qualitative Bayes. I think that a lot of qualitative Bayes is incorporating the insights of the Bayesian approach into your System 1 thinking (i.e. habits on the 5 second level).
0Lumifer
Well, yes, but most of the things there are just useful ways to think about probabilities and uncertainty, proper habits, things to check, etc. Why Bayes? He's not a saint whose name is needed to bless a collection of good statistical practices.
3Rob Bensinger
It's more or less the same reason people call a variety of essentialist positions 'platonism' or 'aristotelianism'. Those aren't the only thinkers to have had views in this neighborhood, but they predated or helped inspire most of the others, and the concepts have become pretty firmly glued together. Similarly, the phrases 'Bayes' theorem' and 'Bayesian interpretation of probability' (whence, jointly, the idea of Bayesian inference) have firmly cemented the name Bayes to the idea of quantifying psychological uncertainty and correctly updating on the evidence. The Bayesian interpretation is what links these theorems to actual practice. Bayes himself may not have been a 'Bayesian' in the modern sense, just as Plato wasn't a 'platonist' as most people use the term today. But the names have stuck, and 'Laplacian' or 'Ramseyan' wouldn't have quite the same ring.
2Vaniver
I like Laplacian as a name better, but it's already a thing.
2Lumifer
If I were to pretend that I'm a mainstream frequentist and consider "quantifying psychological uncertainty" to be subjective mumbo-jumbo with no place anywhere near real science :-D I would NOT have serious disagreements with e.g. Vaniver's list. Sure, I would quibble about accents, importances, and priorities, but there's nothing there that would be unacceptable from the mainstream point of view.

My biggest concern with the label 'Bayesianism' isn't that it's named after the Reverend, nor that it's too mainstream. It's that it's really ambiguous.

For example, when Yvain speaks of philosophical Bayesianism, he means something extremely modest -- the idea that we can successfully model the world without certainty. This view he contrasts, not with frequentism, but with Aristotelianism ('we need certainty to successfully model the world, but luckily we have certainty') and Anton-Wilsonism ('we need certainty to successfully model the world, but we lack certainty'). Frequentism isn't this view's foil, and this philosophical Bayesianism doesn't have any respectable rivals, though it certainly sees plenty of assaults from confused philosophers, anthropologists, and poets.

If frequentism and Bayesianism are just two ways of defining a word, then there's no substantive disagreement between them. Likewise, if they're just two different ways of doing statistics, then it's not clear that any philosophical disagreement is at work; I might not do Bayesian statistics because I lack skill with R, or because I've never heard about it, or because it's not the norm in my department.

There's a su... (read more)

7Randaly
Err, actually, yes it is. The frequentist interpretation of probability makes the claim that probability theory can only be used in situations involving large numbers of repeatable trials, or selection from a large population. William Feller: Or to quote from the essay that coined the term frequentist: Frequentism is only relevant to epistemological debates in a negative sense: unlike Aristotelianism and Anton-Wilsonism, which both present their own theories of epistemology, frequentism's relevance is almost only in claiming that Bayesianism is wrong. (Frequentism separately presents much more complicated and less obviously wrong claims within statistics and probability; these are not relevant, given that frequentism's sole relevance to epistemology is its claim that no theory of statistics and probability could be a suitable basis for an epistemology, since there are many events they simply don't apply to.) (I agree that it would be useful to separate out the three versions of Bayesianism, whose claims, while related, do not need to all be true or false at the same time. However, all three are substantively opposed to one or both of the views labelled frequentist.)
0satt
Depends which frequentist you ask. From Aris Spanos's "A frequentist interpretation of probability for model-based inductive inference": and
5Richard_Kennaway
For those who can't access that through the paywall (I can), his presentation slides for it are here. I would hate to have been in the audience for the presentation, but the upside of that is that they pretty much make sense on their own, being just a compressed version of the paper. While looking for those, I also found "Frequentists in Exile", which is Deborah Mayo's frequentist statistics blog. I am not enough of a statistician to make any quick assessment of these, but they look like useful reading for anyone thinking about the foundations of uncertain inference.
0Rob Bensinger
I don't understand what this "probability theory can only be used..." claim means. Are they saying that if you try to use probability theory to model anything else, your pencil will catch fire? Are they saying that if you model beliefs probabilistically, Math breaks? I need this claim to be unpacked. What do frequentists think is true about non-linguistic reality, that Bayesians deny?
5Desrtopa
I think they would be most likely to describe it as a category error. If you try to use probability theory outside the constraints within which they consider it applicable, they'd attest that you'd produce no meaningful knowledge and accomplish nothing but confusing yourself.
0Rob Bensinger
Can you walk me through where this error arises? Suppose I have a function whose arguments are the elements of a set S, whose values are real numbers between 0 and 1, and whose values sum to 1. Is the idea that if I treat anything in the physical world other than objects' or events' memberships in physical sequences of events or heaps of objects as modeling such a set, the conclusions I draw will be useless noise? Or is there something about the word 'probability' that makes special errors occur independently of the formal features of sample spaces?
3Desrtopa
As best I can parse the question, I think the former option better describes the position.
4nshepperd
IIRC a common claim was that modeling beliefs at all is "subjective" and therefore unscientific.
0Rob Bensinger
Do you have any links to this argument? I'm having a hard time seeing why any mainstream scientist who thinks beliefs exist at all would think they're ineffable....
0nshepperd
Hmm, I thought I had read it in Jaynes' PT:TLoS, but I can't find it now. So take the above with a grain of salt, I guess.
4Jayson_Virissimo
Yes, it is my understanding that epistemologists usually call the set of ideas Yvain is referring to "probabilism", and indeed it is far more vague and modest than what they call Bayesianism (which is in turn more vague and modest than the subjectively-objective Bayesianism that is often affirmed around these parts). BTW, I think this is precisely what Carnap was on about with his distinction between probability-1 and probability-2, neither of which did he think we should adopt to the exclusion of the other.
6Vaniver
I think they would have significant practical disagreement with #3, given the widespread use of NHST, but clever frequentists are as quick as anyone else to point out that NHST doesn't actually do what its users want it to do. Hence the importance of the qualifier 'qualitative'; it seems to me that accents, importances, and priorities are worth discussing, especially if you're interested in changing System 1 thinking instead of System 2 thinking. The mainstream frequentist thinks that base rate neglect is a mistake, but the Bayesian both thinks that base rate neglect is a mistake and has organized his language to make that mistake obvious when it occurs. If you take revealed preferences seriously, it looks like the frequentist says base rate neglect is a mistake but the Bayesian lives that base rate neglect is a mistake. Now, why Bayes specifically? I would be happy to point to Laplace instead of Bayes, personally, since Laplace seems to have been way smarter and a superior rationalist. But the trouble with naming methods of "thinking correctly" is that everyone wants to name their method "thinking correctly," and so you rapidly trip over each other. "Rationalism," for example, refers to a particular philosophical position which is very different from the modal position here at LW. Bayes is useful as a marker, but it is not necessary to come to those insights by way of Bayes. (I will also note that not disagreeing with something and discovering something are very different thresholds. If someone has a perspective which allows them to generate novel, correct insights, that perspective is much more powerful than one which merely serves to verify that insights are correct.)
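The base-rate point can be made concrete with the classic diagnostic-test example (hypothetical numbers, chosen only for illustration):

```python
# Base-rate neglect, made explicit: a test that is 90% sensitive and 90%
# specific, for a condition with a 1% base rate.
base_rate = 0.01
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.10    # P(positive | no disease)

# Bayes' theorem: P(disease | positive)
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

# Neglecting the base rate suggests ~90%; the correct answer is ~8.3%.
print(round(posterior, 3))  # 0.083
```

The Bayesian's language forces the `base_rate` term into view; that is what "organized his language to make that mistake obvious" cashes out to.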
0Lumifer
Yeah, I said if I were to pretend to be a frequentist -- but that didn't involve suddenly becoming dumb :-) I agree, but at this point context starts to matter a great deal. Are we talking about decision-making in regular life? Like, deciding which major to pick, who to date, what job offer to take? Or are we talking about some explicitly statistical environment where you try to build models, fit them, evaluate them, do out-of-sample forecasting, all that kind of thing? I think I would argue that recognizing biases (Tversky/Kahneman style) and trying to correct for them -- avoiding them altogether seems too high a threshold -- is different from what people call Bayesian approaches. The Bayesian way of updating on the evidence is part of "thinking correctly", but there is much, much more than just that.
3Vaniver
At least one (and I think several) of biases identified by Tversky and Kahneman is "people do X, a Bayesian would do Y, thus people are wrong," so I think you're overstating the difference. (I don't know enough historical details to be sure, but I suspect Tversky and Kahneman might be an example of the Bayesian approach allowing someone to discover novel, correct insights.) I agree, but it feels like we're disagreeing. It seems to me that a major Less Wrong project is "thinking correctly," and a major part of that project is "decision-making under uncertainty," and a major part of uncertainty is dealing with probabilities, and the Bayesian way of dealing with probabilities seems to be the best, especially if you want to use those probabilities for decision-making. So it sounds to me like you're saying "we don't just need stats textbooks, we need Less Wrong." I agree; that's why I'm here as well as reading stats textbooks. But it also sounds to me like you're saying "why are you naming this Less Wrong stuff after a stats textbook?" The easy answer is that it's a historical accident, and it's too late to change it now. Another answer I like better is that much of the Less Wrong stuff comes from thinking about and taking seriously the stuff from the stats textbook, and so it makes sense to keep the name, even if we're moving to realms where the connection to stats isn't obvious.
0Lumifer
Hm... Let me try to unpack my thinking, in particular my terminology, which might not match the usual LW conventions exactly. I think of: Bayes' theorem as a simple, conventional, and entirely uncontroversial statistical result. If you ask a dyed-in-the-wool rabid frequentist whether Bayes' theorem is true he'll say "Yes, of course". Bayesian statistics as an approach to statistics with three main features. First is the philosophical interpretation of (some) probability as subjective belief. Second is the focus on conditional probabilities. Third is the strong preference for full (posterior) distributions as answers instead of point estimates. Cognitive biases (aka the Kahneman/Tversky stuff) as certain distortions in the way our wetware processes information about reality, as well as certain peculiarities in human decision-making. Yes, a lot of it is concerned with dealing with uncertainty. Yes, there is some synergy with Bayesian statistics. No, I don't think this synergy is the defining factor here. I understand that historically, in the LW community, Bayesian statistics and cognitive biases were intertwined. But apart from historical reasons, it seems to me these are two different things and the degree of their, um, interpenetration is much overstated on LW. Well, we need it for which purpose? For real-life decision making? -- sure, but then no one is claiming that stats textbooks are sufficient for that. Some, not much. I can argue that much of LW stuff comes from thinking logically and following chains of reasoning to their conclusion -- or actually just comes from thinking at all instead of reacting instinctively / on the basis of a gut feeling or whatever. I agree that thinking in probabilities is a very big step and it *is* tied to Bayesian statistics. But still it's just one step.
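The third feature above (full posteriors instead of point estimates) is easy to show in a toy coin-flip model; all numbers here are hypothetical, and the grid approximation is just a sketch:

```python
# Posterior over a coin's bias after 7 heads and 3 tails, with a uniform
# prior, computed on a simple grid rather than in closed form.
heads, tails = 7, 3

grid = [i / 100 for i in range(101)]
# Likelihood of the data at each candidate bias, times a flat prior.
unnormalized = [p**heads * (1 - p)**tails for p in grid]
total = sum(unnormalized)
posterior = [w / total for w in unnormalized]

# A point estimate would be 7/10 = 0.7; the Bayesian answer is the whole
# curve, from which any summary (mean, intervals) can be derived.
mean = sum(p * w for p, w in zip(grid, posterior))
print(round(mean, 2))  # 0.67, approximating the Beta(8, 4) mean of 8/12
```

The whole-distribution answer is what separates "Bayesian statistics" from merely invoking Bayes' theorem, which, as noted, even the rabid frequentist accepts.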
1Vaniver
I agree with your terminology. When contrasting LW stuff and mainstream rationality, I think the reliance on thinking in probabilities is a big part of the difference. ("Thinking logically," for the mainstream, seems to be mostly about logic of certainty.) When labeling, it makes sense to emphasize contrasting features. I don't think that's the only large difference, but I see an argument (which I don't fully endorse) that it's the root difference. (For example, consider evolutionary psychology, a moderately large part of LW. This seems like a field of science particularly prone to uncertainty, where "but you can't prove X!" would often be a conversation-stopper. For the Bayesian, though, it makes sense to update in the direction of evo psych, even though it can't be proven, which is then beneficial to the extent that evo psych is useful.)
0Lumifer
Yes, I think you're right. Um, I'm not so sure about that. The main accusation against evolutionary psychology is that it's nothing but a bunch of just-so stories, aka unfalsifiable post-hoc narratives. And a Bayesian update should be on the basis of evidence, not on the basis of an unverifiable explanation.
1Vaniver
It seems to me that if you think in terms of likelihoods, you look at a story and say "but the converse of this story has high enough likelihood that we can't rule it out!" whereas if you think in terms of likelihood ratios, you say "it seems that this story is weakly more plausible than its converse." I'm thinking primarily of comments like this. I think it is a reasonable conclusion that anger seems to be a basic universal emotion because ancestors who had the 'right' level of anger reproduced more than those who didn't. Boris just notes that it could be the case that anger is a byproduct of something else, but doesn't note anything about the likelihood of anger being universal in a world where it is helpful (very high) and the likelihood of anger being universal in a world where it is neutral or unhelpful (very low). We can't rule out anger being spurious, but asking to rule that out is mistaken, I think, because the likelihood ratio is so significant. It doesn't make sense to bet against anger being reproductively useful in the ancestral environment (but I think it makes sense to assign a probability to that bet, even if it's not obvious how one would resolve it).
0Lumifer
I have several problems with this line of reasoning. First, I am unsure what it means for a story to be true. It's a story -- it arranges a set of facts in a pattern pleasing to the human brain. Not contradicting any known facts is a very low threshold (see Russell's teapot); to call something "true" I'll need more than that, and if a story makes no testable predictions I am not sure on what basis I should evaluate its truth, or what that would even mean. Second, it seems to me that in such situations the likelihoods, and so, necessarily, their ratios, are very very fuzzy. My meta uncertainty -- uncertainty about probabilities -- is quite high. I might say "story A is weakly more plausible than story B" but my confidence in my judgment about plausibility is very low. This judgment might not be worth anything. Third, likelihood ratios are good when you know you have a complete set of potential explanations. And you generally don't. For open-ended problems the explanation "something else" frequently looks like the more plausible one, but again, the meta uncertainty is very high -- not only do you not know how uncertain you are, you don't even know what you are uncertain about! Nassim Taleb's black swans are precisely the beasties that appear out of "something else" to bite you in the ass.
1Vaniver
Ah, by that I generally mean something like "the causal network N with a particular factorization F is the underlying causal representation of reality," and so a particular experiment measures data and then we calculate "the aforementioned causal network would generate this data with probability P" for various hypothesized causal networks. For situations where you can control at least one of the nodes, it's easy to see how you can generate data useful for this. For situations where you only have observational data (like the history of human evolution, mostly), it's trickier to determine which causal network(s) are best, but often still possible to learn quite a bit more about the underlying structure than is obvious at first glance. So suppose we have lots of historical lives which are compressed down to two nodes, A which measures "anger" (which is integer-valued and non-negative, say) and C which measures "children" (which is also integer-valued and non-negative). The story "anger is spurious" is the network where A and C don't have a link between them, and the story "anger is reproductively useful" is the network where A->C and there is some nonzero value a^* of A which maximizes the expected value of C. If we see a relationship between A and C in the data, it's possible that the relationship was generated by the "anger is spurious" network which said those variables were independent, but we can calculate the likelihoods and determine that it's very very low, especially as we accumulate more and more data. Sure. But even if you're only aware of two hypotheses, it's still useful to use the LR to determine which to prefer; the supremacy of a third hidden hypothesis can't swap the ordering of the two known hypotheses! Yes, reversal effects are always possible, but I think that putting too much weight on this argument leads to Anton-Wilsonism (certainty is necessary but impossible). I think we do often have a good idea of what our meta uncertainty looks like.
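Here's a toy version of that two-network comparison, with made-up counts and A and C crudely binarized, just to show the mechanics:

```python
# Comparing "anger is spurious" (A independent of C) against "anger
# matters" (A -> C) on hypothetical joint counts of anger level and
# number of children.
from math import log

counts = {("low", "few"): 40, ("low", "many"): 10,
          ("high", "few"): 10, ("high", "many"): 40}
n = sum(counts.values())

# Spurious story: fit marginals, model the joint as their product.
pa = {a: sum(v for (x, _), v in counts.items() if x == a) / n
      for a in ("low", "high")}
pc = {c: sum(v for (_, y), v in counts.items() if y == c) / n
      for c in ("few", "many")}
loglik_indep = sum(k * log(pa[a] * pc[c]) for (a, c), k in counts.items())

# Linked story: fit the full joint distribution directly.
loglik_joint = sum(k * log(k / n) for (a, c), k in counts.items())

# With correlated data like this, the log-likelihood ratio strongly
# favors the linked network.
print(loglik_joint - loglik_indep > 0)  # True
```

Real model comparison would have to penalize the linked network for its extra parameters, but even this sketch shows how "we can calculate the likelihoods" cashes out.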
0Lumifer
I have only glanced at Pearl's work, not read it carefully, so my understanding of causal networks is very limited. But I don't understand on the basis of which data will you construct the causal network for anger and children (and it's actually more complicated because there are important society-level effects). In what will you "see a relationship between A and C"? On the basis of what will you be calculating the likelihoods?
1Vaniver
Ideally, you would have some record. I'm not an expert in evo psych, so I can't confidently say what sort of evidence they actually rely on. I was hoping more to express how I would interpret a story as a formal hypothesis. I get the impression that a major technique in evolutionary psychology is making use of the selection effect due to natural selection: if you think that A is heritable, and that different values of A have different levels of reproductive usefulness, then in steady state the distribution of A in the population gives you information about the historic relationship between A and reproductive usefulness, without even measuring relationship between A and C in this generation. So you can ask the question "what's the chance of seeing the cluster of human anger that we have if there's not a relationship between A and reproduction?" and get answers that are useful enough to focus most of your attention on the "anger is reproductively useful" hypothesis.
6jsteinhardt
Regarding Bayes, you might like my essay on the topic, especially if you have statistical training.
2Axion
That paper did help crystallize some of my thoughts. At this point I'm more interested in wondering if I should be modifying how I think, as opposed to how to implement AI.
2Jiro
You are not alone in thinking the use of Bayes is overblown. It can't be wrong, of course, but it can be impractical to use, and in many real-life situations we might not have specific enough knowledge to be able to use it. In fact, that's probably one of the biggest criticisms of Less Wrong.
[-]pushcx170

Hi folks, I'm Peter. I read a lot of blogs and saw enough articles on Overcoming Bias a few years ago that I was aware of Yudkowsky and some of his writing. I think I wandered from there to his personal site because I liked the writing and from there to Less Wrong, but it's long enough ago I don't really remember. I've read Yudkowsky's Sequences and found lots of good ideas or interesting new ways to explain things (though I bounced off QM as it assumed a level of knowledge in physics I don't have). They're annoyingly disorganized - I realize they were originally written as an interwoven hypertext, but for long material I prefer reading linear silos, then I can feel confident I've read everything without getting annoyed at seeing some things over and over. Being confused by their organization when nobody else seems to be also contributes to the feeling in my last paragraph below.

I signed up because I had a silly solution to a puzzle, but I've otherwise hesitated to get involved. I feel I've skipped across the surface of LessWrong; I subscribe to a feed that only has a couple posts per week and haven't seen anything better. I'm aware there are pages with voting, but I'm wary of the ... (read more)

I'm also wary of a community so tightly focused around one guy. I have only good things to say about Yudkowsky or his writing, but a site where anyone is far and away the most active and influential writer sets off alarm bells. Despite the warning in the death spiral sequence, this community heavily revolves around him.

Yeah, it's a problem. I'd even go so far as to say that it's a cognitive hazard, not just a PR or recruitment difficulty: if you've got only one person at the clear top of a status hierarchy covering some domain, then halo effects can potentially lead to much worse consequences for that domain than if you have a number of people of relatively equal status who occasionally disagree. Of course there's also less potential for infighting, but that doesn't seem to outweigh the potential risks.

There was a long gap in substantive posts from EY before the epistemology sequence, and I'd hoped that a competitor might emerge from that vacuum. Instead the community seems to have branched; various people's personal blogs have grown in relative significance, but LW has stayed Eliezer's turf in practice. I haven't fully worked out the implications, but they don't seem entirely good, especially since most of the community's modes of social organization are outgrowths of LW.

6magfrump
I think a part of the problem with other people filling the "vacuum" left by Eliezer is that when he was writing the sequences it was a large amount of informal material. Since then we've established a lot of very formal norms for main-level posts; the "blog" is now about discussions with a lot of shared background rather than about trying to use a bunch of words to get some ideas out. That is, most of the point of the sequences is laying out ground rules. There's no vacuum left over for anyone to fill, and LW isn't really a "blog" any more, so much as a community or discussion board. And for me, personally, at least, a lot of the attraction of LW and the sequences is not that Eliezer did a bunch of original creative work, but that he verbalized and worked out a bit more detail on a variety of ideas that were already familiar, and then created a community where people have to accept that and are therefore trustworthy. What this "feels like on the inside" is that the community is here because they share MY ideas about epistemology or whatever, rather than because they share HIS ideas, even if he was the one to write them down. Of course YMMV and none of this is a controlled experiment; I could be making up bad post hoc explanations.
2itaibn0
Just to be clear, what you say does not contradict the argument you are responding to. You gave a good explanation for why EY has a big influence on the community. It still isn't clear that this is a good thing.
3magfrump
Yes, I'm not arguing that it is a good thing. I'm simply putting forward an explanation for why no one else has stepped in to "fill the vacuum" as some have hoped in other comments; I don't believe there is a vacuum to fill. Also I meant to endorse the idea that Eliezer is like Pythagoras: someone who wrote down and canonized a set of knowledge already mostly present, which is at least LESS DANGEROUS than a group following a set of personal dogma.
3Shmi
Actually, I think that the sequences have a fair number of original ideas. They were enumerated about a year or so ago by Eliezer and Luke in separate posts.

On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality?

Yes. Very yes. There are several different ways to get at that next conceptual level (matrix mechanics, the behavior of the Schrödinger equation, configuration spaces, Hamiltonian and Lagrangian mechanics, to name ones that I know at least a little about), but qualitative descriptions of the Uncertainty Principle, Schrödinger's Cat, Wave-Particle Duality, and the Measurement Problem do not get you to that level.

Rejoice—the reality of quantum mechanics is way more awesome than you think it is, and you can find out about it!

4TimS
Let me rephrase: I'm sure there is more to cutting-edge QM than that which I understand (or even have heard of). Is any of that necessary to engage with the philosophy-of-science questions raised by the end of the Sequence, such as Science Doesn't Trust Your Rationality? From a writing point of view, some scientific controversy needed to be introduced to motivate the later discussion - and Eliezer chose QM. As examples go, it has advantages: (1) QM is cutting edge - you can't just go to Wikipedia to figure out who won. EY could have written a Lamarckian / Darwinian evolution sequence with similar concluding essays, but indisputably knowing who was right would slant how the philosophy-of-science point would be interpreted. (2) A non-expert should recognize that their intuitions are hopelessly misleading when dealing with QM, opening them to serious consideration of the new-to-them philosophy-of-science position EY articulates. But let's not confuse the benefits of the motivating example with arguing that there is philosophy-of-science benefit in writing an understandable description of QM. In other words, if the essays in the sequence after and including The Failures of Eld Science were omitted from the Sequence, it wouldn't belong on LessWrong.
[-]Kendra160

Hi, I'm Denise from Germany. I just turned 19 and study maths at university. Right now, I spend most of my time on that and caring for my 3-year-old daughter. I've known about LessWrong for almost two years now, but never got around to writing. However, I'm more or less involved with parts of the LessWrong and Effective Altruism communities; most of them originally found me via OkCupid (I stated I was a LessWrongian), and it expanded from there.

I grew up in a small village in the middle of nowhere in Germany, very isolated, without any people to talk to. I skipped a grade and did extremely well at school, but was mostly very unhappy during my childhood/teen years. Though I had free internet access, I had almost no access to education until I was 15 years old (and pregnant, and no, that wasn't unplanned), because I had no idea what to look for. I dropped out of school then and prepared, when I had time (I was mostly busy with my child), for the exams I needed to take to be allowed to attend university. In Germany that's extremely unusual, and most people don't even know you can do it without going to school.

When I was 15, I discovered environmentalism (during pregnancy, via people who share m... (read more)

5Kawoomba
As another LW'er with kids in Germany, welcome!
2A1987dM
That kind of quotation mark isn't customary in English; “these” are usually used in typeset material, but most people just use "the ones on the keyboard" online.
1Gunnar_Zarncke
Hi Denise/Kendra, caring for a small child on your own is already a lot. If on top of that you're studying and doing EA and LW meetups, that's quite a lot. I admire what you're accomplishing. I've linked some material on rational parenting on my homepage, which you might want to take a look at: http://lesswrong.com/user/Gunnar_Zarncke A tip (though you probably already know this and just couldn't put it into practice): the synergy effects in childcare are considerable. It is much easier for two parents to care for two children than for two single parents to each care for one child. The same holds for larger groups (though you usually only see this when several families meet up). Is there no way for you to take advantage of that? Feel free to ask me questions any time. Greetings from Hamburg, Gunnar [translated from German]
1vollmer
Welcome Denise! :)

This is not an atheist forum, in much the same way that it is not an a-unicorn-ist forum. Not because we do not hold a consistent position on the existence of unicorns, but because the issue itself is not worth discussing. The data has spoken, and there is no reason to believe in them. Whatever. Let's move on to more important things like anthropics and the meta-ethics of Friendly AI.

.

[This comment is no longer endorsed by its author]
3Manfred
Welcome! The really valuable times are when you get to say those things to yourself - you're the only person you can force to listen :D

So I'm going to write about a) my arguments in favor or religion, though I don't feel they are sufficient and I want to improve them, and b) why I don't fully accept the LW way of thinking.

I'm still thinking about it, and will be until I post to the Discussion...

I expect this is a bad idea. The post will probably get downvoted, and might additionally provoke another spurt of useless discussion. Lurk for a few more months instead, seeking occasional clarification without actively debating anything.

I regard atheism as a slam-dunk issue, but I wouldn't walk into a Mormon forum and call atheism a settled question. 'Twould be logically rude to them.

Hi,

I have been lurking around here mostly for (rational) self-help. Some info about me:

Married. Work at India office of a top tier tech company. 26 y/o

Between +2 and +2.5 SD IQ. Crystallized >> fluid. Extremely introspective and self-critical. ADHD / mildly depressed most of my life. Have hated 'work' most of my life.

Zero visual working memory (one to two items with training). Therefore struggling with programming computers and not enjoying it. Can write short programs and solve standard interview-type questions; can't build big functional pieces of software.

Tried to self-medicate two years back. Overdosed on modafinil + piracetam; ended up in the ER with a 130+ heart rate for 8 hours, which induced a panic disorder. Stimulant use is therefore out of the question as of today.

Familiar with mindfulness meditation and spiritual philosophy.

It's quite clear that I can't build large pieces of software. Unsure what productive use I can be with these attributes.

Thanks

6ModusPonies
That depends on what your goal is. Making enough money to fund a relaxed and happy life? Making tremendous amounts of money? Job satisfaction? Something else entirely?
3rationalnoob
In terms of goals, I hadn't formalized things, but my mental calculations generally revolve around: a) making a lot of money, and b) not burning out (due to competitive stress, e.g.) while doing so. This seems highly improbable in my current environment, as I don't have the natural characteristics for it to happen. So either a) I adapt (major, almost miraculous changes needed in conscientiousness, working memory, etc.) to succeed at top-tier software product development or some other similar high-pay career track, or b) I settle for low-quality / low-challenge, low-pay work (IT services? teaching? government bureaucracy?). Jobs in the b) category pay < 20K USD in India, so it won't be a very relaxed existence financially. I had therefore been trying to get a) to work somehow, with minor successes overall. My working memory and conscientiousness are at least bottom quartile, if not bottom decile, in my peer group. Stuck big time in life, therefore.
4private_messaging
You may be able to work as a programmer, given some management so that you only work on small pieces at a time. It seems to me that it is actually quite uncommon to be able to comprehend projects of significant size, in programming or elsewhere. Also, maybe you're not that different from other high-IQ individuals. I've always suspected that top scientists, programmers, etc. are at (just an illustrative example) 1 in 1000 on [metric most directly measured by IQ and similar tests] and 1 in 1000 on combination of things like integration of knowledge/memory, working space, etc. Whereas high IQ individuals in general aren't very far from average on the other factors and can't usefully access massive body of knowledge, for example.
1rationalnoob
The only trouble is that one is expected to mature and tackle larger and larger problems, or alternatively manage a large (and always increasing) business scope, with years under the belt. Both of those capacities are constrained significantly by conscientiousness / working memory / attention deficits.
4private_messaging
That's fairly interesting. It seems to be often under-appreciated that IQ (and similar tests) fail to evaluate important aspects of cognition.
1rationalnoob
Yes. Cognitive ability is quite varied, and I am highly stunted in the visuospatial area. I could never read fiction (no visuals of the characters in my head). The lack of this faculty is also a major bottleneck in comprehending technical material. I like syntax / discrete math / logic etc., things which depend more on verbal facility.
2hg00
Welcome! What was your dosage?
4rationalnoob
Immediate dose: 200 mg modafinil + 800 mg piracetam around 10 am, with OD symptoms within 2-3 hours. There was probably significant buildup of modafinil over the prior week, I guess; I was taking mostly 200 mg (once 400 mg) a day the preceding week, so I am guessing 300-500 mg built up. Effectively, then, 500-700 mg modafinil + 800 mg piracetam. Resulted in 170/90 BP + 130-150 HR + severe anxiety for around 8-9 hours. The ER docs didn't know what to do; I refused to get admitted to the ICU. It subsided by 10 pm that night, but instigated a panic disorder and a drug phobia, cured by 25 mg sertraline for 6 months. Panic-free (more or less) since. It has left me vigilant about drug interactions and adverse drug effects.
3hg00
Thanks!
9rationalnoob
My experience could be useful to LWers experimenting with nootropics as a warning of two dangers: a) drug interactions: one needs to be very careful while titrating doses up, especially when drugs are in combination, since your body may manifest novel problems not seen by anyone else; and b) drug buildup: one needs to be very careful to take buildup into account when estimating effective doses. Even though superficially I was ingesting 200 mg of modafinil, I was effectively on 500+ mg of the drug.
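The buildup arithmetic can be sketched with a simple exponential-decay model. This is purely illustrative, not medical advice: the ~15-hour half-life is an assumed round number, individual pharmacokinetics vary widely, and real accumulation (of metabolites and of effects) can differ from this toy calculation.

```python
# Rough carryover from repeated daily dosing, assuming first-order
# elimination with an assumed 15 h half-life (illustrative only).
half_life_h = 15.0
dose_mg = 200.0
interval_h = 24.0

remaining_fraction = 0.5 ** (interval_h / half_life_h)
residual = 0.0
for day in range(7):          # a week of daily 200 mg doses
    residual = (residual + dose_mg) * remaining_fraction

# Residual drug just before the next dose, under these assumptions.
print(round(residual, 1))
```

Under these particular assumptions the steady-state carryover comes out lower than the 300-500 mg guess above, which is exactly why careful titration matters: the true numbers depend on parameters you usually don't know.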

Hello, Less Wrong; I'm so glad I found you.

A few years ago a particularly fruitful wikiwalk got me to a list of cognitive biases (also fallacies). I read it voraciously, then followed the sources, found out about Kahneman and Tversky and all the research that followed. The world has never quite been the same.

Last week Twitter got me to this sad knee-jerk post on Slate, which in a few message-board-quality paragraphs completely missed the point of this thought experiment by Steve Landsburg, dealing with the interesting question of crimes in which the only harm to the victims is the pain from knowing that they happened. The discussion there, however, was refreshingly above average, and I'll be forever grateful to LessWronger "Henry", who posted a link to the worst argument in the world - which turned out to be a practical approach to a problem I had been thinking about and trying to condense into something useful in a discussion (I was going toward something like "'X-is-horrible-and-is-called-racism' turning into 'We-call-Y-racism-therefore-it's-horrible'").

Since then I've been looking around and it feels... feels like I've finally found my species after a lifet

... (read more)
2MugaSofer
Know that feeling. I wonder how common a reaction it is, actually ...
6RogerS
Maybe it's just that EY is very persuasive! I'm reminded of what was said about some other polymath (Arthur Koestler, I think): the critics all agreed that he was right on almost everything - except, of course, for the topic that the critic concerned was expert in, where he was completely wrong! So my problem is whether to just read the sequences, or to skim through all the responses as well. The latter takes an awful lot longer, but from what I've seen so far there's often a response from some expert in the field concerned that, at the least, puts the post into a whole different perspective.
6MenosErrado
After looking around a little more, I should clarify what I meant perhaps. The part about agreeing with EY (so far) was about psychology, ethics, morality, epistemology, even the little of politics I saw. The "so far" is doing heavy work there, I've only been around for a week, and focusing first on the topics most immediately relevant to my work and studies. More importantly, I haven't touched the physics yet (which from what I've seen in this page is something I should have mentioned), and I'm not qualified to "take sides" if I had. The paragraph was not prompted (only) by EY, but by my marvel at the quality of discussions here. No caveats there, this community has really impressed me. The way it works, not the conclusions, although they're certainly correlated. I'm used to having to defend rationality in a very relevant portion of the discussions I have, before it's possible to move on to anything productive (of course, those tend not to move on at all). This is a breath of fresh air.
0diegocaleiro
Hi, I've been trying to bring together Brazilians capable of thinking for some time now. I run www.ierfh.org and have already spent a month visiting the MIRI side of this site's community. If you find the content of the site's FAQ interesting, send a message to IERFH; there's a Facebook community too, etc.
[-]DSimon140

I don't feel [my arguments in favor of religion] are sufficient and I want to improve them

I know you've heard this from several other people in this thread, but I feel it's important to reiterate: this seems to be a really obvious case of putting the cart before the horse. It just doesn't make sense to us that you are interested only in finding arguments that bolster a particular belief, rather than looking for the best arguments available in general, for all the beliefs you might choose among.

I'm not asking you to respond to this right now, but please keep it firmly in mind for your Discussion post, as it's probably going to be the #1 source of disagreement.

I'm a college student studying music composition and computer science. You can hear some of my compositions on my SoundCloud page (it's only a small subset of my music, but I made sure to put a few that I consider my best at the top of the page). In the computer science realm, I'm into game development, so I'm participating in this thing called One Game A Month whose name should be fairly self-explanatory (my February submission is the one that's most worth checking out - the other 2 are kind of lame...).

For pretty much as long as I can remember, I've enjoyed pondering difficult/philosophical/confusing questions and not running away from them, which, along with having parents well-versed in math and science, led me to gradually hone my rationality skills over a long period of time without really having a particular moment of "Aha, now I'm a rationalist!". I suppose the closest thing to such a moment would be about a year ago when I discovered HPMoR (and, shortly thereafter, this site). I've found LW to be pretty much the only place where I am consistently less confused after reading articles about difficult/philosophical/confusing questions than I am before.

1Vaniver
Welcome! Have you done any algorithmic composition?
2WedgeOfCheese
I did this and I might try doing a few more pieces like it. You have to click somewhere on the screen to start/stop it.
5Vaniver
Fascinating, thanks! A project that's been kicking around in the back of my head for a while is emotional engineering through algorithmic music; it would be great to have a way to generate somewhat novel happy high-energy music during coding that won't sap any attention (I'm sort of reluctant to talk to musicians about it, though, because it feels like telling a chef you'd like a way to replace them with a machine that dispenses a constant stream of sugar :P).
3DaFranker
I would also love this. I'm in constant deficit of high-energy music for coding or other similar activities, and often it can take more work finding good music for it than all the coding work I want to do while listening to it (or, conversely, it can take much longer to find good music than the music lasts).
1WedgeOfCheese
One thing I think would be cool would be some sort of audio-generating device/software/thing that allows arbitrary levels of specificity. So, on one extreme, you could completely specify a fully deterministic stream of sound, and, on the other extreme, you could specify nothing and just say "make some sound". Or you could go somewhere in between and specify something along the lines of "play music for X minutes, in a manner evoking emotion Y, using melody Z as the main theme of the piece".
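One way to picture this "arbitrary levels of specificity" idea is a request object where every field is optional and anything left unspecified is delegated to the generator. This is purely a hypothetical interface sketch (no such generator exists here; all names are invented):

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class MusicSpec:
    """A request with arbitrary specificity: every field is optional,
    and anything left as None is a decision delegated to the generator."""
    duration_minutes: Optional[float] = None
    emotion: Optional[str] = None
    main_theme: Optional[Sequence[int]] = None  # melody as MIDI pitch numbers

def describe(spec: MusicSpec) -> str:
    """Summarize which decisions the caller pinned down."""
    fixed = [f for f in ("duration_minutes", "emotion", "main_theme")
             if getattr(spec, f) is not None]
    return "generator decides everything" if not fixed else \
        "caller fixed: " + ", ".join(fixed)
```

At one extreme, `MusicSpec()` is "make some sound"; at the other, a fully populated spec pins everything down; in between sits "play for X minutes, evoking emotion Y, using melody Z."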
3DaFranker
Now that you mention this, I do remember reading some years ago about a machine-learning composition project that had the algorithm generate random streams and learn what music people liked by crowd-sourcing feedback. I think what you've described is a great idea, and I would pay for it. Ideally, it would let me have different-styled streams dependent on what I want to do with the music / what activity I'm doing while listening. Triple bonus points if it can consume an existing piece of music to learn more about some particular style of stream that I want.
3gwern
There have been a lot o' such projects. I like some of the tracks produced by DarwinTunes.
1Osiris
Welcome, fellow new person! You've got some wonderful music. Any particular things that interest you in the "confusing question" genre?
[-]volya130

Hi, I am Olga, female, 40, programmer, mother of two. Got here from HPMoR. I can't yet define myself as a rationalist, but I am working on it. Some rationality questions, used in real-life conversations, have helped me tackle some personal and even family issues. It felt great. In my "grown-up" role I am deeply concerned with bringing up my kids with their thought processes as undamaged as I possibly can, and maybe even balancing out some system-taught stupidity. I am at the start of my reading list on the matter, including the LW sequences.

1A1987dM
Welcome! Many people here call themselves aspiring rationalists.
[-]GTLisa130

Hello, my name is Lisa. I found this site through HPMOR.

I'm a Georgia Tech student double majoring in Industrial Engineering and Psychology. I know I want to further my education after graduation, probably through a PhD. However, I'm not entirely sure what field I would want to focus on.

I've been lurking for a while and am slowly making my way through the sequences, though I'm currently studying abroad, so I'm not reading particularly quickly. I'm particularly interested in behavioral economics, statistics, evolutionary psychology, and education policy, especially in higher education.

0John_Maxwell
Fun fact: my high level of interest in education policy quickly evaporated as soon as I was no longer going to school.
[-]Rafe130

Hello everyone!

I've read occasional OB and LW articles and other Yudkowsky writings for many years, but never got into it in a big way until now.

My goal at the moment is to read the Quantum Physics sequence, since quantum physics has always seemed mysterious to me and I want to find out if its treatment here will dispel some of my confusion. I've spent the last few days absorbing the preliminaries and digressing into many, many prior articles. Now the tabs are finally dwindling and I am almost up to the start of the sequence!

Anyway, I have a question I didn't see in the FAQ. Given that I went on a long, long, long wiki walk and still haven't read very much of the core material, how big is Less Wrong? Has anyone done word counts on the sequences, or anything like that?

5sceaduwe
The sequences come close to a million words.
[-]Osiris130

Hello there, everyone! I am Osiris, and I came here at the request of a friend of mine. I am familiar with Harry Potter and the Methods of Rationality, and spent some time reading through the articles here. Everythin' here is so interesting! I studied to become a Russian Orthodox Priest in the early nineties, and moved to the USA from the Russian Federation at the beginning of the W. Bush Administration. The change of scenery inspired me, and within the first year, I had become an atheist and learned everything I could about biology, physics, and modern philosophy. Today, I am a philosophy/psychology major at a local college, and work to change the world one little bit at a time.

Though I tend to be a bit of a poet, I hope I can find a place here. In particular, I am interested in thinking about morality and the uses of mythology in daily life.

I value maintaining and increasing diversity, and plan on posting a few things which relate to this as soon as possible. I am curious to see how everyone will react to my style of presentation and beliefs.

2Jayson_Virissimo
Diversity of what, exactly?
2orthonormal
Hi Osiris, and welcome! If you're looking for awesome things that a poet can offer Less Wrong, there are people looking to create meaningful rationalist holidays with a sense of ritual to them.

Hi everyone,

I'm a humanities PhD who's been reading Eliezer for a few years, and who's been checking out LessWrong for a few months. I'm well-versed in the rhetorical dark arts, thanks to my current education, but I also have a BA in Economics (though math is still my weakest suit). The point is, I like facts, despite the deconstructivist tendency of the humanities since the eighties. Now is a good time for hard-data approaches to the humanities. I want to join that party. My heart's desire is to workshop research methods with the LW community.

It may break protocol, but I'd like to offer a preview of my project in this introduction. I'm interested in associating the details of print production with an unnamed aesthetic object, which we'll presently call the Big Book, and which is the source of all of our evidence. The Big Book had multiple unknown sites of production, which we'll call Print Shop(s) [1-n]. I'm interested in pinning down which parts of the Big Book were made in which Print Shop. Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book. Likewise with Print Shop 2 and their Tools (2). Unfortunately, people in the present don't know which Print Shop... (read more)

[-]gwern110

I'm interested in associating the details of print production with an unnamed aesthetic object, which we'll presently call the Big Book, and which is the source of all of our evidence.

It's the Bible, isn't it.

Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book. Likewise with Print Shop 2 and their Tools (2). Unfortunately, people in the present don't know which Print Shop had which Tools. Even worse, multiple sets of Tools can leave similar Marks.

How can you possibly get off the ground if you have no information about any of the Print Shops, much less how many there are? GIGO.

I'm far from an expert in Bayesian methods, but it seems already that there's something missing here.

Have you considered googling for previous work? 'Bayesian inference in phylogeny' and 'Bayesian stylometry' both seem like reasonable starting points.

4Vaniver
Not quite. You can get quite a bit of insight out of unsupervised clustering.
2gwern
'No free lunches', right? If you're getting anything out of your unsupervised methods, that just means they're making some sort of assumptions and proceeding based on those.
7Vaniver
Right, but this isn't a free lunch so much as "you can see a lot by looking."
8HumanitiesResearcher
Sorry to interrupt a perfectly lovely conversation. I just have a few things to add:

* I may have overstated the case in my first post. We have some information about print shops. Specifically, we can assign very small books to print shops with a high degree of confidence. (The catch is that small books don't tend to survive very well. The remaining population is rare and intermittent in terms of production date.)
* There are some hypotheses that could be treated as priors, but they're very rarely quantified (projects like this are rare in today's humanities).
4HumanitiesResearcher
Interesting feedback. Ha, I wish. No, it's more specific to literature. We have minimal information about Print Shops. I wouldn't say the existing data are garbage, just mostly unquantified. Yes, but thanks to you I know the shibboleth of "Bayesian stylometry." Makes sense, and I've already read some books in a similar vein, but there are some problems. Most fundamentally, I have trouble translating the methods to a different type of data: from textual data like word length to the aforementioned Marks. Otherwise, my understanding of most stylometric analysis was that it favors frequentist methods. Can you clear any of this up? EDIT: I have a follow-up question regarding GIGO: How can you tell what data are garbage? Are the degrees of certainty based on significant digits of measurement, or what?
2gwern
Have to define your features somehow. Really? I was under the opposite impression, that stylometry was, since the '60s or so with the Bayesian investigation of Mosteller & Wallace into the Federalist papers, one of the areas of triumph for Bayesianism. No, not really. I think I would describe GIGO in this context as 'data which is equally consistent with all theories'.
9Vaniver
This is a problem that machine learning can tackle. Feel free to contact me by PM for technical help. To make sure I understand your problem: We have many copies of the Big Book. Each copy is a collection of many sheets. Each sheet was produced by a single tool, but each tool produces many sheets. Each shop contains many tools, but each tool is owned by only one shop. Each sheet has information in the form of marks. Sheets created by the same tool at similar times have similar marks. It may be the case that the marks monotonically increase until the tool is repaired. Right now, we have enough to take a database of marks on sheets and figure out how many tools we think there were, how likely it is each sheet came from each potential tool, and to cluster tools into likely shops. (Note that a 'tool' here is probably only one repair cycle of an actual tool, if they are able to repair it all the way to freshness.) We can either do this unsupervised, and then compare to whatever other information we can find (if we have a subcollection of sheets with known origins, we can see how well the estimated probabilities did), or we can try to include that information for supervised learning.

That's a hell of a summary, thanks!

I'm glad you mentioned the repair cycle of tools. There are some tools that are regularly repaired (let's just call them "Big Tools") and some that aren't ("Little Tools"). Both are expensive at first and to repair, but it seems the Print Shops chose to repair Big Tools because they were subject to breakage that significantly reduced performance.

I should add another twist since you mentioned sheets of known origins: Assume that we can only decisively assign origins to single sheets. There are two problems stemming from this assumption: first, not all relevant Marks are left on such sheets; second, very few single sheet publications survive. Collations greater than one sheet are subject to all of the problems of the Big Book.

I'm most interested in the distinction between unsupervised and supervised learning. And I will very likely PM you to learn more about machine learning. Again, thanks for your help!

EDIT: I just noticed a mistake in your summary. Each sheet is produced by a set of tools, not a single tool. Each mark is produced by a single tool.

4Vaniver
Okay. Are the classes of marks distinct by tool type- that is, if I see a mark on a sheet, I know whether it came from tool type X or tool type Y- or do we need to try and discover what sort of marks the various tools can leave?
6HumanitiesResearcher
Fortunately, we know which tool types leave which marks. We also have a very strong understanding of the ways in which tools break and leave marks. Thanks again for entertaining this line of inquiry.
6DaFranker
Good point! Also yay combining multiple fields of knowledge and expertise! applause Seriously though, the world does need more of it, and I felt the need to explicitly reward and encourage this.
4EHeller
Any time you are doing statistical analysis, you always want a sample of data that you don't use to tune the model and where you know the right answer (a 'holdout' sample). In this case, you should have several books related to the various print shops that you don't feed into your Bayesian algorithm. You can then assess the algorithm by seeing if it gets these books correct. To account for the decay of the books, you need books that you know not only came from print shop x, y, or z, but also how old the tools were that made those books. Either that, or you'd have to have some understanding of how the tools decay from a theoretical model.
2HumanitiesResearcher
Very helpful points, thanks. The scholarly community already has a pretty good working knowledge of the Tools, and thus a theoretical model of Tool breakage ("breakage" may be more accurate than "decay," since the decay is non-incremental and stochastic). We know the order in which parts of the Tools break, and we have some hypotheses correlating breakage to gross usage. The twist is that we don't know when any Print Shops produced the Big Book, so we can only extrapolate a timeline based on Tool breakage. Can you say more about the holdout sample? Should it be a randomly selected sample of data, or something suspected to be associated with Print Shops [x,y,z]? Print Shops [a,b,c]?
1Vaniver
If you assume that the marks result from defects in the tool that accumulate, it should be relatively easy to build (and test) a monotonic model. Suppose we have an unordered collection of sheets, with some variable number of defects per sheet. If the defects are repeated (i.e. we can recognize defect A whenever we see it, as well as B, and so on), then we can build together paths- all of the sheets without defects pointing towards all of the sheets with just defect A, then defect A and B, and so on. There should be divergence- if we never see sheets with both defect A and C, then we can conclude the 0-A-B path is one tool (with the only some of the 0 defect sheets coming from that tool, obviously), the 0-C-D-E path is another tool, and the 0-F-G path is a third tool. (Noting that here 'tool' refers to one repair cycle, not the entire lifecycle.)
1EHeller
The first assumption seems bad to me- I would assume defects accumulate only until equipment is reset or repaired, which is why I think you'd want some actual data.
1Vaniver
That looks to me like it agrees with my assumption; I suspect my grammar is somehow unclear. (Note the last line of the grandparent.)
1PrawnOfFate
How about talking clearly about whatever you are currently hinting at?
8Kindly
I dunno, I find the complexity-hiding capitalized nouns things strangely attractive. Maybe there should be more capitalized nouns. Why isn't Sheets capitalized? This is probably coming back to my fascination with graph theory, which has similar but even more exotic terminology. "A spider is a subdivision of a star, which is a kind of tree made up only of leaves and a root; a star with three arcs is called a claw."
1HumanitiesResearcher
I was openly warned by a professor (who will likely be on the dissertation committee) not to talk about this project widely. The capitalized nouns are to highlight key terms. I believe the current description is specific enough to describe the situation accurately and without misleading people, but not too specific to break my professor's (correct) advice. Have I broken LW protocol? Obviously, I'm new here.

because I haven't wrapped it up in condescending niceties?

Being nice is important.

If that's still too ambiguous to render an opinion, what isn't?

Kindergarten-level insults like "Mormon sort-of-rhymes with Moron" aren't just an expression of opinion. Mormon would sort-of-rhyme with Moron even if Mormonism were true. What you instead expressed is a cutesy and juvenile way of insulting someone: "The mormon is a moron, the mormon is a moron, hahahaha!"

[-][anonymous]130

I found HPMOR nearly three years ago. Soon afterward, I finished the core sequences up through the QM sequence, read some of Eliezer's other posts, and other sequences and authors on LW. When I look back, I realize my thinking has been hugely influenced by what I have learned from this community. I cannot even begin to draw boundaries in my mind identifying what exactly came from LW; hopefully this means I have internalized the ideas and that I am actually using what I learned.

There is a story behind why I have now, after three years of lurking, finally created an account. I am currently a sophomore in high school. I have always been driven to learn by my curiosity and desire for truth and knowledge. But I am also a perfectionist and an overachiever. Somehow, in the last two years of high school, I began to latch onto academics as my “goal.” I started obsessing about ridiculous things - getting perfect scores on every assignment and test, guarding my perfect GPA, etc. It wasn't enough anymore that I understood the content without needing to study - I had to devote huge amounts of time and energy to achieve "perfection."

In March, over spring break, I returned to make some ... (read more)

4Alicorn
Ooh, good school, I went there, best of luck.

Hi everyone,

I'm a PhD student in artificial intelligence/robotics, though my work is related to computational neuroscience, and I have strong interests in philosophy of mind, meta-ethics and the "meaning of life". Though I feel that I should treat finishing my PhD as a personal priority, I like to think about these things. As such, I've been working on an explanation for consciousness and a blueprint for artificial general intelligence, and trying to conceive of a set of weighted values that can be applied to scientifically observable/measurable/calculable quantities, both of which have some implications for an explanation of the "meaning" of life.

At the center of the value system I'm working on is a broad notion of "information". Though still at preliminary stages, I'm considering a hierarchy of weights for the value of different types of information, and trying to determine how bad this is as a utility function. At the moment, I consider the preservation and creation of all information valuable; at an everyday level I try to translate this into learning and creating new knowledge and searching for unique, meaningful experiences.

I've been aware of Le... (read more)

Greetings, LessWrongers. I call myself Intrism; I'm a serial lurker, and I've been hiding under the cupboards for a few months already. As with many of my favorite online communities, I found this one multiple times, through Eliezer's website, TVTropes, and Methods of Rationality (twice), before it finally stuck. I am a student of computer science, and greatly enjoy the discipline. I've already read many of the sequences. While I can't say I've noticed an increase in rationality since I've started, I have made some significant progress on my akrasia, including recently starting on an interesting but unknown LW-inspired technique which I'll write up once I have a better idea of how well it's performing.

2VCavallo
Thank you for introducing me to the term akrasia!
[-][anonymous]120

How important are scholarly credentials vs just having that knowledge without a diploma?

I think in almost every field and occupation, having the scholarly credentials is extremely important. Knowledge without the credentials is pretty worthless (unless it's worthwhile in itself, but even then you can't eat it): using that knowledge will generally require that people put trust in your having it, often when they're not in a position to evaluate how much you know (either because they're not experts, or because they don't have the time). Credentials are generally therefore the basis of that trust. Since freelance work either requires more trust or pays very badly and inconsistently, credentials are worth getting.

And that was the point of my previous post: some way or other, you have to earn people's trust that you can do a job worth paying you for. One way to earn that trust is to perform well despite lacking credentials. This will take an enormous amount of time and effort (during which you will not be paid, or at least not well) compared to doing whatever it takes to get as close to a 4.0 as you can. The faster you get people to trust you, the faster you can stop fighting to feed and she... (read more)

[-][anonymous]120

I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me I said "but there must be some out there."

This is a very odd epistemic position to be in.

If you expect there to be strong evidence for something, that means you should already strongly believe it. Whether or not you will find such evidence or what it is, is not the interesting question. The interesting question is why do you have that strong belief now? What strong evidence do you already posses that leads you to believe this thing?

If you haven't got any reason to believe a thing, then it's just like all the other things you don't have reason to believe, of which there are very many, and most of them are false. Why is this one different?

The correct response, when you notice that a belief is unsupported, is to say oops and move on. The incorrect response is to go looking specifically for confirming evidence. That is writing the bottom line in the wrong place, and is not a reliable truth-finding procedure.
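The point about expected evidence can be made quantitative (a toy illustration with made-up numbers, not anyone's actual beliefs): by conservation of expected evidence, your current probability must already equal the expectation of your posterior over possible observations, so you cannot expect future evidence to raise your confidence on average.

```python
def expected_posterior(prior, likelihood_if_true, likelihood_if_false):
    """Expected value of P(H | E) averaged over whether E is observed,
    for a binary hypothesis H and binary evidence E."""
    p_e = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    post_if_e = prior * likelihood_if_true / p_e
    post_if_not_e = prior * (1 - likelihood_if_true) / (1 - p_e)
    return p_e * post_if_e + (1 - p_e) * post_if_not_e

# Whatever the likelihoods, the expectation equals the prior:
assert abs(expected_posterior(0.3, 0.9, 0.2) - 0.3) < 1e-12
```

If you strongly expect the search to turn up confirming evidence, that expectation is itself evidence you should already have updated on.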

Also, "debate style" arguments are generally frowned upon around here. Epistemology is between you and God, so to speak. Do your thing, collect your evidence, come to your conclusions. This community is here to help you learn to find the truth, not to debate your beliefs.

9Bugmaster
That's a very good point. From what I've seen, most Christians who debate atheists end up using all kinds of convoluted philosophical arguments to support their position -- whereas in reality, they don't care about these arguments one way or another, since these are not the arguments that convinced them that their version of Christianity is true. Listening to such arguments would be a waste of my time, IMO.
4Eugine_Nier
The same is the case for a lot of atheist arguments. See my comment here.
1Bugmaster
Yeah, you make a good point when you say that we need "Bayesian evidence", not just the folk kind of "evidence". However, most people don't know what "Bayesian evidence" means, because this is a very specific term that's common on Less Wrong but approximately nowhere else. I don't know a better way to put it, though. That said, my comment wasn't about different kinds of evidence necessarily. What I would like to hear from a Christian debater is a statement like, "This thing right here ? This is what caused me to become a Reformed Presbilutheran in the first place." If that thing turns out to be something like, "God spoke to me personally and I never questioned the experience" or "I was raised that way and never gave it a second thought", that's fine. What I don't want to do is sit there listening to some new version of the Kalaam Cosmological Argument (or whatever) for no good reason, when even the person advancing the argument doesn't put any stock in it.
4CCC
I was raised Roman Catholic. I did give it a second thought; I found, through my life, very little evidence against the existence of God, and some slight evidence for the existence of God. (It doesn't communicate well; it's all anecdotal). I do find, on occasion, that the actions of God are completely mysterious to me. However, an omniscient being would have access to a whole lot of data that I do not have access to; in light of that, I tend to assume that He knows what He is doing. The existence of God also implies that the universe has some purpose, for which it is optimised. I'm not quite sure what that purpose is; the major purpose of the universe may be something that won't happen for the next ten billion years. However, trying to imagine what the purpose could be is an interesting occasional intellectual exercise.

I found, through my life, very little evidence against the existence of God

May I ask what you expected evidence against the existence of God to have looked like?

4CCC
That is entirely the right question to ask. And the answer is, I don't have the faintest idea. The question there is, what would a universe without God look like? And that question is one that I can't answer. I'd guess that such a universe, if it were possible, would have more-or-less entirely arbitrary and random natural laws; I'd imagine that it would be unlikely to develop intelligent life; and it would be unlikely for said intelligent life, if it developed, to be able to gather any understanding of the random and arbitrary natural laws at all. The trouble is, this line of reasoning promptly falls into the same trouble as any other anthropic argument. The fact that I'm here, thinking about it, means that there is intelligent life in this universe. So a universe without intelligent life is counterfactual, right from the start. I knew that when I started constructing the argument; I can't be sure that I'm not constructing an argument that's somehow flawed. It's very easy, when I'm sure of the answer, to create an argument that's more rationalising than rationality; and it can be hard to tell if I'm doing that.

Doesn't this argument Prove Too Much by also showing that without a Metagod, God should be expected to have arbitrary and random governing principles? The universe is ordered, but trying to explain that by appealing to an ordered God begs the question of what sort of ordered Metagod constructed the first one.

5Richard_Kennaway
Richard Dawkins does. The universe we see (he says somewhere; this is not a quote) is exactly what a world without God would look like: a world in which, on the whole, to live is to suffer and die for no reason but the pitiless working out of cause and effect, out of which emerged the blind, idiot god of evolution. A billion years of cruelty so vast that mountain ranges are made of the dead. A world beyond the reach of God.
8Bugmaster
To be fair, this type of argument only eliminates benevolent and powerful gods. It does not screen out actively malicious gods, indifferent gods, or gods who are powerless to do much of anything.
1CCC
I don't see what's so bad about mountain ranges being made of dead bodies. The creatures that once used those bodies aren't using them anymore - those mere atoms might as well get recycled to new uses. The problem of death is countered by the solution of the afterlife; an omniscient God would know exactly what the afterlife is like, and an omniscient benevolent God could allow death if the afterlife is a good place. (I don't have any proof of the existence of the afterlife at hand, unfortunately). Suffering, now; suffering is a harder problem to deal with. Which leads around to the question - what is the purpose of the universe? If suffering exists, and God exists, then suffering must have been put into the universe on purpose. For what purpose? A difficult and tricky question. What I suspect, is that suffering is there for its long-term effects on the human psyche. People exposed to suffering often learn a lot from it, about how to handle emotions; people can form long-term bonds of friendship over a shared suffering, can learn wisdom by dealing with suffering. Yes, some people can shortcut the process, figuring out the lessons without undergoing the lesson; but many people can't.

Suffering, now; suffering is a harder problem to deal with. Which leads around to the question - what is the purpose of the universe? If suffering exists, and God exists, then suffering must have been put into the universe on purpose. For what purpose? A difficult and tricky question.

What I suspect, is that suffering is there for

This is using your brain as an outcome pump. Start with a conclusion to be defended, observations that prima facie blow it out of the water, and generate ideas for holding onto the conclusion regardless. You can do it with anything, and it's an interesting exercise in creative thinking to come up with a defence of propositions such as that the earth is flat, that war is good for humanity, or that you're Jesus. (Also known as retconning.) But it is not a way of arriving at the truth of anything.

What your outcome pump has come up with is:

What I suspect, is that suffering is there for its long-term effects on the human psyche.

War really is good for humanity! But what then is the optimal amount of suffering? Just the amount we see? More? Less?

I expect that the answer is that the omniscience and omnibenevolence of God imply that what we see is indeed just... (read more)

6TheOtherDave
What makes suffering any harder a problem than death? Surely the same strategy works equally well in both cases.

More precisely... the "solution of the afterlife" is to posit an imperceptible condition that makes the apparent bad thing not so bad after all, despite the evidence we can observe. On that account, sure, it seems like we die, but really (we posit) only our bodies die, and there's this other non-body thing, the soul, which is what really matters and which isn't affected by that. Applied to suffering, the same solution is something like "sure, it seems like we suffer, but really only our minds suffer, and there's this other non-mind thing, the soul, which is what really matters and which isn't affected by that."

Personally, I find both of these solutions unconvincing to the point of inanity, but if the former is compelling, I see no reason not to consider the latter equally so. If my soul is unaffected by death, surely it is equally unaffected by (e.g.) a broken arm?
5Bugmaster
As far as I can tell, most arguments of this kind hinge on that "slight evidence for the existence of God" that you mentioned. Presumably, this is the evidence that overcomes your low prior of God's existence, thus causing you to believe that God is more likely to exist than not. Since the evidence is anecdotal and difficult (if not impossible) to communicate, this means we can't have any kind of a meaningful debate, but I'm personally ok with that.
1Eugine_Nier
The problem here is that there is confusion between two senses of the word 'evidence': a) any Bayesian evidence b) evidence that can be easily communicated across an internet forum.
Shmi120

You are fixating on atheism for some reason. Assigning low probability to any particular religion, and only a marginally higher probability to some supernatural creator still actively shaping the universe results naturally from rationally considering the issue and evaluating the probabilities. So do many other conclusions. This reminds me of the creationists picking a fight against evolution, whereas they could have picked a fight against Copernicanism, the way flat earthers do.

Shmi120

Actually, the behavior Risto_Saarelma described fits the standard pattern. People who cannot be helped are ignored or rejected. Take any stable community, online or offline, and that's what you see.

For example, if someone comes to, say, the freenode ##physics IRC channel and starts questioning Relativity, they will be shown where their beliefs are mistaken, offered learning resources, and have their basic questions answered. If they persist in their folly and keep pushing crackpot ideas, they will be asked to leave or take it to the satellite off-topic channel. If this doesn't help, they get banned.

Again, this pattern appears in every case where a community (or even a living organism) is viable enough to survive.

Saluton! I'm an ex-mormon atheist, a postgenderist, a conlanging dabbler, and a chronic three-day monk.

Looking at the above posts (and a bunch of other places on the net), I think ex-mormons seem to be more common than I thought they would be. Weird.

I'm a first-year college student studying only core/LCD classes so far because every major's terrible and choosing is scary. Also, the college system is madness. I've read lots of posts on the subject of higher education on LessWrong already, and my experience with college seems to be pretty common.

I discovered LessWrong a few months ago via a link on a self-help blog, and quickly fell in love with it. The sequences pretty much completely matched up with what I had come up with on my own, and before reading LW I had never encountered anyone other than myself who regularly tabooed words and rejected the "death gives meaning to life" argument et cetera. It was nice to find out that I'm not the only sane person in the world. Of course, the less happy side of the story is that now I'm not the sanest person in my universe anymore. I'm not sure what I think about that. (Yes, having access to people that are smarter than me ... (read more)

2Osiris
What will you do now that you can't form a movement of rationalists? Take over the world? Become a superhero? Invent the best recipe for cookies? MAINTAIN AND INCREASE DIVERSITY? For example, I am going to post a recipe for a bacon trilobite and my experiences and thoughts about paperclipping among humans. Any interesting things you be thinkin' of postin'? ^^

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions. I am also suspicious that any recounting whatsoever of what went wrong will be greeted by, "But that's not exactly what the most sophisticated theologians say, even if it's what you remember perfectly well being taught in school!"

This obviously won't be true in my own case since Orthodox Jews who stay Orthodox will put huge amounts of cumulative effort into learning their religion's game manual over time. But by the same logic, I'm pretty sure I'm talking about a very standard element of the religion when I talk about later religious authorities being presumed to have immensely less theological knowledge than earlier authorities and hence no ability to declare earlier authorities wrong. As ever, you do not need a doctorate in invisible sky wizard to conclude that there is no invisible sky wizard, and you also don't need to know all the sophisticated excuses for why the invisible sky wizard you were told about is not exactly what the most sophisticated dupes believe they believe in (even as they go on telling children abo... (read more)

5MugaSofer
The trouble with this heuristic is it fails when you aren't right to start with. See also: creationists. That said, you do, in fact, seem to understand the claims theologians make pretty well, so I'm not sure why you're defending this position in the first place. Arguments are soldiers? Well, I probably know even less about your former religion than you do, but I'm guessing - and some quick google-fu seems to confirm - that while you are of course correct about what you were taught, the majority of Jews would not subscribe to this claim. You hail from Orthodox Judaism, a sect that contains mostly those who didn't reject the more easily-disproved elements of Judaism (and indeed seems to have developed new beliefs guarding against such changes, such as the concept of a "written and oral Talmud" that includes the teachings of earlier authorities.) Most Jews (very roughly 80%) belong to less extreme traditions, and thus, presumably, are less likely to discover flaws in them. Much like the OP belonging to a subset of Mormons who believe in secret polar Israelites. Again, imagine a creationist claiming that they were taught in school that a frog turned into a monkey, dammit, and you're just trying to disguise the lies you're feeding people by telling them they didn't understand properly! If a claim is true, it doesn't matter if a false version is being taught to schoolchildren (except insofar as we should probably stop that.) That said, disproving popular misconceptions is still bringing you closer to the truth - whatever it is - and you, personally, seem to have a fair idea of what the most sophisticated theologians are claiming in any case, and address their arguments too (although naturally I don't think you always succeed; I'm not stupid enough to try to prove that here.)
2JohnH
I believe the result is that atheists have an above-average knowledge of world religions, similar to Jews (and Mormons), but I don't know of results that show they have an above-average knowledge of their previous religion. Assuming most of them were Christians, then the answer is possibly. In this particular case I happen to know precisely what is in all of the official church material; I will admit to having no idea where his teachers may have deviated from church publications, hence me wondering where he got those beliefs. I suppose I can't comment on what the average believer of various other sects knows of their sect's beliefs, only on what I know of their sect's beliefs. Which leaves the question of plausibility that I know more than the average believer of say Catholicism or Evangelical Christianity or other groups not my own. [edit] Eliezer, I am not exactly new to this site and have previously responded in detail to what you have written here. Doing so again would get the same result as last time.

Alright. Hi. I'm a senior in high school and thinking about majoring in Computer Science. Unlike most other people my age, this is probably my first post on any chat forum/ wiki/ blog. I also don't normaly type things without a spell checker and would like to get better. Any coments about my spelling or anything else would be appriciated.

My brother showed me this site a while back and also HP:MoR. Spicificly, I saw the Sequences. And they were long. Some of them were some-what interesting but mostly they were just long. In addition to that, I had just been introduced to the Methods of Rationality which, dispite being long, was realy interisting (actualy my favorite story that I have ever read), and there was some other things, so yeah . . . I still haven't read them. But anyway, that was about a year ago and at this point I have read through MoR at least three times. I feel that I am starting to think sort of rationaly and would like to improve on that.

In addition to that, I have this friend that I talk to at lunch. Normaly we talk about things that we probably don't have any ideas about that actualy reflect reality, like the origins of the universe, time travel, artificial intel... (read more)

8Dahlen
Since you asked... "comments", "appreciated". Welcome to LessWrong!
7PhilGoetz
Welcome! I should probably write a post, "Why not to major in computer science." My advice is to be aware that there is almost no money in the world budgeted to computer science research, that most people can't even conceive of or believe in the concept of computer science research, and that a degree in computer science leads only to jobs as a computer programmer unless it is from a top-five school.

jobs as a computer programmer

You say that like it's a bad thing.

Hi everyone, I'm labachevskij. I'm a long time lurker on this site, attracted by (IIRC) Bayesian Decision Theory. I'm completing my PhD studies in Maths, but I have also been caught by HPMOR, which is proving a huge source of procrastination (I'm reading it again for the third time). I'm also on my way with the reading of the sequences.

1wedrifid
Welcome labachevskij! What part of Math are you focusing on?

carefully evaluating both sides of an issue

Are we ever allowed to say "okay, we have evaluated this issue thoroughly, and this is our conclusion; let's end this debate for now"? Are we allowed to do it even if some other people disagree with the conclusion? Or do we have to continue the debate forever (of course, unless we reach the one very specific predetermined answer)?

Sometimes we probably should doubt even whether 2+2=4. But not all the time! Not even once a month. Once or twice in a (pre-Singularity) lifetime is probably more than necessary. -- Well, it's very similar for religion.

There are thousands of issues worth thinking about. Why waste the limited resources on this specific topic? Why not something useful... such as curing cancer, or even how to invent a better mousetrap?

Most of us have evaluated both sides of this issue. Some of us did it for years. We did it. It's done. It's over. -- Of course, unless there is something really new and really unexpected and really convincing... but so far, there isn't anything. Why debate it forever? Just because some other people are obsessed?

8TheOtherDave
So, I basically agree with you, but I choose to point out the irony of this as a response to a thread gone quiet for months.
1Viliam_Bur
LOL I guess instead of the purple boxes of unread comments, we should have two colors for unread new comments and unread old comments. (Or I should learn to look at the dates, but that seems less effective.)
2TheOtherDave
(blinks) Oh, is THAT what those purple boxes are!?! * learns a thing *
0[anonymous]
Wait, what purple boxes? Am I missing something?
1TheOtherDave
As I respond to this, your comment is outlined in a wide purple border. When I submit this response, I expect that your comment will no longer be outlined, but my comment will. If I refresh the screen, I expect neither of ours will. This has been true since I started reading LW again recently, and I have mostly been paying no attention to it, figuring it was some kind of "current selection" indicator that wasn't working very well. But if it's an "unread comment" indicator, then it works a lot better. Edit - I was close. When I submit, your comment is still purple, and mine isn't. If I refresh once, yours isn't and mine is. If I refresh again, neither is.
0[anonymous]
Oh now I see. Both of our comments are purple-boxed. Let's see what happens when I comment and refresh.

Hello! I’m a 15 year old sophomore in high school, living in the San Francisco Bay Area. I was introduced to rationality and Less Wrong while interning at Leverage Research, which was about a month ago.

I was given a free copy of Chapters 1-17 of HPMOR during my stay. I was hooked. I finished the whole series in two weeks and made up my mind to try and learn what it would be like being Harry.

I decided to learn rationality by reading and implementing The Sequences in my daily life. The only problem was, I discovered the length of Eliezer's posts from 2006-2010 was around 10 Harry Potter books. I was told it would take months to read, and some people got lost along the way due to all the dependencies.

Luckily I am very interested in self-improvement, so I decided that I should learn speed reading to avoid spending months dedicated solely to reading The Sequences. After several hours of training, I increased my reading speed (with high comprehension) nearly fivefold, from around 150 words per minute to 700 words per minute. At that speed, it will take me 33.3 hours to read The Sequences.
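A quick sanity check of those numbers (the ~1.4 million word total below is an assumption inferred from the poster's own figures, not an official count):

```python
# Back-of-the-envelope check of the reading-time estimate.
# WORDS_TOTAL is assumed (~1.4M words, implied by "around 10 Harry
# Potter books" and the 33.3-hour figure), not an official count.
WORDS_TOTAL = 1_400_000

def hours_to_read(words: int, wpm: int) -> float:
    """Hours needed to read `words` at `wpm` words per minute."""
    return words / wpm / 60

print(round(hours_to_read(WORDS_TOTAL, 150), 1))  # 155.6 hours at the old speed
print(round(hours_to_read(WORDS_TOTAL, 700), 1))  # 33.3 hours at the new speed
```

So the 33.3-hour claim is internally consistent with a ~700 wpm rate, and the months-long estimate people quote matches the untrained 150 wpm rate spread over occasional reading sessions.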

It seems like most people advise reading The Sequences in chronological order in ebook form. I... (read more)

If I could spend 5 seconds to a minute after each blog post doing anything, what should I do?

Figure out how you would explain the main idea of the post to a smart friend.

0Brendon_Wong
Thanks! Just curious, how come you chose that over simply taking short 10-second notes, allowing me to memorize all the main ideas?
2Eliezer Yudkowsky
IIRC notetaking is supposed to work less well than explaining something to others. I don't know about imagining how to explain something to others.
8Vaniver
I would imagine that actually explaining it out loud to a rubber duck is better than imagining explaining it to a friend, for the same reasons that it is a common debugging practice. Actually putting something into words makes weak spots in understanding obvious in a way that imagination can glide over.
0Brendon_Wong
Perhaps note-taking works less well for understanding, but explaining it out loud without recording it or even writing my explanation down will do very little for long-term recall. What good will it do if I forget everything I read, after spending many hours reading it?
0Brendon_Wong
At first, I think I will try explaining ideas out loud as I read to save time, then write ultrashort notes on main ideas for long term memory. Thanks for everyone's help!
0James_Miller
Both would work but my idea is less obvious so perhaps more helpful.
0Brendon_Wong
That's an interesting idea. I suppose it might help with better understanding the concept, but it might not work for long term memorization. Should I write the explanations down?
0James_Miller
That would probably help if you have the time.
1Nisan
Welcome! As you're interested in applying the Sequences to your daily life, I suggest checking out the Center for Applied Rationality. (Maybe you overlapped with them at Leverage?) As part of their curriculum development process, they offer free classes at their Berkeley office sometimes. If you sign up here you'll be put on a mailing list where they announce these sessions, usually a day or so in advance.
0Brendon_Wong
Thanks, I just signed up. Do you think taking a full CFAR workshop would be a good next step after The Sequences? I'll be done in about 4 days at current reading speed (no planning fallacy adjustments), so I should probably plan ahead now.
0Nisan
It would definitely be a good next step. I don't know if they have a minimum age for workshops, but it doesn't hurt to apply.
0Brendon_Wong
I don't believe they have age constraints, the issue is the monetary constraints :p Thanks for your help!
0Nisan
They offer financial aid, too.
0Brendon_Wong
Since I have a total of $23, I must get my parents to pay and allow me to go for a week; that will be the tricky part.
1Jiro
People might not like my response, but I'd say that if you're in a situation where you believe something might be beneficial to you but it consumes a substantial portion of your resources, you should heavily lean towards not going. This applies as much to a rationality workshop attended by someone with a tiny budget as it applies to playing the stock market. Making large expenditures for an uncertain return is generally a bad bet even if the expected utility gain is positive, if failure has a very negative consequence. And human beings are notoriously bad at assessing the expected utility in such situations. You also need to be very confident in your ability to evaluate arguments if you don't want to end up worse than before. Obviously, this doesn't apply if you're absolutely certain that going gives you more benefit than you forego in money, time, and parental willingness to give in (which may, in fact, be in limited supply) so there is no risk of loss, but not too many people are really that certain.
0thomblake
But surely going to a rationality workshop is the best way to learn to evaluate whether to go to a rationality workshop. And whether it succeeds or not, you can be convinced it was a good idea!

Hello, Less Wrong, I'm Anna Zhang, a high school student. I found this site about half a month ago, after reading Harry Potter and the Methods of Rationality. On Mr. Yudkowsky's Wikipedia page, I found a link to his site, where I found a link to this site. I've been reading the sequence How to Actually Change Your Mind, as Mr. Yudkowsky recommended, and I've learned a lot from it (though I still have a lot to learn...)

2Brendon_Wong
Welcome! If you want to meet other high schoolers, this looks like a good place to start.

I'm going to unify a couple comment threads here.

Perhaps it's not fair of me to ask for your evidence without providing any of my own. However I really don't want to just become the irrational believer hopelessly trying to convince everyone else.

Honestly, I think you'd be coming across as much more reasonable if you were actually willing to discuss the evidence than you do by skirting around it. There are people here who wouldn't positively receive comments standing behind evidence that they think is weak, but at least some people would respect your willingness to engage in a potentially productive conversation. I don't think anyone here is going to react positively to "There's some really strong evidence, and I'm not going to talk about it, but you really ought to have come up with it already yourself."

Will Newsome gets like that sometimes, and when he does, his karma tends to plummet even faster than yours has, and he's built up a lot of it to begin with.

If you want to judge whether our inability to provide "good" arguments really is due to our lack of familiarity with the position we're rejecting, then there isn't really a better way than to expose us to ... (read more)

9DSimon
I second this recommendation. Ibidem, it seems that you don't want to be put in the position of defending your beliefs among people who might consider them weird, or stupid, or even harmful. I empathize a lot with that; I've been in the same situation enough times to know how nasty and unfun it can get. But unfortunately, I don't think there's another way the conversation can continue. You've said a few times that you expected us to know of some good arguments for theism, and that you're disappointed that we don't have any. Well, what can anyone say in response to that but "Okay, please show us what we're missing"? I think you can at least trust the community here to take what you say seriously, and not just dismiss you out of hand or use it as an opportunity to score tribal points and virtual high-fives. We're at least self-aware enough to avoid those discussion traps most of the time.

I discovered this site while researching the global effects of a Pak-Indo nuclear exchange. Once here I began to dig further and found it appealing. I am a simple soldier pushing myself into a Masters in biology. Am I a rationalist? I am not sure, to be honest. If I am, I know the exact date and time when I started to become one. In Nov 2004 I was part of the battle of Fallujah; during an exchange of gunfire a child was injured. I will never know if it was one of my rounds that caused her head injury, but my lips worked to bring her life again. It was a futile attempt; she passed, and while clouded with this damn experience I myself was wounded. At that very moment I lost my faith in any loving deity. My endless pursuit of knowledge, including academics provided by a brick and mortar school, has helped me recover from the loss of a limb. I still have the leg, however it does not function well. I like to think, philosophy fascinates me, and this site fascinates me. :)

Political ideology: fiscally conservative. Religion: possibilian. Rather progressive on issues like gay marriage and abortion. Abortion is actually an act I despise, but as a man I feel somehow that I haven't the organs to complain.

To sum me up, I suppose I am a crippled, tobacco-chewing, gun-toting member of the Sierra Club with a future as a freshwater biologist, with memories I would like to replace with Bayes. LoL. Well, I just spilled that mess out, might as well hit post. Please feel free to ask anything you like, I am not sensitive. Open honesty to those that are curious is good medicine.

3TimS
Welcome. Hope you find what you are looking for, and maybe find some of it here.

This is where you are confused. Almost certainly it is not the only confusion. But here is one:

Values are not claims. Goals are not propositions. Dynamics are not beliefs.

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will. Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

If you don't believe me, I can only suggest you study AI (Thrun & Norvig) and/or the metaethics sequence until you do. (I mean really study. As if you were learning particle physics. It seems the usual metaethical confusions are quite resilient; in most peoples' cases I wouldn't expect them to vanish without actually thinking carefully about the data presented.) And, well, don't expect to learn too much from off-the-cuff comments here.

Designating PrawnOfFate a probable troll or sockpuppet. Suggest terminating discussion.

1Desrtopa
Request accepted, I'm not sure if he's being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it's unlikely to be a productive use of my time.
jjvt110

Hi. I'm a computer science student in Oulu University (Finland).

I don't remember exactly how I got here, but I guess some of the first posts I read were about counterarguments to religious denial of evolution.

I had been interested in rationality (along with science and technology) for a long time before I found Less Wrong, but back then my view of rationality was mostly that it was the opposite of emotion. I still dislike emotions - I guess it's because they are so often "immune to reflection" (i.e. persistently "out of sync" with what I know to be the right thing to do). However, I'm aware that emotions do have some information value (worse than optimal, but better than nothing), and simply removing emotions from human neuroarchitecture without other changes might result in something functionally closer to a rock than a superhuman...

I'm an atheist and don't believe in non-physical entities like souls, but I still believe in eternal life. This unorthodox view is because 1) I'm a (sort of) "modal realist": I believe that every logically possible world actually physically exists (it's the simplest answer I've found to the question "Why does anything... (read more)

2beoShaffer
Have you read Brain Lock?

Hey Lesswrong.

This is a sockpuppet account I made for the purpose of making a post to Discussion and possibly Main, while obscuring my identity, which is important due to some NDAs I've signed with regards to the content of the post.

I am explicitly asking for +2 karma so that I can make the post.

Yo. I've been around a couple years, posted a few times as "ZoneSeek," re-registered this year under my real name as part of a Radical Honesty thing.

philh110

Nobody can recruit Grigori Perelman for IMO, either.

Perelman is an IMO gold medalist.

Hello LW. My pseudonym is DiscyD3rp, and this introduction is long overdue. I am 17, male, and currently enrolled in high school. I discovered this site over a year ago, via HPMoR, and have read a good percentage of the main sequences in a kinda correct order. However, I was experiencing significant angst from what I call Dungeon Crawl Anxiety (the same reason that when exploring RPG dungeons I double back and explore even AFTER discovering the correct path). I am now (re-)reading the entirety of Eliezer's posts in the ebook version of the sequences. I have found the re-read articles still useful after having gotten a basic handle on Bayesian thought, and look forward to completing my enlightenment.

As far as personality, I was (am) incredibly arrogant, and my future goals involve MIRI and/or teaching rationality myself (at one point this involved an email to Eliezer claiming the ability to save the world, and subsequently learning that decision theory is HARD). I am not particularly talented at quickly absorbing technical fields of knowledge, but plan on developing that skill. My existing talent seems to be manipulating ideas and concepts easily and creatively once they are well understood. I'm great at reading the map, but suffer difficulty in writing it. (In very mathy fields.)

I'm a born Christian, with a moderate upbringing, but likely saved from extremism by the internet just in time. Now a skeptic and an atheist.

6[anonymous]
I hope you will forgive the impertinence of offering unsolicited advice: if you haven't already, you might consider teaching yourself several programming languages in your free time. It's a very marketable skill, important to MIRI's work, and in many ways suffices for a basic education in logic. The mathy stuff is probably not optional given your ambitions, and much of the same discipline and attention to detail necessary for programming can be applied to learning serious math. Arrogance will be a terrible burden if unaccompanied by usefulness and skill.
4DiscyD3rp
I am currently teaching myself Haskell and have a functional programming textbook on my device. While unsolicited, I appreciate ALL advice. Any other tips?
3[anonymous]
Nope, that's all I got. Wait, one more thing. I learned in a painful way that scholarly credentials are most cheaply won (time and effort wise) in high school, and then it gets exponentially more difficult as you age. Every hour you spend making sure you get perfect grades now is worth ten or a hundred hours in your early-mid twenties. Looking back, getting anything less than perfect grades, given how easy that is in high school, seems utterly foolish. Maybe you already know that. Good luck!
2wedrifid
Given your ambition I suggest changing your name to something respectable before you have spent time establishing a name for yourself. DiscyD3rp will make establishing credibility more difficult for you.

Everyone here is expecting me to provide good arguments. I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me I said "but there must be some out there."

Wait a minute.

You came here without any good reasons to believe in the truth of religion, and then were surprised when we, a group of (mostly) atheists, told you that we hadn't heard of any good reasons to believe in religion either?

I am honestly curious: what makes you think such good reasons exist? Why must there be some good arguments for religion out there? You, a religious person, have none, and you are (apparently?) still religious despite this.

P.S. For what it's worth, I hope you continue to participate in the discussion here, and I look forward to hearing your thoughts, and how your views have evolved.

1Eugine_Nier
See my distinction here.

Then you must believe the same with respect to homeopathic remedies, the flat earth society, and those who believe they can use their spiritual energy in the martial arts. Give us some good arguments for those.

There's a lot of stuff out there for which it seems to me there is no good argument. I mean really, let's try to maintain some sense of perspective here. The belief that everyone has a decent argument is, I think, pretty much demonstrably false. You presumably want us to believe that you're in the same category as people who ought to be taken seriously, but I don't really see how a belief in God is any more worthy of that than a belief in homeopathic remedies. At least, not based on your argument that all positions ought to be considered to have good arguments. If you're trying to make a general argument, you're going to get lumped in with them.

But you haven't showed much willingness so far to discuss your reasons for your belief in which way the evidence falls or ours.

I can understand not wanting to discuss a settled question with people who're too biased to analyze it reasonably, but if you're going to avoid discussing the matter here in the first place, it suggests to me that rather than concluding from your experience with us that we're rigid and closed-minded on the matter, you've taken it as a premise to begin with, otherwise where's the harm in discussing the evidence?

I consider the matter of religion to be a settled question because I've studied the matter well beyond the point of diminishing returns for interesting evidence or arguments. Are you familiar enough with the evidence that we're prepared to bring to the table that you think you could argue it yourself?

Just as I've been told repeatedly that your atheism is a foregone conclusion.

Can you point to where you've been told that?

What I think most of us would agree on, and what it seems to me that people here have told you, is that they consider atheism to be a settled question, which is not at all the same thing.

I never said that I considered people different than me to not be good. What I said in earlier comments is that I liked The God Delusion because it introduced me to the concept that you can be "a good, healthy, happy person without believing in God". I believed that those who did not have faith in God would be more likely to be immoral, more likely to be unhealthy, and would definitely be more unhappy than if they did believe in God. The book presented to me a case for how atheists can be just as moral, just as healthy, and just as happy as theists, an argument I had never seen articulated before. I apologize that I had never conjured this idea up before reading The God Delusion; it just seemed obvious to me, based on my study of the Gospel, that they couldn't be.

What passages in the scriptures tell you that you can be moral, healthy, and happy without faith in God? It seems pretty consistent to me that in the scriptures they say you can only have those qualities in your life if you believe in God and follow his commandments.

I fail to see how blood atonement, Adam-God, racist theology, and polygamist theology gave you the slightest impression that the Journal of Disc

... (read more)

My $0.02: the most valuable piece of information I get from open-ended introductions is typically what people choose to talk about, which I interpret as a reflection of what they consider important. For example, I interpret the way you describe yourself here as reflecting a substantial interest in how other people judge you.

2Alrenous
Found helpful. Your conclusion is true, but not something I'd think to mention. Now I can construct an introduction template: "I'm Alrenous, and I find X important." It won't be complete, but at least it also won't be inaccurate.
[-]philh100

Selectivity, in the relevant sense, is more than just a question of how many people are granted something.

How many people are not on that site, but could rank highly if they chose to try? I'm guessing it's far more than the number of people who have never taken part in the IMO, but who could get a gold medal if they did.

(The IMO is more prestigious among mathematicians than topcoder is among programmers. And countries actively recruit their best mathematicians for the IMO. Nobody in the Finnish government thought it would be a good idea to convince and train Linus Torvalds to take part in an internet programming competition, so I doubt Linus Torvalds is on topcoder.)

There certainly are things as selective or more than the IMO (for example, the Fields medal), but I don't think topcoder is one of them, and I'm not convinced about "plenty". (Plenty for what purpose?)

3private_messaging
I've tried to compare it more accurately. It's very hard to evaluate selectivity; it's not just the raw number of people participating. It seems that a large majority of serious ACM ICPC participants (both contestants and their coaches) practise on Topcoder, and for the ICPC the best college CS students are recruited in much the same way as the best high-school math students are for the IMO.

I don't know if Linus Torvalds would necessarily do great on this sort of thing - his talents are primarily in software design, and in his persistence as the unifying force behind Linux. (And are you sure you'd recruit a 22-year-old Linus Torvalds who had just started writing a Unix clone?) It's also the case that 'programming contest' is a bit of a misnomer - the winning is primarily about applied mathematics - just as 'computer science' is a misnomer.

In any case, it's highly dubious that understanding the QM sequence is as selective as any contest. I get it fully that Copenhagen is clunky whereas MWI doesn't have the collapse, and that the collapse fits in very badly. That's not at all the issue. However badly something fits, you can only throw it away once you've figured out how to do without it. Also, commonly, the wavefunction, the collapse, and other internals are seen as mechanisms of prediction which may, or may not, have anything to do with "how the universe does it" (even if the question of "how the universe does it" is meaningful, it may still be the case that the internals of the theory have nothing to do with that, as the internals are massively based upon our convenience). And worse still, MWI is in many very important ways lacking.

I made an account seven months ago, but I wasn't aware of the last welcome thread, so I guess I'll post on this one.

I'm not sure exactly when I "joined". My first contact with this community was passing familiarity with "Overcoming Bias" as one of the blogs that sometimes got linked in the blogosphere I frequented in high school. As was typical of my surfing habits in those days, I spent one or two sessions reading it for hours and then promptly forgot all about it. Second contact was a recommendation from another user on reddit to visit Less Wrong. Third contact was a few months later, when my roommate recommended I read hpmor. I lurked for a short time, made an account, and went to my first few meetups about two months ago. Meetups are fun, you meet lots of smart people, and I highly recommend it.

First impressions? I think this is the (for lack of a better word) most intellectual internet community that I am familiar with. Almost every post or comment is worth reading, and the site has got an addictive reddit-ish feel about it (which hampers my productivity somewhat, but que sera, sera.)

I've noticed that most of the opinions here tend to align precisely with my own... (read more)

3[anonymous]
I noticed this as well, while first reading the sequences. I flew through blog posts, absorbing it all, since it either matched my own thoughts or was so similar that it hardly took effort to comprehend. But I struggled to find anything original to say, which was part of why I initially didn't bother making an account - I didn't want to simply express agreement every time. (And now I notice that my second comment is precisely that.) That's one of the things I've frequently benefited from in my thinking. I have found that the concepts behind keywords like dissolving the question, mysterious answers, map and territory, and the teacher's password can be applied in so many areas, and that having the arsenal to use them makes it much easier to think clearly about otherwise elusive concepts.

Hello, my name is Cam :]

My goals in life are:

  1. To build a self-sufficient farm with renewable alternative energy and everything.
  2. Acquire financial assets to support the building of my farm and other hobbies and activities I pursue.
  3. To further my fitness and health and maintain it.
  4. Love and romance.

That's pretty much it, hahaha. I want to learn the ways of a Rationalist to make the best decisions and solutions for problems I might encounter in pursuing these goals! I have an immature or childlike air around me, people tend to say, which is why I am ... (read more)

2Viliam_Bur
Have you already built something? Do you have specific plans?
[-]Zoe90

Hello Less Wrong community members,

My name is Zoe, I'm a philosophy student, and increasingly discombobulated by the inadequacy of my field of study to teach me how to Actually Do Things. I discovered Less Wrong 18 months ago, thanks to the story Harry Potter and the Methods of Rationality. I've read a number of articles and discussions since then, mostly whenever I felt like reading something both intelligent and relevant, but I have not systematically read through any sequence or topic.

I have recently formed the goal to develop the skills necessary to 'ra... (read more)

0John_Maxwell
Welcome! Let me know if you figure something out. So far I haven't been able to do it without coming across as weird.

Hello, my name is Watson. The username comes from my initials and a Left 4 Dead player attempting to pronounce them. I am a math student at UC Berkeley and a longtime lurker. I've got a post on rational investing, based on the conclusions of years of research by academic economists, but despite lurking I never realized there is a karma limit to post in discussion. I'm interested in just about everything, a dangerous phenomenon.

Hello to the Less Wrong community. My name is Leslie Cuthbert and I'm a lawyer based in the United Kingdom. I look forward to reading the various sequences and posts here.

There are many other intelligent and thoughtful people who disagree. Why -- epistemically, not historically -- do you place particular weight on your parents' beliefs? How did they come by those beliefs?

A sufficiently intelligent mind (and I think I can assume that if God exists, then He is sufficiently intelligent) can impose self-consistency and order on itself.

This begs Eliezer's question, I think. Intelligence itself is highly non-arbitrary and rule-governed, so by positing that God is sufficiently intelligent (and the bar for sufficiency here is pretty high), you're already sneaking in a bunch of unexplained orderliness. So in this particular case, no, I don't think you can assume that if God exists, then He is sufficiently intelligent, just like I can't respond to your original point by assuming that if the universe exists, then it is orderly.

I've now had an overwhelming request to hear my supposed strong arguments. It would be awfully lame of me to drop out now.

Just say "Oops" and move on. My point is that you almost certainly don't have good arguments, which is why your post won't be well-received. If it is so, it's better to notice that it is so in advance and act accordingly.

A rationalist ought to have heard arguments and evidence that challenged his (dis)beliefs, and have come out stronger because of it.

A rationalist

You keep using that word...

In Avoiding Your Belief's Real Weak Points, Eliezer says:

There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend.

In Modern Orthodox Judaism I have not heard much emphasis of the virtues of blind faith. You're allowed to doubt. You're just not allowed to successfully doubt.

The point being t... (read more)

Hi! I'm a 24 year old woman starting grad school this fall studying mathematics. Specifically I'm interested in mathematically modelling organizational decision making.

My parents raised me on Carl Sagan and Michael Shermer, so there was never really a point that I didn't identify as a rationalist. I discovered less wrong long enough ago that I don't actually remember how I found it. I've been lurking here for several years. I finally registered after doing the last survey, though I didn't make another post until the last few days.

Oh, and I have a talking c... (read more)

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs.

Suppose there is diversity within a religion, on how much the sensible and silly beliefs are emphasized. If the likelihood of a person rejecting a religion is positively correlated with the religion recommending silly beliefs, then we should expect that the population of atheist converts should have a larger representation of people raised in homes where silly beliefs dominated than the population of theists. That is, standard evaporative c... (read more)

[-]jetm90

I've been browsing the site for at least a year. Found it through HP:MoR, which is absolutely amazing. I've been coming to the LessWrong study hall for a couple weeks now and have found it highly effective.

For the most part, I haven't really applied this at all. I ended up making a final break with Christianity, but the only significant difference is that I now say "Yay humanism!" instead of "Yay God!" I've used a few tricks here and there, like the Sunk Cost Fallacy, and the Planning Fallacy, but I still spent the majority of my time n... (read more)

Well, hello. I'm a first-year physics PhD student in India. Found this place through Yvain's blog, which I found when I was linked there from a feminist blog. It's great fun, and I'm happy I found a place where I can discuss stuff with people without anyone regularly playing with words (or, more accurately, where it's acceptable to stop and define your words properly). So, one of my favourite things about this place is the fact that it's based on the map to territory idea of truth and beliefs; I've been using it to insult people ever since I read it.

The po... (read more)

Hi,

I'm a philosopher (postdoc) at the London School of Economics who recently discovered Less Wrong. I am now reading through lots of old posts, especially Yudkowsky's and lukeprog's philosophy-related material, which I find very interesting.

I think lukeprog is right when he points out that the general thrust of Yudkowsky's philosophy belongs to a naturalistic tradition often associated with Quine's name. In general, I think it would be useful to situate Yudkowsky's ideas vis-à-vis the philosophical tradition. I hope to be able to contribute something here ... (read more)

Hi. I've been a distant LW lurker for a while now; I first encountered the Sequences sometime around 2009, and have been an avid HP:MOR fan since mid-2011.

I work in computer security with a fair bit of software verification as flavoring, so the AI confinement problem is of interest to me, particularly in light of recent stunts like arbitrary computation in zero CPU instructions via creative abuse of the MMU trap handler. I'm also interested in applying instrumental rationality to improve the quality and utility of my research in general. I flirt with some ... (read more)

[-][anonymous]80

Hello, I am a 46 yr old software developer from Australia with a keen interest in Artificial Intelligence.

I don’t have any formal qualifications, which is a shame, as my ideal life would be to do full-time research in AI. Without a PhD I realise this won’t happen, so I am learning as much as I can through books, practice and various online courses.

I came across this site today from a link via MIRI and feel like I have struck gold - the articles, sequences and discussions here are very well written, interesting and thoughtful.

My current goals are to build a... (read more)

Hi, I'm Brayden, from Melbourne Australia. I attended the May 2013 CfAR workshop in Berkeley about 1 year after finding Less Wrong, and 2 years after finding HPMOR. My trip to The States was phenomenal, and I highly recommend the CfAR workshops.

My life is significantly better now than it was before, and I think I am on track with the planning process for eventually working on the highest impact causes that might help save the world.

Hello Less Wrong! I am Scott Garrabrant, a 23 year old math PhD student at UCLA, studying combinatorics. I discovered Less Wrong about 4 months ago. After reading MoR and a few sequences, I decided to go back and read every blog post. (I just finished all Eliezer's OB posts) I was going to wait and start posting after I got completely caught up, but then I started attending weekly meetups 2 months ago, and now I need to earn enough karma to make meetup announcements.

I have been interested in meta-thinking for a long time. I have spent a lot of time thinkin... (read more)

As a new member of this community, I am having a bit of difficulty with the numerous abbreviations that people use in their writing on this site. For example I have come across a number of these that are not listed on the Jargon page (eg: EY, PC, NPC, MWI...). I realize that as a new member, I will eventually understand many of these, however, it is very frustrating trying to read something and be continually distracted by having to look-up some of these obscure terms. This is especially a problem on the Welcome Thread, where a potential new member could ... (read more)

6John_Maxwell
I added the acronyms you mentioned to the Jargon page. Tell me if you come across any more. You can also edit the page to add them yourself as you learn them if you like.

Hi, my name is Danon. I just joined less wrong after reading a wonderful post by Swimmer963: http://lesswrong.com/lw/9j1/how_i_ended_up_nonambitious/ on her reasoning for why she ended up without ambition (actually, I felt she had a lot of ambition). I got to her post while trying to figure out why I am lazy, I was wondering if it was because I had no (or little, if any) ambition. Her post got me asking the right questions I have finally been able to save a private draft in LW stating a reasoning for my laziness. It really is refreshing to read the posts here at LW. Thank you for having me.

I want to know what everyone thinks of my [response] to EY

I think it's confused.

If I were part of a forum that self-identified as Modern Orthodox Jewish, and a Christian came along and said "you should identify yourselves as Jewish and anti-Jesus, not just Jewish, since you reject the divinity of Jesus", that would be confused. While some Orthodox Jews no doubt reject the divinity of Jesus a priori, others simply embrace a religious tradition that, on analysis, turns out to entail the belief that Jesus was not divine.

Similarly, we are a for... (read more)

1Viliam_Bur
I guess the core of the confusion is treating atheism like an axiom of some kind: modelling an atheist as someone who just somehow randomly decided that there are no gods, and who is no longer thinking about the correctness of this belief, only about its consequences. At least this is how I decode the various "atheism is just another religion" statements. As if in our belief graphs, the "atheism" node had only outputs, no inputs. I am willing to admit that for some atheists it probably is exactly like this. But that is not the only way it can be, and it is probably not very frequent at LW.

The ideas really subversive to theism are reductionism, and the distinction between the map and the territory (specifically, that the "mystery" exists only in the map, that it is how an ignorant or a confused mind feels from inside). At first there is nothing suspicious about them, but unless stopped by compartmentalization, they quickly grow into materialism and atheism.

It's not that I a priori deny the existence of spiritual beings or whatever. I am okay with using this label for starters; I just want an explanation of how they interact with ordinary matter, what parts they consist of, how those parts interact with each other, et cetera. I want a model that makes sense. And suddenly, there are no meaningful answers; and the few courageous attempts are obviously wrong. And then I'm like: okay guys, the problem is not that I don't believe you; the problem is that I don't even know what you want me to believe, because obviously you don't know it either. You just want me to repeat your passwords and become a member of your tribe, and to stop reflecting on this whole process. Thanks, but no; I value my sanity more than membership in your tribe (although if I lived a few centuries ago or in some unfortunate country, my self-preservation instinct would probably make me choose otherwise).

An always open mind never closes on anything. There is a time to confess your ignorance and a time to relinquish your ignorance and all that...

Are you saying it's more rational not ever to consider some ways of thinking?

Yes. Rationality isn't necessarily about having accurate beliefs. It just tends that way because they seem to be useful. Rationality is about achieving your aims in the most efficient way possible.

Oh, someone may have to look into some ways of thinking, if people who use them start showing signs of being unusually effective at achieving relevant ends in some way. Those people would become super-dominant, it would be obvious that their way of thinking was superior. However, ther... (read more)

[-][anonymous]80

I tend to focus on the current authorized messengers from God and the Holy Spirit as I feel that is what I have been instructed to do.

Who authorizes messengers from God? It's not like He has a public key, after all...

Apparently I have just registered.

So, I have a question. What's an introduction do? What is it supposed to do? How would I be able to tell that I've introduced myself if I somehow accidentally willed myself to forget?

Well... I'm an engineering student who intends to graduate in electronics. I became interested in AI when I started learning programming at the age of 12. I became fascinated with what I could make the computer do. And rather naively I tried for months and months to program something that was "intelligent" (and failed horribly of course). I set that project aside temporarily but never stopped thinking about it. Years later I discovered HPMoR and through it LessWrong and suddenly found a whole community of people interested in AI and similar thing... (read more)

[-]Shmi80

Please consider whether this exchange is worth your while. Certainly wasn't worth mine.

I know Mitchell Porter is likewise a physicist and he's not convinced at all either.

Mitchell Porter also advocates Quantum Monadology and various things about fundamental qualia. The difference in assumptions about how physics (and rational thought) works between Eliezer (and most of Eliezer's target audience) and Mitchell Porter is probably insurmountable.

Hello everyone, I'm Franz. I don't actually remember how I happened upon this site, but I do know it was rotting in my unsorted bookmark folder for over a year before I actually decided to read any post. This I do regret.

Because of circumstances I am currently in Brazil, and due to a lack of internet infrastructure I have to read the downloadable versions of the sequences and won't be able to comment often. I do enjoy reading your insightful thoughts!

I was wondering if anyone has directly applied EY methods to their own life? For what reason and what... (read more)

6[anonymous]
Welcome! I have. Specifically, the How to Actually Change Your Mind sequence was very helpful to me in real life. However, in spite of how some people feel about this site, for me, it is not about [only] EY. Lots of things from Less Wrong have affected my life outside of Less Wrong, specifically (quoting from an older draft of this comment, now, so that is why the flow may be weird here):

One of the most helpful posts I came upon here was "The Power of Pomodoros", which introduced me to the Pomodoro technique. See this PDF from the official website for a more detailed guide.

Another helpful thing I discovered via Less Wrong is the Less Wrong Study Hall. See "Co-Working Collaboration to Combat Akrasia" and "Programming the LW Study Hall". This is the current study hall (on Tinychat), but I think it will eventually be moved to somewhere else.

Less Wrong taught me about existential risk and efficient charity. This has produced a tangible change in what I do with my money. lukeprog's The Science of Winning at Life sequence was also very helpful to me.

I could write more, but I've already spent too much time on this comment. Enjoy Less Wrong!

Hi Everyone! I'm AABoyles (that's true most places on the internet besides LW).

I first found LW when a colleague mentioned That Alien Message over lunch. I said something to the effect of "That sounds like an Arthur C. Clarke short story. Who is the author?" "Eliezer Yudkowsky," He said, and sent me the link. I read it, and promptly forgot about it. Fast forward a year, and another friend posts the link to HPMOR on Facebook. The author's name sounded very familiar. I read it voraciously. I subscribed to the Main RSS feed and lurked for ... (read more)

From the book's website:

Are physicists and biologists willing to believe in anything so long as it is not religious thought? Close enough.

Is there a narrow and oppressive orthodoxy of thought and opinion within the sciences? Close enough.

Does anything in the sciences or in their philosophy justify the claim that religious belief is irrational? Not even ballpark.

I guess there is some tension between "narrow and oppressive orthodoxy of thought and opinion" and "willing to believe in anything"...

Redundancy isn't a design failure or a 'patch'.

I'm a Swiss medical student. I've read HPMoR and a large part of the core sequences. I've attended LW meetups in several US cities and met quite a few of you in the Bay Area and/or at the Effective Altruism Summit. I've interned for Leverage Research. I co-founded giordano-bruno-stiftung.ch (outreach organisation with German translations of some LessWrong blog posts, and other posts about rationality). Looking forward to participating in the comment section more often.

[-]kvd70

Hi everyone,

I have been lurking on LessWrong on and off for quite a while. I originally found this place through HPMoR; I thought the 'LessWrong' author name was clever, and it was nice to find out there was a whole community based around aiming to be less wrong! My tendency to overthink whatever I write has gotten in the way of actually taking part in the community so far, though. Maybe now that I have gotten the introduction out of the way I'll be more likely to post.

A bit more about myself: I'm a student from the Netherlands, doing a masters in Artificial In... (read more)

Hello, Less Wrong! I'm Michael Odintsov from Ukraine, so sorry for my not-nearly-perfect :) English. Just like many here, I found this site from Yudkowsky's link while reading his "Harry Potter and the Methods of Rationality". I am a 27-year-old programmer, fond of science in general and mostly math of all kinds.

I have worked a bit in the fields of AI and machine learning and am looking forward to new opportunities. Well... that's almost all I can tell about myself right now - never been a great talker :) If anyone has questions or needs some help with CS-related topics, just ask; I am always ready to help.

I don't believe that rationality in general is incompatible with religious belief, but if this community thinks that their particular brand of rationality is, people like me would love to know that.

Might we not, instead, disagree with you about rationality in general being compatible with religious belief, rather than asserting that we have some special incompatible brand of rationality?

I think that most of your problems with theists would go away if you clarified LW's actual position.

Do we really have "problems with theists"...?

3Viliam_Bur
I don't. I just consider the debates about theism boring if they don't bring any new information.

Yes, but what I expected was...um...atheists who were better than most, who had arrived at atheism through two-sided discourse.

Bob Altemeyer asked college students about this, some of whom had a strong allegiance to 'traditional' authority and some less so:

Interestingly, virtually everyone said she had questioned the existence of God at some time in her life. What did the authoritarian students do when this question arose? Most of all, they prayed for enlightenment. Secondly, they talked to their friends who believed in God. Or they talked with their

... (read more)

"Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together."

That statement is too strong. I can think of several instances where certain emotions, especially negative ones, can impair decision making. It is reasonable to assume that impaired decision making can extend into making ethical decisions.

The first page of the paper linked below provides a good summary of when emotions, and what emotions, can be helpful or harmful in making decisions. I do acknowledge that some emotions can be helpful in... (read more)

Hi! I've been lurking here for maybe 6 months, and I wanted to finally step out and say hello, and thank you! This site has helped to shape huge parts of my worldview for the better and improved my life in general to boot. I just want to make a list of a few of the things I've learned since coming here which I never would have otherwise, as nearly as I can tell.

  • I've dropped the frankly silly beliefs I held as an evangelical Christian; I wasn't as bad as most in that category but in hindsight that was just due to luck and strong logical skills. (I knew be
... (read more)
4Shmi
Anything written by Yvain, including his old and new blogs, though someone ought to compile a list of his greatest hits.

Hm.
OK.

So, I imagine the following conversation between two people (A and B):
A: It's absurd to say 'atheism is a kind of religion,'
B: Why?
A: Well, 'religion' is a word with an agreed-upon meaning, and it denotes a particular category of structures in the world, specifically those with properties X, Y, Z, etc. Atheism lacks those properties, so atheism is not a religion.
B: I agree, but that merely shows the claim is mistaken. Why is it absurd?
A: (thinks) Well, what I mean is that any mind capable of seriously considering the question 'Is atheism a religion?'... (read more)

Even now ethics in different parts of the world, and even between political parties, are different. You should know that more than most, having lived in two systems.

There's a ridiculous amount of similarity on anything major, though. If we pick the ethics of the first man on the moon, or the first man to orbit the earth, they're pretty much the same.

If it turns out that most space-faring civilizations have similar ethics, that would be good for us. But then also there would be a difference between "most widespread code of ethics" and "objectively correct code

... (read more)

I dispute its applicability, because I've known very smart Mormons. Humans are not logic engines. It's rare to find even a brilliant person who doesn't have some blind spot.

Even if it were clinically applicable, you presented it as an in-group vs. out-group joke, which is an invitation for people from one tribe to mock people from another tribe. Its message was not primarily informational.

Crocker's Rules are not an invitation to be rude.

Hello LW users, I use the alias Citizen 9-100 (nine one-hundred), but you may call me Nozz. This account will be shared between my sister and me, but we will sign each comment with the name of whoever is speaking. I would write more, but I already wrote a lot that didn't post due to a laptop error, so all I'll say for now is: anything you'd like to know, feel free to ask; just make sure you clarify who you're asking. BTW, for those interested, you may call my sister any of the following: Sam, Sammy, Samantha, or any version of that :)

I don't recommend sharing an account. It will be confusing, and signatures are not customary here.

4MugaSofer
Regardless of how good an idea sharing accounts is (not very, I'm guessing, for the record) who on earth downvotes an introduction? Upvoted back to neutral.

Hrm.

First, let me apologize pre-emptively if I'm retreading old ground, I haven't carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I'm doing so. That said... my understanding of your account of existence is something like the following:

A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn't. Similarly, a model may include various hypothesized entities that represent certa... (read more)

1Shmi
As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it. The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like "everything imaginable exists". Once you fall for it, promoting any M->R, or a certain set {MP}->R, seems forever meaningful. The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
6TheOtherDave
I've thought about this, but on consideration the only part of it I understand explicitly enough to "teach" is Miller's Law (the first one), and there's really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they're wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they're wrong... but rarely do I feel it useful to tell them so.) The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.

Yes. I'm not sure what to say about that on your account, and that was in fact where I was going to go next. Actually, more generally, I'm not sure what distinguishes experiences we have from those we don't have in the first place, on your account, even leaving aside how one can alter future experiences. After all, we've said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren't properties of the individual models (though they can of course be represented by properties of models). But if they aren't properties of models, well, what are they? On your account, it seems to follow that experiences don't exist at all, and there simply is no distinction between experiences we have and those we don't have. I assume you reject that conclusion, but I'm not sure how.

On a naive realist's view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary fo
3TheOtherDave
Actually, thinking about this a little bit more, a "simpler" question might be whether it's meaningful on this account to talk about minds existing. I think the answer is again that it isn't, as I said about experiences above... models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error. If that's the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own. So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can't base anything on such (non)existence. So I'm not sure what an instrumentalist's argument rejecting solipsism looks like.
2Bugmaster
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven't yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn't expect to see this (except by chance, whose probability is infinitesimally low) if the inputs were mutually independent.
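The common-cause point above can be illustrated with a toy simulation (a sketch with made-up numbers and hypothetical function names, not anything from the thread): two noisy "input" streams driven by a shared hidden node agree far more often than chance, while mutually independent streams agree only about half the time.

```python
import random

random.seed(0)

def sample(n, shared=True):
    """Generate two binary 'input' streams, optionally driven by a hidden common cause."""
    xs, ys = [], []
    for _ in range(n):
        cause = random.random() < 0.5  # hidden node higher up in the network
        if shared:
            # both inputs are noisy (90% accurate) readings of the same hidden cause
            x = cause if random.random() < 0.9 else not cause
            y = cause if random.random() < 0.9 else not cause
        else:
            # mutually independent inputs
            x = random.random() < 0.5
            y = random.random() < 0.5
        xs.append(x)
        ys.append(y)
    return xs, ys

def agreement(xs, ys):
    """Fraction of trials on which the two streams agree."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

shared_rate = agreement(*sample(100_000, shared=True))   # ~0.82 (= 0.9^2 + 0.1^2)
indep_rate = agreement(*sample(100_000, shared=False))   # ~0.50
print(shared_rate, indep_rate)
```

With a shared parent, the two streams agree about 82% of the time even though neither reads the other; without it, agreement hovers at chance. That is the asymmetry the Bayesian-network argument rests on.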
2TheOtherDave
FWIW, my understanding of shminux's account does not assert that "all we have are disconnected inputs," as inputs might well be connected. That said, it doesn't seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I'm still trying to wrap my brain around that part. ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
2PrawnOfFate
I.e., realism explains how you can predict at all.

Hey! My name is Vinney, I'm 28 years old and live in New York City.

To be exceedingly brief: I've been working through the sequences (quite slowly and sporadically) for the past year and a half. I've loved everything I've seen on LW so far and I expect to continue. I hope to ramp up my study this year and finally get through the rest of the sequences.

I'd like to become more active in discussions but feel like I should finish the sequences first so I don't wind up making some silly error in reasoning and committing it to a comment. Perhaps that isn't an ideal approach to the community discussions, but I suspect it may be common.

3[anonymous]
Welcome! Do finish the sequences, but you won't be done then; you'll still make stupid mistakes. Best to start making them now, I think.
1VCavallo
Thanks, I'll get started making stupid mistakes as quickly as I can! I'm sorry I wasn't able to make any here.

Greetings!

I'm Brian. I'm a full-time police dispatcher and part-time graduate student in the marriage and family therapy/counseling master's degree program at the University of Akron (in northeast Ohio). Before I began studies in my master's program, I earned a bachelor's degree in emergency management. I am an atheist and skeptic. I think I can trace my earliest interest in rationality back to my high school days, when I began critically examining theism (generally) and Catholicism (in particular) while taking an elective religion class called "Q... (read more)

6[anonymous]

Hello LessWrong!

I found LessWrong, like so many others, through Methods of Rationality. I have lurked for at least two years now, since I discovered this website; I have read many of Eliezer's short stories and a few scattered posts of the Sequences. Eventually, I intend to get around to those and read them in a systematic fashion.... eventually.

I'm a computer science student, halfway through my life as an undergraduate at a certain institute of technology. I recently switched my main area of interest to theoretical computer science, after taking an excell... (read more)

[This comment is no longer endorsed by its author]

Hi everyone! I've been lurking around here for a few years, but now I want to be more active in the great discussions that often occur on this site. I discovered Less Wrong about 4 years ago, but the Methods of Rationality fanfic brought me here as a more attentive reader. I've read some of the sequences, and found them generally to use clear reasoning to make great points. If nothing else, reading them has definitely made me think very carefully about the way nature operates and how we perceive it.

In fact, this site was my first exposure to cognitive bias... (read more)

Hello, everyone. I stumbled upon LW after listening to Eliezer make some surprisingly lucid and dissonance-free comments on Skepticon's death panel that inspired me to look up more of his work.

I've been browsing this site for a few days now, and I don't think I've ever had so many "Hey, this has always irritated me, too!" moments in such short intervals, from the rant about "applause lights" to the discussions about efficient charity work. I like how this site provides some actual depth to the topics it discusses, rather than hand the r... (read more)

I don't see how this is any different from what Richard Dawkins is doing with his claim.

You mean, Dawkins has latched onto atheism for irrational reasons and is generating whatever argument will sustain it, without regard to the evidence?

For anyone who has taken on the mantle of professional atheist, as Dawkins has, there is a danger of falling into that mode of argument. Do you have any reason to think he has in fact fallen?

2Kawoomba
YouTube source (44s)
2Richard_Kennaway
I am itching to downvote Dawkins for that.

Greetings, Less Wrong community. I have been lurking on the site for a year, reading the articles and sequences, and now feel I've cut down the inferential distance enough to contribute meaningful comments.

My goal here is to have clear thought and effective communication in all aspects of my life, with special attention to application in the work environment.

Above most else I value the 12th virtue of rationality. Focus on the goal, value the goal, everything else is a tool to achieve the goal. Like chess, you only need two pieces to win, the only purpose ... (read more)

2wadavis
A little late, but I found Less Wrong while trying to understand what this comic was talking about.

I'll tell you what made me think that: I asked the community if they had any good, non-strawman arguments for God, and the overwhelming response was "Nah, there aren't any."

I'm not sure if anyone's brought this up yet, but one of the site's best-known contributors once ran a site dedicated to these sorts of things, though it does of course have a very atheist POV. That said, even there the arguments aren't amazingly convincing (which you can guess by the fact that lukeprog hasn't reconverted yet) though it does acknowledge that the other side ... (read more)

Told by someone other than myself, hopefully. While I do not expect to become a theist of any kind in the near future, neither do I intend to remain an atheist. Instead, I intend to hold a set of beliefs that are most likely to be true. If I gain sufficient evidence that the answer is "Jesus" or "Trimurti", then this is what I will believe.

6[anonymous]

So, if one is racist-1, how would one treat me?

Racist-1 reporting in. Believing that ethnicity is correlated with desirable or undesirable traits does not in itself warrant any particular kind of behavior. So how would I treat you? Like a person. If I had more evidence about you (your appearance, time spent with you, your interests, your abilities, etc), that would become more refined.

Am I white, for appearing white? Am I Asian, for the overwhelming number of my ancestors' coloration? In other words, what makes race? My genetics, or my skin? If it is

... (read more)

Hi Less Wrong,

My name is Sean Welsh. I am a graduate student at the University of Canterbury in Christchurch NZ. I was most recently a Solution Architect working on software development projects for telcos. I have decided to take a year off to do a Master's. My topic is Ethical Algorithms: Modelling Moral Decisions in Software. I am particularly interested in questions of machine ethics & robot ethics (obviously).

I would say at the outset that I think 'the hard problem of ethics' remains unsolved. Until it is solved, the prospects for any benign or fr... (read more)

2Shmi
Welcome! Not sure why you link rationality with "Academy" (academia?). Consider scanning through the sequences to learn what is generally considered rationality on this forum and how Eliezer Yudkowsky treats metaethics. Whether you agree with him or not, you are likely to find a lot of insights into machine (and human) ethics, maybe even some helpful in your research.

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programmes can update to implement glider generators without being "programmed to".

...with extremely low probability. It's far more likely that the Life field will stabilize around some relatively boring state, empty or with a few simple stable patterns. Similarly, a system subject to value drift seems likely to converge on boring attractors in value space (like wireheading, which indeed has turned out to be a problem... (read more)

6Shmi

I cannot speak to your private examples, but I think you may be reading that into what Politzer said.

Not me. This tip-off story had been talked about in the community for a long time, just never publicly until Politzer decided to carefully and tactfully state what he knew personally and avoid speculating on what might have transpired. The result itself, of course, was ripe for discovery, and indeed was discovered but glossed over by others before him. I mentioned this particular story because it's one of the most famous and most public ones. Of course, it might all be rumors and in reality there was no issue.

2gwern
'When you hear hoofbeats, think horses, not zebras.' I see here, by Politzer's testimony, a multiple discovery of at least 3 (Gell-Mann and the more than one person implied by 'several'), and you ask me to believe that a fourth is not yet another multiple but rather plagiarism/theft, based solely on your saying it was being talked about. It's not exactly a convincing case.
6[anonymous]
The general narrative sounds very similar to cases in my own field, but I'd rather not talk about it. I've been cautioned not to speak about my current projects with certain people, on account of this.
6Shmi
A week after Politzer shared his calculation: Why would they decide to redo the calculation (not a very hard one, but rather laborious back then, though it's a standard one in any grad QFT course now) at exactly the same time? Anyway, no point in further speculations without new data.

Yes. He said that I should be careful about sharing my project because, otherwise, I'll be reading about it in a journal in a few months. His warning may exaggerate the likelihood of a rival researcher and mis-value the expansion of knowledge, but I'm deferring to him as a concession of my ignorance, especially regarding rules of the academy.

"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats."

This is heavily context-dependent. Many fields are idea-rich and implementation-poor, in which case you do have to ram ideas down people's throats, because there's a glut of other ideas you have to compete against. But in fields that are implementation-rich and idea-poor, ideas should be guarded until you've implemented them. There are no doubt academic fields where the latter case applies.

2gwern
Can you name any?
9Shmi
I've been privately told of several such cases in high-energy physics. Below is an excerpt from Politzer's Nobel lecture. He discovered asymptotic freedom (the fact that quarks are essentially connected by miniature rubber bands which have no tension when the quarks are close to each other). He does not explicitly say that Gross was tipped off, but it's easy to read between the lines. The rest of his lecture, titled The Dilemma of Attribution, is also worth reading.
2Vaniver
It may be more precise to say there are academic groups to which that description applies, and that discretion is worthwhile in their proximity. Examples of those still living will remain private for obvious reasons.
4IlyaShpitser
Yup, some specific people steal. This definitely happens (but I will not mention names for obvious reasons).

Both realism¹ and relativism are false. Unfortunately this comment is too short to contain the proof, but there's a passable sequence on it.

¹ As you've defined it here, anyway. Moral realism as normally defined simply means "moral statements have truth values" and does not imply universal compellingness.

Hi there, denizens of Less Wrong! I've actually been lurking around here for a while (browsing furtively since 2010), and only just discovered that I hadn't introduced myself properly.

So! I'm Bluehawk, and I'll tell you my real name if and when it becomes relevant. I'm mid-20's, male, Australian, with an educational history in Music, Cinema Studies and Philosophy, and I'm looking for any jobs and experience that I can get with the craft of writing. My current projects are a pair of feature-length screenplays; one's in the editing/second draft stages, the o... (read more)

6CCC

Why would a superintelligence be unable to figure that out? Why would it not shoot to the top of the Kohlberg Hierarchy?

Why would Clippy want to hit the top of the Kohlberg Hierarchy? You don't get more paperclips for being there.

Clippy's ideas of importance are based on paperclips. The most important values are those which lead to the acquiring of the greatest number of paperclips.

So, I assume that the LDS Church is managed by the Prophet, similarly to how the Catholic Church is managed by the Pope?

If memory serves, the President of the (LDS) Church, his advisors, and the members of the church's senior leadership council (called the Quorum of the Twelve Apostles) all hold the title of prophet -- specifically "prophet, seer, and revelator". That doesn't necessarily carry all the implications that "prophet" might outside of a Mormon context, though. One of the quirks of Mormonism is a certain degree of rank inflati... (read more)

I know that atheists can deal with a lot of prejudice from believers about why they are atheists so I would think that atheists would try and justify their beliefs based on the best beliefs and arguments of a religion and not extreme outliers for both, as otherwise it plays to the prejudice.

Really? I don't think it takes an exceptional degree of rationality to reject religion.

I suspect what you mean is that atheists /ought/ to justify their disbelief on stronger grounds than the silliest interpretation of their opponent's beliefs. Which is true, you sh... (read more)

I was not trying to justify my leaving the Mormon Church in saying I used to believe in the extraordinary interpretations I did. I just wanted to say that my re-education process has been difficult because I used to believe in a lot of crazy things. Also, I'm not trying to make a caricature of my former beliefs, everything I have written here about what I used to believe I will confirm again as an accurate depiction of what was going on in my head.

I think it is a misstatement of yours to say that these beliefs have "absolutely no relation to... anythi... (read more)

4CCC
In all fairness, JohnH wrote his post before you showed him those passages. So that data was not available to him at the time of writing.
6[anonymous]

You are arguing with a strawman.

It's not a utility function over inputs, it's over the accuracy of models.

If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can't affect the stuff outside the holodeck.

Just because someone frames things differently doesn't mean they have to make the obvious mistakes and start killing babies.

For example, I could d... (read more)

6TimS

[trap closes]

Don't do that. I think the rest of your post is fine, but this is not a debate-for-debate's-sake kind of place (and even if it were, that's not a winning move).

which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.

No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.

5EHeller
But I think it does not do justice to what a huge deal the Born probabilities are. The Born probabilities are the way we use quantum mechanics to make predictions, so saying "MWI has not yet provided a good derivation of the Born probabilities" is equivalent to "MWI does not yet make accurate predictions," and I'm not sure that's clear to people who read the sequences but don't use quantum mechanics regularly. Also, by omitting the wide variety of non-Copenhagen interpretations (consistent histories, transactional, Bohm, stochastic modifications to Schrödinger, etc.), the reader is led to believe that the alternative to Copenhagen-collapse is many worlds, so they won't use the absence of Born probabilities in many worlds to update towards one of the many non-Copenhagen alternatives.
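For readers who don't use quantum mechanics regularly, the rule in question can be stated compactly (standard notation, not drawn from any comment in this thread). The Born rule assigns a measurement outcome $i$ the probability

```latex
P(i) = |\langle i | \psi \rangle|^2, \qquad
\sum_i P(i) = \langle \psi | \psi \rangle = 1,
```

and unitary evolution ($U^\dagger U = I$) is precisely what conserves that total: $\langle \psi | U^\dagger U | \psi \rangle = \langle \psi | \psi \rangle = 1$. Squared amplitude is thus the one measure of a wavefunction blob that a unitary theory automatically conserves.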
8Eliezer Yudkowsky
Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact. The unitarity of QM means that integral-squared-modulus quantifies the "amount of causal potency" or "amount of causal fluid" or "amount of conserved real stuff" in a blob of the wavefunction. It would be like discovering that your probability of ending up in a computer corresponded to how large the computer was. You could imagine that God arbitrarily looked over the universe and destroyed all but one computer with probability proportional to its size, but this would be unlikely. It would be much more likely (under circumstances analogous to ours) to guess that the size of the computer had something to do with the amount of person in it.

The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn't convincing was that I didn't go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I'm not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there's just no reason to even hypothesize single-world theories in the first place.

(I'm not sure I have time to write the post about Relational Special Relativity in which length and time just aren't the same for all observers and so we don't have to suppose that Minkowskian spacetime is objectively real, and anyway the purpose of a theory is to tell us how long things are so there's no point in a theory which doesn't say that, and those silly Minkowskians can't explain how much subjective time things seem to take except by waving their hands about how the brain contains some sort of hypothetical computer in which computi

The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn't convincing was that I didn't go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I'm not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there's just no reason to even hypothesize single-world theories in the first place.

It is not worth writing separate posts for each interpretation. However, it is becoming increasingly apparent that, to the extent that the QM sequence matters at all, it may be worth writing a single post which outlines how your arguments apply to the other interpretations, i.e.:

  • A brief summary of and a link to your arguments in favor of locality then an explicit mention of how this leads to rejecting "Ensemble, Copenhagen, de Broglie–Bohm theory, von Neumann, Stochastic, Objective collapse and Transactional" interpretations and theories.
  • A brief summary of and a link to you
... (read more)
6EHeller
Not so. If we insist that our predictions need to be probabilities (take the Born probabilities as fundamental/necessary), then unitarity becomes equivalent to the statement that probabilities have to sum to 1, and we can then try to piece together what our update equation should look like. This is the approach taken by the 'minimalist'/'ensemble' interpretation that Ballentine's textbook champions; he uses the fact that probabilities sum to 1 and some group theory (related to the Galilean symmetry group) to motivate the form of the Schrödinger equation. Edit to clarify: In some sense, it's the reverse of many worlds: instead of taking the Schrödinger axioms as fundamental and attempting to derive Born, take the operator/probability axioms seriously and try to derive Schrödinger. I believe the same consideration could be said of the consistent histories approach, but I'd have to think about it before I'd fully commit. Edit to add: Also, what about "non-spooky" action at a distance? Something like the transactional interpretation, where we take relativity seriously and use both the forward and backward Green's functions of the Dirac/Klein-Gordon equation? This integrates very nicely with Barbour's timeless physics, properly derives the Born rule, and has a single world, BUT requires some stochastic modifications to the Schrödinger equation.
2Shmi
What surprises me in the QM interpretational world is that the interaction process itself is clearly more than just a unitary evolution of some wave function, given how the number of particles is not conserved, requiring the full QFT approach, and probably more, yet (nearly?) all interpretations stop at the QM level, without any attempt at some sort of second quantization. Am I missing something here?
6EHeller
Mostly just that QFT is very difficult and not rigorously formulated. Haag's theorem (and Wightman's extension) tells us that an interacting quantum field theory can't live in a nice Hilbert space, so there is a very real sense in which realistic QFTs only exist perturbatively. This makes interpretation something of a nightmare. Basically, we ignore a bunch of messy complications (and potential inconsistency) just to shut-up-and-calculate; no one wants to dig up all that 'just' to get to the messy business of interpretation.
2Shmi
Are you saying that people knowingly look where it's light, instead of where they lost the keys?
4EHeller
More or less. If the axiomatic field theory guys ever make serious progress, expect a flurry of me-too type interpretation papers to immediately follow. Until then, good luck interpreting a theory that isn't even fully formulated yet. If you ever are in a bar after a particle phenomenology conference lets out, ask the general room what, exactly, a particle is, and what it means that the definition is NOT observer independent.
3Shmi
Oh, I know what a particle is. It's a flat-space interaction-free limit of a field. But I see your point about observer dependence.
8EHeller
Then what is it, exactly, that particle detectors detect? Because it surely can't be interaction-free limits of fields. Also, when we go to the Schrödinger equation with a potential, what are we modeling? It can't be a particle; there is a non-perturbative potential! Also, for any charged particle, the IR divergence prevents the limit, so you have to be careful: 'real' electrons are linear combinations of 'bare' electrons and photons.
6Shmi
What I meant was that if you think of a field excitation propagating "between interactions", it can be identified with a particle. And you are right, I was neglecting those pesky massless virtual photons in the IR limit. As for the SE with a potential, this is clearly a semi-classical setup; there are no external classical potentials, they all come as some mean-field pictures of a reasonably stable many-particle interaction (a contradiction in terms though it might be). I think I pointed that out earlier in some thread. The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.
5A1987dM
Somebody: "Virtual photons don't actually exist: they're just a bookkeeping device to help you do the maths."

Someone else, in a different context: "Real photons don't actually exist: each photon is emitted somewhere and absorbed somewhere else a possibly long but still finite amount of time later, making that a virtual photon. Real photons are just a mathematical construct approximating virtual photons that live long enough."

Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don't exist, and real photons don't exist. Therefore, no photons exist at all.
2EHeller
This is less joking than you think; it's more or less correct. If you change the final conclusion to "there isn't a good definition of photon" you'd be there. It's worse for QCD, where the theory has an SU(3) symmetry you pretty much have to sever in order to treat the theory perturbatively.
2OrphanWilde
It really is. When you look at the experiments they're performing, it's kind of a miracle they get any kind of usable data at all. And explaining it to intelligent people is this near-infinite recursion of "But how do they know that experiment says what they say it does" going back more than a century, with more than one strange loop. Seriously, I've tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we've built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don't even know how to -begin- untangling those. I'm convinced you could construct a parallel physics with completely different mechanics (maybe the narrow trails aren't as narrow as you'd think?) and get exactly the same results. And quantum field theory's history of parallel physics doesn't exactly help my paranoia there, even if they did eventually clean -most- of it up.
2Shmi
I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway. I am also not sure what strange loops you are referring to, feel free to give a couple of examples. Most likely. It happens quite often (like Heisenberg's matrix mechanics vs Schrodinger's wave mechanics). Again, I have no problem with multiple models giving the same predictions, so I fail to see the source of your paranoia... My beef with quantum physics is that there are many straightforward questions within its own framework it does not have answers to.
2A1987dM
Imagine there's a different, as-yet-unknown [ETA: simpler] model that doesn't have electrons but makes the same experimental predictions as ours.
2OrphanWilde
One example is mentioned: the proof of electrons assumes the existence of (electrically charged) electromagnetic fields (Thomson's experiment), while the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments. (I'm fine with "electrons exist as a phenomenon, even if they're not the phenomenon we expect them to be", but that tends to put people in an even more skeptical frame of mind than before I started "explaining". I've generally given up such explanations; it appears I'm hopelessly bad at it.) Another strange loop is in the quantization of energy (which requires electrical fields to be quantized, the evidence for which comes from the quantization of energy to begin with). Strange loops are -fine-, taken as a whole - taken as a whole the evidence can be pretty good - but when you're stepping a skeptical person through it step by step, it's hard to justify the next step when the previous step depends on it. The Big Bang Theory is another - the theory requires something to plug the gap in expected versus received background radiation, and the evidence for the plug (dark energy, for example) pretty much requires BBT to be true to be meaningful. (Although it may be that a large part of the problem with the strange loops is that only the earliest experiments tend to be easily found in textbooks and on the Internet, and later less loop-prone experiments don't get much attention.)
1EHeller
Depends on what you mean by 'different mechanics.' Weinberg's field theory textbook develops the argument that only quantum field theory, as a structure, allows for certain phenomenologically important characteristics (mostly cluster decomposition). However, there IS an enormous amount of leeway within field theory: you can make a theory where electric monopoles exist as explicit degrees of freedom and magnetic monopoles are topological gauge-field configurations, and it's dual to a theory where magnetic monopoles are the degrees of freedom and electric monopoles exist as field configurations. While these theories SEEM very different, they make identical predictions. Similarly, if you can only make finite numbers of measurements, adding extra dimensions is equivalent to adding lots of additional forces (the dimensional deconstruction idea), etc. Some 5d theories with gravity make the same predictions as some 4d theories without.
1A1987dM
Yes. While I'm not terribly up-to-date with the ‘state-of-the-art’ in theoretical physics, I feel like the situation today with renormalization and stuff is like it was until 1905 for the Lorentz-FitzGerald contraction or the black-body radiation, when people were mystified by the fact that the equations worked because they didn't know (or, at least, didn't want to admit) what the hell they meant. A new Einstein clearing this stuff up is perhaps overdue now. (The most obvious candidate is “something to do with quantum gravity”, but I'm prepared to be surprised.)
1private_messaging
And yet it proclaims the issue settled in favour of MWI, and argues about how wrong science is for not settling on MWI, and so on. The connection - that this deficiency is why MWI can't be settled on - sure does not come up here. Speaking of which, under any formal metric that he loves to allude to (e.g. Kolmogorov complexity), MWI as it is, is not even a valid code, for (among other things) this reason. It doesn't matter how much simpler MWI is if we don't even know that it isn't too simple, merely guess that it might not be too simple. edit: ohh, and the lack of a derivation of Born's rules is not the kind of thing I meant by an argument in favour of non-realism. You can be a non-realist with or without having derived Born's rules. How QFT deals with relativistic issues, as outlined by e.g. Mitchell Porter, is quite a good reason to doubt the reality of what goes on mathematically in between input and output. There's a view that (current QM) internals are an artefact of the set of mathematical tricks which we like / can use effectively. The view that the internal mathematics is to the world as the rods and cogs and gears inside a WW2 aiming computer are to a projectile flying through the air.

That inference isn't made. Eliezer has other information from which to reach that conclusion. In particular, he has several years' worth of ranting and sniping from Shminux about his particular pet peeve.

That very well could be, in which case my recommendation about that inference does not apply to Eliezer.

I will note that this comment suggests that Eliezer's model of shminux may be underdeveloped, and that caution in ascribing motives or beliefs to others is often wise.

Hello community.

I've been aware of LW for a while, reading individual posts linked in programmer/engineering hangouts now and then, and I independently came across HPMOR in search of good fanfiction. But the decision to un-lurk myself came after I attended a CFAR workshop (a major positive life change) and realized that I want to keep being engaged with the community.

I'm very interested in anti-aging research (both from the effective altruism point of view, and because I find the topic really exciting and fascinating) and want to learn about it in as much ... (read more)

Hi, I'm N. Currently a systems engineer. Lurked for some time and finally decided to create an account. I am interested in mathematics, computer science, and typography. Fonts can give me happiness or drive me crazy.

I am currently in SoCal.

[This comment is no longer endorsed by its author]

This account is used by a VA to post events for the Melbourne Meetup group. Comment is to accrue 2 karma to allow posting.

I chose more_wrong as a name because I'm in disagreement with a lot of the lesswrong posters about what constitutes a reasonable model of the world. Presumably my opinions are more wrong than opinions that are lesswrong, hence the name :)

My rationalist origin story would have a series of watershed events but as far as I can tell, I never had any core beliefs to discard to become rational, because I never had any core beliefs at all. Do not have a use for them, never picked them up.

As far as identifying myself as an aspiring rationalist, the main events t... (read more)

My name is Morgan. I was brought here by my brother and have been lurking for a while. I've read most of the sequences, which have cleared up some of my confused thinking. There were things that I didn't think about because I didn't have an answer for them. Free will and morality used to confuse me, and so I never thought much about them since I didn't have a guarantee that they were answerable.

Less Wrong has helped me get back into programming. It has helped me learn to think about things with precision, and to understand how a cognitive algorithm feels from the inside, to dissolve questions.

I am going to join this community and improve my skills. Tsuyoku Naritai.

Hello,

I'm a 34 yo programmer/entrepreneur in Romania, with a long time interest in rationality - long before I called it by that name. I think the earliest name I had for it was "wisdom", and a desire to find a consistent, repeatable way to obtain it. Must admit at that time I didn't imagine it was going to be so complicated.

Spent some of my 20s believing I already knew everything, and then I made a decision that in retrospect was the best I ever made: never to look at the price when I buy a book, but only at the likelihood of finishing it. Which... (read more)

5[anonymous]

Hello, LW,

One of my names is holist. I am 45. Self-employed family man, 6 kids, 2 dogs, 1 cat. Originally a philosopher (BA+MA from Sussex, UK), but I've been a translator for 19 years now... it is wearing thin. Music and art are also important parts of my life (have sold music, musically directed a small circus, have exhibited pictures), and recently, with dictatorship being established here in Hungary, politics seems increasingly urgent, too. I dabble in psychotherapy and call myself a Discordian. Recently, I started thinking about doing a PhD somewhere.... (read more)

I am a celibate pedophile. That means I feel a sexual and romantic attraction to young girls (3-12) but have never acted on that attraction and never will. In some forums, this revelation causes strong negative reactions and a movement to have me banned. I hope that's not true here.

From a brief search, I see that someone raised the topic of non-celibate pedophilia, and it was accepted for discussion. http://lesswrong.com/lw/67h/the_phobia_or_the_trauma_the_probem_of_the_chcken/ Hopefully celibate pedophilia is less controversial.

I have developed views on ... (read more)

Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct...

That's not how p-values work. p=0.05 doesn't mean that the hypothesis is 95% likely to be correct, even in principle; it means that there's a 5% chance of seeing the same correlation if the null hypothesis is true. Pull a hundred independent data sets and we'd normally expect to find a p=0.05 correlation or better in at least five or so of them, no ... (read more)

Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false.

No, they won't. You're committing base rate neglect. It's entirely possible for people to publish 2000 papers in a field where there's no hope of finding a true result, and get 100 false results with p < 0.05.
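The base-rate point in this exchange can be checked with a short simulation. Under a true null hypothesis, p-values are uniformly distributed on [0, 1], so a field that only tests false hypotheses still gets about 5% of its results "significant." The 2000-paper figure comes from the comment above; the 10% base rate and 0.8 power in the second half are illustrative assumptions, not figures from the thread:

```python
import random

random.seed(0)

# Under a true null hypothesis, p-values are uniformly distributed on [0, 1].
# Simulate a field that publishes 2000 studies of effects that do not exist:
papers = 2000
p_values = [random.random() for _ in range(papers)]
significant = sum(p < 0.05 for p in p_values)
print(significant)  # close to 0.05 * 2000 = 100 "significant" false results

# Now mix in real effects. Suppose (illustratively) only 10% of tested
# hypotheses are true, and studies of true effects reach p < 0.05 with power 0.8:
true_rate, power, alpha = 0.10, 0.80, 0.05
true_hits = true_rate * power          # true effects found significant
false_hits = (1 - true_rate) * alpha   # nulls found significant
share_correct = true_hits / (true_hits + false_hits)
print(round(share_correct, 2))  # 0.64, not 0.95
```

So even with honestly reported p-values, the share of significant results that are real is set by the base rate of true effects and the studies' power, not by 1 − p.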

If you understand the point there's no reason to make a comment like this except as an attempt to show off. Changing "250 IQ" to "+10 sd out from the mean intelligence" only serves to make the original point less accessible to people not steeped in psychometrics.

1TheOtherDave
You don't have to be steeped in psychometrics to understand what a standard deviation is. And if we're going to talk about intelligence at all, it is often helpful to keep in mind the difference between IQ and intelligence.
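For readers outside psychometrics, the two phrasings in this exchange are related by simple arithmetic: IQ scores are conventionally normed to a mean of 100 and a standard deviation of 15, so "250 IQ" and "+10 sd out from the mean" name the same point on the nominal scale (no actual test is normed anywhere near that far out, which is part of the original point). A quick sketch:

```python
# IQ scores are conventionally normed to a mean of 100 and a standard
# deviation of 15, so the two phrasings convert with one line each way:
def iq_from_sd(sd, mean=100.0, scale=15.0):
    return mean + scale * sd

def sd_from_iq(iq, mean=100.0, scale=15.0):
    return (iq - mean) / scale

print(iq_from_sd(10))   # 250.0
print(sd_from_iq(250))  # 10.0
```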

I am a maximum-security ex-con who studied and used logic for pro se, civil-rights lawsuits. (The importance of being a maximum-security ex-con is that I was a stubborn iconoclast who learned and used logic in all seriousness.) Logic helped me identify the weak links in my opponent's arguments and to avoid weak links in my own arguments, and logic helped me organize my writing and evidence. I also studied and learned to use “The Option Process” for eliminating my negative emotions and to understand other people's negative emotions. The core truth of “The... (read more)

Hey, I'm dirtfruit.

I've lurked here for quite a while now. LessWrong is one of the most interesting internet communities I've observed, and I'd like to begin involving myself more actively. I've been to one meetup, in NYC, a few months ago, which was nice. I've read most of the sequences (I think I've read all of them at least once, but I haven't looked hard enough to be super-confident saying that). HPMOR is cool, I enjoyed reading it and continue to check for updates. I've tried to read most of what Eliezer has written, but gave up early on anything extr... (read more)

Hi, I'm a second-year engineering student at a University of California campus. I like engaging in rational discussions, and I find it important to know what's going on in the world and to gain more insight on controversial issues such as abortion, gay rights, sexuality, immigration, etc. Someone on Facebook directed me to this site, but I easily get bored, so I may or may not be much of a contributor.

0Viliam_Bur
It is probably better to practice rationality skills on less controversial issues. When speaking about politics, people instinctively become less rational, because politics is usually not about being correct, but about belonging to the winning tribe.
5claus

Hi all, my name is Claus. I am unsure how exactly I got here, but I sure do know why I kept coming back. I'm so happy to have found such a large and confident group of like-minded people.

Currently I am trying to finish some essays on Science and evidence based politics. I'm sure I will enjoy my stay here!

Hi everyone, I’m The Articulator. (No ‘The’ in my username because I dislike using underscores in place of spaces)

I found LessWrong originally through RationalWiki, and more recently through Iceman’s excellent pony-fic about AI and transhumanism, Friendship is Optimal.

I’ve started reading the Sequences, and made some decent progress, though we’ll see how long I maintain my current rate.

I’ll be attending University this fall for Electrical Engineering, with a desire to focus in electronics.

Prior to LW, I have a year’s worth of Philosophy and Ethics classes... (read more)

3Articulator
Okay, whoa, hey. I clearly and repeatedly explained my lack of total understanding of LW conventions. I'm not sure what about this provoked a downvote, but I would appreciate a bit more to go on. If this is about my noobishness, well, this is the Welcome Thread. Great job on the welcoming, by the way, anonymous downvoter. At the very least, offer constructive criticism. Edit: Troll? Really? Edit, edit: Thank you, whoever deleted the negative karma!
2Said Achmiz
I wouldn't take downvotes to heart, if I were you, unless like, a whole bunch of people all downvote you. A downvote's not terribly meaningful by itself. Welcome to Less Wrong, by the way. Now, I didn't downvote you, but here's some criticism, hopefully constructive. I didn't read most of your post, from where you start discussing your philosophy (maybe I will later, but right now it's a bit tl;dr). In general, though, taking what you've learned and attempting to construct a coherent philosophical position out of it is usually a poor idea. You're likely to end up with a bunch of nonsense supported by a tower of reasoning detached from anything concrete. Read more first. Anyway, having a single "this is my philosophy" is really not necessary... pretty much ever. Figure out what your questions are, what you're confused about, and why; approach those things one at a time and without an eye toward unifying everything or integrating everything into a coherent whole, and see what happens. Also: read the Sequences, they are pretty much concentrated awesome and will help with like, 90% of all confusion.
1Articulator
Okay, noted. It's just that from what I've seen so far, a post with a net downvote is generally pretty horrible. I admit I took some offense from the implication. I'll try not to let it bother me unless N is high enough for it to be me, entirely, that's the problem. Thanks. :) Thank you for taking the time to give constructive criticism. I will attempt to make it more coherent and summarized, assuming I keep any of it. I appreciate I am likely too inexperienced to come up with anything that impressive, but I was hoping to use this as a method to understand which parts of my cognitive function were not behaving rationally, so as to improve. I will absolutely continue to read, but with the utmost respect to Eliezer, I have yet to come across anything in the Sequences which did more than codify or verbalize beliefs I'd already held. By that point, two and a half sequences in, I felt it was unlikely that the enlightenment value would spike in such a way as to render my previously held views obsolete. I'll bear your objections in mind, but I fear I won't let go of this theory unless somebody points out why it is wrong specifically, as opposed to methodically. Not that I'm putting any onus on you or anyone else to do so. As I said, I am reading them, but have found them mostly about how to think as opposed to what to think so far, though I daresay that is intentional in the ordering. Thanks again for your help and kindness. :)
2Said Achmiz
It's not even that (ok, it's probably at least a little of that). Some of the most worthless and nonsensical philosophy has come from professional philosophers (guys with Famous Names, who get chapters in History of Philosophy textbooks) who've constructed massive edifices of blather without any connection to anything in the world. EDIT: See e.g. this quote. You've got it right. One of the points Eliezer sometimes makes is that true things, even novel true things, shouldn't sound surprising. Surprising and counterintuitive is what you get when you want to sound deep and wise. When you say true things, what you get is "Oh, well... yeah. Sure. I pretty much knew that." Also, the Sequences contain a lot of excellent distillation and coherent, accessible presentation of things that you would otherwise have to construct from a hundred philosophy books. As for enlightenment that makes your previous views obsolete... in my case, at least, that happened slowly, as I digested things I read here and in other places, and spent time (over a long period) thinking about various things. Others may have different experiences. Yeah, one of the themes in Less Wrong material, I've found, is that how to think is more important than what to think (if for no other reason than that once you know how to think, thinking the right things follows naturally).
1Articulator
Oh, I know. I start crying inside every time I learn about Kant. Well, I'll take what you've said on board. Thanks for the help!
2Vaniver
Welcome to LW! There is a metaethics sequence, of which this post asks what you would do if morality didn't exist. This may be a good place to start looking, but I wouldn't be too discouraged if you don't find it terribly useful (as Eliezer and others see it as not as communicative as Eliezer wanted it to be). The point I would focus on is that there's a difference between an ethical system that would compel any possible mind to follow it, and an ethical system in harmony with you and those around you. Figure out what you can get from ethics, and then judge the ethics you try by their results. Worry more about developing a system that reliably makes small, positive changes than about developing a system that is perfectly correct. As it is said, a complex system that works is invariably found to have evolved from a simple system that worked.

Why the high prior, out of curiosity ?

Hello, smart weird people.

I've been lurking on and off for a while but now it seems to be a good time to try playing in the LW fields. We'll see how it goes.

I'm interested in "correct" ways of thinking, obviously, but I'm also interested in their limits. The edges, as usual, are the most interesting places to watch. And maybe to be, if you can survive it.

No particular hot-burning questions at the moment or any specific goals to achieve. Just exploring.

3DSimon
Hello, Lumifer! Welcome to smart-weird land. We have snacks. So you say you have no burning questions, but here's one for you: as a new commenter, what are your expectations about how you'll be interacting with others on the site? It might be interesting to note those now, so you can compare later.
2Lumifer
Hm, an interesting question. In the net space I generally look like an irreverent smartass (in the meatspace too, but much attenuated by real relationships with real people). So on forums where I hang out, maybe about 10% of the regulars like me, about a quarter hate me, and the rest don't care. One of the things I'm curious about is whether LW will be different. Or maybe I will be different -- I can argue that my smartassiness is just allergy to stupidity. Whether that's true or not depends on the value of "true", of course...

I don't know what you think a "strong argument" is. Arguments are not weapons, with a certain caliber and stopping power and so forth, such that two sides might go at each other with their respective arguments, and whoever's got the most firepower wins. That's not how it works.

An argument may be more or less persuasive (relative to some audience!), but that depends on many things, such as whether the argument hits certain emotional notes, whether it makes use of certain common fallacies and biases, or certain commonly held misconceptions; or whet... (read more)

Chaosmosis has a few hundred karma now after dropping at least that deep, being accused of being a troll, and facing a number of suggestions that he leave. It's certainly not un-doable.

That is quite a hefty bullet to bite: one can no longer say that South Africa is a better society after the fall of Apartheid, and so on.

That's hardly the best example you could have picked, since there are obvious metrics by which South Africa can be quantifiably called a worse society now - e.g. crime statistics. South Africa has been called the "crime capital of the world" and the "rape capital of the world" only after the fall of Apartheid.

That makes the lack of moral progress in South Africa a very easy bullet to bite - I'd use something like Nazi Germany vs modern Germany as an example instead.

I generally understand the phrase "objective morality" to refer to a privileged moral reference frame.

It's not an incoherent idea... it might turn out, for example, that all value systems other than M turn out to be incoherent under sufficiently insightful reflection, or destructive to minds that operate under them, or for various other reasons not in-practice implementable by any sufficiently powerful optimizer. In such a world, I would agree that M was a privileged moral reference frame, and would not oppose calling it "objective morality", though I would understand that to be something of a term of art.

That said, I'd be very surprised to discover I live in such a world.

Yes, value drift is the typical state for minds in our experience.

Building a committed Clipper that cannot accidentally update its values when trying to do something else is only possible after the problem of value drift has been solved. A system that experiences value drift isn't a reliable Clipper, isn't a reliable good-thing-doer, isn't reliable at all.

Next.

I didn't say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability that agents of a certain level of rationality will converge on an understanding of ethics.

Where does this non-negligible probability come from though? When I've asked you to provide any reason to suspect it, you've just said that as you're not arguing there's a high probability, there's no need for you to answer that.

"SR" stands for super-rational. Rational agents find rational arguments rationally compelli

... (read more)

I masquerade as a liberal Mormon on Facebook since I'm still in the closet with my unbelief. In my discussions with friends and family the most common position taken is that the First Presidency and the Twelve Apostles cannot teach false doctrine or else they will be forcibly removed by God. I even had a former missionary companion tell me that President Gordon B. Hinckley died in 2008 not from old age (he was 98) but because he had made false statements on Larry King Live concerning the doctrine of exaltation in which worthy Latter-day Saints can become gods.

2Desrtopa
How do they distinguish between true statements which precede their deaths, and false statements which cause their deaths?
4atomliner
Whatever the prophet says that doesn't match up with their own interpretation of Mormonism is false? I honestly do not know, I never thought this way when I was LDS.
5CCC

Fact-checking, via sources similar to Kawoomba's, leads to the milder claim that melanin in the skin merely provides protection against sunburn, and not immunity. Levels of melanin in the skin are very strongly correlated with race; though it is not strictly equivalent (albinism is possible among black people) it is reasonable to say that black people, in general, are more resistant to sunburn than white people.

1Morendil
This smacks of circular reasoning - for a correlation to be demonstrated, you'd have to know that "there is a meaningful way to categorize human beings into races" to start with. So, this too needs a citation. There is a largish argumentative gap from "some genes confer a desirable resilience to sunburn" (possibly conferring some less desirable traits at the same time) to "some races enjoy unalloyed advantages over others by virtue of heredity".
2A1987dM
What about this: levels of melanin in the skin are very strongly correlated with the geographic provenance of one's ancestors in the late 15th century?

Student of economics. Not going to write any more than that about myself at this point.

"To post to the Discussion area you must have at least 2 points." - I'd like to post something I've written, but I need two karma to do so.

2MugaSofer
People generally get more than 2 karma for giving a full introduction, so you could try that. Alternately, you could look around and reply to something - doesn't matter if it's old, people'll probably see it in the recent comments.

New to LW... my wife re-ignited my long-dormant interest in AI via Yudkowsky's Friendly AI stuff.

Is there a link somewhere to "General Intelligence and Seed AI"? It seems that older content at intelligence.org has gone missing. It actually went missing while my wife was in the middle of reading it online... very frustrating. Friendly AI makes a lot of references to it. Seems important to read it.

I'd prefer a PDF, if somebody knows where to find one.

Thanks!

So uhm. How do the experimental results, y'know, happen?

I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I'm just being obstinate, but there needs to be something to the pattern / regularity.

If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:

The models led to predictions - predictions about the experimental results, which are part of the model. The experiments were made according to the model th... (read more)

5TimS

this model fails a number of tests

You are not using the word "tests" consistently in your examples. For luminiferous aether, test means something like "makes accurate predictions." Substituting that into your answer to wrong yields:

No, this model fails to make accurate predictions.

Which I'm having trouble parsing as an answer to the question. If you don't mean for that substitution to be sensible, then your parallelism does not seem to hold together.

But in deference to your statement here, I am happy to drop this topic if ... (read more)

There is a strong local convention against discussing topics for which certain positions are strongly enough affiliated with tribal identities that the identity-signalling aspects of arguments for/against those positions can easily interfere with the evidence-exploring aspects of those arguments. (Colloquially, "mindkilling" topics, as you say.)

That said, there's also a strong local convention against refraining from discussing topics just because such identity-signalling aspects exist.

So mostly, the tradition is we argue about what the traditio... (read more)

Your ball point is very different. My driving point is that there isn't even a nice, platonic-ideal type definition of particle IN THE MAP, let alone something that connects to the territory. I understand how my above post may have led you to misunderstand what I was trying to get at.

To rephrase my above comment, I might say: some of the features a MAP of a particle needs are that it's detectable in some way, and that it can be described in a non-relativistic limit by a Schroedinger equation. The standard QFT definitions for particle lack both these features. They're also not fully consistent in the case of charged particles.

In QFT, unlike in classical mechanics, there is lots of confusion about how the map works.

3Shmi
This reminds me of the recent conjecture that the black hole horizon is a firewall, which seems like one of those confusions about the map.

Interesting. If I may; what is it about technology/futurism you find so unappealing?

I think it would take a very long response to truly answer this, unfortunately. A lot of it has to do with exposing myself in the past through friends, media, and my surroundings to hippie-ish memeplexes that sort of reinforce this view. (Right now I go to school on a dairy farm, for example). Also in the past I had extremely irrational views on a lot of issues, one of which was a form of neo-luddism, and that idea is still in my brain somewhere.

Also, I have to ask: w

... (read more)
5Shmi

Just wondering if you realize that you simply guessed the two-letter teacher's password ("SE") which acted perfectly as a curiosity stopper for you.

Hello,

I'd like to get some opinions about my future goals.

I'm 21 and I'm a second-year student of engineering in Prague, Czech Republic, focusing mainly on math and then physics.

My background is not stunning - I was born in '93, attended a sports-focused primary school and then a general high school. Until my second year of high school, I behaved like an idiot with below-average results in almost everything, paradoxically except extraordinary "general study prerequisites" (whatever that means). My not-so-bad IQ - according to an IQ test I took when I was 15 ... (read more)

Hi everybody,

My name is Eric, and I'm currently finishing up my last semester of undergraduate study and applying to Ph.D. programs in cognitive psychology/cognitive neuroscience. I recently became interested in the predictive power offered by formal rational models of behavior after working in Paul Glimcher's lab this past summer at NYU, where I conducted research on matching behavior in rhesus monkeys. I stumbled upon Less Wrong while browsing the internet for behavioral economics blogs. After reading a couple of posts, I decided to join.

Some sample topi... (read more)

Hello, my name is Luke. I'm an urban planning graduate student at Cleveland State University, having completed an undergrad in philosophy at the University of New Hampshire a year ago. It was the coursework I did at that school which led me to be interested in the nebulous and translucent topic of rationality, and I'm happy to see so many people involved and interested in the same conversations I'd spend hours having with classmates. Heck, the very question I was asking myself in something of an ontological sense--am I missing the trees for the forest--is... (read more)

G'day

As you can probably guess, I'm Alex. I'm a high school student from Australia and have been disappointed with the education system here for quite some time.

I came to LW via HPMoR which was linked to me by a fellow member of the Aus IMO team. (I seriously doubt I'm the only (ex-)Olympian around here - seems just the sort of place that would attract them). I've spent the past few weeks reading the sequences by EY, as well as miscellaneous other stuff. Made a few (inconsequential) posts too.

I have very little in the way of controversial opinions to off... (read more)

4Vaniver
Welcome! There have been previous political threads, like here, here, or here. If you search "politics," you'll find quite a bit. Here was my response to the proposal that we have political discussion threads; basically, I think politics is a suboptimal way to spend your time. It might feel useful, but that doesn't mean it is useful. Here's Raemon's comment on the norm against discussing politics. Explicitly political discussion can be found on MoreRight, founded by posters active on LessWrong, as well as on other blogs. (MoreRight is part of 'neoreaction', which Yvain has recently criticized here, for example.) I don't see what you mean by the 'pros and cons' of holding a particular ideology. Ideologies are, generally, value systems- they define what is a pro and what is a con.
4Lumifer
I must add that not all political discussion is a mud-flinging match between the Cyans and the Magentas. For example, the Public Choice theory is a bona fide intellectual topic, but it's also clearly political. I would also argue that knowing things like the scope of NSA surveillance is actually useful.
2Vaniver
I'm curious why you'd divert from the historically compelling example of the Blues and the Greens. It's about politics, but the methodology is not political. The part of politics that's generally fun for people is putting forth an impassioned defense of some idea or policy. That's generally not useful on LessWrong unless it's about a site policy- and even then, the passion probably doesn't help. Sure.
3Lumifer
I strongly associate the Greens with, well, the Greens -- a set of political parties in Europe and the whole environmentalist movement. Blue is a politically-associated color in the US as well. True, but LW is VERY unrepresentative sample :-) and maybe we could do a bit better. You're right in that discussing the "pros and cons" of ideological positions is not a good idea, but putting "Warning: mindkill" signs around a huge area of reality and saying "we just don't go there" doesn't look appealing either.

I did not find The Devil's Delusion to be persuasive/good at all. Its scientific quality is perhaps best summarized by noting that Berlinski is an opponent of evolution; I also recall that Berlinski spent an enormous amount of time on the (irrelevant) topic of whether some atheists had been evil.

ETA: Actually, now that I think about, The Devil's Delusion is probably why I tend to ignore or look down on atheists who spend lots of time arguing that God would be evil (e.g. Christopher Hitchens or Sam Harris)- I feel like they're making the same mistake, but on the opposite side.

0[anonymous]
Berlinski's thesis is not that evolution is incorrect or that atheists are evil; rather it is that our modern scientific system has just as many gaping holes in it as does any proper theology. Evolution is not incorrect, but the way it's interpreted to refute God is completely unfounded. Its scientific quality is in fact quite good; do you have any specific corrections or is it just that anything critical of Darwin is surely wrong?
0hairyfigment
How so? Someone involved with CFAR allegedly converted to Catholicism due to an argument-from-morality. Also, I know looking at the Biblical order to kill Isaac, and a general call to murder that I wasn't following, helped me to realize I didn't believe in God as such.
0Randaly
This is evidence that arguments-from-morality do persuade people, not that they should.
0hairyfigment
My point is that various atheists may wish to convince people who actually exist. Such people may give credence to the traditional argument from morality, or may think they believe claims about God while anticipating the opposite.

Actually speaking the words activates different areas of Broca's and Wernicke's regions (and elsewhere) than merely imagining them. Physically vocalizing the words, and hearing yourself vocalize them, allows them to be processed by more areas of your brain.

Hello! I'm here because...well, I've read all of HPMOR, and I'm looking for people who can help me find the truth and become more powerful. I work as an engineer and read textbooks for fun, so hopefully I can offer some small insights in return.

I'm not comfortable with death. I've signed up for cryonics, but still perceive that option as risky. As a rough estimate, it appears that current medical research is about 3% of GDP and extends lifespans by about 2 years per decade. I guess that if medical research spending were increased to 30% of current GDP, the... (read more)

0idea21
Hi, afterburger. I find it right that you are not comfortable with death; the opposite would be unnatural. I don't know whether you have ever heard of this person: https://en.wikipedia.org/wiki/Nikolai_Fyodorovich_Fyodorov "Fedorov argued that the struggle against death can become the most natural cause uniting all people of Earth, regardless of their nationality, race, citizenship or wealth (he called this the Common Cause)." Fedorov's speculations about a future resurrection of all, although seen today as a joke, are at least able to beat "Pascal's wager", and, if we keep in mind the possibilities of new particle physics, it is rational to hope that an extremely altruistic future humanity could decide to resurrect all of us, using technology that today we cannot imagine (the same way that current technology could never have been imagined by Plato or Aristotle). Although science and technology may have limits, the most important issue here has to do with motivations. Why would a future humanity be interested in acting so? The only thing we could do today to help would be to start building up the moral and cultural foundation of a fully altruistic and rational society (which would inevitably be extremely economically efficient). And that is not done yet.

Hello again, Less Wrong! I'm not entirely new — I've been lurking since at least 2010 and I had an account for a while, but since I've let that one lie fallow for almost two years now, I thought I'd start afresh.

I'm a college senior, studying cognitive psychology with a focus on irrationality / heuristics and biases. In a couple of months I'll be starting my year-long senior thesis, which I'm currently looking for a specific topic for. I'm also a novice Python programmer and a dabbler in nootropics.

I'll be trying to avoid spending too much unproductive time... (read more)

[-][anonymous]40

Hi, I'm Alex, high school student. Came here from hpmor and have been lurking for about 5 months for now.

I use my "rationalnoodles" nickname almost everywhere, however still can't decide if it's appropriate on LW. Would like to read what others think.

Thanks.

[This comment is no longer endorsed by its author]Reply
2atorm
It's not INappropriate.
[-][anonymous]40

Hi there. I'm thrilled to find a community so dedicated to the seeking of rational truth. I hope to participate in that.

Hi...I'm Will -- I learned about Less Wrong through a very intelligent childhood friend. I am quite nearly his opposite - so maybe I shouldn't say anything...ever...and just stick to reading and learning. But the site recommended leaving an introduction post, and I also like this as a method of learning. I skimmed a few of the articles on the about page and enjoyed them...they provided a good deal of information that I believe I am much better at processing and understanding than at creating. Therefore, I'm excited to see what I get out of this. I'm also... (read more)

How funny, I'm Will too! Just a quick & probably useless suggestion: be sure to be extremely honest with yourself about what it is all parts of you want, including the parts that want to play League of Legends. If you understand those parts and how they're a non-trivial part of you, not just an adversarial thing set up to subvert your prefrontal cortex's 'real' ambitions, that will allow you to find ways in which those parts can be satisfied that are more in line with your whole self's ambitions. E.g. the appeal of League of Legends is largely that you have understandable, objective goals that you can make measurable cumulative progress on, which is intrinsically rewarding—the parts of you that are tracking that intrinsic reward might be just as well rewarded by a sufficiently well-taskified approach to learning, say, piano, Japanese, programming, and other skills that are more likely to provide long-term esteem-worthy capital. Finding a way to taskify things in general might be tricky, and it won't itself be the sort of thing that you're likely to make unambiguous cumulative progress on, but it's meta and thus is a very good way to bootstrap to a position where further bootstrapping is easier and where you can hold on to momentum.

Dawkins's "the world looks as we would expect it to look if there were no God" argument strikes me as a case of this.

Dawkins has a case for drawing that conclusion. He is not merely pointing at the world and saying "Look! No God!" I have not actually read him beyond soundbites, merely know his reputation, so I can't list all the arguments he makes, but one of them, I know, is the problem of evil. The vast quantity of suffering in the world is absolutely what you would expect if there is no benevolent deity overseeing the show,... (read more)

I don't see how so.

I can imagine lots of ways in which the world would be different if a superpowerful superbeing was around with the ability and will to shape reality for whatever purpose -- but when I imagine the superbeing's absence it looks like the world around us.

When I try to ask the theists what the world would have looked like without God, I don't get very convincing answers.

Isn't this just the anthropic principle in action? Mathematically speaking, the probability of "123456" is exactly the same as that of "632415" or any other sequence. We humans only think that "123456" is special because we especially enjoy monotonically increasing numbers.

2CCC
I'm not sure. The anthropic principle is arguing from the existence of an intelligent observer; I'm arguing from the existence of an orderly universe. I don't think that the existence of an orderly universe is necessarily highly correlated with the existence of an intelligent observer. Unfortunately, lacking a large number of universes to compare with each other, I have no proof of that. Yes. I do not claim that the existence of an orderly universe is undeniable proof of the existence of God; I simply claim that it is evidence which suggests that the universe is planned, and therefore that there is (or was) a Planner. Consider the lottery example; there are a vast number of sequences that could be generated. Such as (35, 3, 19, 45, 15, 8). All are equally probable, in a fair lottery. However, in a biased, unfair lottery, in which the result is predetermined by an intelligent agent, the sort of patterns that might appeal to an intelligent agent (e.g. 1, 2, 3, 4, 5, 6) are more likely to turn up. So P(bias|(1, 2, 3, 4, 5, 6)) > P(bias|(35, 3, 19, 45, 15, 8)).
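CCC's inequality can be made concrete with a toy Bayesian update. (All the numbers here — a 6-of-49 draw, a 1% prior on bias, a rigged lottery that picks a "pretty" sequential pattern 10% of the time — are illustrative assumptions, not figures from the thread.)

```python
# Toy Bayes' rule calculation: does drawing (1,2,3,4,5,6) favor a rigged lottery?
from math import comb

N_OUTCOMES = comb(49, 6)        # equally likely draws in a fair 6-of-49 lottery
P_BIAS = 0.01                   # assumed prior probability the lottery is rigged
P_PATTERN_GIVEN_BIAS = 0.10     # assumed chance a rigged lottery picks the "pretty" draw

def posterior_bias(drew_pattern: bool) -> float:
    """P(bias | observed draw) via Bayes' rule."""
    p_draw_fair = 1 / N_OUTCOMES  # any specific draw is equally likely if fair
    if drew_pattern:
        p_draw_biased = P_PATTERN_GIVEN_BIAS
    else:
        # remaining biased-lottery probability spread over the other draws
        p_draw_biased = (1 - P_PATTERN_GIVEN_BIAS) / (N_OUTCOMES - 1)
    numerator = p_draw_biased * P_BIAS
    return numerator / (numerator + p_draw_fair * (1 - P_BIAS))

# The "pretty" draw (1,2,3,4,5,6) pushes the posterior to near-certainty of bias,
# while an arbitrary draw like (35,3,19,45,15,8) is weak evidence against it.
print(posterior_bias(True))
print(posterior_bias(False))
```

Both draws are equally probable under the fair hypothesis; the asymmetry comes entirely from the biased hypothesis concentrating probability on the pattern, which is exactly the point being argued.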
4drnickbone
This depends on the direction of correlation doesn't it? It could well be that P[Observer|Orderly universe] is low (plenty of types of order are uninhabitable) but that P[Orderly universe|Observer] is high since P[Observer|Disorderly universe] is very much lower than P[Observer|Orderly universe]. So, for example, if reality consists of a mixture of orderly and disorderly universes, then we (as observers) would expect to find ourselves in one of the "orderly" ones, and the fact that we do isn't much evidence for anything. Another thought is whether there are any universes with no order at all? You are likely imagining a "random" universe with all sorts of unpredictable events, but then are the parts of the universe dependent or independent random variables? If they are dependent, then those dependencies are a form of order. If they are independent, then the universe will satisfy statistical laws (large number laws for instance), so this is also a form of order. Very difficult to imagine a universe with no order.
2CCC
Yes, it could be. And if this is true, then my line of argument here falls apart entirely. Huh. A very good point. I was thinking in terms of randomised natural laws - natural laws, in short, that appear to make very little sense - but you raise a good point. Hmmm... one example of a randomised universe might be one wherein any matter can accelerate in any direction at any time for absolutely no reason, and most matter does so on a fairly regular basis (mean, once a day, standard deviation six months). If the force of the acceleration is low enough (say, one metre per second squared on average, expended for an average of ten seconds), and all the other laws of nature are similar to our universe (so still a mostly orderly universe) then I can easily imagine intelligence arising in such a universe as well.
1drnickbone
Well let's take that example, since the amount of "random acceleration" can be parameterised. If the parameter is very low, then we're never going to observe it (so perhaps our universe actually is like this, but we haven't detected it yet!) If the parameter is very large, then planets (or even stars and galaxies) will get ripped apart long before observers can evolve. So it seems such a parameter needs to be "tuned" into a relatively narrow range (looking at orders of magnitude here) to get a universe which is still habitable but interestingly-different from the one we see. But then if there were such an interesting parameter, presumably the careful "tuning" would be noticed, and used by theists as the basis of a design argument! But it can't be the case that both the presence of this random-acceleration phenomenon and its absence are evidence of design, so something has gone wrong here. If you want a real-word example, think about radioactivity: atoms randomly falling apart for no apparent reason looks awfully like objects suddenly accelerating in random directions for no reason: it's just the scale that's very different. Further, if you imagine increasing the strength of the weak nuclear force, you'll discover that life as we know it becomes impossible... whereas, as far as I know, if there were no weak force at all, life would still be perfectly possible (stars would still shine, because that's the strong force, chemical reactions would still work, gravity would still exist and so on). Maybe the Earth would cool down faster, or something along those lines, but it doesn't seem a major barrier to life. However, the fact that the weak force is "just in the right range" has indeed been used as a "fine-tuning" argument! Dark energy (or a "cosmological constant") is another great example, perhaps even closer to what you describe. There is this mysterious unknown force making all galaxies accelerate away from each other, when gravity should be slowing them down. If
1Bugmaster
Wait, isn't the Planner basically God, or at least some kind of a god? That would be an interesting test to run, actually, regardless of theism or lack thereof: are sequential numbers more likely (or perhaps less likely) than chance in our current American lottery? If so, it would be pretty decent evidence that the lottery is rigged (not surprising, since it was in fact designed by intelligent agents, namely us humans). That depends on the value of P(Agent prefers sequential numbers|Agent is intelligent). In any case, are sequential numbers more likely to turn up in sequences that are not directly controlled by humans, f.ex. rolls of reasonably fair dice?

Hello everyone.

I go by bouilhet. I don't typically spend much time on the Internet, much less in the interactive blogosphere, and I don't know how joining LessWrong will fit into the schedule of my life, but here goes. I'm interested from a philosophical perspective in many of the problems discussed on LW - AI/futurism, rationalism, epistemology, probability, bias - and after reading through a fair share of the material here I thought it was time to engage. I don't exactly consider myself a rationalist (though perhaps I am one), but I spend a great de... (read more)

Easily communicated in a "ceteris paribus, having communicated my evidence across teh internets, if you had the same priors I do, just by you reading my description of the evidence you'd update similarly as I did when perceiving the evidence first hand", yea that would be a tall order.

Unfortunately, I've seen people around here throw Aumann's agreement theorem in the face of people who refuse to provide it. Come to think of it, I don't believe I've ever seen Aumann's agreement theorem used for any other purpose around here.

The comment above from EY is over-broad in calling this an "atheist forum", but I think it still has a good point:

It's logically rude to go to a place where the vast majority of people believe X=34, and you say "No, actually X=87, but I won't accept any discussion on the matter." To act that way is to treat disagreement like a shameful thing, best not brought up in polite company, and that's as clear an example of logical rudeness as I can think of.

An argument can be "decent" without being right. If you want an example, and can follow it, Kurt Gödel's ontological argument looks pretty decent. Consider that:

A) It is a logically valid argument

B) The premises sound fairly plausible (we can on the face of it imagine some sense of a "positive property" which would satisfy the premises)

C) It is not immediately obvious what is wrong with the premises

The wrongness can eventually be seen by carefully inspecting the premises, and checking which would go wrong in a null world (a possible worl... (read more)

Oh, and another thing:

The optimal situation is that both sides have strong arguments, but atheism's arguments are stronger.

What do you mean, "optimal"? Look, for any question where there is, in principle, a correct answer (which might not be known), the totality of the information available to us at any given time will point to some answer (which might not be the correct one, given incomplete information). Arguments for that answer might be correct. Arguments for some other answer will be wrong.

Why would we expect there to be good arguments f... (read more)

The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Sure. But any answer to that metaethical question which allows us to class some bases for comparison as moral values and others as merely values implicitly privileges a moral reference frame (or, rather, a set of such frames).

Beyond that, I don't see where you are going.

Where I was going is that you asked me a question here which I didn't understand clearly enough to be confident that my answer to it would share key assumptions with the question you mean... (read more)

emergent

The Futility of Emergence

By hypothesis, clippers have certain functionalities walled off from update.

A paperclipper no more has a wall stopping it from updating into morality than my laptop has a wall stopping it from talking to me. My laptop doesn't talk to me because I didn't program it to. You do not update into pushing pebbles into prime-numbered heaps because you're not programmed to do so.

Does a stone roll uphill on a whim?

Perhaps you should study Reductionism first.

Biases are not determined by vote.

Hi! I'm Free_NRG. I've just started a physical chemistry PhD. I found this site through a link from Leah Libresco early last year (I can't remember exactly how I found her blog). I read through the sequences as one of the distractions from too much 4th year chemistry, and particularly liked the probability theory and evolutionary theory sequences. This year, I'm trying to apply some of the productivity porn I've been reading to my life. I'm thinking of blogging about it.

Well, there's the more obvious sense, that there can always exist an "irrational" mind that simply refuses to believe in gravity, regardless of the strength of the evidence. "Gravity makes things fall" is true, because it does indeed make things fall. But not compelling to those types of minds.

But, in a narrower sense, which we are more interested in when doing metaethics, a sentence of the form "action A is xyzzy" may be a true classification of A, and may be trivial to show, once "xyzzy" is defined. But an agent... (read more)

2TimS
But isn't the whole debate about moral realism vs. anti-realism whether "Don't murder" is universally compelling to humans? Noticing that pebblesorters aren't compelled by our values doesn't explain whether humans should necessarily find "don't murder" compelling.
3pragmatist
I identify as a moral realist, but I don't believe all moral facts are universally compelling to humans, at least not if "universally compelling" is meant descriptively rather than normatively. I don't take moral realism to be a psychological thesis about what particular types of intelligences actually find compelling; I take it to be the claim that there are moral obligations and that certain types of agents should adhere to them (all other things being equal), irrespective of their particular desire sets and whether or not they feel any psychological pressure to adhere to these obligations. This is a normative claim, not a descriptive one.
3nshepperd
1. What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that's it. 2. When I said universally compelling, I meant universally. To all agents, not just humans. Or any large class. For any true statement, you can probably expect to find a surprisingly large number of agents who just don't care about it. 3. Whether "don't murder" (or rather, "murder is bad" since commands don't have truth values, and are even less likely to be generally compelling) is compelling to all humans is a question for psychology. As it happens, given the existence of serial killers and sociopaths, probably the answer is no, it isn't. Though I would hope it to be compelling to most. 4. I have shown you two true but non-universally-compelling arguments. Surely the difference must be clear now.
5pragmatist
This is incorrect, in my experience. Although "moral realism" is a notoriously slippery phrase and gets used in many subtly different ways, I think most philosophers engaged in the moral realism vs. anti-realism debate aren't merely debating whether moral statements have truth values. The position you're describing is usually labeled "moral cognitivism". Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values ("false" is a truth value, after all). But I don't think that modification captures the tenor of the debate either. Moral realists are usually defending a whole suite of theses -- not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.
2Bugmaster
I think you guys should taboo "moral realism". I understand that it's important to get the terminology right, but IMO debates about nothing but terminology have little value.
2nshepperd
Err, right, yes, that's what I meant. Error theorists do of course also claim that moral statements have truth values. True enough, though I guess I'd prefer to talk about a single well-specified claim than a "usually" cluster in philosopher-space.

I would say so also, but PrawnOfFate has already argued that sociopaths are subject to additional egocentric bias relative to normal people and thereby less rational. It seems to me that he's implicitly judging rationality by how well it leads to a particular body of ethics he already accepts, rather than how well it optimizes for potentially arbitrary values.

8Nornagest
Well, I'm not a psychologist, but if someone asked me to name a pathology marked by unusual egocentric bias I'd point to NPD, not sociopathy. That brings up some interesting questions concerning how we define rationality, though. Pathologies in psychology are defined in terms of interference with daily life, and the personality disorder spectrum in particular usually implies problems interacting with people or societies. That could imply either irreconcilable values or specific flaws in reasoning, but only the latter is irrational in the sense we usually use around here. Unfortunately, people are cognitively messy enough that the two are pretty hard to distinguish, particularly since so many human goals involve interaction with other people. In any case, this might be a good time to taboo "rational".

Are you aware that that is basically what every crank says about some other field?

Presumably, if I'm to treat as meaningful evidence about Desrtopa's crankiness the fact that cranks make statements similar to Desrtopa, I should first confirm that non-cranks don't make similar statements.

It seems likely to me that for every person P, there exists some field F such that P believes many aspects of F exist only because of incompetent "experts" perpetuating them. (Consider cases like F=astrology, F=phrenology, F=supply-side economics, F= feminism,... (read more)

I have no idea what you mean by that. I don't think value systems don't come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from "the ground up", whether it's morality or mortgages.

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing. We have standards by which we judge beauty, and we project those values ... (read more)

You are trying to impose your morality.

In what respect?

I can think of one model of moral realism, and it doesn't work, so I will ditch the whole thing.

This certainly doesn't describe my reasoning on the matter, and I doubt it describes many others' here either.

The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypo... (read more)

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Can you expand on what you mean by "absurd" here?

much-repeated confusions--the Standard Muddle

Can you explain what these confusions are, and why they're confused?

In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this. This is one of the primary reasons I bothered sticking around in the community.

If people can't agree on how a question is closed, it's open.

A question can still be "open" in that sense when all the information necessary for a rational person to make a definite judgment is available.

Messy solutions are more common in mindspace than contrived ones.

Messy solutions are more often wrong than ones which control for the mess.

"Non-negligible probability", remember.

This doesn't even address my question.

As far as I can tell? No. But you're not doing a great job of arguing for the position that I agree with.

Prawn is, in my opinion, flatly wrong, and I'll be delighted to explain that to him. I'm just not giving your soldiers a free pass just because I support the war, if you follow.

Jesus goes so far as to discourage both humans and demons from telling people about his Messiahship; demons tended to be pretty quick to start yelling about how he was the messiah/could torment them /etc. Legion is the most memorable case, but I seem to remember an incident from earlier on in Jesus' life when he had to silence a demon that was revealing his identity (maybe it was in Luke?).

[-]CCC40

Crocker's Rules are not an excuse for you to be rude to others. They are an invitation for others to ignore politeness when talking to you. They are not an invitation for others to be rude to you for the sake of rudeness, either; only where it enables some other aim, such as efficient transfer of information.

What you did, when viewed from the outside, is a clear example of rudeness for the sake of rudeness alone. I don't see how Crocker's rules are relevant.

I would have expected most aspiring rationalists who happen to be theists to be mildly irritated by the anti-theism bits

Well, I don't strongly identify as a theist, so it's hard for me to have an opinion here.

That said, if I imagine myself reading a variant version of the sequences (and LW discourse more generally) which are anti-some-group-I-identify-with in the same ways.... for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, ... (read more)

Sure.


Agnosticism = believing we can't know if God exists

Atheism = believing God does not exist

Theism = believing God exists


turtles-all-the-way-down-ism = believing we can't know what reality is (can't reach the bottom turtle)

instrumentalism/anti-realism = believing reality does not exist

realism = believing reality exists


Thus anti-realism and realism map to atheism and theism, but agnosticism doesn't map to infinite-turtle-ism because it says we can't know if God exists, not what God is.

1Shmi
Or believing that it's not a meaningful or interesting question to ask. That's quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.

I don't mind if it's turtles all the way down.

The claim that reality may be ultimately unknowable or non-algorithmic is different to the claim you have made elsewhere, that there is no reality.

2TheOtherDave
I'm not sure it's as different as all that from shminux's perspective. By way of analogy, I know a lot of people who reject the linguistic habit of treating "atheism" as referring to a positive belief in the absence of a deity, and "agnosticism" as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.) If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it's specifically an instrumentalist position, but we're not currently concerned with choosing among different non-realist positions.) All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.
2PrawnOfFate
I can't see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable-reality always seems like an error to me, but maybe someone has an impressive defense of it.
2TheOtherDave
Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean. All I'll say about that is if the words aren't being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it's best not to use those words. Leaving language aside, I accept that the difference between "there is no reality" and "whether there is a reality is systematically unknowable" is an important difference to you, and I agree that deriving the former from the latter is tricky. I'm pretty sure it's not an important difference to shminux. It certainly isn't an important difference to me... I can't imagine why I would ever care about which of those two statements is true if at least one of them is.

(nods) That answers my question. Thank you.

So where did you address it?

It means that EY's musings about the Eborians splitting into worlds of various thicknesses according to Born probabilities no longer make any sense.

coughmeasurecough

I just meant you could use this knowledge to help avoid this ahead of time.

I understand. I'm suggesting it in that context.

That is, I'm asserting now that "if I find myself in a conversation where such terms are being used and I have reason to believe the participants might not share implicit arguments, make the arguments explicit" is a good rule to follow in my next conversation.

[-]TimS40

I think you are conflating two related, but distinct questions. Physical realism faces challenges from:

(1) the sociological analysis represented by works like Structure of Scientific Revolution

(2) the ontological status of objects that, in principle, could never be observed (directly or indirectly)

I took shminux as trying to duck the first debate (by adopting physical pragmatism), but I think most answers to the first question do not necessarily imply particular answers to the second question.

[-]tgb40

Yup, but it's not super elegant! There's some info here.

Also, AnkiWeb.net works for me - but you need to use https:// for Anki 2 and http:// for Anki 1.

If you both tap out, then anyone who steps into the discussion wins by default!

In many such cases it may be better to say that if both tap out then everybody wins by default!

2Randy_M
-3 karma, apparently.
1TheOtherDave
In discussions where everyone tapping out is superior to the available alternatives, I'm more inclined to refer to the result as "minimizing loss" than "winning".

I affirm wedrifid's instruction to change your posting style or leave LW.

The way I understand it, it's not that “new” worlds are created that didn't previously exist (the total “thickness” (measure) stays constant). It's that two worlds that looked the same ten seconds ago look different now.

1Shmi
That's a common misconception. In the simplest case of Schrödinger's cat, there are not just two worlds, one with the cat dead and one with it alive. When you open the box, you could find the cat in various stages of decomposition, which gives you uncountably many worlds right there. In a slightly more complicated version, where energy and the direction of the decay products are also measurable (and hence each possible value is measured in at least one world), your infinities keep piling up every which way, all equally probable or nearly so.
5A1987dM
(By “two” I didn't mean to imply ‘the only two’.)
1Shmi
Which two out of the continuum of worlds did you imply, then, and how did you select them? I don't see any way to select two specific worlds for which "relative thickness" would make sense. You can classify the worlds into "dead/not dead at a certain instance of time" groups whose measures you can then compare, of course. But how would you justify this aggregation with the statement that the worlds, once split, no longer interact? What mysterious process makes this aggregation meaningful? Even if you flinch away from this question, how do you select the time of the measurement? This time is slightly different in different worlds, even if it is predetermined "classically", so there is no clear "splitting begins now" moment. It gets progressively worse and more hopeless as you dig deeper. How does this splitting propagate in spacetime? How do two spacelike-separated splits merge in just the right way to preserve only the spin-conserving worlds of the EPR experiment and not all possibilities? How do you account for the difference in the proper time between different worlds? Do different worlds share the same spacetime, and for how long? Does it mean that they still interact gravitationally (spacetime curvature = gravity)? What happens if the spacetime topology of some of the worlds changes, for example by collapsing a neutron star into a black hole? I can imagine that these questions can potentially be answered, but the naive MWI advocated by Eliezer does not deal with any of this.
[This comment is no longer endorsed by its author]Reply
0Stefan_Schubert
Hi Elias, nice to see that you've found your way here. What are your academic interests? Philosophy, it seems, but what kind? And what else are you interested in?
0[anonymous]

Hi, My name is Zoltan Istvan. I'm a transhumanist, futurist, journalist, and the author of the philosophical novel "The Transhumanist Wager." I've been checking out this site for some time, but decided to create an account today to become closer to the community. I thought I'd start by posting an essay I recently wrote, which sums up some of my ideas. Feel free to share it if you like, and I hope you find it moving. Cheers.

"When Does Hindering Life Extension Science Become a Crime—or even Genocide?"

Every human being has both a minimum a... (read more)

4James_Miller
A tiny minority group such as transhumanists should not make threats against the powers that be.
2thirdfloornorth
He's making it himself, not as a spokesperson of the movement. However, as a transhumanist myself, I can't say I disagree with him. Morally speaking, when does not only actively hindering, but choosing to not vehemently pursue, life extension research constitute a threat on our lives? Maybe it is time (or if not, it will be very soon) for transhumanism and transhumanists to enter the public sphere, to become more visible and vocal. We have the capacity, for the first time in human history, to potentially end death, and not for our progeny but for ourselves, now. Yet we are disorganized, spread thin, essentially invisible in terms of public consciousness. People are having freakouts about something as mundane as Google Glass: we are talking about the cyberization or gross genetic manipulation of our bodies, increasing life spans to quickly approach "indefinite", etc., and not in some distant future, but in the next twenty or thirty years. We are being held back by lack of funding, poor cohesion, and a general failure of imagination, and that is largely our own fault for being content to be quiet, to remain a fringe element, optimistically debating and self-congratulating in nooks and niches of various online communities, bothering and being bothered by few if any. I believe it is our moral imperative, now that it is possible, to pursue life extension with every cent and scrap of resources we have available to us. To do otherwise is reprehensible. http://www.nickbostrom.com/fable/dragon.html Let Mr. Istvan make his threats, as long as it gets people talking about us.
James_Miller
This means adopting a consequentialist public relations strategy. Imagine that group X advocates Y, you know little about X, and on superficial analysis Y seems somewhat silly. How would your opinion of group X change if you found that members of this group want to "prosecute anyone" who stands in the way of Y?
zoltanistvan
Hi, thanks for the response. I should be clear: transhumanists are not making the threat. I'm making it myself. And I'm doing it as publicly and openly as possible, so there can be no misunderstanding:

http://www.psychologytoday.com/blog/the-transhumanist-philosopher/201401/when-does-hindering-life-extension-science-become-crime

http://ieet.org/index.php/IEET/more/istvan20140131

The problem is that lives are on the line, so I feel someone needs to openly state what seems quite obvious. Thanks for considering my thoughts.
Richard_Kennaway
Do you apply this stirring declaration to the beginning of a life as well as to the end of one?
zoltanistvan
First, let me just say that the essay is designed to provoke and challenge, while also aiming to move the idea forward in the hope that life extension can be taken more seriously. I realize the incredible difficulties and violations of freedom that the ideas in the essay would require. But to answer your question: I tend to concentrate on "useful" lives, so the declaration would not apply to the beginning of a life, but rather to lives that are already well under way.
[anonymous]

Hi, I arrived here through HPMoR at least a year ago, but I was pretty intimidated by the size of the Sequences; I'm trying to catch up now. I'm a medical student from Hungary, and I've never learned maths beyond the high school requirements (I do intend to remedy this, since it seems like a requirement here?).

I'm here to learn how to effectively change my mind and to have intelligent discussions. I probably won't be active until later, as I don't think I could yet present my reasoning in a sufficiently convincing way, and I already see a few points wh...

[This comment is no longer endorsed by its author]