If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.


A few notes about the site mechanics

To post your first comment, you must first confirm your e-mail address: when you signed up to create your account, an e-mail was sent to the address you provided containing a link you need to follow. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
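For instance, the handful of Markdown constructs you'll use most often in comments look like this (a minimal sketch of standard Markdown; the "Help" primer documents the site's exact flavor):

```markdown
*italics* and **bold**
[link text](http://example.com)
> a quoted line from the comment you're replying to
```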

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which are the sum of the votes on all their comments and posts. This immediate, easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's a matter of unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, unsure whether to submit it at all, or would like some feedback or a read on interest first, you can ask, provide your draft, or summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! There is also the unofficial Less Wrong IRC chat room, and you might like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!


Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)

If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.

Finally, a big thank you to everyone who helped write this post via its predecessors!



I'm Nate. I'm 23. My road here was a winding one.

I grew up as one of those "mathematically gifted" kids in a tiny rural town. I turned away from mathematics towards computer science (which I loved) and economics (which I decided I needed to understand if I wanted to save the world). I went on to become a software engineer at Google.

At the intersection of computer science and economics I developed a strong belief that the world is broken, and that we could do far better if we redesigned social structures from scratch, now that we have so much more knowledge & technology than we did when we created these antiquated governments. I despaired that most people think progress entails playing political tug-of-war instead of building a better system. I spent a long time refining my ideas.

In the interim I missed a number of opportunities to discover this site. In 2008 I stumbled across the Quantum Physics sequence on Overcoming Bias. I read up to where it was still being written, then moved on. In 2010, I found HPMoR. I read it, noticed the links to this site, and poked around a little. Nothing came of it. I caught up to where HPMoR was being written, then put it out of my mind. I...

... We need to talk more.
Let's. I'm on the east coast until Aug 11. Perhaps we can meet up after work on the week of the 12th. (Context for others: The two of us met briefly at a meetup in June and exchanged usernames, but haven't spoken much.)
Do you have a recommendation for how to pronounce 'So8res'?
There's no canonical pronunciation; I enjoy the ambiguity. My surname (Soares) is pronounced "SOAR-ees" by my family, if that helps any.
So like how a Canadian would pronounce multiple apologies? I like it.

Hello. My name is Alex. I am the 10-year-old son of LessWrong user James_Miller.

I am very good at math for my age. I have read several of the books on rationality that my dad owns, and he convinced me to join this community. I like the idea of everyone in a community being honest, because I often get into trouble at school for saying honest things that people don't like and for talking back to adults (which seems like it's defined as not doing exactly what you're told).

My favorite subject in school is math. At home, my interests are playing the video game Minecraft and doing origami, but I also like to read and play soccer.

I have much to learn in the art of rationality, such as finding more ways to be in flow. My dad tells me that there are a lot of people on this site who were like me as children, and I would love advice on being less bored in school, controlling my emotions, and finding ways to improve myself in general.

My name is Avi, and I'm 19.

I was similar in some ways to you when I was a kid, in particular being good at math (I did calculus and programming at 12-13), getting in trouble, being bored in school, reading a lot, and having trouble with emotions.

I didn't have an explicitly rational upbringing, and only recently (9 months or so ago) got into rationality after a chance encounter with HPMOR.

I'll try to give advice on the things you asked. Bear in mind that I didn't actually try any of this when I was in school, it's mostly what I would advise my younger self if I had to do it over.

So, you mention being bored in school. There are at least three possible scenarios for that, which should be solved differently:

  1. You have trouble concentrating or generating the will to concentrate on material that you don't know, but think is important.
  2. You think the material being taught is unimportant and therefore don't care about paying attention.
  3. You already know all or some of the material that is being taught.

I don't really have anything for 1 aside from the standard "force yourself to pay attention", maybe others can help.

For 3, you could consider asking (or having your parents ask) to be skipped a cla...

Thanks! I'm in the 3rd scenario, and I joined that Brilliant website. It seems to be helpful so far. I do have to participate in classes where I know everything, so what I'll end up doing most of the time is having my dad send me to school with special math worksheets at my level that I can do during math class. I already have some Martin Gardner books, and will be ordering more, as you are not the only person who recommended him.

Hey Alex!

When I think back to when I was your age, I really wish I had gotten more involved in math competitions. Does your school have any programs like MATHCOUNTS, AMC8, etc.? I didn't compete in any academic competitions until high school, and I really wish I had known about them earlier on. They make getting ahead in math so much fun, and they help lay some really important foundations for the more complicated stuff.

Anyway, keep up the good work!

Also anything by Martin Gardner [https://en.wikipedia.org/wiki/Martin_Gardner_bibliography], because his books are so much fun and help to spark your imagination. At a young age one of the most important things to develop is a habit of perseverance: not giving up when trying to solve a problem, and avoiding learned blankness [http://lesswrong.com/lw/5a9/learned_blankness/]. You should develop an unfaltering confidence in using your own head to solve problems.

Sharpening mental capabilities and developing good mental habits and attitudes seem more important than learning more things (for example, Richard Rusczyk, the author of many AoPS books, thinks that it is better for kids to sharpen their minds solving olympiad problems than to learn calculus [http://www.artofproblemsolving.com/Resources/articles.php?page=calculustrap]), although the desire to learn more and build your own understanding is also important. And the problems need not be mathematical in nature. For example, if you read Richard Feynman's "Surely You're Joking, Mr. Feynman!" [https://en.wikipedia.org/wiki/Surely_You%27re_Joking,_Mr._Feynman!], you'll notice that as a young boy he loved to fix things, and everybody brought their broken radios to him. He would fix them, seeing each one as a challenge, a problem to solve, no matter how non-obvious the fault was. I think this sharpened his mind and instilled a good habit of seeing interesting problems everywhere.

If you have to think for yourself, you lessen the risk of developing learned blankness. Try to think for yourself, even if it takes much more time than simply finding a solution on the internet. In the long run, developing good mental habits is probably the most important thing.
Also check out the Art of Problem Solving [http://www.artofproblemsolving.com/] books. They've also got some interesting resources on their website.
Also Journey through Genius by William Dunham and The Art and Craft of Problem Solving by Paul Zeitz.
I'm 29 now, but I was a lot like you at age 10. I think you'll like it here - you might find some material too advanced, but then I still do sometimes, so don't be too worried. You'll pick it up as you go along. I could tell you stories of what I was doing at your age, but frankly I don't think they'd help much (since I did a lot of things wrong myself). The one piece of advice I'll give you that I think might actually help is this essay: http://www.paulgraham.com/nerds.html - more than anything else, it's what I wish I'd been able to read when I was your age. It does get better, and more quickly than you might expect. Also, to a lesser extent, the ever-interesting Yvain posted this bit on his blog, which might help explain why what teachers do bugs you so much:
My elementary school (I'm 28 by the way, so this is some two decades ago) actually had a program for students like that; one day a week, you would be pulled out of normal class for an alternative class where the material was taught through projects and discussions, logic was explicitly both encouraged in thinking and taught as a skill, and there was basically no rote memorization. We learned games like chess and Magic: the Gathering (I had no idea how huge that game would go on to become; I wonder if the teacher still has those first-edition decks?) during our breaks from "actual" instruction, and there were basically no tests. It was a ton of fun, but I only stayed in it for one year; the other four days a week were still boring me out of my skull.

After the year in that pull-out program, I transferred to another school that had a fully accelerated / "gifted" curriculum. That was less boring - the material and pacing were both better, though I was still the top math student in the class and frequently bored waiting for others to catch up - but I missed the one-day-a-week program from the old school.

As for what I did during the mind-numbing classes, I read. Fiction mostly, but some non-fiction - I really loved "The Way Things Work" books when I was about Alex's age - and I usually tried to make it not-entirely-obvious what I was doing. The teachers knew, of course, but as long as I didn't flaunt it and kept my scores up, they generally didn't care. I was bad at the participation / stupid games stuff in those classes, but I learned to read stuff way "above my level" and got far more benefit out of it than I would have from listening to the teacher drone on about how to do long division or whatever.
My school board did something similar - I did the full-time gifted class, my brother did the one day a week. I also got accelerated to a rather extreme degree - I skipped 3 grades, and started high school at age 10. It was a mixed blessing, frankly - it got me past the "kids are pure evil" years, and turned me from the obnoxiously nerdy kid into a curiosity, which got me picked on a lot less. The material didn't get much more interesting - once you catch up, it's taught at the same pace. And on the downside, it made me a lot more awkward in my high school years than I probably would have been otherwise, because the age gap meant that the usual diversions of dating and drinking didn't open up for me until years after they had for everyone else (and when everyone else is years more experienced than you, self-consciousness sets in with dating, and slows you down even further - I didn't even ask a girl out until I was about 18-19).
Hi, Alex! I pretend to be named Ilzolende, and I'm 16, which puts me closer to you in age than the majority of commenters here. I'd suggest learning about common cognitive biases for general self-improvement.

In terms of academic boredom, it may help to find a secondary activity you can perform that doesn't interfere with your ability to absorb spoken information. Small, quiet things to play with in your hands without looking, like Silly Putty, are useful options.

This doesn't always help, but trying to figure out why you feel a certain way can dampen some emotions. When I'm really angry at someone, but I don't want to be, sometimes telling myself "my body is having an anger reaction, but that doesn't mean I have to be upset at that person" is useful, as is directing feelings of aggression at an inanimate object. (Don't actually attack the object, just replace any images you have of hurting someone with images of hitting, for example, a drum set.) If you realize that you have no good reason you can think of for having an emotion, you may want to treat it as a physical problem. If I'm sad, but not due to actual external phenomena, then sometimes just reading something nice for half an hour works.

I don't know how well this generalizes, and there may be some costs to playing with Silly Putty in class, so take this with a grain of salt.

Hi, I'm Amanda. I'm interning at MIRI right now. I found HP:MoR 3 years ago, and started reading the Sequences shortly after. After 2 years of high school, I dropped out, and started at the University of Kansas. Reading the Sequences probably contributed a lot to this; I was tired of feeling like I wasn't doing anything important. Likewise, after a year at a state school, and now experiencing 5 weeks in the Bay Area, I'm motivated to get out of Kansas and back here.

I'm studying computer science, and I just finished my freshman year. I also do computer science research during the year. My advisor had me work with genetic algorithms, which, looking back now, was mainly to get me programming. My only experience was one high school class, which was predictably bad.

Anyway, I programmed a web project, and realized that I actually enjoy programming! My parents are both software engineers, so I had initially seen it as a boring 9-5 cubicle job. Later, I viewed it as a tool, useful enough to devote my studies to, but not particularly enjoyable. After working on the web app, I remember thinking, "Why didn't anyone tell me how cool coding could be?"

I decided to intern at MIRI to hel...

Welcome to LessWrong! Sounds like you'll have some interesting things to share. Glad to have you.
It's not like your username sounds obviously feminine either, so how confident are you about whether a given user (except the obvious ones, say lukeprog or NancyLebovitz) is male or female? But yes, according to the last survey [http://lesswrong.com/lw/fp5/2012_survey_results/], only around 10% of the people here are women, and even fewer among the most prolific contributors [http://lesswrong.com/lw/fp5/2012_survey_results/7xhg].
I don't think LWers collaborate to write the survey (correct me if I'm wrong, though)...please don't generalize the decisions of a small group to the entire community. Edit: Oh, sorry, didn't realize you were the OP. lol. So you wouldn't know...and I'm not sure either.
Well, given that LW is/was* predominantly appealing to STEM-types, with a focus on computer science-y topics (artificial intelligence), decision theory etc., it's no wonder that the gender gap here reflects the gender gap in e.g. computer science colleges: Edit: * "was" because Harry Potter!
Welcome! Have you tried out Vibrams [http://vibram.com/]? I have found them to be a delightful shoe replacement.

That feeling will fade as you read and do more. I do want to call back to something you said earlier, though: this is where you want to end up; it's one thing to talk a good game about biases, and another to understand them on the five second level [http://lesswrong.com/lw/5kz/the_5second_level/]. While reading through the sequences, it's helpful to try to turn the epiphanies into actions or reactions, rather than just abstract knowledge.

If you are interested in putting your programming skills to work on rationality education, you might want to get to know some people at CFAR [http://rationality.org/]; there are a number of useful things that could exist but don't yet because no one has programmed them. (Here's an example of one of the useful things [http://www.bentspoongames.com/calibration.html] that does exist.)
Sort of. The main thing is identifying a situation that will trigger a behavior. For example, whenever I notice I'm the least bit confused, I say out loud "I notice I am confused." This is an atomic action that I can do out of habit, and which makes me much more likely to follow up on the confusion. Oftentimes this will be something like saying "the event is on Saturday the 25th," and then noticing that Saturday isn't the 25th. That is something I really ought to get to the bottom of, because thinking the event is on the wrong day will lead to missing the event, which is totally preventable at this point if I notice my confusion.

Most people have defaults against noticing this sort of thing, though (I know I definitely did, even knowing a lot of decision science and about biases). Having a specific plan of action makes it way easier to react the right way in the moment, and having a workaround for one bias is better than knowing about twenty biases.

This is a better approach, I think, but I'm leery of recommending it because enough people have trouble reading through the sequences once that suggesting reading them twice seems like asking too much.
Said Achmiz:
I know this isn't true for everyone, but for me, Eliezer's writing is really fun to read; I've reread many of his posts just on that basis. The Sequences do have some dense parts, but for most parts, I couldn't tear myself away.
I applaud your pragmatic response to ridiculous social pressure.
I also prefer bare feet, though to a lesser extent. I hate wearing just socks, but I don't mind wearing worn tennis shoes that bend easily.
Said Achmiz:
Welcome to Less Wrong! I don't have much else to say, except that several of your "traits that normal people find weird" are ones I share: I've been approaching that view myself, more and more, but I don't think I've seen this talked about much here (not directly, anyway; a lot of the "Dark Arts" / manipulation discussions are applicable, though). I think it would be cool if you wrote a post or two about your thoughts on this issue. (And/or linked to any related blog posts you might have, if you're willing.) Agreed. Also agreed. This view, I think many people here share. Yes, my family has a similar reaction to the idea of not voting.
Click me! [http://examinedthought.com/avoid-ads/]
Welcome to LW. :)
Note: the post talks about priming research. I made the following comment there: In general, a lot of research on priming is statistically dubious. There are a few robust findings, but there's also a lot of stuff that doesn't hold up under closer examination.
Said Achmiz:
Thanks! Hm, well, it seems that I agree with the recommendations in the post; I use AdBlock (and get rather angry when certain websites try to guilt-trip me about doing so), and I don't watch commercials on TV (by not watching shows on TV at all). (Here's a question: does anyone know of a way to get rid of ads in YouTube videos?) Of course, living in a city, it's difficult to avoid advertisements entirely. Billboards are all over the place. What I'd like to see are discussions about the ethics of advertisement — that is, is it unethical for companies to use these techniques? (And if so, what forms of advertisement are ok?) Is it unethical to advertise at all? My intuitions say "yes" to the former and "no" to the latter, but I haven't examined said intuitions very deeply.
Said Achmiz:
Aha — it seems the extension you suggested is Adblock Plus (lowercase b), whereas I had been using an unrelated one called AdBlock (capital B, no "Plus"). I've now switched and the YouTube ads seem to be gone!
I'm sure many do; I agree with both statements. But I would caution against caching, or worse, identifying with, the belief that voting in general is pointless or otherwise not to be done.

As to my agreement with the beliefs stated: political identification is certainly a mind-killer, so it's a good idea not to identify internally as a member of a political party. Also, the existing major parties, and their leaders, are inevitably badly flawed, but using your single plurality vote (the only one you get in most English-speaking countries) to support a third party candidate isn't going to accomplish anything.

But I'd still encourage people to vote. I have an ulterior motive for saying this. Personally, I feel the need to have some amount of not-entirely-rational hope to keep me going. I find some of that hope in voting system reform (which is also a gratifyingly interesting hobby). This sort of structural reform has little chance of succeeding if all the people who are unhappy with the current system become identified with not voting.

But even if you do not share my interest in this reform, I think there are times when participating in politics (which generally includes voting as one of the most basic steps) is a sensible and useful thing to do. The major parties will always be very flawed, but there are times when one of the choices on the ballot is clearly more flawed and when the power of participating is significant.
Said Achmiz:
Would you caution this more strongly than you might caution against caching, or identifying with, any other comparably-specific belief? Let's say we agree that "participating in politics" is a sensible and useful thing to do (I don't, for many nontrivial meanings of the phrase, but this is for the sake of argument). Is voting actually a meaningful, or effective, or necessary way to go about doing so? If so, why and how? Are there many instances when one choice is clearly more flawed, such that you can see this in advance, and you also have a nontrivial chance of affecting the outcome with your participation? For example, let's say it's 2012, and I think Obama is horrible, just horrible, and that him being re-elected would be a disaster (and I also somehow know that Romney will be a good president). I am in New York. What would you say, roughly, is the chance that with my vote, Romney takes NY, but without my vote, Obama takes NY?
Depends on what you mean by "comparably-specific". The belief I spoke of was a generalization: that because a certain set of elections were not worth worrying about, all future elections will not be. A notable feature of elections is their variability; it is clearly the case that results vary.

A single vote is massively unlikely to affect anything important. Political campaigns, however, can have a reasonable probability of doing so. Campaigns are about convincing large numbers of people to vote in a certain way. The messages you put out about whether or not you intend to vote affect your friends. A 2012 study using a Facebook button showed that by voting themselves, individuals could bring 4.5 other voters to the polls. Obviously the specific circumstances of that study are not likely to repeat, but the overall message, that it's about more than just your one vote, is likely to be applicable more generally. If you intend to canvass or phonebank, of course, this is even more relevant; it is likely that voting yourself is a better investment than trying to lie effectively about whether you believe individual votes matter.

Again, we'd have to define the terms, but if you have a significant altruistic term in your utility function I think it's a good bet. Your choices are to be a habitual voter, a habitual nonvoter, or an occasional voter based on individual calculations of the expected value of each election. Whichever choice you make is leaky; if you have friends, they will be influenced by your decision. In this circumstance, being an occasional voter seems unlikely to be rational; your outlay on calculating the expected value, and the reduced contagion of your voting decision even when you do find that a specific election is worth it, probably overwhelm the trivial effort you save by not voting. So the question is, is it worth a few hours a year to be a habitual voter? It would be easy to overestimate the cost, but remember, this should be compared not agai
0 · Said Achmiz · 10y
I barely have 4.5 people that I ever discuss politics with, and all of their political views are at least as established as mine. I would be surprised if my voting brought so much as one other voter to the polls.

Good god, no! This is contrary to my experience. Am I really likely to spend more effort on deciding whether to vote than on deciding whom to vote for? Especially in local elections? The problem is not that deciding to vote is itself some difficult, complex decision. The problem (well, a problem, anyway) is that in any election where I'm even remotely likely to influence the outcome (i.e. local elections), I have to spend a tremendous effort to even get enough relevant information about the candidates to make an informed decision, much less consider and analyze said information. And this isn't even factoring in the effort required to have a sufficient understanding of "the issues", and the political process, etc., all of which are crucial in figuring out what the effects of your vote will be.

One of my friends engages in political advocacy, votes, canvasses, researches candidates, and all that stuff. I see how much of her time it takes up. Personally, I think it's a colossal waste of her intelligence and talents. She could be writing, for example (which she does also, to be fair, but she could be writing more), or doing something else far more interesting and productive.

Also: How do you figure this? Why aren't we comparing to work hours? And why are we valuing non-work hours only in money earned?
I think we've mostly said what we have to say, and this is off-topic. My numbers showed that at best voting is instrumentally a break-even proposition. I do it because I find it hedonically rational; for instance, I don't have to lie to my family about it. Part of what makes it a net plus for me hedonically is that I have a vision and a plan for a world where a better voting system (such as approval voting or SODA voting) is used, and so I am not doomed to eternally pick the lesser of two evils. I can understand if Crystal makes a different decision for her own hedonic reasons.

I also suspect that metarational considerations such as timeless decision theory would argue in favor of it, because free riding on other people's voting effort is akin to betrayal in a massively-multiplayer prisoners' dilemma. I have not worked out the math on that, but my mathematical intuition tends to be pretty good.

Your description of your friend's advocacy suggests you are attached to the idea that politics is a waste of time, not just for you, but for others. I suspect that belief of yours is not making you or anyone else happier. I recognize that you could probably make the converse criticism of me, but I am happy to prefer a world where aspiring rationalists vote to one where they don't (even when their vote would probably be negatively correlated with mine, as I suspect yours would be).
I waffle about this a lot. Sure, one effect -- perhaps even the overwhelmingly primary effect -- of my vote is to influence which candidate gets elected, and to use that power responsibly I have to know enough to decide which candidate would be better to elect, which requires tremendous effort. (Of course, that's only an argument for not-voting if responsibly using my power to not-vote doesn't require equal knowledge/effort, but either way that's beside my point.) But another effect is to reward or punish campaigns, which has an effect on the kind of campaigns that get run in the future, and it often seems to me that this is worth doing and requires less knowledge to do usefully. Of course, the magnitudes of the effects in question are so minuscule that it's hard to care very much in either case.
I think most of your points here are well made, but most people do not have the option to add more hours of work and thereby receive more money at the same rate. If you work a salaried 9-5, it's misleading to calculate the value of your time as if your hours not already committed to work could be converted to money at the same rate, and even if you do work at a job that allows you to work overtime hours, you'll generally only have the choice of whether to make that tradeoff for specific hours out of your week, not any hour as-desired. If you're typically employed, your work hours are already committed, so for the most part you only need to evaluate the tradeoffs on your remaining hours.
0 · Said Achmiz · 10y
Well, all of that is actually false for me, as I can work my hours whenever I like, but that's moot; I feel like your comment addresses a point other than the one I made. What I meant was: are we stipulating that voting necessarily takes place during hours when I can't work? Why? That seems unwarranted. Also, I repeat this part of my question, which none of the above reasoning touches at all: Let's say I work a salaried 9-5, have no option to work more, and vote after I leave work. There's still some opportunity cost. Maybe I miss my favorite TV show or my WoW raid or whatever. Maybe I don't get to spend as much time with my family. Maybe I get less sleep. Why should we ignore such costs?
I agree that it's not wise to ignore the associated opportunity costs, but it's a rather common fallacy (at least, one that's popped up quite often here) that one's time is fungible for money at the rate one is compensated for work. On the other hand, for many individuals there are also likely to be associated gains, such as the fact that voting tends to be widely viewed as an effective signal of conscientiousness. Personally, whatever my feelings about the likelihood of my vote having a meaningful effect on the course of an election, I would prefer most of my acquaintances to think of me as the sort of person who votes.
0 · Said Achmiz · 10y
I, on the other hand, would really rather not be thought of as the sort of person who votes. Who are your acquaintances that they view voting as an effective signal of conscientiousness? Like... normal people, or something? Because that's weird.
0 · Said Achmiz · 10y
For someone who lives in New York? Yes. Yes it is. (will respond to rest of your post later)

Hello. I'm Ouri Maler, or "sun tzu" on some other forums; turning 29 in August.

I don't exactly remember when I started thinking of myself as a rationalist, but I know the core of my pro-science, pro-logic worldview was formed between the age of 8 and 10. For many years, I planned to be a physicist. In college, I studied to become a roboticist. And since that hasn't entirely panned out, I'm currently struggling to get employed as a programmer. I also write as a hobby, and I do try to reconstruct rationalism in my current urban fantasy story, "Saga of Soul".

Less Wrong has been on my "to check one of these days" list for a few years. It came to my attention again recently when Mr. Yudkowsky recommended Saga of Soul on Facebook, prompting me to marathon HPMoR over the past few days. I finished yesterday, and figured it was time to join the community and see what'll come of it.

Oh hey, I have encountered this thing in the past and I think you have interacted with one of my beta readers and you promoted my friend Emily's Kickstarter. Hi!
Hello! Unless I'm mistaken, you're the author of Hi to Tsuki to Hoshi no Tama? I used to read that.
I am, yes, but I now consider all the webcomics I used to do embarrassing and would rather steer you towards my more recent prose, like Luminosity [http://www.luminous.elcenia.com/about.shtml].
Speaking of your recent prose, what's the update schedule on Goldmage?
Goldmage is stalled due to plothole. (Basically, I thought I could write about goldmagic without doing any math, and this doesn't seem to be the case.) I don't have an ETA on fixing it. Elcenia is not suffering from that specific problem but my life in general is being eaten by a freeform roleplay thing I am doing that leaves me with this tendency to open story files, stare at them, and then close them.
Damn, that's too bad. I really thought it was a clever idea. And to end on a cliffhanger! Sigh.
I haven't actually decided to abandon the story, it just needs math to happen and a significant part of my brain wants the math to happen via magic.
I... understand? A significant part of my brain always wants math to happen via magic. Sometimes it does! Sort of.
Well, it's your call. But for what it's worth, I enjoyed HtTtHnT when it was running (particularly how the protagonists handled the loss of their secret identities). Luminosity sounds like an interesting idea, though I'll confess I've never read any of the Twilight books...
Well, you could always try reading the first few chapters and stop if you don't like it >:D
Luminosity requires no knowledge of nor affection for canon Twilight.
4 · Eliezer Yudkowsky · 10y
Oh hey, welcome! Any magical girl who takes the time to view the Earth from space has my vote, but you already know that.
Thank you! And thanks again for the link - I got around 250% as many unique views in the 48 following hours as I had in the entire preceding month.

Hello, Less Wrong! I'm Wes W., a username chosen as a compromise between anonymity and real-life usability, since I do intend/hope to get involved in meatspace once my schedule permits.

I've been lurking here and working my way through the Sequences for a couple months now. I'm intentionally pacing myself, so I can process things sufficiently. (Also, it's mildly alarming to finish reading a post and find that my brain has already vented all previous opinions on the topic and replaced them with the writer's.) I don't really know anymore how I found this site, because I've been aware of its existence for a couple years, but only recently realized both the full extent of the material here, and that I wanted to be involved in it.

I've been an atheist for several years, following another several years of diminishing faith in my native Mormonism, but it wasn't until I started reading Eliezer that this felt like a good thing, rather than a loss.

I currently have a job as a math tutor, which I originally got as just a college summer job, but turned into an "oh, this is what I want to do with my life" thing, so I'm now working on becoming a teacher. So clarity of thought is especially helpful to me, since I have to know something backwards and forwards in my sleep before I can do much to help a student understand. Ideas like "guessing the teacher's password" and "how could I regenerate this knowledge, if I lost it" have been directly useful to me, and I also hope to get better at overcoming akrasia.

I know what you mean about the author's views replacing your own! I think it's good to sit on your thoughts for a few days afterwards and let your excitement simmer down so your rationality can kick in and pull it apart and put it back together again, although I have a feeling that with most posts you'll still end up conceding that your (new) view is on par with the author's!

Hello everyone, I'm Nicholas Rutherford! I'm a 21 year old undergraduate student at the University of Saskatchewan studying pure math.

My original start to rationality is due to OkCupid (hooray for online dating!). After being fed up with the lack of people in my area, I decided to see who my top worldwide match was (it turns out that this 'top' person will actually change, so I guess I lucked out). This person's profile was written in a very clear, well-thought-out manner, and the answers to their questions showed that they had a fantastic decision-making process. After chatting with them, they told me the secret to their knowledge was Less Wrong.

From there I started making my way through The Sequences (currently about 40% of the way through), reading HPMOR and lurking the general discussion board here. I also had the pleasure of attending the July 2013 CFAR workshop, which has really inspired me to focus on improving my rationality and actually being a part of the community (and not just a lurker).

This community is awesome and I can't wait to improve it in any way I can! I mean, it is the least I can do after all I've gained from it :)


Hello again. I've been posting for a while as ModusPonies. As much as I like the old name, it's time to retire it. More and more, I'm interacting with the community in meatspace and via email. I'm switching to my real name so that people who know me in one context will recognize me in another.

Hello again. I've been posting for a while as ModusPonies.

A bit late to say this, but: best username ever.

My name is Anders. I have been lurking for a long time, and have attended meetups in Boston for the last three years. I recently began commenting more frequently. This is a new account; after discussing Ben's name change with him at the meetup today, I decided to switch to something closer to my real name, sacrificing my 20 karma points in the process.

I am 31 years old. I am a doctoral candidate in Epidemiology at the Harvard School of Public Health, where I work on some new implementations of causal models for comparative effectiveness research, particularly for screening interventions. I am originally from Norway. I attended medical school in Ireland, and worked for 18 months as a junior doctor in western Norway before moving to Boston.

On Less Wrong, I am particularly interested in the material on causality and decision theory. I am also interested in epistemic rationality and cognitive bias in general, and in the extent to which our actions are explained by signaling. In terms of mainstream philosophy, I see myself as a formalist, falsificationist, and prioritarian consequentialist. The "formalist" part is due to spending a year as an undergraduate student in mathematics; 12 years later, the only thing I retain from that year is a persistent belief that mainstream philosophy underrates the importance of David Hilbert.

Hello, I'm Erin. I am currently in high school, so perhaps a little younger than the typical reader.

I'm fascinated by the thoughts here. This is the first community I've found that makes an effort to think about their own opinions, and is self-aware enough to look at their own thought processes.

But, and this might not be the place for this, I'm struggling to understand anything technical on this website. I've enjoyed reading the sequences, and they have given me a lot to think about. Still, I've read the introduction to Bayes' theorem multiple times, and I simply can't grasp it. Even starting at the very beginning of the sequences, I quickly get lost because there are references to programming and cognitive science which I simply do not understand.

I recently returned to this site after taking a statistics course, which has helped slightly. But I still feel rather lost.

Do you have any tips for how you utilized rationality when you were starting? How did you first incorporate it into your thought processes? Can you recommend any background material which might help me to understand the sequences better?

You could just try reading the posts even if you don't understand all the jargon: over time, as you get more exposed to the terms that people use, I'd expect it to get easier to understand what the examples mean. And you might get a rough idea of the main point of a post even if you don't get all the details. Eric Drexler actually argues that if you want to learn a bit of everything, this is the way to do it.

If you don't understand some post at all, you could always ask for a summary in plain English. Many of the posts in the Sequences are old and don't get much traffic, so they might not be the best places to ask, but you could do it in an Open Thread... and now that I think of it, I suspect that a lot of others are in the same position as you. So I created a new thread for asking for such explanations, to encourage people to ask! Here it is.

Thank you for the link and for starting the thread. The article made me realize that I am going about trying to understand rationality as if I have a major exam in a couple months. Reading many of the articles on here for a second time, I'm grasping them a lot better than I did before. The new thread seems like it will be immensely useful. I really appreciate you taking the time to answer my question!
Glad I could help. :)
Welcome, Erin! As Adele said, even if math is not your passion, you can still learn a lot about your own thinking from what Eliezer and others wrote. For a look back by one notable LWer, see http://slatestarcodex.com/2014/03/13/five-years-and-one-week-of-less-wrong/ [http://slatestarcodex.com/2014/03/13/five-years-and-one-week-of-less-wrong/]. Be sure to check out Scott's other blog entries; they are almost universally eloquently written, well-researched, charitable, insightful, and thought-provoking.
Thank you for the link. I'm very pleased to find another source of such interesting ideas. I anticipate the day when I too will read the sequences and be able to say "everything in them seems so obvious."
Hi Erin, I'm Adele! It's good to see young rationalists here.

I think you might really like Thinking, Fast and Slow by Daniel Kahneman [http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555]. Daniel Kahneman is a well-known psychologist, and winner of the 2002 Nobel prize in Economics. In this book, he goes through different thinking processes that humans often use, and how they are often wrong. It is not very technical, and is a pretty easy read IMO. It might also help with some of the cognitive science stuff in the sequences.

It's okay not to understand Bayes' theorem for now; knowing the math doesn't really make you that much better at being rational - there are easier things to do with larger gains.

If you want to get the programming references, it might be worth learning to program. There are some [http://code.org/] online courses [http://www.codecademy.com/] which make it relatively easy to get started. It's also a good skill to have for when you are looking for employment.

One thing that has helped me a lot in being more rational is having friends who can point out when I am being irrational. Another good place to look at (and go to if you can) is CFAR [http://rationality.org/], whose point is basically to help you get better at being rational.
Thank you for the resources! Kahneman's book looks very interesting, and luckily my library has it. I'll check it out as soon as possible. I am planning on taking a Java Programming class next year. Does Java have the same set up/structure/foundation as the languages that are referenced on here? What would you say is the programming language that is most relevant to rationality (even if it isn't a good beginning language)?
I definitely recommend learning to program in a different language before you take your Java class. Java makes things more complicated than they need to be for a beginner, so it's good to have a conceptual foundation in a simpler language.

If all you care about is being able to reason abstractly about recursion and that sort of thing, Scheme is a language that's good for beginners and will teach you to do that. (You could download this [http://download.racket-lang.org/] and read this free book [http://www.eecs.berkeley.edu/~bh/ss-toc2.html] or this free book [http://htdp.org/].) If you want to focus more on kicking butt in your Java class and building games/web applications/scripts for automating your computer, I recommend learning Python [http://www.python.org/] (I like this guide [http://learnpythonthehardway.org/]; here's another free book [http://www.greenteapress.com/thinkpython/thinkpython.html]).

These are both great choices compared to the languages people typically start learning to program with. I would lean towards Python because the resources for teaching it to yourself are better (there's a Udacity class, the community online is bigger, etc.) and it will still give you most or all of the rationality-related benefits of learning to program. Search on Google or talk to me if you run into problems (teaching yourself is tough).
Awesome! Pretty much any language will give you enough background to understand the programming references here. I agree with John that Scheme and Python are good languages to start with. The most rational language to use depends a lot on what exactly you are trying to do, what you already know, and your personal style, so don't worry about that too much.
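To give a concrete taste of the "reasoning abstractly about recursion" that the Scheme books teach, here is a tiny sketch in Python, the other language recommended above (the function and example are invented purely for illustration):

```python
def deep_sum(x):
    """Recursively sum a nested list of numbers.

    The shape of the definition - a base case plus a self-call on smaller
    pieces - is the bread and butter of the Scheme texts linked above,
    and it transfers directly to Python."""
    if isinstance(x, (int, float)):
        return x  # base case: a bare number sums to itself
    return sum(deep_sum(item) for item in x)  # recursive case

print(deep_sum([1, [2, [3]], 4]))  # 10
```

The same definition works for any depth of nesting without any explicit loop bookkeeping, which is the point of learning the recursive style early.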
Hello and welcome! I don't know about the sequences in general, but for Bayes' Theorem you could try Luke's An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem [http://commonsenseatheism.com/?p=13156].
I'll throw in a couple [http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/] more explanations [http://oscarbonilla.com/2009/05/visualizing-bayes-theorem/] as well. (It's hard to know in advance which one might make the idea click neatly into place!)
Thank you both! Just starting to go through those explanations, Bayes Theorem is making a lot more sense, and I'm also starting to see why everyone is excited about it.
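For anyone else working through those explanations, the theorem itself is small enough to show inline. A minimal sketch, using the mammography numbers that appear in the linked explanations (1% prevalence, 80% true-positive rate, 9.6% false-positive rate; the function name is ours):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) is expanded over H and not-H."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# A positive test moves the probability from 1% to only about 7.8%,
# which is the counterintuitive result the essays are built around.
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
```

Plugging in different priors is a quick way to build the intuition that the posterior depends as much on the base rate as on the test's accuracy.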

Hi! HPMOR brought me here. I now spend about as much time telling people to read it as I do discussing the weather with them. I’ve read about half of the sequences. I lurked for a long time because I often find that getting involved in discussions blurs my ability to think objectively. Right now I’m working on a Litany Against Non-Participation, as well as taking gradual steps towards participating more, in an attempt to remedy this. I’m very interested in learning how to ask better questions.

I’m entering my fourth year of an interdisciplinary-or-is-it-multidisciplinary program at McMaster University in Hamilton, Ontario. Basically, I've chosen to focus my formal education on skill development (reasoning, writing, researching, etc.) instead of specialized content acquisition (that’s for my spare time).

For at least the last five years, I've been a philosophy-based thinker. Most of my courses were non-philosophy, but I took them to aid with my philosophical education. Sort of like how a guitar player might learn piano to improve their music theory and develop new musical ideas. I have a (very idealistic) vision for philosophy, one in which philosophy is the ‘highest’ discipline that... (read more)

That's definitely true. But there is an advantage to posting. Often, I'll have an idea and start to write it out. But then, I realize that it's not quite up to my internal "less wrong standards." So, I'll start refining the idea, and end up with a much better one than I started with. Or I'll find out that the idea isn't as good as I thought it was, and end up not posting.

My name's Noah Caldwell, I am a lesser being who currently resides in rationalist Hell. That is, I am a minor (17 years) and I live in Tennessee (not by choice (it's not THAT bad here, though)).

I was in a program called TAG (Talented and Gifted) in elementary school, and my mother once said I have a genius IQ, which, despite meaning little because you can't represent intelligence numerically, remains highly flattering. It may have contributed to a very, very minuscule ego (or so I like to think), but it's made me believe I can do better in anything: Tsuyoku naritai! Whenever I have an interest, I pursue it; I've been like that for a long time. So the net gain was, I think, worth it, even if her statement may have been untrue.

I am currently trying to do well in school while shoving as much coding, science, math, language, music theory, and history into my head as I can. I plan on getting a ham radio license very soon. I'm also trying to cleanse myself of bias now. My dream college would be MIT, but that is one heck of a reach school, no matter who you are. I also need to figure out how to insert my little segues into my monologue without parentheses, because wow does that look weird. Maybe I'm just being self-conscious. (But that's a GOOD THING!)

The traditional recreational activities I partake of include reading, piano, backpacking, and videogames (I'm digging into the original Deus Ex with delight right now). I also need to read the sequences; I've only sampled bits and pieces like an anorexic at a chocolate buffet.

If you come to visit MIT, and you happen to be around campus on a Sunday, we'd love to have you at one of the Boston meetups [http://www.meetup.com/Cambridge-Less-Wrong-Meetup/]. Also, if you want to talk to some MIT students or alumni, let me know and I'll see if I can put you in touch.
I sometimes forget how much untapped potential in terms of networking opportunities Less Wrong holds.
I didn't realize it at the time, but that's further incentive to attend MIT: I can actually go to LW meetups! I don't see myself touring the school any time soon (I've done plenty of research via the admissions blogs and other testimonials, and plane tickets happen to be expensive), but I would love to discuss any peculiarities you don't learn about until being a student, or anything else I should know before applying.
I might also take you up on that offer if you are willing. I've been considering MIT as a university since I heard that it has an insanely good Bio (and everything else) program. I'm currently getting my citizenship, reporting as a birth abroad (I'm 17 and have all the necessary qualifications), and I want to do better than attending the ULeth Bio program; while it is decent, it's nowhere near as good as MIT or any of the good universities in the States. Sorry if I seem overeager; it's just that things are a little stressful for me as I pick a university at the moment. Sigh. According to my friends I am insanely lucky, but I want to do better than chance.

Hi. I'm a software engineer and history enthusiast. Been reading for years, and just recently got around to making an account. Still building up the courage to dive in, but this place has done wonders for reducing sloppy thinking on my part.

Hi, Antiochus. What areas of history are you interested in? I'm similarly interested in history -- particularly paleontology and archaeology, the history of urban civilizations (rise and collapse and reemergence), and the history of technology. I kind of lose interest after World War II, though. You?
Any and all! Though I have a lot of interest in military history in particular, which led me to wargaming, with some specialized interest in the Hellenistic period and the ancient world in general, medieval martial arts, and the black powder era of linear battles.
Sad to say, my only experience with wargaming was playing Risk in high school. I'm not sure that counts.

Hello, LessWrong. I'm an 18-year-old recent high school graduate with an interest in computers and science and nerdery-in-general. A summary of your-life-until-Lesswrong seems to be the norm in this thread, so I suppose that's what I'll do.

I was born and raised Mormon. About as Mormon as they come, really- nearly all of my relatives practice the religion, and all of the norms and rituals were expectations for me- everything the church said was presented as fact, and everything the church did was something my family participated in, right up to the five-in-the-morning seminary classes in high school and obligatory two years of preaching about the church (for the boys, at least, because I was one). My social group was almost entirely comprised of members of the church as well, which meant I was almost never exposed to ideas that wouldn't be discussed either in a church or by public school teachers. All this to say that I managed to really, truly believe it- right up until I was around 14, which is when I got my hands on a means of unsupervised internet access. I was honestly surprised by how normal things seemed, outside that bubble in which I had grown up. Everything seemed s... (read more)

Hello and welcome to LessWrong! That's quite the journey! You've come a long way under your own sailing power, it seems, and trust me, you aren't alone here. You'll find plenty of others who've made similar trips out of unquestioning dogma into exploration and experimentation. We each have a different life and learning, certainly. But many here share similar backgrounds (religious cultures, advanced at a young age, high intelligence compared to their peers) and many share similar resources (the Internet as a connection tool, HPMoR as a gateway to the community). We are certainly glad to have you join and add your unique view to the conversation.

Glad to see you've already dived head first into some of the resources. I usually suggest the Sequences to new people, but I see you've beaten me to the punch! Yes, it's not uncommon to come away from some of the posts thinking "I KNEW that. I just didn't know how to frame it." That intuitiveness helps with introducing some of the harder concepts that get discussed here, and can encourage people to experiment with ideas and expand on them. After all, we aren't here to talk about how smart Yudkowsky or Yvain or Alicorn were when they wrote this or that. We're here to do better.

This is certainly a place where questions are welcomed! Living forever, gender, boredom, we'll discuss it all. Politics, of course, tends to be handled like unexploded ordnance, but as long as the conversation is well reasoned and beneficial, we welcome it. You probably already know the site layout, but if you'd like to start contributing to the conversation with your own posts, visit the latest Open Thread [http://lesswrong.com/r/discussion/lw/kwc/open_thread_sept_17_2014/]. It's a good place to start posting because it will let you get a feel for the standards and norms of the community and the types of conversations we have. Also, posting comments on other posts is a good start as well. Once you get settled into the milieu, yo

Hello, I am Jay, a 16-year-old incoming high school senior (I skipped a grade, if anyone cares). The way I came across this site was through reading an article about a certain thought experiment I don't want to mention because I don't want to piss anyone off in my first post. (If anyone knows what I'm talking about: is mentioning that thought experiment on Less Wrong still banned? I do find it very interesting.) Anyway, what drew me to this site was the quest for answers. I have been seeking and contemplating the answers to life, the universe, and everything in between for a while now. Have I been doing this in a logical or rational way? No, I have simply been walking through the everyday motions of life in an autopilot state with no real purpose or goals, wondering what the hell I should be doing with my life. Lately, I have realized that if I want to find meaning in my life I will actually have to strive to find it. I can't sit around waiting for answers to come to me. That is, for the most part, why I have come to this site. I want to learn and see if I can find out the purpose of living in this strange universe, and to learn some interesting things along the way. That is all. If anybody has recommendations as to what I should start out reading on this site, that would be greatly appreciated. Thank you.


Hello, and welcome to LessWrong! If improving is important to you, as it sounds, then I'm sure you will find this site quite useful.

First off, I'm pretty sure you're speaking of Roko's Basilisk. As far as I am aware, the ban on the basilisk has diminished/dissolved in light of (a) the Streisand effect that made further attempts to ban it just more fuel for the fire and (b) the fact that the issue is quite thoroughly solved and no longer very dangerous except in terms of misconceptions (see Streisand effect above). It is still a sore issue, partly because of the bad ways in which it was handled by different parties, but also because people are just tired of hearing about it. No one's going to shoot you for mentioning it or asking about it, but do be aware that the topic has been pretty well hashed out. It's not some minotaur lurking in the labyrinth. We're just tired of revisiting it.

As for recommendations, the Sequences are a good place to start. I don't know how much you know about the culture around here, so, to briefly explain: the Sequences are mostly written by Eliezer Yudkowsky, who many around here hold as one of the major (if not the major) spokesperson for LessWrong's cen... (read more)

Wow, thank you for the awesome reply. If all the people in the Less Wrong community are as friendly and knowledgeable as you are, then I have obviously joined the right site. You were right: I was talking about Roko's Basilisk, and since it is okay to mention it, here is the article [http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html] that introduced me to this site, if anyone is interested. I will definitely check out the Sequences in addition to the articles you suggested. There is so much interesting information on this site that it is hard to know where to start.

One question I do have is: what exactly is the importance of decision theories? That is another thing I am interested in. Are they applicable in real-life situations, or only in thought experiments? What is the importance of finding a perfect decision theory? I know the basics of Causal and Evidential Decision Theory, but I am baffled by Timeless Decision Theory. If you could point me in the direction of articles on these issues, that would be greatly appreciated. Thank you again for the thoughtful and useful reply; it helped a lot.

Edit: I started reading Mysterious Answers to Mysterious Questions today and found it so engaging that I didn't stop reading until I finished it. It was definitely a mind-opening experience for me, as I was exposed to a plethora of ideas and biases that I had no idea existed. I am definitely going to try reading the rest of the Sequences now.

What is the importance of finding a perfect decision theory?

Three motivations are common around here:

  1. Building a Friendly AI that is based on decision theory.
  2. Understanding what ideal rationality looks like, so we have a better idea of what to aim for as far as improving our own rationality.
  3. Curiosity. If we knew what the perfect decision theory was, many philosophical questions may be answered or would be closer to being answered.

For some relevant posts, see 1 and 2.

Thank you for the clear and informative reply.
If you want to get a handle on the "Less Wrong" approach to decision theory, I'd recommend starting with Wei Dai's Updateless Decision Theory (UDT) rather than with Timeless Decision Theory (TDT). The basic mathematical outline of UDT is more straightforward, so you will be up and running quicker. Wei's posts introducing UDT are here [http://lesswrong.com/lw/15m/towards_a_new_decision_theory/] and here [http://lesswrong.com/lw/1s5/explicit_optimization_of_global_strategy_fixing_a/]. I wrote a brief write-up [http://dl.dropbox.com/u/34639481/Updateless_Decision_Theory.pdf] that just gives a precise description of UDT without any motivation, justification, or examples.
Just wanted to say you're off to a great start posting to LW -- asking very good questions! (Also, please break posts like this into more than one paragraph.)
Thank you I'm just trying to learn all I can.
One of the main functions of a good decision theory is to bridge the territory-map divide: by solving problems in your head, it shows you how to solve problems in the real world. You can identify a good decision theory when it works in theory and in practice. If a decision theory seems to work in practice, but is not describable in a precise language (e.g. "do what feels good"), it actually hasn't been well thought out and puts you at risk of being paralyzed when a very serious and very complex situation arises. On the other hand, if it only works in theory but is impracticable (e.g. "pray to Minerva for an omen"), it will be a waste of storage space in your head. In short, a decision theory should serve as a tool for you to manage your life.
TDT just augments CDT by saying that running two copies of the same algorithm with the same input will always yield the same result.
What? No it doesn't. That's not remotely what TDT says. That isn't even a claim with particular relevance to decision theory.
Hmm. It does capture most of the essence of TDT, doesn't it? See for example the last paragraph of chapter 12 and the last two paragraphs of chapter 13.3 in the TDT paper [http://intelligence.org/files/TDT.pdf]. I disagree with the "just" in the grandparent, but given e.g. "mostly"? Maybe I'm reading too much into the one-sentence description, though.
No. Most of the interesting applications of TDT are about producing the same (or complementary) outputs with different input. Moreover, that description doesn't even imply making a correct decision on Newcomblike problems (the motivation for producing TDT in the first place). In fact, CDT augmented by the assumption that two copies of the same algorithm with the same input will always yield the same result yields CDT. To get closer to an (oversimplified) 'essence' of TDT, I'd instead suggest building from the title: CDT augmented by not caring about which point on the time dimension you are in.
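For anyone new to this who wants to see numerically why Newcomblike problems motivate all of this, here is a toy sketch - mine, not part of the discussion above, with the predictor's 99% accuracy chosen purely for illustration. A causal reasoner two-boxes (the boxes are already filled, so taking both dominates), yet an agent whose decision the predictor reliably models walks away with far more by one-boxing:

```python
# Toy expected-value calculation for Newcomb's problem (illustrative only;
# the 0.99 predictor accuracy is an assumed parameter, not canon).
def expected_payoff(one_box: bool, accuracy: float = 0.99) -> float:
    # Transparent box A always holds $1,000; opaque box B holds $1,000,000
    # iff the predictor predicted the agent would one-box.
    if one_box:
        # Predictor was right (box B full) with probability `accuracy`.
        return accuracy * 1_000_000
    # Two-boxing: box A for sure, plus box B only if the predictor erred.
    return 1_000 + (1 - accuracy) * 1_000_000

print(expected_payoff(one_box=True))
print(expected_payoff(one_box=False))
```

With a 99%-accurate predictor, one-boxing expects about $990,000 against about $11,000 for two-boxing, which is why a theory that recommends two-boxing here looks unsatisfying despite its causal reasoning being locally valid.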
Although neither of these articles is on LessWrong, they reflect the core moral values of many LW members. Astronomical Waste [http://www.nickbostrom.com/astronomical/waste.html] Consequentialism FAQ [http://www.raikoth.net/consequentialism.html]
Thank you for the reply. I will be sure to read these articles.
Welcome! I don't know so much about reading materials for finding purpose, but as an intro to rationality: * I happen to like Benito's version [http://lesswrong.com/user/Benito/] of how to read the Sequences [http://wiki.lesswrong.com/wiki/Sequences], but other people like other formats, and some don't like the Sequences too much at all (the writing style doesn't work for some). * CFAR's reading list [http://rationality.org/reading/], and maybe their videos [http://rationality.org/videos/]; you can also maybe see if you can get into SPARC [http://rationality.org/sparc/]
Thank you for the recommendations I will be sure to check them out.
Oh yes, and check out hpmor.com.

I'm a 17 year old female student in Singapore, currently in my last semester in high school. I've been lurking around this site for at least the past year, and have made my way through some of the beginning sequences. However, what really made me want to stick around was lukeprog's post on How To Be Happy. Funnily enough, I don't think I've deliberately taken up any of the suggestions, though I have realised that my slow path to extroversion over the past few years contributed significantly to my baseline happiness increasing, as has my recent focus on writing. I guess one could say that my focus when reading this site is instrumental rationality, or basically what can I glean from here to make my life the way I want it to be.

Recently, however, I've been unable to focus as much because a small part of my mind seems constantly devoted to panicking about college. I'm planning on studying computer engineering in university, and I'm fully confident that I will get into the two local universities of my choice. I'm aiming for US universities too, and getting into them is very important to me, because I'm gay. I'm well aware of Singapore's active scene in that regards, it's just that stay... (read more)

You sound pretty rational to me. I think that if you identify yourself as a rationalist, then you are one. Yes, there is a lot of effort in becoming really good at it and overcoming human irrational biases, but if you think it's a worthwhile objective, then you are a rationalist - I think. I wish you luck in getting into a US university so you don't need to suffer the stress of hiding your sexual orientation from your parents for four years. Of course you'll eventually have to tell them (well, ideally), but I assume it will be easier when you are four years older and more self-sufficient. Best of luck to you. I think you'll find this site a pretty objective outlet for your feelings on the matter, as long as they are well thought out. I'm not gay myself, so I don't presume to know the challenges that you face, but I'm willing to listen to what you have to say and give my honest thoughts in a helpful way. I think that most here are pretty open-minded because they are rationalists. I don't presume to speak for the community here, but that is the impression I get. As long as you support your assertions with good reasons and are willing to explain your feelings and engage an argument, I think you'll find support here (again, not a promise, as I don't speak for others - just my impression of the way it works here).

Hello all, my name is Glen and I am a fairly long-time lurker here. I first found this site through the Sword of Good short story, filed it in my "list of things I want to read but will never actually get around to", and largely forgot about it until I recognized the name while reading HPMOR. I've read most, but not all, of the sequences and am currently going through Quantum Mechanics. I'm Chicago-based and work as a programmer for an advertising company. I consider myself a low-to-mid-level rationalist and am working at getting better.

I run or play in a wide range of tabletop games, where I'm known as being a GM-Friendly Munchkin. That is to say, I like finding exploits and unusual combinations, but then I talk to the person running the game about them and usually explain why I shouldn't be allowed to do that. It lets me have fun breaking the system without actually making the game less fun. I've also used basic information theory to great effect, unless the GM tells me to knock it off. Currently in love with Exalted. Been burned by Shadowrun in the past, but I just can't stay mad at her.

We're curious how you've used information theory in RPGs. It sounds like there are some interesting stories there.

The most interesting stories come from a power in Exalted called "Wise Choice". Basically, you give it a situation and a finite list of actions you could take, and it tells you the one that will have the best outcome for you within the next month. It also requires a moderate expenditure of mana, so it can't be used over and over without cost. When I read what the charm did, I thought of Harry's time-experiment with prime numbers. It was immediately obvious that Wise Choice could factorize any number easily, although perhaps not cheaply if it has a large number of factors. From there, it also expanded to finding literally anything in the world, either with one big question (if low on mana) or a quick series of smaller ones (if low on time), by dividing the world into a grid and either listing every square or doing a basic binary search via asking the power "Given that I'm going to keep dividing the world in half and asking a similar question to this one, which half of the world should I focus on to get within 10 feet of Item/Person X's location at exactly 7PM tomorrow evening?" I also figured out that you can beat the one month time limit by pre-committing to asking th... (read more)
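That halving trick is ordinary binary search: each yes/no question cuts the remaining search space in half, so the number of questions (and mana spent) grows only logarithmically with the area to cover. A rough one-dimensional sketch - the `oracle` function, names, and numbers are my own stand-ins for the charm, purely for illustration:

```python
# Hypothetical sketch of the oracle-driven search described above: each
# question tells you which half of the interval holds the target, so the
# interval halves with every question asked.
def locate(oracle, lo: float, hi: float, tolerance: float = 10.0):
    """Return (estimated position, number of questions asked)."""
    questions = 0
    while hi - lo > tolerance:
        mid = (lo + hi) / 2.0
        if oracle(lo, mid):   # "should I focus on the lower half?"
            hi = mid
        else:
            lo = mid
        questions += 1        # one mana expenditure per question
    return (lo + hi) / 2.0, questions

# Example: a target hidden at position 1234.5 on a million-unit line.
target = 1234.5
estimate, cost = locate(lambda a, b: a <= target <= b, 0.0, 1_000_000.0)
```

In this toy setup, seventeen questions pin a million-unit line down to within the ten-foot tolerance, since log2(1,000,000 / 10) is about 16.6, rounded up.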


Hello. I'm Leor Fishman, and I also go by 'avret' on both reddit and ffn. I am currently 16. The path I took to get here isn't as...dramatic as some of the others I've seen, but I may as well record it: for as long as I can remember, I've been logically minded, preferring to base hypotheses on evidence rather than rest them on blind faith. However, for the majority of my life, that instinct was unguided and more often than not led to rationalizations rather than belief-updating. A few years back, I discovered MoR during a stumbleupon binge. I took to it like a fish to water, finishing up to the update point in a matter of days before hungrily rereading to attempt to catch whatever plot points I could glean from hints and asides in earlier chapters. However, I still read it almost purely for story-enjoyment, noting the rationality techniques as interesting asides if I noticed them.
About a year later, I followed the link on the MoR website to LW, and began reading the sequences. They were...well, transformative doesn't quite fit. Perhaps massively map-modifying might be a better term. How to Actually Change Your Mind specifically gave me the techniques I needed to update on rather... (read more)

Welcome! I'm also 16. Welcome to the group of people who answer "no" to the "were you alive 20 years ago" question [http://lesswrong.com/lw/2n6/cryonics_questions/] on a technicality. It's really great to know about risk assessment errors and whatnot when we're still teenagers, just because the bugs in our brains are even more dangerous when ignored than normal.
Not only that - the greater degree of neuroplasticity that I think 16-year-olds still have (if I'm wrong about this, someone please correct me) makes it a good deal easier to learn skills and ingrain rationality techniques.
As a fellow 16-year-old (there really seem to be a lot of us popping up around here recently), I concur. With that said, rationality skills are difficult for anyone to learn, because the human brain did not evolve to be rational, but rather to succeed socially. I would add that a good deal of rationality potential is ingrained in those who find themselves attracted to LW at a young age, particularly since surveys have shown that LW users tend to have a higher incidence rate of Asperger Syndrome [http://lesswrong.com/lw/28w/aspergers_poll_results_lw_is_nerdier_than_the/], the symptoms of which include social awkwardness. This suggests to me that rational thinking comes more easily to people with certain personality types, which is arguably genetic. As a single data point, I suppose I'll add that I myself was diagnosed with Asperger's when I was younger, although with how trigger-happy American doctors are with their diagnoses these days, that's not really saying much.
That's an interesting correlation, but I'm curious about the causal link: is it that a certain type of neural architecture causes both a predisposition to rationality and Asperger's, or does the social awkwardness added onto the neural architecture create the predisposition? I.e., I'm curious to see how much being social affects rationality. I shall need to look into this more closely.
On the subject of potential causal linkages: I think that at least part of the reason us diagnosed autistic/Asperger's people are more prevalent on LessWrong is that those of us diagnosed as children spend a lot of time with adults who think that something's wrong with our mental processes, often without telling us why. I know that I picked up on this, and then when I heard about cognitive biases, I jumped to the conclusion "These are what's wrong with me, but if I read more about them, then I can try and correct for them." Then, I looked up cognitive biases, found the Overcoming Bias blog, decided it was more economics than I could handle, and then I ended up here, because it had less real-world economics. Test: See if more LWs were incorrectly given a psychiatric diagnosis as children than members of the general population were.
Sounds useful. A survey, perhaps, or maybe a poll?
We could try and get Yvain [http://lesswrong.com/user/Yvain] to include this question in next year's survey, which is the best obvious way to get an unbiased sample. However, it does involve waiting months for data, so if you're in a hurry, you could poll the forums now.
Oh how I wish I had access to this kind of material when I was 16.
Welcome, Leor! I'm also a 16 year old new member.
Nice to meet you--it's rather reassuring to see another member at my age.

Hello, thank you for this post. I am a criminal law attorney, and what attracts me to learning more about rational decision-making is the practical experience that juries, clients, and many attorneys make what seem to be irrational, or at least counter-intuitive, decisions all the time. I am in the very early stages of trying to learn what's on the site and how to fix my own thought processes, but I also have irrationally high hopes that there's achievable progress to be made by bringing the LW tools to bear on my profession and the legal regime. I look forward to talking it through with you all.

Hi, jackal_esq. As someone involved in criminal justice, you might find the following interesting, if you haven't seen them already:

Evidence under Bayes theorem, Wikipedia
R v Adams, Wikipedia
Sally Clark, Wikipedia
Amanda Knox case, Less Wrong (followup post linked at bottom)
A formula for justice, Guardian
Bayesian analysis under threat in British courts, Less Wrong

Aside from that, welcome to Less Wrong!

Ekke Ekke Ekke Ekke Ptangya Zoooooooom Boing Ni!

I'll be going by Regex. I stumbled upon this site due to a side story from the MLP:FIM fanfiction Friendship is Optimal: http://www.fimfiction.net/story/62074/friendship-is-optimal which is a bit weird, but I guess I'm weird. Yes, I like small candy colored equines. Ponies are my lifeblood.

My life history in a nutshell: high school was spent mostly figuring out how terrible middle school was and realizing my ability to control my environment. Learned basic coding, drawing, and organization skills. Found a path in life due to the launch of the Curiosity rover. Robots were cool. Installed Linux.

I am currently a college sophomore pursuing mechanical engineering: I've been inspired to create robots. Despite going for an ME degree, I have more computer knowledge. My preferred OS is Linux, but I'm not skilled enough with it yet to do much beyond what I can do with Windows.

I am quite interested in personal development, hence why I am here. A lot of the thought processes here seem to mirror my own far more than I've seen elsewhere, so there was kind of a "these are my people" moment. I have been lightly reading the site, but there is... (read more)

Hi there Regex, Welcome to LessWrong! Yay! If you liked Iceman's Friendship is Optimal and other conversion bureau stories, you might enjoy Chatoyance's 27 Ounces [http://www.fimfiction.net/story/1868/27-ounces] and Caelum est Conterrens [http://www.fimfiction.net/story/69770/friendship-is-optimal-caelum-est-conterrens]. As far as personal development goes, I feel like I personally learned a lot about how to make better predictions about the world from CFAR's Credence Game [http://acritch.com/credence-game/], though, um, you might prefer reading through the core sequences to playing the calibration game. I have been told that Mysterious Answers to Mysterious Questions [http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions] is a good place to start reading through the sequences, though I personally read through most of the sequences in no particular order, as, at the time, that approach suited me more than a structured approach to reading the sequences would have. In any case, it is great to have a new friend join us; I hope you feel welcome here.
More fanfiction? Served up by a butter yellow pegasus? Don't mind if I do. (I spent a whole month of my summer reading 5 million words of fanfiction. It wasn't enough, but after a solid month it is really hard to justify reading more...) I'll definitely give Mysterious Answers another look, and also see what that Credence Game is all about. My current methodology has been similar to browsing TVTropes: click the first article that catches my attention, then click all of the new links. I then save the links for later after I've browsed enough for the day. It is like a human-based web crawling algorithm. Thank you. I suspect I'll like it here.
Hello and welcome to LessWrong! Well you've certainly come to the right place if self-improvement and overcoming bias [http://www.overcomingbias.com/] are what interest you. As Fluttershy pointed out, the Sequences are a great place to dive into the culture and conversation of LW. If you're looking for sequences specifically about self-improvement, check out Living Luminously [http://lesswrong.com/lw/1xh/living_luminously/] and The Science of Winning at Life [http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life]. You aren't the only one here to try out habit changes and "life hacks," so feel free to share your personal improvements or experiments. We have quite a sizeable demographic of people experimenting with things like Soylent [http://www.soylent.me/] and MealSquares [http://www.mealsquares.com/] and other ideas. So it's always good to have another voice willing to strive for optimization. I don't know what college you attend, but consider checking out if you have a local LW meetup [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups] in your area. Meetups are great places to get acquainted and have some real conversation with fellow rationalists (and to just hang out). They're a great place to start getting your feet wet. Glad to have you join the conversation! Hope to see you around.
I actually lived for a whole month off of DIY soylent I made, but I eventually stopped because the process was slightly more time-consuming (although a third the cost) than my regular methods. (While it was probably healthier, I didn't really notice any difference.) I suspect there are probably easier ways than I had been doing it, though. One thing I've noticed is that LW doesn't seem to be sending me email notifications when I get a reply. I see that I can tick a thing to get notifications of other people's specific comments, but on my own comments there is what appears to be a deletion button. I would then assume it is supposed to be automatically notifying me. Fortunately I noticed the recent comments box. Definitely going to be liking it here. Thanks.

Hi LW!

I've read LW on and off for quite some time, mostly just whenever I've gotten linked to it and found myself idly browsing. I used to not post very much on forums, just read around, but I decided to sign up for a few and give posting a try. So here I am!

My name is Sean, I'm 20 and I live in Florida. I'm an undergraduate student studying Cell and Molecular Biology with a minor in Mathematics. I enjoy a lot of things - reading, learning, hiking, discussing, exploring. My interests are pretty wide - I've done a lot of computer programming, but mostly hobby stuff, I do a lot of hiking, a little bit of gardening, I read a lot from a wide variety of topics (though, more often than not, it's either fantasy in my downtime, or research in my work time, lol), and when I have the time I play games and hang out on forums now apparently.

I don't really have an extraordinary story about how I ended up here. I just like to discuss things, and due to my interests, I find myself in places like this a lot.

I like to be in places where I can either learn, or I can help educate. I've had a good bit of experience with teaching and tutoring professionally, and I think one of my strongest qualities i... (read more)

Hello and welcome to LessWrong! Sounds like you're quite exposed to a variety of fields. Very admirable! It never hurts to have a wide background [http://lesswrong.com/lw/l7/the_simple_math_of_everything/], and that exposure to all those different hobbies and areas can improve your work in your central field of interest. No need for some great story to join. Having an interest in learning is good enough! If you want to read some LW material to give you an idea of the type of writings you'll see and the type of topics we discuss, feel free to read the Sequences [http://wiki.lesswrong.com/wiki/Sequences], which collect a large number of LW posts from over the years. It's something of a crash course on a variety of topics and issues. Quite heavy reading, but very useful. If you want to join the conversation, check out the Discussion [http://lesswrong.com/r/discussion/new/] board. This is where the day-to-day conversations on LW take place. It's a good place to get a feel for the conversation standards of the community before you start contributing your own ideas. Also, definitely check out the latest Open Thread [http://lesswrong.com/r/discussion/lw/ksc/open_thread_1824_august_2014/]. It's a bit more laid back than the Discussion board as a whole, but still a good place to talk, ask questions, and engage fellow LWers. Also, I don't know where you live in Florida, but if meeting up and chatting with fellow LWers in physical space interests you, Florida has two LW meetups [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups]: one in Fort Lauderdale, one in Coral Gables. LW meetups are great places to get acquainted with your fellow rationalists, discuss different topics, and just to have fun. Glad to have you with us! Look forward to seeing you around the forums soon.

I am a long time LessWronger (under an anonymous pseudonym), but recently I've decided that it is finally time to bite the bullet, abandon my few thousand karma, and just move over to my real name already.

Back in the day, when I joined LessWrong for the first time, I followed my general policy of anonymity on the Internet. Now, I'm involved with the Less Wrong community enough that I find this anonymity holding me back. Thus the new account.

Edit: For my first post on this new account, I posted a few of my thoughts on logical uncertainty.

Hi! I've been lurking non-intensely for a while. I'm currently reading the sequences, and they've given me a lot of food for thought. I have a couple of rationalist friends (including RobbBB) who have gotten me interested in rationalism. I'm also a big fan of HPMOR, which is by far the best fanfic I've ever read.

Anyway, I'm trying to become a research scientist in linguistics, so it seems best that for professional development, in addition to personal development, I learn how to think and recognize why I think I know the things I think I know etc. So far, I've mostly been squirming in embarrassment over the fallacious reasoning I've been engaged in my whole life, but I hope that I can move forward to more productive thinking.

Hello, I'm Jennifer.

I'm here to get better at accomplishing my goals. I'd also like to get better at figuring out what my goals are, but I don't know if LW will help with that.

I don't identify as an aspiring rationalist. I try to be rational, but I am generally leery of identifying as much of anything. Labels are a useful layer of abstraction for dealing with people you don't really know well enough to consider as individuals, but I don't see much benefit in internally applying labels to oneself. If you do find it useful to think of yourself as an aspiring rationalist, I'd like to know what benefits you're seeing.

I have not so much lurked as sporadically encountered LW over the past several years. I don't recall how I first found the site, but I have followed links here on several separate occasions.

My historical usage pattern:

  • Follow a link to LW
  • Open a half dozen tabs (much like I do on TVTropes)
  • Read the tabs (usually from the sequences)
  • Realize that I've hit mental saturation
  • Close LW until the next time I stumble across a link

I became more interested in LW as a community when I got to know a community member in RL, but I still didn't register because I have an aversion to opening myself up to potentially hurtful comments on the internet, and LW seems particularly prone to the type of comment which I find most difficult to deal with. Then I decided to improve my criticism handling skills, so I registered.

Hi, everyone. My name is Teresa, and I came to Less Wrong by way of HPMOR.

I read the first dozen chapters of HPMOR without having read or seen the Harry Potter canon, but once I was hooked on the former, it became necessary to see all the movies and then read all the books in order to get the HPMOR jokes. JK Rowling actually earned royalties she would never have received otherwise thanks to HPMOR.

I don't actually identify as a pure rationalist, although I started out that way many, many years ago. What I am committed to today is SANITY. I learned the hard way that, in my case at least, it is the body that keeps the mind sane. Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.

That said, however, I find a lot of utility in thinking through the material on this site. I discovered Bayesian decision theory in high school, but the texts I read at the time either didn't explain the whole theory or else I didn't catch it all at age 14. Either way, it was just a cute trick fo... (read more)

The chief deficiency of embodiment philosophy-of-mind, at least among AIers and cognitivists, is that they constantly say "embodiment" when they should say "experience of embodiment". And when you put it that way, most of the magic leaches away and you're left facing the same old hard problem of consciousness. Meaning, understanding, intentionality are all aspects of consciousness. And various studies can show that body awareness is surprisingly important in the genesis and constitution of those things. But just having a material object governed by a hierarchy of feedback loops does not explain why there should be anyone home in that object - why there should be any form of awareness in, or around, or otherwise associated with that object.
I sort of agree with you: if the "hard problem of consciousness" is indeed a coherent problem that needs to be solved, then what you say makes perfect sense. But I am not convinced that it's a problem worth solving. I don't care whether Mitchell_Porter is an entity that really, truly experiences consciousness, or whether it's only a "material object governed by a hierarchy of feedback loops", so long as Mitchell_Porter has interesting things to say, and can hold up his/her/its own end of the conversation. Is there any reason why I should care ?
Let's distinguish between superficial and fundamental ignorance. If you flip a coin, you may not know which way it came up until you look. This typifies what I will call superficial ignorance. The mechanics of a flat disk of metal, sent spinning in a certain way, is not an especially mysterious subject. Your ignorance of whether the coin shows heads or tails does not imply ignorance of the essence of what just happened.

Fundamental ignorance is where you really don't know what's going on. The sun goes up and down in the sky and you don't know why; for a third of each day you're in some other reality where you don't remember the usual one; and so on. The situation with respect to consciousness is in this category.

It could be argued that you should care about any instance of fundamental ignorance, because its implications are unknown in a way that the implications of superficial ignorance are not. Who knows what further wonderful, terrible, or important facts it obscures? Then again, it could be argued that there's fundamental ignorance beneath every instance of superficial ignorance. Consider the spinning coin: we have a physical mechanics that can describe its motion, but why does that mechanics work?

Conversely, in the case of consciousness, there's an argument for complacency: I may not understand why brains are conscious, but human beings pretty consistently act in the ways that I tentatively regard as indicative of consciousness, and (I could say) in my dealings with them, it's how they behave that matters.

There are a few further reasons why someone may end up caring whether other people/beings are truly conscious or not. One is morality. I may consider it important to know (if only I could know) whether they really are happy or suffering, or whether they are just automata pantomiming the behaviors of happiness and suffering. Another is intellectual curiosity. Perhaps you just decide that you want to know, not because of the argument from the unknown signi
I think that you are unintentionally conflating two very different questions: 1) What is the mechanism that causes us to perceive certain entities, including humans, as possessing consciousness? 2) Let's assume that there's a hidden factor, called "consciousness", that is sufficient but not necessary to cause us to perceive humans as being conscious. How can we test for the presence or absence of this factor?

Answering (2) may help you answer (1), but (2) is unanswerable if the assumption you are making in it is wrong. I personally see no reason to postulate the presence of some hidden, undetectable factor that causes humans to be conscious. I would love to know how exactly it is that human brains produce the phenomenon we perceive as "consciousness", but I'm not convinced that such a feature could only have a single possible implementation.

This is indeed important with respect to morality: if the presence of consciousness is unfalsifiable, then you can't know, and you're obligated to treat all entities that appear to be happy or suffering equally (for the purposes of making your moral decisions, that is). On the other hand, if the presence of consciousness is falsifiable, then tell me how I can falsify it. If you hand-wave the answer by saying, "oh, it's a hard problem", then you don't have a useful model; you've got something akin to Vitalism. It'd be like saying, "Some suns are powered by fusion, and others are powered by undetectable sun-goblins that make it look like the sun is powered by fusion. Our own sun is powered by goblins. You can't ever detect them, but trust me, they're there".
Would it be appropriate to say that superficial ignorance is factual (one does not know the particular inputs to the equations which govern the coin's movement) where fundamental ignorance is conceptual (one does not have a concept that the coin is governed by equations of motion)?
I don't know.
You defect in the Prisoner's Dilemma against a rock with “defect” written on it, defect in the PD against a rock with “cooperate” written on it, and cooperate in the PD against a copy of yourself. So, if you're ever playing PD against Mitchell_Porter, you want to know whether he's more like a rock or like yourself.
Right, but in order to figure out whether to cooperate with or defect against Mitchell_Porter, all I need to know is what strategy he is most likely to pursue. I don't need to know whether he's a "material object governed by a hierarchy of feedback loops" or a biological human possessed of "consciousness" or an animatronic garden gnome; I just need to know enough to find out which button he'll press.
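The decision rule implicit in this exchange can be sketched as a toy program. This is only an illustration, not anyone's actual proposal; all names (`best_response`, `rock_defect`, `mirror`, the payoff values) are hypothetical, with standard PD payoffs assumed. The point is that the decision depends only on a model of the opponent as a function from my move to theirs:

```python
# Toy sketch: deciding against a rock vs. a copy of yourself in the PD.
# A "rock" plays a fixed move regardless of what I do; a "mirror"
# (a copy of me) plays whatever I play.

# Payoffs to me, using conventional PD values: (my_move, their_move) -> my_score
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(predict_opponent):
    """Pick the move with the higher payoff, given a model of the
    opponent as a function from my move to their move."""
    return max("CD", key=lambda me: PAYOFF[(me, predict_opponent(me))])

rock_defect    = lambda my_move: "D"      # ignores me entirely
rock_cooperate = lambda my_move: "C"      # ignores me entirely
mirror         = lambda my_move: my_move  # a copy of me: plays what I play

print(best_response(rock_defect))     # D
print(best_response(rock_cooperate))  # D
print(best_response(mirror))          # C
```

Against either rock, defecting dominates; against the mirror, cooperating wins, which matches the parent comment, and the reply's point is that all the function needs is `predict_opponent`, not a verdict on consciousness.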
I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it's very likely that I'm misunderstanding your point). I am currently reading your words on the screen. I can't hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I'm not too different from a software program that is receiving the text via some input stream, so I don't see an a priori reason why such a program could not understand the text as well as I do.
Said Achmiz:
I assume telms is referring to embodied cognition [http://en.wikipedia.org/wiki/Embodied_cognition], the idea that your ability to communicate with her, and achieve mutual understanding of any sort, is made possible by shared concepts and mental structures which can only arise in an "embodied" mind. I am rather skeptical about this thesis as far as artificial minds go; somewhat less skeptical about it if applied only to "natural" (i.e., evolved) minds — although in that case it's almost trivial; but in any case don't know enough about it to have a fully informed opinion.
Oh, ok, that makes more sense. As far as I understand, the idea behind embodied cognition is that intelligent minds must have a physical body with a rich set of sensors and effectors in order to develop; but once they're done with their development, they can read text off of the screen instead of talking. That definitely makes sense in case of us biological humans, but just like you, I'm skeptical that the thesis applies to all possible minds at all times.
Some representative papers of Stevan Harnad are: * The symbol grounding problem [http://cogprints.org/3106/] * Other bodies, other minds: A machine incarnation of an old philosophical problem [http://cogprints.org/1578/]
I skimmed both papers, and found them unconvincing. Granted, I am not a philosopher, so it's likely that I'm missing something, but still:

In the first paper, Harnad argues that rule-based expert systems cannot be used to build a Strong AI; I completely agree. He further argues that merely building a system out of neural networks does not guarantee that it will grow to be a Strong AI either; again, we're on the same page so far. He further points out that, currently, nothing even resembling Strong AI exists anywhere. No argument there. Harnad totally loses me, however, when he begins talking about "meaning" as though that were some separate entity to which "symbols" are attached. He keeps contrasting mere "symbol manipulation" with true understanding of "meaning", but he never explains how we could tell one from the other.

In the second paper, Harnad basically falls into the same trap as Searle. He lampoons the "System Reply" by calling it things like "a predictable piece of hand-waving" -- but that's just name-calling, not an argument. Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese? Sure, the man inside doesn't understand Chinese, but that's like saying that a car cannot drive uphill at 70 mph because no human driver can run uphill that fast.

The rest of his paper amounts to a moving of the goalposts. Harnad is basically saying, "Ok, let's say we have an AI that can pass the TT via teletype. But that's not enough! It also needs to pass the TTT! And if it passes that, then the TTTT! And then maybe the TTTTT!" Meanwhile, Harnad himself is reading articles off his screen which were published by other philosophers, and somehow he never requires them to pass the TTTT before he takes their writings seriously.

Don't get me wrong, it is entirely possible that the only way to develop a Strong AI is to embody it in the physical world, and that no simulation, no matter how realistic, will suffice. I am o...
Said Achmiz:
Haven't read the Harnad paper yet, but the reason Searle's convinced seems obvious to me: he just doesn't take his own scenario seriously — seriously enough to really imagine it, rather than just treating it as a piece of absurd fantasy. In other words, he does what Dennett calls "mistaking a failure of imagination for an insight into necessity". In The Mind's I, Hofstadter and Dennett give the Chinese Room scenario a much more serious fictional treatment, and show in great detail what elements of it trigger Searle's intuitions on the matter, as well as how to tweak those intuitions in various ways. Sadly but predictably, Searle has never (to my knowledge) responded to their dissection of his views.
I like the expression, and can think of times when I have looked for something that expresses this all-too-common practice simply.
Said Achmiz:
Having now read the second linked Harnad paper, my evaluation is similar to yours. Some more specific comments follow.

Harnad talks a lot about whether a body "has a mind": whether a Turing Test could show if a body "has a mind", how we know a body "has a mind", etc. What on earth does he mean by "mind"? Not... the same thing that most of us here at LessWrong mean by it, I should think.

He also refers to artificial intelligence as "computer models". Either he is using "model" quite strangely as well... or he has some... very confused ideas about AI. (Actually, very confused ideas about computers in general are, in my experience, endemic among the philosopher population. It's really rather distressing.)

This has surely got to be one of the most ludicrous pronouncements I've ever seen a philosopher make. One of these things is not like the others... Well, maybe our chess-playing module is not autonomous, but as we have seen, we can certainly build a chess-playing module that has absolutely no capacity to see, move, manipulate, or speak.

Most of the rest of the paper is nonsensical, groundless handwaving, in the vein of Searle but worse. I am unimpressed.
Yeah, I think that's the main problem with pretty much the entire Searle camp. As far as I can tell, if they do mean anything by the word "mind", then it's "you know, that thing that makes us different from machines". So, we are different from AIs because we are different from AIs. It's obvious when you put it that way!
Well, I certainly agree that there are important aspects of human languages that come out of our experience of being embodied in particular ways, and that without some sort of model that embeds the results of that kind of experience we're not going to get very far in automating the understanding of human language. But it sounds like you're suggesting that it's not possible to construct such a model within a "disembodied" algorithmic system, and I'm not sure why that should be true. Then again, I'm not really sure what precisely is meant here by "disembodied algorithmic system" or "ROBOT". For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?
An emulated body in an emulated environment is a disembodied algorithmic system in my terminology. The classic example is Terry Winograd's SHRDLU, which made significant advances in machine language understanding by adding an emulated body (arm) and an emulated world (a cartoon blocks world, but nevertheless a world that could be manipulated) to text-oriented language processing algorithms. However, Winograd himself concluded that language understanding algorithms plus emulated bodies plus emulated worlds aren't sufficient to achieve natural language understanding.

Every emulation necessarily makes simplifying assumptions about both the world and the body that are subject to errors, bugs, and munchkin effects. A physical robot body, on the other hand, is constrained by real-world physics to that which can be built. And the interaction of a physical body with a physical environment necessarily complies with that which can actually happen in the real world. You don't have to know everything about the world in advance, as you would for a realistic world emulation. With a robot body in a physical environment, the world acts as its own model and constrains the universe of computation to a tractable size.

The other thing you get from a physical robot body is the implicit analog computation tools that come with it. A robot arm can be used as a ruler, for example. The torque on a motor can be used as an analog for effort. On these analog systems, world-grounded metaphors can be created using symbolic labels that point to (among other things) the arm-ruler or torque-effort systems. These metaphors can serve as the terminal point of a recursive meaning builder -- and the physics of the world ensures that the results are good enough models of reality for communication to succeed or for thinking to be assessed for truth-with-a-small-t.
OK, thanks for clarifying. I certainly agree that a physical robot body is subject to constraints that an emulated body may not be subject to; it is possible to design an emulated body that we are unable to build, or even a body that cannot be built even in principle, or a body that interacts with its environment in ways that can't happen in the real world. And I similarly agree that physical systems demonstrate relationships, like that between torque and effort, which provide data, and that an emulated body doesn't necessarily demonstrate the same relationships that a robot body does (or even that it can in principle). And those aren't unrelated, of course; it's precisely the constraints on the system that cause certain parts of that system to vary in correlated ways. And I agree that a robot body is automatically subject to those constraints, whereas if I want to build an emulated software body that is subject to the same constraints that a particular robot body would be subject to, I need to know a lot more. Of course, a robot body is not subject to the same constraints that a human body is subject to, any more than an emulated software body is; to the extent that a shared ability to understand language depends on a shared set of constraints, rather than on simply having some constraints, a robot can't understand human language until it is physically equivalent to a human. (Similar reasoning tells us that paraplegics don't understand language the same way as people with legs do.) And if understanding one another's language doesn't depend on a shared set of constraints, such that a human with two legs, a human with no legs, and a not-perfectly-humanlike robot can all communicate with one another, it may turn out that an emulated software body can communicate with all three of them. The latter seems more likely to me, but ultimately it's an empirical question.
You make a very important point that I would like to emphasize: incommensurate bodies very likely will lead to misunderstanding. It's not just a matter of shared or disjunct body isomorphism. It's also a matter of embodied interaction in a real world.

Let's take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like "this", "that", "near", "far", "soon", "late", the positional prepositions, and pronominals like "me" and "you" -- the meaning of these terms is grounded dynamically by the speakers and hearers in the time and place of utterance, the placement and salience of surrounding objects and structures, and the particular speaker and hearers and overhearers of the utterance. Human pointing -- with the fingers, hands, eyes, chin, head tilt, elbow, whatever -- has been shown to perform much the same functions as deictic speech in utterance. (See the work of Sotaro Kita if you're interested in the data.) A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

Then there are the cultural conventions that regulate pointing words and gestures alike. For example, spatial meanings tend to be either speaker-relative or landmark-relative or absolute (that is, embedded in a spatial frame of cardinal directions) in a given culture, and whichever of these options the culture chooses is used in both physical pointing and linguistic pointing through deictics. A robot with no cultural reference won't be able to disambiguate "there" (relative to me here now) versus "there" (relative to the river/mountain/rising sun), even if physical pointing is integrated into the attempt to figure out what "there" is. And the problem may not be detected due to the illusion of double transparency...
If I am talking to you on the telephone, I have no mechanism for pointing and no sensory apparatus for detecting your pointing gestures, yet we can communicate just fine. The whole embodied cognition thing is a massive, elementary mistake as bad as all the ones that Eliezer has analysed in the Sequences. It's an instant fail.
Said Achmiz:
Can you expand on this just a bit? I am leaning, slowly, in the same direction, and I'd like a bit of a sanity check on this claim.

Firstly, I have no problem with the "embodied cognition" idea so far as it relates to human beings (or animals, for that matter). Yes, people think also with their bodies, store memories in the environment, point at things, and so on. This seems to me both true and unremarkable. So unremarkable as to hardly be worth the amount of thought that apparently goes into it. While it may be interesting to trace out all the ways in which it happens, I see no philosophical importance in the details.

Where it goes wrong is the application to AGI that says that because people do this, it is an essential part of how an intelligence of any sort must operate, and therefore a man-made intelligent machine must be given a body. The argument mistakes a superficial fact about observed intelligences for a fact about the mechanism whereby an intelligence of any sort must operate. There is a large and expanding body of work on making ever more elaborate robot puppets like the Nao, explicitly following a research programme of developing "embodied cognition".

I cannot see these projects as being of any interest. I would be a lot more interested in seeing someone build a human-sized robot t...

This is where you lose me. Isn't that an equally effective argument against AGI in general?
"AGI in general" is a thing of unlimited broadness, about which lack of success so far implies nothing more than lack of success so far. Cf. flying machines, which weren't made until they were. Embodied cognition, on the other hand, is a definite thing, a specific approach that is at least 30 years old, and I don't think it's even made a contribution to narrow AI yet. It is only mentioned in Russell and Norvig in their concluding section on the philosophy of Strong AI, not in any of the practical chapters.
I took RichardKennaway's post to mean something like the following: "Birds fly by flapping their wings, but that's not the only way to fly; we have built airplanes, dirigibles and rockets that fly differently. Humans acquire intelligence (and language) by interacting with their physical environment using a specific set of sensors and effectors, but that's not the only way to acquire intelligence. Tomorrow, we may build an AI that does so differently."
But since that idea has been around in strength since the 1980s, and can be found in Turing in 1950, apparently it's fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
I think that we have seen it by now, we just don't call it "AI". Even in Turing's day, we had radar systems that could automatically lock on to enemy planes and shoot them down. Today, we have search engines that can provide answers (with a significant degree of success) to textual or verbal queries; mapping software that can plot the best path through a network of roadways; chess programs that can consistently defeat humans; cars that drive themselves; planes that fly themselves; plus a host of other things like that. Sure, none of these projects are Strong AI, but neither are they toys.
This depends on the definition of 'toy projects' that you use. For the sort of broad definition you are using, where 'toy projects' refers literally to toys, Richard Kennaway's original claim that the embodied approach had only produced toys is factually incorrect. For the definition of 'toy projects' that both Richard Kennaway and Document are using, in which 'toy projects' is more closely related to 'toy models' [http://en.wikipedia.org/wiki/Toy_model] -- i.e. attempts at a simplified version of Strong AI -- this is an argument against AGI in general.
I see what you mean, but I'm having trouble understanding what "a simplified version of Strong AI" would look like. For example, can we consider a natural language processing system that's connected to a modern search engine to be "a simplified version of Strong AI"? Such a system is obviously not generally intelligent, but it does perform several important functions -- such as natural language processing -- that would pretty much be a requirement for any AGI. However, the implementation of such a system is most likely not generalizable to an AGI (if it were, we'd have AGI by now). So, can we consider it to be a "toy project", or not?
The "magic ingredient" may be a bridging of intuitions: an embodied AI which you can more naturally interact with offers more intuitive metrics for progress; milestones which can be used to attract funding since they make more sense intuitively. Obviously you can build an AGI using only Lego bricks. And you can build an AGI "purely" as software (i.e. with variable hardware substrates). The steelman for pursuing embodied cognition would not be "embodiment is strictly necessary to build AGIs" (boring!), but that "given humans with a goal of building an AGI, going the embodiment route may be a viable approach".

I well remember that early morning in the CS lab, the better part of a decade ago, when I stumbled -- still half asleep -- into a side room to turn on the lights, only to stare into the eye of Eccerobot [http://www.designboom.com/weblog/images/images_2/2011/jenny/eccerobot/ecce01.jpg] (in an earlier incarnation), which was visiting our lab. Shudder. I used to joke that my goal in life would be to build the successor creature, and to be judged by it (humankind and me both). To be judged and to be found unworthy in its (in this case single) eye, and to be smitten. After all, what better emotional proof to have created something of worth is there than your creation judging you to be unworthy? Take my atoms, Adambot!
Are misunderstandings more common over the telephone for things like negotiation?
I don't know, but I doubt that the communication medium makes much difference beyond the individual skills of the people using it. People can use multiple modalities to communicate, and in a situation where some are missing, one varies one's use of the others to accomplish the goal. In adversarial negotiations one might even find it an advantage not to be seen, to avoid accidentally revealing things one wishes to keep secret. Of course, that applies to both parties, and it will come down to a matter of who is more skilled at using the means available. People even manage to communicate in writing!
Sure, I agree that we make use of all kinds of contextual cues to interpret speech, and a system lacking awareness of that context will have trouble interpreting speech. For example, if I say "Do you like that?" to Sam, when Sam can't see the thing I'm gesturing to indicate or doesn't share the cultural context that lets them interpret that gesture, Sam won't be able to interpret or engage with me successfully. Absolutely agreed. And this applies to all kinds of things, including (as you say) but hardly limited to pointing.

And, sure, the system may not even be aware of that trouble... illusions of transparency abound. Sam might go along secure in the belief that they know what I'm asking about and be completely wrong. Absolutely agreed.

And sure, I agree that we rely heavily on physical metaphors when discussing abstract ideas, and that a system incapable of processing my metaphors will have difficulty engaging with me successfully. Absolutely agreed.

All of that said, what I have trouble with is your apparent insistence that only a humanoid system is capable of perceiving or interpreting human contextual cues, metaphors, etc. That doesn't seem likely to me at all, any more than it seems likely that a blind person (or one on the other end of a text-only link) is incapable of understanding human speech.
Said Achmiz:
Are you really claiming that the ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Also: "detecting pointing gestures" would be the function of a perception algorithm, not a sensory apparatus (unless what you mean is "a robot with no ability to perceive positions/orientations/etc. of objects in its environment", which... wouldn't be very useful). So it's a matter of what we do with sense data, not what sorts of body we have; that is, software, not hardware.

More generally, a lot of what you're saying (and — this is my very tentative impression — a lot of the ideas of embodied cognition in general) seems to be based on an idea that we might create some generally intelligent AI or robot, but have it start at some "undeveloped" state and then proceed to "learn" or "evolve", gathering concepts about the world, growing in understanding, until it achieves some desired level of intellectual development. The concern then arises that without the kind of embodiment that we humans enjoy, this AI will not develop the concepts necessary for it to understand us and vice versa. Ok. But is anyone working in AI these days actually suggesting that this is how we should go about doing things? Is everyone working in AI these days suggesting that? Isn't this entire line of reasoning inapplicable to whole broad swaths of possible approaches to AI design?

P.S. What does "there, relative to the river" mean?
Yeah, I am advancing the hypothesis that, in humans, the comprehension of indexicality relies on embodied pointing at its core -- though not just with fingers, which are not universally used for pointing in all human cultures. Sotaro Kita has the most data on this subject for language, but the embodied basis of mathematics is discussed in Where Mathematics Comes From, by George Lakoff and Rafael Núñez. Whether all possible minds must rely on such a mechanism, I couldn't possibly guess. But I am persuaded humans do (a lot of) it with their bodies.

In most European cultures, we use speaker-relative deictics. If I point to the southeast while facing south and say "there", I mean "generally to my front and left". But if I turn around and face north, I will point to the northwest and say "there" to mean the same thing, i.e., "generally to my front and left". The fact that the physical direction of my pointing gesture is different is irrelevant in English; it's my body position that's used as a landmark for finding the target of "there". (Unless I'm pointing at something in particular here and now, of course; in which case the target of the pointing action becomes its own landmark.)

In a number of Native American languages, the pointing is always to a cardinal direction. If the orientation of my body changes when I say "there", I might point over my shoulder rather than to my front and left. The landmark for finding the target of "there" is a direction relative to the trajectory of the sun. But many cultures use a dominant feature of the landscape, like the Amazon or the Mississippi or the Nile rivers, or a major mountain range like the Rockies, or a sacred city like Mecca, as the orientation landmark, and in some cultures this gets encoded in the deictics of the language and the conventions for pointing. "Up" might not mean up vertically, but rather "upriver", while "down" would be "downriver". In a steep river valley in New Guinea, "down" could mean "toward the river".
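The speaker-relative vs. absolute contrast described above can be made concrete with a toy sketch. This is purely illustrative (the function name and angle convention are mine, with compass bearings measured clockwise from north): the same pointing gesture resolves to different targets depending on which frame of reference a culture uses for its deictics.

```python
# Toy sketch: resolving "there" + a pointing gesture under two cultural
# frames of reference. Bearings are compass degrees (0 = north, 90 = east).

def resolve_there(speaker_heading_deg, gesture_deg, frame):
    """Return the compass bearing meant by "there".

    frame == "speaker-relative": gesture_deg is an offset from the
        speaker's body orientation (e.g. -45 = "to my front and left").
    frame == "absolute": gesture_deg is itself a fixed compass bearing,
        independent of which way the speaker happens to face.
    """
    if frame == "speaker-relative":
        return (speaker_heading_deg + gesture_deg) % 360
    elif frame == "absolute":
        return gesture_deg % 360
    raise ValueError(f"unknown frame: {frame}")

# Facing south (180), "front and left" (-45) points southeast (135);
# after turning to face north (0), the same meaning points northwest (315).
print(resolve_there(180, -45, "speaker-relative"))  # 135
print(resolve_there(0, -45, "speaker-relative"))    # 315

# In an absolute frame, the gesture means the same bearing regardless of facing.
print(resolve_there(180, 135, "absolute"))          # 135
print(resolve_there(0, 135, "absolute"))            # 135
```

The sketch shows why a listener who assumes the wrong `frame` will systematically misresolve "there" even when the gesture itself is perceived perfectly.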
I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. In addition, I suspect that, while you were typing your paragraph, you weren't physically pointing at things. The fact that we can do this looks to me like evidence against your main thesis.
Ah, but you're assuming that this particular interaction stands on its own. I'll bet you were able to visualize the described gestures just fine by invoking memories of past interactions with bodies in the world.

Two points. First, I don't contest the existence of verbal labels that merely refer -- or even just register as being invoked without referring at all. As long as some labels are directly grounded to body/world, or refer to other labels that do get grounded in the body/world historically, we generally get by in routine situations. And all cultures have error detection and repair norms for conversation so that we can usually recover without social disaster. However, the fact that verbal labels can be used without grounding them in the body/world is a problem. It is frequently the case that speakers and hearers alike don't bother to connect words to reality, and this is a major source of misunderstanding, error, and nonsense. In our own case here and now, we are actually failing to understand each other fully because I can't show you actual videotapes of what I'm talking about. You are rightly skeptical because words alone aren't good enough evidence. And that is itself evidence.

Second, humans have a developmental trajectory and history, and memories of that history. We're a time-binding animal in Korzybski's terminology. I would suggest that an enculturated adult native speaker of a language will have what amount to "muscle memory" tics that can be invoked as needed to create referents. Mere memory of a motion or a perception is probably sufficient. "Oh, look, it's an invisible gesture!" is not at all convincing, I realize, so let me summarize several lines of evidence for it. Developmentally, there's quite a lot of research on language acquisition in infants and young children that suggests shared attention management -- through indexical pointing, and shared gaze, and physical coercion of the body, and noises that trigger attention shift -- is a criti...
What do you mean by "fully"? I believe I understand you well enough for all practical purposes. I don't agree with you, but agreement and understanding are two different things.

I'm not sure what you mean by "merely refer", but keep in mind that we humans are able to communicate concepts which have no physical analogues that would be immediately accessible to our senses. For example, we can talk about things like "O(N)", or "ribosome", or "a^n + b^n = c^n". We can also talk about entirely imaginary worlds, such as the world where Mario, the turtle-crushing plumber, lives. And we can do this without having any "physical context" for the interaction, too.

All that is beside the point, however. In the rest of your post, you bring up a lot of evidence in support of your model of human development. That's great, but your original claim was that any type of intelligence at all will require a physical body in order to develop; and nothing you've said so far is relevant to this claim. True, human intelligence is the only kind we know of so far, but then, at one point birds and insects were the only self-propelled flyers in existence -- and that's not the case anymore.

Furthermore, you also claimed that no simulation, no matter how realistic, will serve to replace the physical world for the purposes of human development, and I'm still not convinced that this is true, either. As I'd said before, we humans do not have perfect senses; if the physical coordinates of real objects were snapped to a 0.01mm grid, no human child would ever notice. And in fact, there are plenty of humans who grow up and develop language just fine without the ability to see colors, or to move some of their limbs in order to point at things.

Just to drive the point home: even if I granted all of your arguments regarding humans, you would still need to demonstrate that human intelligence is the only possible kind of intelligence; that growing up in a human body is the only possible way to develop...
Let me refer you to Computation and Human Experience, by Philip E. Agre, and to Understanding Computers and Cognition, by Terry Winograd and Fernando Flores.
Can you summarize the salient parts?
Said Achmiz:
But wait; whether all possible minds must rely on such a mechanism is the entire question at hand! Humans implement this feature in some particular way? Fine; but this thread started by discussing what AIs and robots must do to implement the same feature. If implementation-specific details in humans don't tell us anything interesting about implementation constraints in other minds, especially artificial minds which we are in theory free to place anywhere in mind design space, then the entire topic is almost completely irrelevant to an AI discussion (except possibly as an example of "well, here is one way you could do it").

Er, what? I thought I was a member of a European culture, but I don't think this is how I use the word "there". If I point in some direction while facing somewhere, and say "there", I mean... "in the direction I am pointing". The only situation when I'd use "there" in the way you describe is if I were describing some scenario involving myself located somewhere other than my current location, such that absolute directions in the story/scenario would not be the same as absolute directions in my current location.

If this is accurate, then why on earth would we map this word in this language to the English "there"? It clearly does not remotely resemble how we use the word "there", so this seems to be a case of poor translation rather than an example of cultural differences.

Yeah, actually, this research I was aware of. As I recall, the Native Americans in question had some difficulty understanding the Westerners' concepts of speaker-relative indexicals. But note: if we can have such different concepts of indexicality, despite sharing the same pointing digits and whatnot... it seems premature, at best, to suggest that said hardware plays such a key role in our concept formation, much less in the possibility of having such concepts at all. Ultimately, the interesting aspect of this entire discussion (imo, of course) is what these human-specific implementations...
Ok, but is this the correct conclusion? It's pretty obvious that a SHRDLU-style simulation is not sufficient to achieve natural language understanding, but can you generalize that to saying that no conceivable simulation is sufficient? As far as I can tell, you would make such a generalization because... While this is true, it is also true that our human senses cannot fully perceive the reality around us with infinite fidelity. A child who is still learning his native tongue can't tell a rock that is 5cm in diameter from a rock that's 5.000001cm in diameter. This would lead me to believe that your simulation does not need 7 significant figures of precision in order to produce a language-speaking mind. In fact, a colorblind child can't tell a red-colored ball from a green-colored ball, and yet colorblind adults can speak a variety of languages, so it's possible that your simulation could be monochrome and still achieve the desired result.
1Said Achmiz10y
I agree that Searle believes in magic [http://lesswrong.com/lw/hgl/the_flawed_turing_test_language_understanding_and/90rg], but "intentionality" is not magic (see: almost anything Dennett has written). This sounds interesting. Could you expand on this?
A list of references can be found in an earlier post in this thread [http://lesswrong.com/lw/i4z/welcome_to_less_wrong_6th_thread_july_2013/9j11].
0Swimmer963 (Miranda Dixon-Luinenburg) 10y
Welcome! Yeah. This, and the "existential angst" thing, seem to be common problems on LW, and I've never been sure why. I think that keeping yourself busy doing practical stuff prevents it from becoming an issue. That's fascinating! What research has been done on this? I would totally be interested in reading more about it.
Jurgen Streeck's book Gesturecraft: The manu-facture of meaning is a good summary of Streeck's cross-linguistic research on the interaction of gesture and speech in meaning creation. The book is pre-theoretical, for the most part, but Streeck does make an important claim: that the biological covariation in a speaker or hearer across the somatosensory modes of gesture, vision, audition, and speech does the work of abstraction -- which is an unsolved problem in my book. Streeck's claim happens to converge with Eric Kandel's hypothesis that abstraction happens when neurological activity covaries across different somatosensory modes. After all, the only things that CAN covary across, say, musical tone changes in the ear and dance moves in the arms, legs, trunk, and head, are abstract relations. Temporal synchronicity and sequence, say.

Another interesting book is Cognition in the Wild by Edwin Hutchins. Hutchins goes rather too far in the direction of externalizing cognition from the participants in the act of knowing, but he does make it clear that cultures build tools into the environment that offload thinking function and effort, to the general benefit of all concerned. Those tools get included by their users in the manufacture of online meaning, to the point that the online meaning can't be reconstructed from the words alone.

The whole field of conversation analysis goes into the micro-organization of interactive utterances from a linguistic point of view rather than a cognitive perspective. The focus is on the social and communicative functions of empirically attested language structures as demonstrated by the speakers themselves to one another. Anything written by John Heritage in that vein is worth reading, IMO.

EDIT: Revised, consolidated, and expanded bibliography on interactive construction of meaning:

LINGUISTICS
* Philosophy in the Flesh, by George Lakoff and Mark Johnson
* Women, Fire and Dangerous Things, by George Lakoff
* The Singing Neanderthals,
1Swimmer963 (Miranda Dixon-Luinenburg) 10y
Thanks! Neat.

Hi! I first saw LW as a node on a map of neoreactionary web sites. Which I guess is a pretty weird way to find it, since I'm not myself a neoreactionary and LW doesn't seem to fit the map. You have to stretch pretty far to connect some of those nodes.

Fortunately, I took a look at the Less Wrong community, and it's been really interesting to explore. I figured I should introduce myself, since I posted in another thread. I'm in my early 30's and I'm studying in the life sciences at the postgraduate level. I'm a Christian. I'm also a married father, and a veteran. So. Probably somewhat atypical (I peeked at the survey results.)

I'm excited by several of the big problems that seem to animate LW: minimizing cognitive bias day-to-day, optimizing philanthropy, and working through received ideology. I know zip about AI, but addressing existential risk is really interesting to me indirectly, as it relates to forecasting and mitigating mere catastrophes*, a challenge for wonks and technocrats and scientists (and everybody, of course). In fact, if anybody knows of LW'ers or other rationalists interested in policy problems of that nature I'd be super grateful for a pointer or a link.

In conclusion, I read ZeroHedge far too much, sometimes wear Vibrams, and am thrilled to meet all of you.

*is there a better word? My jargon is level 0.

That brings up some interesting questions. The last survey [http://lesswrong.com/lw/fp5/2012_survey_results/] placed self-identified neoreactionaries as a very small percentage of LW readership (scroll down to "Alternate Politics Question"). Progressivism appears to be the most popular political philosophy around here, with libertarianism a strong competitor; nothing else is in the running. That's not the first time I've heard LW referred to as a neoreactionary site, though; once might be coincidence, but twice needs explanation. With the survey in mind it's clearly not a matter of explicitly endorsed philosophy, so I'm left to assume that we're propagating ideas or cultural artifacts that're popular in neoreactionary circles. I'm not sure what those might be, though. It might just be our general skepticism of academically dominant narratives, but that seems like too glib an explanation to me.
Could this be explained by the base rates? Imagine a society with 10 neoreactionaries and 10000 liberals (or any other mainstream political group). Let's suppose that 5 of the neoreactionaries and 500 of the liberals read LessWrong. In this society, neoreactionaries would consider LessWrong one of "their" websites, because half of them are reading it. Yet the LessWrong survey would show that neoreactionaries are just a tiny minority of its readers.
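The arithmetic behind this base-rate point is easy to check; a quick sketch using the made-up figures from the comment above:

```python
# Toy numbers from the hypothetical above: 10 neoreactionaries and
# 10,000 liberals, with 5 and 500 of them respectively reading LW.
nrx_total, lib_total = 10, 10_000
nrx_readers, lib_readers = 5, 500

# Inside view of the small group: "half of us read LessWrong"
inside_view = nrx_readers / nrx_total

# Survey view: neoreactionaries as a share of all LW readers
survey_view = nrx_readers / (nrx_readers + lib_readers)

print(inside_view)            # 0.5
print(round(survey_view, 3))  # 0.01
```

Both perceptions are correct at once: the base rates differ by three orders of magnitude, so "half of our group reads it" and "they are ~1% of its readers" describe the same world.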
That's a heck of a coincidence, but it would explain a perception among neoreactionaries. It wouldn't, however, explain perceptions among (to use your example) liberals; unless the latter spend a lot of time reading blogs from the former, they're probably going to be using an outside view, which would give them the same ratios we see in the survey. Out in the wild, I've seen the characterization coming from both sides. Although the graph in the ancestor is from a neoreactionary blog.
While I'm not sure what "neoreactionary" refers to specifically, there are lots of reasons that certain types of liberals see LessWrong as reactionary:
* A somewhat strong libertarian component
* Belief in evolutionary psychology
* Anti-religious views (or generally the belief that beliefs can be right or wrong)
* LessWrong's more technical understanding of evidence is incompatible with standpoint theory and similar epistemic frameworks favored by some groups of liberals
* Those older discussions around PUA where it's presented in a pretty positive light
* Glorification of the Enlightenment
Viliam's explanation [http://lesswrong.com/lw/i4z/welcome_to_less_wrong_6th_thread_july_2013/9iyu] seems like a strong one to me, but doesn't explain the historical accident of (to use his made up numbers) half of neoreactionaries reading LW. I suspect that LW has a vibe of "actually think through everything, question your implicit assumptions, and follow logic to its conclusion." The neoreactionary believes that doing so ends up at the neoreactionary position- even if that is true for only 1% of people, that leads to a 10X higher concentration of neoreactionaries at LW. At the very least, it seems that LW has a strong tendency to destroy strong political leanings, and especially affection for popular government-supporting narratives.
The impression I got from looking at their graph is that a strong libertarian component is enough by itself. It wouldn't be the first time I've seen people consider libertarianism inherently very regressive. Edit: Originally I assumed that it was accusing Less Wrong of being neoreactionary, but looking a bit around the site it looks like they might be praising it.
I don't think that's a powerful enough explanation. Setting aside the differences between libertarianism and neoreaction, there are far more libertarian-leaning blogs than that graph can account for, and many of the missing ones are more popular than we.
I agree. It might be worth noting that in this thread [http://lesswrong.com/lw/kk/why_are_individual_iq_differences_ok/], the other thread where we just crossed paths, there are two different posters who blog at other nodes in that graph.

Hey everyone, I'm 26, and a PhD candidate in theoretical physics (four years in, maybe two left). I've been reading LessWrong for years on and off, but I put off participating for a long time, mainly because at first there was a lot of rationality-specific lingo I didn't understand, and I didn't want to waste anyone's time until I understood more of it.

I had always felt that things in life are just systems, and for most systems there are more and less efficient ways to do the same things. To me, that is what rationality is: first seeing the system for what it actually is, and then tweaking your actions to better align with the actual rules of the system. So I began looking to see what other people thought about rationality, and eventually ended up here. I lurked for years, and finally made the first step towards involvement during the LW study hall, which I participated in for several weeks as not_a_test5 during my working hours.

I was accepted last year into one of the CFAR workshops with an offer for about 50% reduction in fees, but unfortunately for a graduate student it was still difficult for me to justify the cost when I am on a fixed income for the next few years a... (read more)

CFAR is holding a workshop in New York [http://rationality.org/apply/] on November 1-4 (Friday through Monday).
Just wondering what your area of research is. Eliezer's point is that his QM sequence, culminating in proclaiming MWI the one true Bayesian interpretation, is an essential part of epistemic rationality (or something like that), and that physicists are irrational for ignoring this. Not surprisingly, trained physicists, including yours truly, tend to be highly skeptical of this sweeping assertion. So I wonder if you ever give any thought to Many Worlds or stick with the usual "shut up and calculate"?
My research is in quantum optics and information, more specifically macroscopic tests of Bell's inequality and applications to quantum cryptography through things like the Ekert protocol. I didn't realize that the quantum mechanics sequence here made such conclusions, thanks for pointing that out; maybe I'll check it out to see what he says. I've given some thought to many worlds, but not enough to be an expert, as my work doesn't necessitate it. From what I know, I'm not so convinced that many worlds is the correct interpretation; I think answers to the meaning of the wave function collapse will come more from decoherence mechanisms giving the appearance of a collapse.
Forgive my ignorance, but isn't that the official many-worlds position - that decoherence provides each "you" with the appearance of collapse?
Decoherence [http://en.wikipedia.org/wiki/Quantum_decoherence] is a measurable physical effect and is interpretation-agnostic. "Each you" only appears in the MWI ontology. pan did not state anything about there being more than one copy of the observer as a result of decoherence.
That makes sense; are you a physicist, too?
Trained, not practicing.

Hi, I'm Chris Barnett.

I encountered HPMOR when I met Christopher Olah at Chez JJ, Mountain View in April 2012 during a networking expedition to Silicon Valley. I read for approximately 3 days straight. HPMOR took the place of Ender's Game, which I'd only read a few weeks before, as my favourite fiction.

I joined the Melbourne LessWrong community in early 2013 and finished reading the sequences soon after. My favourite sequences are Epistemology, Quantum Physics and Words.

I started the first rationalist sharehouse in Melbourne with Brayden McLean, Thomas Eliot and Allison Rea in June 2013, completed the first Melbourne CFAR workshop in February 2014 and moved to Berkeley CA at 1pm on March 6th 2014 (via timezone teleportation :P).

I'm in the process of deciding where my time would best be spent to maximize the expected goodness of the future. I still have much confusion about how to read the output of my utility function for far future scenarios involving AI, brain upload, mind copying and consciousness-containing simulations, but I have a few heuristics such as less suffering is better, more exploration of possibility space is better, retention of human values in general (such as fre... (read more)

I'm Katy, I'm 26, I have a 7 month old baby (I feel that's important because it heavily affects my current ability to think/sleep/eat/do anything) and a husband and ... well, I never really thought about rationality until I came across Less Wrong.

I grew up always ... wanting more. I believed in god, for a while, until I realised I was just talking to myself. I suffered from bipolar disorder (mainly depressive) from my early teens until ... well, until I became pregnant, actually, when it mysteriously disappeared. I wanted to meet people who understood, who thought deeper, who questioned, who wondered. I came across Terry Pratchett, and I found his ideas within stories to be so wonderful, but met few people who had read (or enjoyed) his writing, and even fewer who ever found the concepts of "how" and "why" as intensely interesting as I did.

I studied a lot of different things at university - English, history, Antarctic Studies (I live in Australia so there was a course down in Tasmania), maths, physics, business ... but most of my learning has been alone, through books or the internet or waking up at 2am and thinking "I wonder why that happens" and then ... (read more)

1Swimmer963 (Miranda Dixon-Luinenburg) 10y
Welcome! You can probably contribute more than you realize.
Thanks! I hope so, in time - I just think it's wiser to watch and learn so that I can understand how LW works and what specific terms and concepts mean before jumping in with what I think I understand!

Hello! I'm Alex. I'm an undergrad currently studying economics and finance in the Bay Area. I think I first heard about Less Wrong on TVTropes, of all places, which lead me to HPMOR and then here. I bookmarked the site and forgot about it until pretty recently, when I came back and started reading articles and comments. I'm currently reading through the Major Sequences.

I'm very interested in economics and game theory, which definitely have a lot of overlap with rationality and behavioral science. Recently I've been learning computer programming as well. I guess I started to identify as a rationalist a few years ago, but there was never one set moment for me - it's something I think I've always valued. I love to learn and read, and I suppose ideas involving rationality and cognition were just something that stuck out to me as interesting.

Other than that, I'm a big fan of Major League Baseball, and lately I've been attempting to write and record music. I'm definitely glad I found LW and am looking forward to reading more and hopefully being an active community member.

Also, I'm noticing quite a few similarities between the commenting and profile system here and the system on Reddit... anyone know if that was intentional?

Hi Alex, I'm Alex! Less Wrong's code is based off of Reddit's system. Reddit made their code base open-source in June of 2009; Less Wrong then forked it. [http://blog.reddit.com/2009/03/lesswrong-coolest-use-of-reddit-source.html]

Hi there!

I found HPMoR via TVTropes and then found LessWrong via HPMoR. I decided to hang around after reading the explanation of Bayes's Theorem on Eliezer's personal site and finding it quite nice. Also, it matched up with how I thought of Bayes's theorem. You could say that I got attracted to LW by confirmation bias. :)

On a more useful note, I got interested in rationality/etc. through a somewhat convoluted path. I got introduced to Bayes's Theorem via Paul Graham when I built a website filter for a science fair project.

My reading material also contributed heavily. I've always been a fast and constant reader, so discovering the (FREE!) interlibrary loan offered by the University of California was a boon. Major nonfiction books that affected me were cognitive science stuff (especially Dan Ariely) and books on how things/processes/systems work. I distinctly recall re-re-re-checking out a book on landfills and waste management in elementary school because it was long enough to be somewhat thorough and had enough photos to be interesting. Major fiction influences include books by Thornton Burgess, the Redwall series, and David Brin. I got introduced to the concept of fanfiction by th... (read more)


Hello, Less Wrong users. My internet handle is Jen, and I'm here because the conversations are interesting and this feels like the natural next step to reading the sequences (still in progress, but I'm getting through them alright) and HPMOR (caught up).

I'm a seventeen-year-old high school senior in the Southern California area. My most notable interests are anime, economics, evolutionary psychology, math, airsoft (and real guns), and possibly something important that I'm forgetting but that should be mentioned. I grew up speaking Spanish and English, but the latter is the only one I'm fluent in. I'm currently in my fourth year of Japanese, and I know enough for conversation, but my Spanish is still better because of early acquisition and the like. One thing I should mention ahead of time is that my ADHD makes it difficult for me to focus on writing something for long periods of time, so I stop posts a lot to do something else and thus what appears below may seem somewhat fragmented.

I learned about this community through a friend on another website, and when I learned about HPMOR a couple of months ago, I read through it in about two weeks, which says something when you learn that ... (read more)

Hello and welcome to LessWrong! No need to apologize for your writing. Seems clear and succinct to me. Glad to see you've been enjoying the literature so far. Maybe you'll have a little of your own to contribute eventually. And yes, while Bayes' Theorem is used somewhat as a "gatekeeper," the Sequences are still highly relatable and not as intimidating as some people make them out to be. Since you live in Southern California, you're right near the heart of LW territory. The Bay Area is a particular hive of LW activity. Since you're still in high school and under 18, I don't know how your age affects your ability to participate, but in a year or so, you might consider checking out your local LessWrong meetup [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups] or a CFAR workshop [http://rationality.org/]. They're both good fun, great learning experiences, and fine ways to socialize with fellow rationalists. Glad to have another polyglot on board; our range of languages can sometimes be a little drab. Anyway, glad to have you join the conversation! Hope to see you around.

Hi LW, My name's Olivier, I'm a 37-year-old Canadian currently living in Ottawa. My background is varied: I have a BA in Communication Studies and an MPhil in Japanese Studies but also a DEC (some special Quebec degree equivalent to the last year of high school and first of university in the rest of Canada and the USA) in Natural Sciences. I've owned a business, worked in cultural media and am now a public servant working in immigration.

I've been interested in AI, existential risks, intelligence explosion et al. for a number of years, probably since finding Bostrom's paper on Simulated Reality.

I'm not 100% sure how I found LW, but it probably was while browsing for one of the topics above.

I've considered myself a rationalist for as long as I can remember, though I've long called it (rather naively?) "realist". Also being an existentialist, I try to bring these beliefs/convictions into practice in my work and how I raise my children (we'll see how that turns out!)

Through browsing here, I'm glad to find a community that appears to sit between rigid academia and sensationalist media.

Anyhow, I'll most likely lurk a lot more than I post. Having three young kids leaves me with little time, and a sleep-addled, rather incoherent brain.

Thanks for reading!

Hello and welcome to LessWrong! Wow! That's quite the background. Sounds like you enjoy dipping into every field. A useful virtue to have. You'll find plenty of people here whose interests extend to every field they can devour. I'm sure you'll have an interesting perspective to bring to the conversation! AI, existential risks, and intelligence explosion are definitely big topics around LW. We're something of a sister organization to MIRI [http://intelligence.org/], the Machine Intelligence Research Institute. Don't know how familiar you are with them, but if AI interests you, I'd highly suggest giving them a look-see. Quite a few active LWers have worked with or at MIRI before, so cross-pollination is frequent. Sounds like you've already started the work of trying to apply rational techniques in your life. Good on ya! Many of us here are always working to improve what some call "the martial arts of rationality" and make our own lives a little better planned, a little better executed. We'd love to hear some of your experiences. Especially with kids! Now that's a problem that never gets solved! We're certainly glad to have you, and if you feel like joining the conversation, hop right in! You might check out the latest Open Thread [http://lesswrong.com/r/discussion/lw/kq3/open_thread_1117_august_2014/] for some casual talk. It's a good place to start posting so you can get a feel for the community and its standards, and a great place to ask questions. Even though it's an open thread, the conversation is serious and can even get pretty heated. If you're interested in a little (lie: a LOT of) reading material, you can check out the Sequences [http://wiki.lesswrong.com/wiki/Sequences], the main collection of LW posts covering and analyzing some of the most important topics on LW. Whatever you do next, we're glad to have you!
Welcome! Just in case you haven't noticed yet, there's a Less Wrong meetup in Ottawa. Sequences rec seconded, they're what formed the initial kernel of the Less Wrong community. There are many of them, so take them at a comfortable pace.
Thanks guys! A meetup would be great - I'm new to the area and don't know too many people here. I'll try and slowly go through the sequences as recommended... Definitely looks interesting. I'm half-way through Bostrom's Superintelligence right now (like most of the planet, it would seem!), so I'll need more material soon! Rationality with kids... It works and it doesn't. A recent example: my son (5) is somehow afraid of zombies. I've been trying to have him look at this rationally: has he ever seen zombies in real life? Does he know anyone who has? Zombies often appear in stories with other mythical creatures: are those real? If they're unreal and only appear in his dreams, what could he do about it? Maybe tell himself zombies don't exist, so he must be dreaming? I am proud to say he has applied that last technique and told me that when they showed up, he knew they weren't real. Problem solved? Partly. I still need to go through that same reasoning every night...

Skyler here, a 21 year old technology student. Born and raised in the backwoods of Vermont to ahem philosophically diverse parents, was encouraged to read pretty much every philosophical book the library had except for Ayn Rand. So naturally I gravitated towards that as soon as I became enough of a teenager, but apparently completely missed the antagonism towards non-geniuses and couldn't for the life of me figure out why I seriously disliked every objectivist I met.

About two years ago, I had a professor who introduced me to HPMoR, which I enjoyed immensely. It took me around a month to move to the sequences. They seem to have had the curious property of seeming perfectly obvious, like someone simply expressing what I already knew just in better words, and while a lot of them do fall close in broad subject to things I'd written about before, the only use I'd had for bayesian statistics prior to reading them was spam filters. (And then the author's notes pointed me to Worm, which consumed a month or two.)

A couple of weeks ago however, I encountered a post on SlateStarCodex (which I'd been reading after stumbling upon it through unrelated browsing) about trans people, and somehow ar... (read more)

Also, I don't know if "Typical mind and gender identity" [http://slatestarcodex.com/2013/02/18/typical-mind-and-gender-identity/] is the blog post that you stumbled across, but I am very glad to have read it, and especially to have read many of the comments. I think I had run into related ideas before (thank you, Internet subcultures!), but that made the idea that gender identity has a strength as well as a direction much clearer.
A combination of that post and What universal human experiences are you missing without realizing it? [http://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/] actually. I would say that I am strongly typed as male, strong enough that occasionally I've been known to get annoyed at my body not being male enough. (Larger muscle groups, more body hair, darker beard, etc.) Probably influencing this are the facts that Skyler is the feminine form of my name, and that puberty was downright cruel to me. As you say, it's not common to think of being strongly or weakly identified with your own sex, rather than just a binary "fits/doesn't fit" check.
I'm afraid I haven't been active online recently, but if you live in an area with a regular in-person meetup, those can be seriously awesome. :)
Meatspace meetups sound like a good deal of fun, and possibly a faster route to being part of the community than commenting on articles that I think I have something to add. Downside is, I'm currently in Rochester New York, and unless I'm misusing the meetups page somehow, looks like the closest regular meetup is in Albany. That's a long bike ride. :) If anybody is in Rochester, by all means let me know!

Hi. I'm Baisius. I came here, like most, through HPMOR. I've read a lot of the sequences and they've helped me reanalyze the things I believe and why I believe them. I've been lurking here for awhile, but I've never really felt I had anything to add to the site, content wise. That's changed, however - I just launched a blog. The blog is generally LW themed, so I thought it appropriate. I wouldn't ordinarily advertise for it, but I would particularly like some help on one of the problems I explored in my first post. (see footnote 3)

One of the things that's bothered me about PredictionBook, and one of the reasons I don't use it much, is that its analysis seems a bit... lacking. In the post, I tried to come up with a rigorous way of comparing sets of predictions to see which are more accurate. I did this by looking at the distribution of residuals (outcome - predicted probability) for a set of predictions. The odd thing was that when I looked at the variance, the inverse of the variance showed some very odd patterns. It's all there in the post, but if anyone who knows a bit more math than I do could explain it, I'd really appreciate it.
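For what it's worth, one standard way to compare two sets of predictions by their residuals is the mean squared residual, better known as the Brier score. A minimal sketch, using simulated events and two hypothetical predictors rather than real PredictionBook data:

```python
import numpy as np

rng = np.random.default_rng(0)

def brier(preds, outcomes):
    """Mean squared residual (outcome - predicted probability); lower is better."""
    return np.mean((np.asarray(outcomes, dtype=float) - np.asarray(preds)) ** 2)

# Simulate 1000 events whose true probabilities we know.
true_p = rng.uniform(0, 1, 1000)
outcomes = rng.uniform(0, 1, 1000) < true_p

# Two hypothetical predictors: one perfectly calibrated, one with added noise.
calibrated = true_p
noisy = np.clip(true_p + rng.normal(0, 0.2, 1000), 0.01, 0.99)

print(brier(calibrated, outcomes))
print(brier(noisy, outcomes))  # larger: the noisy predictor scores worse
```

Unlike looking at the variance of the residuals alone, the mean squared residual penalizes both miscalibration and noise in one number, which makes cross-predictor comparisons straightforward.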

Welcome! For assessing prediction accuracy, are you familiar with scoring rules [http://en.wikipedia.org/wiki/Scoring_rule]?
I wasn't, thanks. I'll try to read that sometime when I get a chance. At first glance though, I'm unsure why you would want it to be logarithmic. I thought about doing it that way too, but then you lose the meaning associated with average error, which I think is undesirable.
So, let's say you want a scoring rule with two properties. You want it to be local: that is to say, all that matters is the probability you assigned to the actual outcome. This is in contrast to rules like the quadratic scoring rule, where your score is different depending on how the outcomes that didn't happen are grouped. Based on this assumption, I'm going to write the scoring rule as S(p), where S(p) is the score you get when you assign a probability p to the true outcome. You also want it to play nicely with combining separate events. That is to say, if you estimate 10% of it being cloudy when it actually is, and 10% of it being warm outside when it actually is, you want your score to be the same as if you had assigned 1% to the correct proposition that it is warm and cloudy outside. More succinctly: S(p)+S(q)=S(pq). If you add in the additional caveat that some scores are not 0, then you are forced by the above statement to a logarithmic scoring rule. Interestingly, you don't need to include the requirement that it be a proper scoring rule, although the logarithmic scoring rule is proper.
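The two properties above can be checked numerically; a quick sketch (locality is built in by writing the score as a function of p alone, and the log rule is used as the claimed solution):

```python
import numpy as np

def log_score(p):
    """Logarithmic scoring rule: the score for assigning probability p
    to the outcome that actually occurred."""
    return np.log(p)

# Additivity over independent events: S(p) + S(q) == S(p*q).
# E.g. 10% cloudy and 10% warm combine to 1% warm-and-cloudy.
p, q = 0.1, 0.1
assert np.isclose(log_score(p) + log_score(q), log_score(p * q))

# Bonus: the rule is also proper. If an event has true probability t,
# the expected score t*S(r) + (1-t)*S(1-r) is maximized by reporting r = t.
t = 0.7
def expected_score(r):
    return t * log_score(r) + (1 - t) * log_score(1 - r)

rs = np.linspace(0.01, 0.99, 99)
best = rs[np.argmax([expected_score(r) for r in rs])]
print(best)  # ~0.7: honest reporting is optimal
```

The additivity check is exactly the functional equation S(p)+S(q)=S(pq), whose only non-trivial continuous solutions are S(p) = c·log(p).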

I'm Anthony. I found out about Less Wrong from Overcoming Bias, and I found out about Overcoming Bias about 2 years ago when Abnormal Returns, which is like a sampler of all kinds of posts on the econ-blogsphere, linked to Overcoming Bias.

I had previously decided that the singularitarians were crazily optimistic. I thought they were all about the future being unimaginable goodness all the time. I guess that was my interpretation of Kurzweil. I thought they were unrealistic about the nature of reality. I don't believe that the singularity will hit in a few decades; at least, I don't understand the arguments well enough to think that yet, but it is an interesting topic.

I used to be part of an Objectivist campus club at the University of Colorado Denver. And then an Objectivist magazine promoted the idea of nuking Afghanistan in response to 9/11. And also I discovered Michael Shermer's "Why People Believe Weird Things", and especially the chapter calling out Objectivism as a cult. I fought against the idea of Objectivism being a cult for a long time, but then I started to be convinced, and I eventually abandoned Objectivism completely.

But reading HPMOR, the sequences and some of the oth... (read more)

Good for you. Checking multiple sources is very rational :) If you get stuck, the Freenode ##physics IRC channel often has physics undergrad and grad students around to help with the technical stuff, though discussing interpretations is generally not encouraged.
I will definitely check that out. Thanks. My other thought is to also get a linear algebra book that covers infinite dimensional vectors.
This is useful for, say, the hydrogen atom or the simple harmonic oscillator, but you can learn a lot just from the spin 1/2 quantum mechanics, which is quite finite-dimensional. It is sufficient for all of quantum information, EPR, Bell inequalities, etc. If you are interested in "quantum epistemology", Scott Aaronson's Quantum Computing since Democritus [http://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565] is an excellent read and would not overtax your math skills.
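To illustrate the point that spin-1/2 (two-dimensional) quantum mechanics already covers Bell-inequality physics, here is a small sketch computing the CHSH value for a singlet state. The measurement angles are the standard optimal choices, not anything from the comment above:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the computational basis
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(ta, tb):
    """E(a,b) = <psi| A(ta) (x) B(tb) |psi>; equals -cos(ta - tb) for the singlet."""
    op = np.kron(spin_op(ta), spin_op(tb))
    return np.real(psi.conj() @ op @ psi)

a, ap = 0.0, np.pi / 2           # Alice's two settings
b, bp = np.pi / 4, -np.pi / 4    # Bob's two settings

S = correlation(a, b) + correlation(ap, b) + correlation(a, bp) - correlation(ap, bp)
print(abs(S))  # ~2.828 = 2*sqrt(2), exceeding the classical CHSH bound of 2
```

Everything here lives in 2- and 4-dimensional vector spaces, which is the point: no infinite-dimensional machinery is needed to see quantum correlations violate the classical bound.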

I'm Tom, a 23-year-old uni dropout (existential apathy is a killer); I majored in Bioscience, for what it's worth. Saw the name of this site while browsing tvtropes and was instantly intrigued, as "less wrong" has always been something of a mantra for me. I lurked for a while and sampled the sequences, and was pleased to note that many of the points raised were ideas that had already occurred to me.

It's good to find a community devoted to reason, one that actually seems to think where most people are content not to. I'm looking forward to suckling off the collective wisdom of this community, and hopefully making a valuable contribution or two of my own.

Hello and welcome to LessWrong! We have something of a cross-pollination with tvtropes, as well as a few other sites; the similar "archive diving" structures probably don't hurt. Glad you decided to join in! The site always needs some bioscience to collaborate with our large computer science population. Looking forward to seeing your contributions.

Hello, I'm Ary. 24 going on 25 mostly agender female-presenting asexual. I've been doing a lot of self-improvement and 'soul'-searching over the past few years and finally stumbled across HPMOR while burning my way through HP fanfiction. From there, it was only a matter of looking at the author page for it to find links here to LessWrong.com. For the last three weeks I've been reading my way through the Sequences, starting with the Core Sequences.

Late last week I managed to start on the How to Actually Change Your Mind sequence, which is proving to be an interesting and challenging read. Today I reached the Belief in Self-Deception post and started to feel my mind beginning to really spin. Having continued past that, still thinking, it seems that for far too long I've been professing my beliefs without believing. It may take a bit before I manage to shuck the habits brought on by that line of thinking, but that's the point of reading these - breaking bad mental habits and learning to think better and stronger.

A lot of that desire is brought on from having read and re-read (multiple times) HPMOR and developing a need to be more like Harry. Reading the Sequences is also helping to reg... (read more)

Hi. I've actually been lurking here for a couple months now, but I've recently started actually making comments, so I figure this is probably the right time to introduce myself. (Also, I only discovered this post this morning.)

Since I'm not great at expressing my thoughts in an aesthetically-pleasing fashion without the use of lists, I suppose from here I'll just go down the list of suggested topics of introduction from the beginning of the post.

Who I am: The name I generally go by online is Mister Tulip. I'm sixteen years old, but getting older at a rate of approximately one year per year. Thanks to the conveniences of homeschooling, I have far more free time than seems to be typical for my age-range, which I expend on a large-feeling collection of time-sinks which isn't actually particularly large whenever I write it down.

What I'm doing: Receiving a general education from my father, attending an introductory psychology course at the nearest community college once per week, and spending my exorbitant amounts of free time on anything which interests me enough to occupy it. Among my time-sinks are keeping track of two large fandoms (My Little Pony: Friendship is Magic and Homestuck)... (read more)

I'm glad Luminosity was a stepping-stone on your meander here :)

Hi! I've been lurking around on the blog, and I look forward to engaging actively from now on. Generally, I'm strongly interested in AI research, rationality in general, Bayesian statistics, and decision problems. I hope that I will keep on learning a lot and will also contribute useful insights, as what the people in this community are doing is very valuable! So, see you on the "battlefield". Hi to everyone!

Hi, I've been lurking for a while. I haven't yet read most of the sequences, since I find the style not so much to my liking. I prefer textbooks, so I'll probably go out and get the textbooks on this list or this one instead. I read somewhere on this site that Thinking and Deciding is pretty much the sequences in book form. I did read HP:MOR though - brilliant!

In the meantime, I've read a decent amount on LW-related subjects, including the following books on rationality:

  • Thinking, Fast and Slow by Daniel Kahneman
  • Everything Is Obvious Once You Know the Answer by Duncan Watts
  • The Righteous Mind by Jonathan Haidt
  • The Signal and the Noise by Nate Silver
  • How to Lie With Statistics by Darrell Huff
  • Thinking Statistically by Uri Bram

Another interest is futurism, on which I've read the following:

  • The Singularity Is Near by Ray Kurzweil
  • Abundance by Peter Diamandis
  • The Future by Al Gore
  • The New Digital Age by Eric Schmidt and Jared Cohen
  • Big Data by Viktor Mayer-Schönberger
  • Approaching the Future by Ben Hammersley
  • Radical Abundance by Eric Drexler

I'm also very interested in positive psychology and behavioral change. Good books I've read on this include:

  • Flourish by Martin Seligman
  • Ha
... (read more)
So, my review [http://lesswrong.com/lw/cb1/thinking_and_deciding_a_chapter_by_chapter_review/] of Thinking and Deciding claims that T&D is a good introduction to rationality. One of the comments there is a link to Eliezer's comment [http://lesswrong.com/lw/12d/recommended_reading_for_new_rationalists/wyb?context=2#comments] that Good and Real is basically the Sequences in book form. The two are about different topics - T&D is about the meat of rationality (what is thinking, biases, hypothesis generation and testing, values, decision-making under certainty and uncertainty), whereas G&R is about the philosophy of reductionism, focusing on various paradoxes, like Newcomb's Problem. For reasons that I have difficulty articulating, I found G&R painful to read, but I appear to be atypical in that reaction. (I liked the Sequences, and so if you disliked the Sequences my pain might be a recommendation for G&R!) A primary value of the Sequences, in my opinion, is the resulting philosophical foundation - many people come away from the Sequences with the feeling that their views haven't changed significantly, but that they have been clarified significantly - which I don't think one gets from T&D (whereas I do think that T&D is much more effective at training executive-nature / facility with decision-making than the Sequences).
Thanks. I already had Good & Real on my reading list, but based on this I think I'll bump it up to higher priority.
On second thought, I might as well post my career deliberations here, and if it generates a lot of comments (I hope) then I'll move it to a new post as recommended. Not sure it's correct protocol to reply to my own comment, but I'll do it anyway. So here are my career thoughts:

As I said, I'm currently working for a small company in a business development capacity. It's not really the type of work I enjoy, so I'm considering going back to school to follow my dream of becoming a researcher. However, I'm very concerned about the time commitment involved. My current work allows me lots of wonderful free time to spend on family, friends, hobbies, and leisure activities. This kind of lifestyle is very important to me, and if becoming a researcher means giving it up then I'd rather stay where I am or look for a secondary alternative.

Anything more than a standard 40-hour week is pretty much off-limits to me. (OK, maybe 45 hours if absolutely necessary, but definitely not more than that.) That includes all studying time, all online or offline networking time, and all other time related to study or work. On the other hand, I'm willing to work hard, and my current financial situation allows me to work for relatively low pay (30-40K, maybe even a drop less). Also, I'm willing to push off earning any money at all while I go back to school to earn my degree. I'm also willing to take out loans if necessary - and it'll probably be necessary, since I don't have more than a couple of introductory college classes under my belt.

The standard research career seems to involve getting a PhD and then moving into an academic position or joining an independent research institute. I've been told contradictory things about how much time commitment is required for academic jobs of this type. The general consensus on the internet seems to be that a research career is pretty much all-consuming and the work will take up at least 50-60 hours per week. Some of the academics I know concurred with

Hello. My name's Graedon. I'm 16, and I've got absolutely no idea of what I'm doing.

First off, I probably ended up on this site the same way a lot of people did: through MoR. I started reading it for fun, but soon the cool sciency stuff started to appeal more than the cool magicy stuff. I followed the link to LessWrong.com, and here I am.


That's pretty much it.

Hi, everyone. I'm Lawrence and I'm a college freshman. I like to read, program, and do math in my spare time.

I grew up in the Bay Area with science and religion as my two ideals. My family was religious and went to church every Sunday, but at the same time they put a strong emphasis on learning science - by the time I was in fourth grade, the number of science books my parents had bought me (and that I had read) filled an entire bookshelf. I loved religion because I felt like it gave meaning to the world, teaching us to be kind and to respect one another. But, perhaps paradoxically, that made me love science as well, for science gave us medicine, technologies, and other ways to help the poor and heal the sick, things that God commanded us to do.

My faith in religion took a hit in 5th grade, when a close family member was diagnosed with cancer. Neither the prayers of our Christian friends nor the medicine of her doctors helped. We moved to China to pursue alternate treatment, but in the end nothing could save her, and she passed away. I pleaded with God to bring her back, to enact some miracle. No miracles happened. Some of our Christian friends told us that it was all God's plan, and that... (read more)

Hello and welcome to LessWrong! Thank you for sharing your story. Your passion is quite clear and I'm glad you've decided to join in the conversation. Your drive is impressive. And infectious. It's the sort of energy we (or, at least, I) feed off of around here. You will definitely find people who share your need for desperate action. I'm curious what your current plans after college are. Do you have an idea what it is you want to do with your skills and knowledge? You seem to already have the "get rich" thing sewn up so that it's no longer your main goal. Have you looked into some of the sister organizations LW associates with? It sounds like you're the type who likes to get involved, so a CFAR [http://rationality.org/] workshop or MIRI [http://intelligence.org/] internship might be something you would get a lot out of. There are also LessWrong Meetups [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups], which are great for meeting other LWers, having some good discussion, and gaining a little fun on the side. Glad to have you join the conversation! Hope to see you around.
Thanks! Unfortunately I'm not sure if I'm good enough at math for a MIRI internship. Also, I don't think there are any CFAR workshops in my area, especially any during break. :P I'm not sure about what I'll do after college - I've looked through most of the 80,000 Hours career options, but still can't decide between earning to give via quantitative trading/consulting/investment banking, tech entrepreneurship, and research.

Hi, I'm Ian. I am a 32 year old computer programmer from Massachusetts. My main interest (in computer science) is in the realm of computational creativity, but it is by no means my only interest. For half my life, I've been coming up with my own sets of ideas - way back when it was on Usenet - some ideas better than others. Regardless of the eventual proven validity of my ideas, I find coming up with original ideas one of the primary motivators in my life. It is an exercise that allows me to continuously uncover beliefs and feelings and uncharted territory that wouldn't be possible for me to explore otherwise. Also, I find it remarkably difficult to find people to share and dissect my ideas with. Generally, people either tell me that I'm smart (I'm not particularly smart) or weird (I'm not particularly weird). In either case I find most people also don't want to continue talking about why wasabi and thunder are the same thing...or the relationship between creativity, intelligence, primes and small worlds...or why there is no such thing as a question...or why I'm a non-practicing atheist at the moment. What I hope to get out of this community is disagreement, agreement, new ideas, a reshaping of old ideas, friends, and above all, to know that other people in this world understand my ideas (even if they disagree with them). I hope to give this community some ideas they have never thought of.

Hello and welcome to LessWrong! I admire your reasons for joining. It is easy to find a group or circle that does not challenge you and then rest on your laurels. Seeking out disagreement and criticism is a hard first step for a lot of people. But don't worry... you will certainly find both here! Not that that is a bad thing. I see you've already added to the Discussion [http://lesswrong.com/r/discussion/new/] forum. Good on you for diving in and starting some new conversation. If you have some ideas you want to share and get critiqued but feel they are not fully formed enough for a post of their own, try the Open Thread [http://lesswrong.com/r/discussion/lw/kkl/open_thread_july_2127_2014/]. Even Open Thread conversations can be quite engaging and constructive (and heated! Don't forget heated). Also, I don't know if you've read any of the LW literature people tend to reference, but, given your interest in refining your ideas, this [http://wiki.lesswrong.com/wiki/Reductionism_(sequence)] set of posts might interest you.
Thanks for the guidance. It can be intimidating exposing your ideas to a new set of people. I've been reading things here on LW off and on for roughly a year. There is quite a bit of jargon on this site, and I've been reading through as many sequences as I have time for to try and fill myself in. I find that even concepts I'm familiar with tend to have sub-context here that doesn't quite allow me to fully understand some of the ideas being discussed. I have a fairly good grasp of map versus territory, for example, but my understanding comes by way of The Precession of Simulacra by Jean Baudrillard, where he argues that the territory no longer exists and only the map is real. That is quite different from the arguments I've seen here postulating that we can somehow gain access to the true underlying territory. Regardless, I expect that with enough reading, I'll be able to contribute. I was a chef for 17 years, so heated debates don't intimidate me; I have a thick skin. I ask that people understand the ideas I have - not agree with me. I will give others the same courtesy. Again, thanks for the welcome. I'll check out the links. Cheers.

(Aside: I'm trying to become more concise and articulate in my writing, so I welcome anyone and everyone to critique my postings. I know this post is long-winded when compared to its neighbors. I left it long since it took me a number of words to relate where I came from, which I imagine to be more interesting than the TL;DR version, which goes something like, "My name is Ben. I used to be a devout Christian, then I was drug-addled and irrational in myriad ways. Now, I know some mathematics, but not a ton, and I'd like to learn more of the math I like and continue working on thinking less irrationally." )

My name is Ben. I'm 23 years old, and I live in the southeastern USA. I moved back here to attend university after spending a few years working on the west coast. Perhaps you've had a friend who had another friend, and this second friend turned your friend on to the idea that some of this or that would teach them something about this. I've been this person, and my road to rationality began with going a little loopy after a little too much of this, which came out of this.

I grew up in Mississippi. I was nursed on Jesus, Calvin, hellfire, brimstone, and Coca Cola. ... (read more)

Interesting stuff. FYI, you're not the only LWer I know of who has experienced apparently permanent mental problems as a result of drug use. And reading drug-related subreddits, I've noticed that everyone seems really stupid. So yeah, everything in moderation.

Hello! I'm a 19 year old woman in Washington state, studying microbiology as an undergraduate. I was introduced to the "scene" when a friend recommended HPMOR in high school. I was raised in an atheist household with a fairly strong value on science, so it was novel if not mind-blowing- but still encouraged me to think about the way I think, read some of the Sequences, and get into Sam Harris and Carl Sagan. At college I began reading the rest of Less Wrong, and some related sites, and began identifying as a rationalist.

(Well, let's be honest here- I also moved from a math-and-science-oriented high school to a very liberal college, where my social identity changed from "artsy and literary" to "science-y and analytic". I would be lying if I said that trying to live up to it wasn't a compelling factor!)

LW and 80,000 hours also motivated me to change several of my long-held beliefs (at the moment, I can think of immortality and, well, er, most areas of rationality, which I guess is expected), and re-evaluate my career plans- changing my intended focus from environmental research or emerging diseases, to neglected tropical diseases (if this happens to be anyone's area of expertise, I'd be interested to hear!)

Anyways, I've been integrating the website into my head for some time now, and, equipped with the moniker of my favorite family of wasps, think it's about time to (begin, very slowly, to) integrate my head into the website. Nice to be here!

Welcome to LW!

Hi folks

I am Tom. Allow me to introduce myself, my perception of rationality, and my goals as a rationalist. I hope what follows is not too long and boring.

I am a physicist, currently a post-doc in Texas, working on x-ray imaging. I have been interested in science for longer than I have known that 'science' is a word. I went for physics because, well, everything is physics, but I sometimes marvel that I didn't go for biology, because I have always felt that evolution by natural selection is more beautiful than any theory of 'physics' (of course, really it is a theory of physics, but not 'nominal physics').

Obviously, the absolute queen of theories is probability theory, since it is the technology that gives us all the other theories.

A few years ago, during my PhD work, I listened to a man called Ben Goldacre on BBC radio, and as a result stumbled onto several useful things. Firstly, by googling his name afterwards, I discovered that there are things called science blogs (!) and something called a 'skeptic's community.' I became hooked.

The next thing I learned from Goldacre’s blog was that I had been shockingly badly educated in statistics. I realized for example, that science and ... (read more)

Welcome! Where are you in Texas?
Thanks for the welcome. I'm in Houston.

Hello, my name is Jonas and I'm currently working as a software engineer.

I happened to learn about biases in decision analysis class at university and was hooked instantly. It was only later that I learned about LW. I'm very interested in not just learning about rationality on a theoretical level but actually living it out to the fullest.

I'm very thankful to LW for improving my life so far, but I guess the best is yet to come.

Hello, I stumbled upon LW a few months ago. Some of the stuff here I find extremely interesting. Really like the quality of the articles and discussions here. I studied math and engineering, currently working as a s/w developer, also very much interested in economics and game theory.



Hi. I have a pseudonymous account that I use most of the time, but I want to post something to Discussion in my real name. Can I please get 2 karma so I can post that? Thanks! I'll delete this post afterwards.

[This comment is no longer endorsed by its author]

Hello all! My name is Will. I'm 21 and currently live in upstate New York. A bit about myself:

At an early age, I remember I was thinking in my head, and I caught myself in a lie. I already knew that it was wrong to lie to other people, though I did it sometimes, but I could not think of any good reason to lie to myself. It was some time before I really started to apply this idea.

My parents divorced when I was ten, and my mother discovered that she had a brain tumor around the same time. In the face of this uncertainty and unpleasantness, my mother turned ... (read more)

I'm Griffin. I am 17 and sending in my first application to college today! (relevance? maybe) I suppose one reason I am signing up for an account now is that all these wacky essays have made me want to write more about myself.

Things that led me to Less Wrong: well I guess when I first found my way here it was to the wiki article on some religious topic and I was like, "hmm a hate website. How curious." because I had that thing where I knew hate websites existed but didn't really connect it to reality. In any case, I closed the page and went on doi... (read more)

I'm Thomas, 23 years old, from Germany. I study physics, but starting this semester I have shifted my focus to Machine Learning, mostly due to the influence of Less Wrong.

Here are a few things about my philosophical and scientific journey if anyone's interested.

I grew up with mildly religious parents, never being really religious myself. At about 12 I came into contact with the concept of atheism and immediately realized that's what I was. Before, I hadn't really thought about it but it was clear to me then. For a long time I felt a bit ashamed of not believ... (read more)

Welcome! I am also basically a newcomer here. I'd suggest not waiting to read all the sequences before you contribute. The worst that happens is that someone corrects you, right? I've had a few interesting discussions and I'm still not quite done reading the main line. Was your dissolving of free will different from the one presented in the Quantum Physics sequence?
Hi! I can't seem to find a discussion of free will in the Quantum Physics sequence. I only know this: http://lesswrong.com/lw/of/dissolving_the_question/ [http://lesswrong.com/lw/of/dissolving_the_question/] (which demonstrates the method I was talking about).
See this wiki page for links to discussion of Free Will in the sequences: http://wiki.lesswrong.com/wiki/Free_will [http://wiki.lesswrong.com/wiki/Free_will]

Hello, I am a human who goes by Auroch, VAuroch, or some variation thereon on most internet sites. I have what I consider a healthy degree of respect for how easy it is to attach an online name to a meatspace human, so I prefer to avoid providing information about myself. (Some might consider this paranoia. I would hope that such people are in shorter supply here.) I will say that I am a recent college graduate in the Pacific Northwest, who majored in Math/Theoretical Computer Science.

I have found LessWrong repeatedly, and have for most of its history occa... (read more)

I can't remember seeing any consequentialist argument for using one's own real name on the Internet; all the ones I've seen are about virtue ethics, amounting to "if you use a pseudonym you're a [low-status person]".
There are situations where it's useful to use a real name; it's come up for me in directly programming-related projects, where having my name attached to commits is useful for resume purposes, and having the same name attached to the commits as the comments gets one taken seriously. And if I ever am getting a game published, naturally I'll want to promote it using my real name on BoardGameGeek, etc. But even then, separating the various personae into different identity chains is useful.
There you go. Perfectly consequentialist. I follow much the same practice as VAuroch, of course, but this argument sprung into my head fully-formed on reading your comment.

My name is Alexander Baruta. People call me confident, knowledgeable, and confident. The truth behind those statements is that I'm inherently none of those. I hate stepping outside my comfort zone; as some of my friends would say, "I hate it with a fiery burning passion to rival the sun". As a consequence I read a ton of books. I also have only had one good ELA teacher: my summer school teacher for ELA 30-1 (that's grade 12 English for those of you outside Canada). I'm in summer school not because I failed the course but because I want to get ahea... (read more)

Welcome! You should consider breaking this post up into paragraphs. There's just too much unstructured text for me to want to read more than a few lines.
Right, Paragraphs. Knew I was forgetting something!
Pratchett and Gaiman co-authored a book called 'Good Omens'. I highly recommend it.
I've already read it thanks. To anyone else reading this 'Good Omens' is thoroughly funny and a all around good read.
Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's selection bias inherent to people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet... I'm interested to know, did you have any particular goal in mind posting this, or just making yourself generally known? If you need help or advice on any subject, be specific about it and I will be happy to assist (as will many others I'm sure).
Actually I had multiple reasons for posting this. Firstly it's to make myself known to the community. As an ulterior motive I have trouble with being open with others and connecting (although I suspect that this is a common problem) and I want to get over my fear of such.

Hi all, I’m a social entrepreneur, professor, and aspiring rationalist. My project is Intentional Insights. This is a new nonprofit I co-founded with my wife and other fellow aspiring rationalists in the Columbus, OH Less Wrong meetup. The nonprofit emerged from our passion to promote rationality among the broad masses. We use social influence techniques, create stories, and speak to emotions. We orient toward creating engaging videos, blogs, social media, and other content that an aspiring rationalist like yourself can share with friends and family member... (read more)

Hiya I'm Oliver, I'm 21 and I'm here because I want to be stronger.

I've got a degree in Engineering, £600 and a slowly breaking laptop which I would send off to get fixed if I could do without the internet for the time that would take. I am, in essence, a shattered mass of broken stereotypes. I am a breakdancing, engineering, rock climbing, food roasting, anime watching, arrow-shooting intelligent fool from near London, UK. At the minute I'm living near Bath and I'm trying to force myself to look for engineering work: hopefully biotech, probably something ... (read more)

Hello and welcome to LessWrong! Bravo! It all starts with finding the crack in the lens. Now comes the hard, fun, terrible, numinous part of living better than before. Since you've already bootstrapped yourself through the sequences, you might want to consider branching out into real space. I say this because it sounds like you're looking for the practical, real, hands-on experience. A LessWrong meetup [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups#London.2C_UK], such as the one held near London, might be the very thing you need. A group of like-minded people, engaging in rationality exercises, swapping notes, and basically helping each other get a little bit stronger and feel a little bit better. You might also be interested in the Rationality Diary [http://lesswrong.com/r/discussion/lw/l4c/group_rationality_diary_october_1631/]. It's a good place for starting out tallying yourself, making a record of the real behaviors you've done, the real plans you've made, the real successes and failures you've had. It's a useful tool for keeping yourself honest and seeing how far you've come. And, of course, if you'd just like to participate in the discussion... well, there's certainly a place [http://lesswrong.com/r/discussion/lw/l3x/open_thread_oct_13_oct_19_2014/] for that too. Glad to have you join the conversation! Hope to see you around soon.
Thanks for the welcome and the useful links, you're right about the tendency towards meetups which is why I'm going to the one in Bath [http://lesswrong.com/lw/l3q/meetup_bath_introduction_and_predictionbook/] tomorrow. The rationality diary seems like it should be a useful and interesting addition to my attempts at self improvement, so cheers. Edit: That open thread is fascinating. I've never seen a community with such a high standard of discussion in real time. Even bestof archived depthhub threads don't touch it. I am going to have to think very carefully about any comments I might make to avoid accidentally eternal septembering this place. I can see I will also have to limit how much time I spend in such threads. I can fully imagine spending an inordinate amount of time on them learning fascinating things.
Hey Oliver, The Bristol EA society [https://www.facebook.com/EffectiveAltruismBristol] meets pretty regularly (weekly/fortnightly), which might also be of interest if you are in the Bristol/Bath area. Welcome and I'll see you at the Bath meetup!
See you on Tuesday.

Hi, my name is Joe. I live in North Jersey. I was born into a very religious Orthodox Jewish family. I only recently realized how badly I was doublethinking.

I started with HPMOR (as, it seems, do most people) and found my way into the Sequences. I read them all on OB, and was amazed at how eloquently someone else could voice what seems to be my thoughts. It laid out bare the things I had been struggling with.

Then I found LW and was mostly just lurking for a while. I only made an account when I saw this post and realized how badly I wanted to upvote som... (read more)

Hey everyone,

This is a new account for an old user. I've got a couple of substantial posts waiting in the wings and wanted to move to an account with a different username from the one I first signed up with years ago. (Giving up on a mere 62 karma.)

I'm planning a lengthy review of self-deception used for instrumental ends and a look into motivators vs. reason, by which I mean something like social approval is a motivator for donating, but helping people is the reason.

Those, and I need to post about a Less Wrong Australia Mega-Meetup which has been planned.

So pretty please, could I get the couple of karma points needed to post again?

And we're in action! http://lesswrong.com/lw/k23/meetup_lw_australia_megameetup/ [http://lesswrong.com/lw/k23/meetup_lw_australia_megameetup/]

Hi! My name is Daniel. I'm an undergraduate student, currently studying physics and mathematics at the Australian National University. I discovered Less Wrong about two years ago, and I've been regularly lurking ever since. I'm starting a meetup in Canberra - see http://lesswrong.com/meetups/wc. I hope that I see some of you there!

Hi LWers!

I'm a 37 year old male. I work from home as an engineer, primarily focusing on FPGA digital logic work and related C++, with a smattering of other things. I'm a father to two young children, and I live with my little family on a small farm in central Delaware. I've always been a cerebral sort of guy.

I can't remember exactly how I came to LW - I may have heard it mentioned in a YouTube video - but finding it felt somehow like coming home. The core sequences have become some of my favorite reading material. LW was my first exposure to many of t... (read more)

Hi! My name is Tobias. I'm from Munich in Germany, male, 24 years old, and currently doing a Master's degree in physics at LMU Munich. I'm doing okay to good in my studies, but I still struggle with procrastination in particular (though things have gotten better) and low motivation. In particular, while I like physics in the abstract, I don't particularly enjoy the reality of studying physics at a university. Most importantly, I'm totally unambitious, and not satisfied with that. I'll be finished with my studies in ~1.5 years, so I'm currently trying to p... (read more)

Well, various incarnations of Many Worlds/Mathematical Universe/String theory landscapes/Boltzmann brains are popular both here and in many physics circles. While I don't put much stock in any of those, there are surely some tenured profs in physics departments around the world who would take a sucker grad student willing to spend 4-6 years on something like that.

I'm NIH, I'm 17, and I discovered this site through HPMOR in late 2010.

At that time I read "The Problem With Too Many Rational Memes", closed the tab and forgot about it for two years. In spring 2012, I discovered that there was a new arc for HPMOR, read it and decided that some of EY's other works might be worth reading. Over the summer I began to lurk heavily, culminating in me reading the "Blog posts 2006-2010" EPUB from start to finish in November, which led to me registering.

I'd like to make a prediction of High (80%) confidence th... (read more)

I'm Alex, an American male doing undergraduate studies in Physics and Computer Science. Two years ago, I stumbled upon HPMoR, and made my way to this site shortly after. I've been lurking since, and in that time, I've seen top-level posts that have convinced me to abandon my half-formed theism, try out the pomodoro method (results still pending), and police myself for biases. I'm interested in lifehacking (though I acknowledge that I have a great deal of inertia in that area), and will be trying Soylent at some point in the next few months.

Hey there LW!

At least 6 months ago, I stumbled upon a PDF of the sequences (or at least Map and Territory) while randomly browsing a website hosting various PDF ebooks. I read "The Simple Truth" and "What do we mean by Rationality?", but somehow lost the link to the file at some stage. I recalled the name of the website it mentioned (obviously LessWrong) from somewhere, and started trying to find it. After not too long, I came to Methods of Rationality (which a friend of mine had previously linked via Facebook) and began reading, but I... (read more)

Hello everyone!

I'm going to try and write this incrementally, i.e. with frequent edits, so any replies I get might not all be referencing the same post.

To start off with, my username, Gondolinian, refers to the fictional city of Gondolin. It holds no special significance to me, I just needed a username, and I thought it sounded cool.

I've been a lurker here and on other rationalist blogs (primarily SSC) for over a year, but I've just now gotten brave enough to set up an account and start posting. I've also read all of HPMOR so far, Worm, all of Saga of Soul s... (read more)


The name is Daniel. I'm 22, coming out of college and running into the problem that there aren't that many people out there who get as excited as I do about epistemology, evolutionary theory, and interdisciplinary science. I ended up coming here because I'm beginning to suspect that the longer I spend not talking about my ideas with other people (see: reality checks), the more likely they are to spiral off into flights of fancy. And nobody wants that. Plus I feel like in day-to-day life, there's so little opportunity to really engage in pr... (read more)

Hi everyone!

I'm John Ku. I've been lurking on lesswrong since its beginning. I've also been following MIRI since around 2006 and attended the first CFAR mini-camp.

I became very interested in traditional rationality when I used analytic philosophy to think my way out of a very religious upbringing in what many would consider to be a cult. After I became an atheist, I set about rebuilding my worldview and focusing especially on metaethics to figure out what remains of ethics without God.

This process landed me in University of Michigan's Philosophy PhD progra... (read more)

You are welcome! And Don't Be Afraid of Asking Personally Important Questions of Less Wrong [http://lesswrong.com/lw/l5w/dont_be_afraid_of_asking_personally_important/]. I understand that you might not want to give details but I'm unclear what information I might provide. Maybe you could drop a few hints. You might also look at the Baseline of my opinion on LW topics [http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/].
You're right that I was being intentionally vague. For what it's worth, I was trying to drop some hints targeted at some who might be particularly helpful. If you didn't notice them, I wouldn't worry about it. This is especially true if we haven't met in person and you don't know much about me or my situation.

My name is Evan Gaensbauer. I'm starting an account on the new effective altruism forum with the same name, and I intend to post both here and there more frequently in the future. Additionally, I may write material for one site that is tangentially of interest to the readers on the other site. So, I want everyone to match what I write on different sites with me as the author. Some notable facts about me:

  • I live in Vancouver, Canada, where I help organize some of the effective altruism and rationality meetups.
  • I'm an alumnus of the July 2013 CFAR workshop.
  • I'm a member of 80,000 Hours.
Hello and welcome to LessWrong! Glad to have a new altruist join the conversation, and it sounds like you have already gotten quite involved. Great. I'm definitely looking forward to seeing what sort of experiences and views you bring to the table. Since you're in Vancouver, do you know of the LW meetup [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups#Vancouver] they have there? If you haven't attended, it may be worth looking into and visiting. It's a great way to network and just mingle with other rationalists. I had not heard of 80,000 Hours before your post. Seems interesting. Thanks for introducing me to a new group I did not know about! Anyway, glad to have you join us. Look forward to seeing you in the conversation!

Thanks for the welcome.

I had a previous Less Wrong account under the username eggman. I got one with my full name to sync with my username on the new effective altruism forum, as I intend to post more frequently on both that site and Less Wrong, and I figured it'd make sense for everyone to know my common identity so they can connect ideas written on different sites with my public identity.

I sometimes organize the LW meetup in Vancouver, and it's going fine.

I'm Imma, recently graduated from university (a mix of physics and chemistry), and I self-identify as an effective altruist. I'm not very familiar with LW material but want to gradually improve my rationality. I'm considering attending a CFAR workshop but have to weigh this against donating the money to effective charities.

I'm involved in a combined EA/LW meetup group in Utrecht (Netherlands). We have biweekly events which I'm planning to announce on LW as well.

Hello and welcome to LessWrong! Sounds like you're already getting your feet wet! That's great. Always glad to have new members who actively participate in the real world (helps with the "effective" part of "effective altruism.") If you ever do get a chance to attend a CFAR workshop, you'll have plenty of people here to talk with about the lessons and ideas you come across. The CFAR and LW communities are strongly connected (as you can guess), so there's plenty of cross pollination of ideas. So you're already part of a meetup? Awesome! Feel free to list it on the meetup [http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups] page. It never hurts to spread the word about your local meetups, and some LWers may not even realize they live right down the road from an active group. If you're interested in checking out some LW materials, the Sequences [http://wiki.lesswrong.com/wiki/Sequences] make for some good reading. Since improving yourself interests you, consider reading Alicorn's Living Luminously [http://lesswrong.com/lw/1xh/living_luminously/] or lukeprog's The Science of Winning at Life [http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life]. Both cover some useful ideas for self improvement and instrumental rationality. Given your background and the steps you're already taking to get involved, I'm sure you'll have some very interesting things to share with the community before too long. Glad you've decided to join! Hope you enjoy your time and come away better than you were before.
All your [] and () are switched.
Thanks! Fixed.
Thank you for your reply. I hope I will have time to go through the sequences, there is now some ethics stuff on my reading list. Our meetups will be announced on LW as well and I invite everyone to come! (If you live far away it might not be worth the travel cost, but you're welcome anyway)

Hi. I'm Tom. Long time rationality proponent.

I have met interesting people through less wrong and brighterminds, and just discovered this website.

What got me here was seeing this reference to Lesswrong in popular media:


Hello and welcome to LessWrong! You will certainly meet some interesting folk here. The best way to start would be to head on over to the Discussion [http://lesswrong.com/r/discussion/new/] board. That's where the day-to-day conversations of LW take place. It's also the best place to get a feel for the community's attitudes and standards. I'd definitely suggest lurking a bit. Then, you can observe the conversations of other LWers, and, when you're bursting to join in, add to the comments. Another great place to start is on the latest Open Thread [http://lesswrong.com/r/discussion/lw/koi/open_thread_august_4_10_2014/]. Open Threads are places for casual conversations and questions, though that doesn't stop the conversations from developing into weighty discussions or intense debates. If you have anything you want to ask or say, it's a good place to start. It'll also give you practice if you want to one day create full fledged articles of your own. If you're interested in diving into some of the literature here at LessWrong, you'll find the Sequences [http://wiki.lesswrong.com/wiki/Sequences] brought up again and again. These are a (LARGE) collection of posts, mostly by user Eliezer Yudkowsky, covering a variety of topics but centered around the art of rationality and related issues. Very helpful reading for the interested, but it can be a little overwhelming, when you're just joining. If you want a little taste of LessWrong literature, check out the linked articles on the About [http://lesswrong.com/about/] page or some friendly guides to the Sequences. (XiXiDu's [http://lesswrong.com/r/discussion/lw/66u/rewriting_the_sequences/4cc0] and Benito's [http://lesswrong.com/user/Benito/] are often recommended). These can ease you into the literature and let you find the parts that most interest you, without overwhelming you with details, discussion, and references. If Roko's Basilisk interested you, I'd suggest checking out the Yudkowsky paper, Coherent Extrapolated Vo
Yay for publicity :) Welcome to LessWrong! What's brighterminds?
Welcome. I guess any publicity is good publicity. Hope you had a laugh.

Hi guys, my name is Luka, and I'm 20. I study physics at the University of Vienna.

I've been following LW since February, and I've probably gone through all the core sequences and a good chunk of the rest. I did not gain too much, because I was always kind of eager to argue with good arguments and resistant to bad ones, even from elders (which brought me into trouble quite a few times). My biggest win is that I remained strong in the moment when I started to fall: I started drowning in irrationality (because of the lack of rational people in my surroundings), and started using pas... (read more)

Hello and welcome to LessWrong! Glad to hear you've already started digging in to some of the literature and found it to your liking. Yes, it's easy, when you have no community that encourages improvement, to fall into passwords, caches, and generally "not thinking." We can even forget to hope that we can make things better, as you've discovered. I'm sure you'll find plenty of people who can relate here and who are glad to help each other not fall back into those habits. Since you seem to have such a focus on self-improvement and applying rationality to personal habits, don't hesitate to write about your experiences using rationality or your own personal improvements. Personal anecdotes are, of course, not verifiable experiments, but they are still experiences. The Group Rationality Diary [http://lesswrong.com/r/discussion/lw/kiz/group_rationality_diary_july_1631/] may interest you in that regard. You can share your own experiences, see what others have done, discuss personal habits and experiments. If you'd like a bit more discussion, you can go to the Open Thread [http://lesswrong.com/r/discussion/lw/kkl/open_thread_july_2127_2014/] or make a new Discussion [http://lesswrong.com/r/discussion/new/] post, though you might want to save that latter option for a more developed, researched topic. Starting in the Open Thread will not only help give you a chance to experience LW conversation and habits, but it can also help develop an idea you have before you present it as a full post. Applied rationality, or, as some refer to it around here, "the martial art of rationality," is one of our big projects of interest. It's right there in the title of the blog itself after all. We want to improve our abilities to improve the world. So we sharpen each other, and we develop new methods, find new discoveries, perform new experiments on using our tool kit in the larger world. We certainly welcome a new voice and new perspective to the conversation. Given your wide background,
Thank you for the warm welcome and the thorough information!

Hi, my name’s Charlie. I’m a 33YO Aries who enjoys long walks on the beach…

Oop, wrong script.

I’ve been lurking for years, but just started posting (nothing real, just $.02 and quotes really) so I figured I should write an intro so I won’t feel bad actually contributing.

Perhaps the most important thing to know about me is that I am the happiest person I have ever met, as far as I know. I have more money than I intend to spend, a very good head on my shoulders, and no known health problems. I just quit my job a few months ago. I know of no way my lif... (read more)

I suggest re-reading them. For a while I've been meaning to do a PSA post on the subject. I read the sequences once, in thematic order, then recently went back and re-read them in chronological order. I have to say I got a lot more out of them this time, now that I know where EY was heading with the entire project (and reading them in the order posted is much better imho than organized by topic).
Especially because they're enjoyable to read. I've been listening to the audio as it came out but the different ordering sounds great.

I'm Daniele De Rossi. I stumbled on Lukeprog's old site and thought the problems he was talking about (rationality, friendly AI, psychology of adjustment) were all really interesting to me, so I followed him here. I'm now primarily interested in productivity. I need to manage my time better and get more done.

Nice to meet you, person with above-average intelligence. My name is Optimal, because I am always seeking optimal outcomes. I'm 16 years old and currently enrolled in an online high school that provides me with an exceptional degree of educational freedom. I've been lurking around here for a few weeks, but I just now decided to join in because I could use some serious life advice.

Based on the contents of the article above, and on other discussions I have observed, I think it would be better to explain and discuss my situation in a discussion. Actually, I'... (read more)


Hello. I am a librarian of the public sphere. With my education recently completed, I hope to expand into other spheres of information work while I am still young. I am 24 and have spent a fourth of my life serving the public in libraries. I have built collections, websites, programs, and physical rooms for my libraries. I know I do not have to explain the joys of a library here. My goal since first learning to learn has always been to improve the world by offering it the very thing that improved me. If we are all finding ways to save the world, then I fou... (read more)


My name is Tim. I'm a neuroscience researcher and swing dance teacher living in NYC.

I originally found out about LW via one or two friends who occasionally shared LW posts with me. I didn't get into the site too much, but I did eventually come across HPMOR, and thought it was awesome. At one point, one of the author notes mentioned that CFAR would be putting on workshops in my area. I checked those out and they seemed very high-value, so I attended. That was in November. Since then I've been getting involved with the real-life LW community in New Yor... (read more)

Hi, my name is Robert McIntyre. I'm a graduate researcher at MIT studying AI. I am also a volunteer for the Brain Preservation Foundation (http://www.brainpreservation.org/) You can vote for us to win charity money here (http://on.fb.me/15XFdTG).

Hi! I'm Ciara (pronounced like Keara; Irish spelling is very much irrational!) I've actually been a member of Less Wrong for a little while; I discovered it through HPMOR. I've always liked academics, challenging books, and Harry Potter, so I joined Less Wrong. I am a little ashamed to admit that I was quite intimidated by the sheer intellect and extraordinary thoughts that came from so many members all around the world. So, I took a little break after starting with the basics of rationality and am now a very different, though still amateur rationalist, pers... (read more)

[Meta comment: In the welcome post, the links to the open threads link to two different tags, with different dates. This is confusing. One of them hasn't been updated since 10/2011. If you fix this, you might have to do the same in the template for creating new welcome threads. Also, I think the same issue exists elsewhere on the site, e.g. in the Less Wrong FAQ.]

Thanks for the heads up. Post fixed. Template fixed. I've replaced the single, different links with two links, each pair covering Main and Discussion open threads. If anyone knows a way to use one link to get both Main and Discussion open threads, please comment here and PM me.

Hello. I'm a typical geeky 20-something white male who's interested in science and technology. I have a Bachelor's in economics and business. Not a native English speaker.

From the time I was 12, I've spent most of my time surfing around the internet reading about interesting things, generally wasting my time, and being alone. A few years ago I was really depressed and had a plan for suicide. Once in a while I've done something actually useful. That's my life in a nutshell.

I have always thought of myself as somewhat rational in the traditional sense when I'm not... (read more)


Because this thread hit 500 comments, I've posted a new one here. (In Main, but not yet promoted.)



I've been a part of LW before, but left when I felt that I no longer had more to give or receive from the community. This wasn't a falling out; I was just maintaining a minimal lifestyle. However, recent developments in my life, including the possibility of working in the Bay Area, have given me reason to come back. I hope to be as beneficial to the community as it has been to me.

See you around.

Hi, LessWrong community!

My pseudonym is Ilzolende Kiefer. I'm a HS student, autistic, and (as is typical for users of this site) an atheist. I've been lurking on this site for a while, and before that I was reading other books about cognitive bias and whatnot.

I think I got into rationality for 2 reasons: having a scientist parent, and dealing with school psychologists of questionable quality. (The autism wasn't a big enough deal to require an autism-specific therapist, but it wasn't equivalent to neurotypicality.) The first reason is straightforward. The s... (read more)

Does anyone know where the most recent version of the welcome thread is? I searched and searched for keywords like "welcome" and "introduction" / "introduce". Do you not use welcome threads anymore?

This is the most recent welcome thread. See the bit about reaching 500 comments in the small print at the bottom of this post.
The wiki has a page on Special Threads [http://wiki.lesswrong.com/wiki/Special_threads] which tries to point to the most recent of various threads. According to that, this is the most recent introduction thread.

My name is Joshua. I am 29 years old. After lurking for a while, I have decided to begin participating.

I have little training in mathematics or computer science. Growing up, mathematics always came easy to me, but it was never interesting (probably because it was easy, in part). Accordingly, I completed a typical high school education in mathematics by my freshman year and promptly stopped. In college, the only course I took was college algebra, which I completed for the sake of university requirements. I now regret ending my mathematical education and hav... (read more)

Hi Joshua, welcome! Regarding your quantum questions, you can post them to the open thread [http://lesswrong.com/r/discussion/lw/k13/open_thread_april_8_april_14_2014/].

Hello everyone, I've graduated in computer science this summer and I'm very much interested in philosophy and ethics (besides rationality, of course). I've stumbled upon LW through friends and found much of the content here to be eye-opening and fascinating. I'm still working my way through the core sequences, so don't expect any meaningful contributions soon – but, as rationalists, you should always be ready to be surprised! :-)

Hi, ismeta here.

I came to Less Wrong via a circuitous route, betwixt and between unordered Sequence posts, HPMoR, Overcoming Bias articles, and XiXiDu's critiques, all consumed during marathon procrastination sessions. My opinion of the community has lurched ungainly from one extreme to another, and now resides somewhere in the vicinity of 'cautious admiration'. I have reserved judgement on most of the transhumanist / singularitarian issues that are discussed on LW as yet (citing ignorance), though I should probably throw in an early disclaimer to the effe... (read more)

My name is Mathieu. One of my friends recommended that I read the main sequences a couple of months ago. I've read one third of them so far and I really like them. Now I want to get more involved in the LessWrong community beyond just reading the main sequences. I've just posted my first article. It's about a cryonics presentation I will give on Monday.

I wish there were a class about rationality at the beginning of high school (I'd drop any other course to add one about rationality). Otherwise we keep learning things without knowing how our brains work (especially t... (read more)

My name is Izaak. I stumbled across HPMOR one weekend while staying in a hotel room. I didn't sleep that night. I've read through most of Less Wrong, and some of the stuff on the other sites like Overcoming Bias. I'm a high school senior who will probably major in Comp Sci in college.

I've found the stuff on this website truly useful, but I have a question; I am currently in the IB Diploma Programme, and they have this class called TOK (Theory of Knowledge, it's truly awful, it has very little actual epistemology), but I have to do a final presentation on a... (read more)

A friend of mine did IB in high school, but I don't have much personal experience. I'd be happy to talk about presentation ideas. My standard advice for short-form presentations is to try to paint a picture that something more is possible; I've found Bishop and Trout's Epistemology and the Psychology of Human Judgment [http://lesswrong.com/lw/5vs/epistemology_and_the_psychology_of_human_judgment/] to be a good example of this. The book basically outlines the case that psychology can inform philosophy, and that coming up with superior algorithms for actual practice is better than debating labels. The inferential distance [http://wiki.lesswrong.com/wiki/Inferential_distance] to actually explain rationality is much longer than 20 minutes, but it seems like 20 minutes is enough time to explain that rationality exists.

Hi. I'm Gunnar. I'm from Germany. I've been lurking on LessWrong since July 25th.

How did I become a rationalist? I always was one. Or at least I was continuously becoming one.

I had a scientific interest as a child. My curiosity was satisfied by my parents with answers, experiments, construction toys and books, math courses, and later boarding school (this was in Germany when there was a hype around talent advancement).

I must have been eleven or twelve when I had one of the strongest aha moments I remember: The realization of the concept of continuous functions. That a relationship li... (read more)

Welcome! :-) Whereabouts in Germany are you?
In Hamburg. And I'm not leaving.
Do go on...


I am a 23 year old male named Corey, though I prefer to go by the alias Kavrae in any online discussions. This allows me to keep a persistent persona across all sites or games I may join. If you happen to come across this alias elsewhere, there is a high probability that it is the same person. Please be kind in judging such findings though, as I have gone through a bit of a mental overhaul in the last few months. I would also like to apologize in advance if this gets a little lengthy; that seems to be a trademark of my posts lately.

I should proba... (read more)

I read physics fora for just that effect. Some of it could as well be an elaborate VXJunkies [http://www.reddit.com/r/vxjunkies], for all I can tell.

Thou Shalt Not Anthropomorphize Unspecified Points In Mind Design Space.

My name is Forrest. I'm 20 and studying undergraduate Physics and Computer Science at the University of Maryland. About two years ago, one of my friends introduced me to HPMoR and I was instantly hooked. A few months ago, before the final plot arc came out, I decided I was tired of waiting for HJPEV and came here to learn about the Methods of Rationality themselves from the source. I spent a few months lurking, read many of the sequences, and now decided to actually go about making an account. So, here I am!


Hello again. Used to post as "ZoneSeek" but switched to my real name. I'm from the science/science fiction/atheist/traditional rationality node, got linked to LW years ago through Kaj Sotala back in the Livejournal days. I have high confidence that I am the only LessWronger in the Philippines.

You know, a feature it would be nice to have on LessWrong is a name-change feature. I too have thought about moving over to my real name, but that is painful, you know? I'd have to start over from complete scratch. I guess it wouldn't be so bad, I've only been posting here for a year, and the pain will only get worse the more I put it off, but it would be much nicer if there were a button I could click to just change my username. Yes, put on it some safeguards, like having it say on my userpage what my username used to be, and maybe even have it cost karma or something, to prevent it from being overused.

Of course the real problem is that someone needs to actually go and make the changes in the code, and that takes work. There likely are higher priority changes just waiting vainly for someone to implement them, as TrikeApps does not have the manpower or resources to work on LessWrong save once in a blue moon. So it's unlikely this will happen in the foreseeable future. But if someone sees this, and wants to implement it, go ahead! I'm sure quite a few people would appreciate it.

"Show my real name" is a feature under current development, as of about 2 weeks ago.

Paul Crowley:
That is wonderful news - thank you! It sounds like we will have both usernames and real names, and both will be displayed, which is exactly as it should be. Thank you Tricycle!


Actually, I am no stranger to this site; I have been a sporadic fly-on-the-wall here since early 2011, when I found out about you guys through gwern's personal webpage (to which my interest in nootropics, n-backing, and spaced repetition had led me). I've made several desultory stabs at the sequences; I think I've read most of them twice over, but some I've abandoned and some I've never touched. I started HPMoR reluctantly, found I couldn't put it down, and finished it in a single sitting. Lately I've been pretty swamped with work, but I've been tryi... (read more)

Hello. My name is Avi. I am an 18 year old Orthodox Jewish American male.

I found out about LessWrong through HPMOR. I was very impressed by the quality and consistency of the writing.

I'm partway through the sequences (in the middle of the quantum one currently) and I have a lot to say about much of what I've seen, but I decided not to post too much until I've finished all the sequences. Most of what I've seen seems correct, but there are posts here and there that I think have logical errors.

I was a little disappointed that most of my comments got voted down (I'm at -3 karma now). Can anyone tell me why?

Welcome, Avi!

It looks like I downvoted three of your previous comments. Sorry about that (not really, it had to be done). Here is my reasoning, since you asked:

  • Your comment on AI avoiding destruction suggested that you neither read the previous discussion of the issue first, nor thought about it in any depth, just blurted out the first or second idea that you came up with.

  • Your retracted FTL question indicated that you didn't bother searching online for one of the most common questions ever asked about entanglement. Not until later, anyway. So the downvote worked as intended there.

  • Your comment on the vague quasi-philosophical concept of superdeterminism purported to provide some sort of a proof of it being not Turing-computable, yet did not discuss why the T.M. would not halt, only gave a poorly described thought experiment.

I am sorry you got a harsher-than-average welcome to this forum; I hope your comment quality improves after these few bumps to your ego.

I'm partly through the sequences (in middle of the quantum one currently)

Good for you. Note that the Quantum sequence is one of the harder and more controversial ones, consider alternative sources, like Scott Aar... (read more)

Joining these forums can serve as something of a reality check to gifted young people; they may be used to most any half-baked thought still being sufficient to impress their environment. Rarely is polish needed, rarely are "proofs" thoroughly nitpicked. Getting actual feedback knocking them off of their pedestal ("the smartest one around") can be ego-bruising, since we usually define ourselves through our perceived strengths. Ego-bruising, yet really, really important for actual personal and intellectual growth.

Blessed be the ones growing up around other minds who call them out on their mistakes, intellects against which they can grow their potential.

(I don't mean this as applying specifically to Avi, but more as a general observation.)

Yep. I'll put it even more directly. Smart people growing up in environments where most people around them are less smart tend to develop a highly convenient habit of handwaving or bullshitting through issues. However when they find themselves among people who are at least as smart as they are and some are smarter, that habit often leads to problems and a need for adjustment :-)
Does that go both ways? That is, can I "nitpick" other people's comments and posts? Also, if I find a typo in a post (in the sequences so far, I've spotted at least 2), is it acceptable to comment just pointing out the typo?
Why not PM them first?
This is my own practice. My reasoning is that pointing out a typo is of no enduring interest to other readers, and renders the comments section less valuable to other readers; so if it's convenient to contact the author more quietly, one should.
Yes. I recommend using ctrl-f to ensure no one else has already pointed out that typo.
Of course you can. Whether it's wise to do so is an entirely different question :-D
Yep, been there, have a bruised ego to show for it.
I don't think I would have minded as much if there would have been comments explaining why they thought I was wrong. It was the lack of response that bothered me. (And what's with this "You are trying to submit too fast"? I'm not allowed to post too many comments in a row?)
Yes. If I remember correctly, LW also implements some form of slow-banning (the amount of time required between your comments depends on your total karma), but I may be recalling a feature request as an implemented feature.
I thought it was caused by having a lot of recent posts downvoted.
From your post that you linked: "Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies." I think that's what I did on my FTL comment. (Incidentally, I had looked online and found several different versions of an experiment that said the same as I did in different ways, but the answers didn't explain it well enough for me.) I actually spent at least an hour reading through the comments on that AI post, and decided that the previous discussion wasn't enough for my idea. I'm not too good at anticipating which parts of my arguments people will disagree with or not understand, so that may be why I don't explain fully. I was hoping for a response from which I could see what's missing and fill it in. It's usually better explained in my head than what I write down.
I read most of the posts offline in ebooks. That means I don't see the comments unless I then go online and look. Is there a set of ebooks that includes comments? (For all I know, most of my ideas have already been said and refuted.) And is he perfect?
I don't know, but it sounds like a good idea. Would be rather Talmudic in spirit. Unfortunately, most of the comments are fluff not worth reading, and separating the few percent that aren't is not that easy. Maybe pick the threads with the top 10 comments by karma or something.

Oh, far from it. I think that some of his statements are flat out wrong, but I only make this determination where either I have the relevant expertise or several experts disagree with him after considering his point in earnest.
Don't many experts disagree with him on his MWI view on quantum mechanics?
Also note that replacing "Everett branches" with "possible worlds" works in 99% of the decision-theoretic arguments Eliezer makes, so there is no need to sweat MWI vs other interpretations. I would be more interested to hear your opinion on the Trolley problem, Newcomb's problem, and the Dust Specks vs Torture issue. Assuming, of course, that you have studied them in some depth and gone over the various arguments on both sides, a process you must be intimately familiar with if you have attended a yeshiva.
I've seen Newcomb and Dust specks vs Torture but not Trolley (although I've seen that one before in other places). Which sequences do I need to finish for those? If the trolley one is the same as the "standard" version, then it's fairly trivial within the framework of Orthodox Judaism (if I'm allowed to bring that in), because of strict rules about death. I'll elaborate further when I'm up to the question. The other two are a lot more complicated for me.
Yes, the standard Trolley problem, sorry. For more LW-specific problems, consider Parfit's hitchhiker [http://wiki.lesswrong.com/wiki/Parfit's_hitchhiker]. Of course you are allowed to bring it in. And, unless you insist that it is the One True Way, as opposed to just one of many religious and moral frameworks, you probably will not be judged harshly. So, by all means!
So according to Orthodox Judaism, one is not allowed to (even indirectly) cause a death, even when the alternative is considered worse. The standard example is if you're in a city and the "enemy" demands you hand over a specific person to be killed (unjustly), and says that if you don't do so, they will destroy the whole city and everyone will die (including that person). The rule in that situation is that you aren't allowed to hand them over. Accepting that as an axiom, the trivial answer to the trolley situation is “don't do anything”. Maintain the status quo. You cannot cause a death, even though it will save ten other people.

Parfit's hitchhiker also appears trivial. It seems to assume I place no value on telling the truth. As I do, in fact, place a high utility on being truthful (based on Judaism [http://halachafortodaycom.blogspot.com/2013/02/archives-hilchos-midvar-sheker-tirchak.html]), my saying "Yes" will translate into a truthful expression on my face and I will get the ride. Note: I got the link from searching for "midvar sheker tirchak", which is the Bible's verse that says not to lie, roughly translated as "distance yourself from falsehood".

On another topic, if I think that it is the “One True Way”, but don't say that, is that OK?
Thank you, I appreciate your replies.

Hmm, I see. So, a clear and simple deontological rule. So, if you see your children being slaughtered in front of you, and all you need to do to save them and to kill the attacker is to press a button, you are not allowed to do it? Also, does this mean that there cannot be Orthodox Jewish soldiers? If so, is this a recent development, given that ancient Hebrews fought and killed without a second thought? Or is there another reason why it was OK to kill your enemy in King David's time, but not now?

Right, ethical systems which value honesty absolutely have no difficulty with this. But is this a utilitarian calculation or an absolute injunction, like in the previous case, where you are not allowed to kill, no matter what? Or is there some threshold of (dis)utility above which lying is OK? If so, what price demanded by the selfish driver would surely cause a good Orthodox Jewish hitchhiker to attempt to lie?

First, note that I do not represent LW in any way and often misjudge the reaction of others. But my guess would be that simply stating this is not an issue, but explicitly using this belief in an argument may result in downvoting. This community is mildly hypocritical in this regard, as people who push their transhumanist views here as "the best/objective/universal morality" (I am exaggerating) can get away with it, but what can you do.
I may not have given enough detail. The prohibition against killing is specifically against killing innocent people. There is a death penalty for many crimes, including murder (although not as far as EY seems to think. He once said [http://lesswrong.com/lw/i8/religions_claim_to_be_nondisprovable/] that the Bible gives the death penalty for crossdressing. Evidence [http://www.mechon-mamre.org/p/pt/pt0522.htm] suggests otherwise. But that's another topic.)

So: Assuming this attacker is the one killing or threatening to kill your kids, you are allowed to kill him (although you are supposed to try to injure him if killing isn't necessary to stop him). You wouldn't be allowed to kill someone else who is innocent, even to save many people.

I don't know if you're familiar with the current debate in Israel over the draft? It's not really related, though. Again, the “ancient Hebrews'” fights were usually either to reclaim parts of Israel which belonged to them from the gentile nations that were inhabiting them, or to defend themselves against attackers. In both scenarios, the “victims” weren't innocent. For some more info, see here [http://judaism.stackexchange.com/questions/36521], here [http://judaism.stackexchange.com/questions/34282/whats-the-reason-for-conquering-the-different-nations-in-joshua/], and here [http://judaism.stackexchange.com/questions/22452/killing-civilians-in-a-defensive-war/]. (By the way, I just saw this [http://judaism.stackexchange.com/questions/10062/is-it-better-to-kill-1-person-or-let-5-die] while looking up that last link, which (mostly) confirms what I said about the Trolley problem.)

I realized after I posted that answer yesterday that I could conceive of a case that would work for me, in the spirit of the Parfit's hitchhiker example. Namely, if I knew that when I got to town there would be someone whose life I could save, but only with $100. (Also assuming that I've got only $100 cash total). That person's life would take precedence over telling the tru
OK, that makes more sense. Seems like a flimsy excuse to slaughter babies. Though I suppose the Amalekite case can be somewhat justified by an uncharacteristically utilitarian calculation on God's part if Amalekites presented an x-risk to Hebrews. But that is not how the issue is usually presented. From your link: ...so they wiped out every woman and child? In any case, this inference seems like an extreme case of motivated cognition [http://wiki.lesswrong.com/wiki/Motivated_cognition]: "what we did was right, therefore they must have done something wrong even if we have no records of what they did". Further reading of your links provides a fascinating insight into how far this motivated cognition can lead otherwise very smart people. That it is indeed a case of motivated cognition can be trivially shown by transplanting the question into a modern setting and asking under which circumstances it would be ok to wipe out a whole people today. The answer is clearly "none" (I hope). Yet what (ostensibly) happened then has to be justified at any cost, or admit that Saul and Samuel were little better than Hitler and Pol Pot. Or that human ethics has evolved and what was acceptable back then is a high crime now.
Eh, I take back the unnecessarily emotionally charged reference to the iconic supervillains.
What happens if instead of "causing" a death, you're doing something with some probability of causing a death? For instance, handing someone over to the enemy results in a 99% probability of them being killed by the enemy. What if it's only 10%? What if the enemy isn't going to kill him, but you need to drive through a war zone to give him the prisoner, and driving through the war zone results in a 10% chance of the person being killed? What if the enemy says that he's going to kill one person from his jail no matter what, and he puts the person in the same jail (so that instead of 1 person being killed out of 9 in the jail, 1 person is killed out of a group of 10 that includes the new person, thus increasing the chance this specific person is killed, but not increasing the number of people killed)?
I think that a 99% probability would be the same as 100% for this purpose. A “doubt of death” is considered as strong as a definite death in general. In the war zone example, I think (with a little less confidence) a 10% would work the same. You simply don't take into account the potential benefits, when weighed against an action that you must do that will cause a death. On the other hand, the person being requested is allowed to sacrifice their own life (or a 10% chance of doing so) to save others. I'll have to think about your last case a little more.
What if you just need to do ordinary driving, where there's a fraction of a percent chance of death? If you couldn't do things which had any chance at all of killing innocent people, then you wouldn't be able to drive, or do a lot of normal things. There's probably some non-zero chance that the next time you turn on your computer it will trigger a circuit fault that causes the building to burn down an hour later.
I think there's a point where the number is low enough that it can become insignificant, but I'm pretty sure it's less than 10%. There's a concept of what is considered a "normal risk". Incidentally, since you mentioned it, there have been attempts by some Rabbis to ban driving for that reason. I'm unable to find a better source currently, but see this [http://judaism.stackexchange.com/questions/9262/which-rabbis-forbade-cars-100-years-ago]. Some (current ones) have also suggested that one shouldn't drive for pleasure, but only where there's an actual need. I thought about your last case earlier, and decided it would also not be allowed. You need to consider each person separately. This person will have a 10% chance of being killed due to your action, which forbids it. Part of the rationale for the rules (I think) is valuing each moment of life, so, for example, someone is considered a murderer if they kill someone who would die anyway in an hour. So causing the person to die earlier is worse than letting them die later with everyone else.
Okay, here's another question: Instead of being one person who drives and has a small chance of killing someone, you're running a big company with a lot of drivers. If two people drive, the chance of killing someone is about twice that of when one person drives. If a lot of people drive, the chance may add up to enough that it is over your threshold for insignificance. So is it immoral to run a company that uses a lot of drivers, because statistically the chance of death over many drivers is too large, even though each individual driver is okay? What if instead of running a company you're collecting taxes, and collecting taxes costs some people some "moments of life" (since they have to work longer to pay the taxes)? Most people would say that this is okay because the taxes benefit society, but if you aren't permitted to balance the loss to the individual against the gain to someone else, you can't use that reasoning. Or what if you're running a country and you need to decide whether to have laws that put people in jail? Because of inevitable human error, you'll be putting more than one innocent person in jail. (Even if you don't know which person is the innocent one.) If you're not willing to say "It's okay to make innocent people lose some 'moments of life' as long as it helps others more", how can you justify having jails?
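The aggregation step in this question can be made precise. If each driver independently has a small chance p of causing a death in some fixed period, the chance that at least one of n drivers does so is 1 − (1 − p)^n, which is approximately n·p while n·p stays small. A quick sketch, with an entirely made-up illustrative value of p (nothing here is a real statistic):

```python
# How small per-driver risks aggregate across a fleet.
# p is an illustrative, hypothetical per-driver chance of causing a
# death in some fixed period; it is not a real-world figure.

p = 1e-4

for n in (1, 2, 100, 10_000):
    exact = 1 - (1 - p) ** n   # chance at least one driver causes a death
    linear = n * p             # "two drivers = twice the risk" approximation
    print(f"{n:>6} drivers: exact {exact:.6f}, linear approx {linear:.6f}")
```

For small n·p the two columns agree (two drivers really are about twice the risk of one), but for a large fleet the exact probability saturates below 1 while the linear approximation does not.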
Huh. Presumably they would also frown upon any similarly risky activity, like climbing, swimming or even living near Gaza borders, where one might get killed by a rocket.
Do you non-negligibly risk killing other people while swimming or climbing? It was said upthread that only killing innocent people counts, so killing yourself doesn't count ^W^Wcounts ^Wdoesn't count ^W^WScrew you Euathlos [https://en.wikipedia.org/wiki/Paradox_of_the_Court]!
See this [http://judaism.stackexchange.com/questions/29319/should-one-living-in-israel-leave/] about Gaza. I don't think climbing or swimming are as dangerous as driving. There is an obligation for a father to teach their son to swim, mentioned in the Talmud.
They're a couple orders of magnitude riskier, actually [http://www.medicine.ox.ac.uk/bandolier/booth/risk/sports.html]. It's tricky to make a direct comparison because the risk of driving is usually expressed over distance traveled, while sports is usually measured over number of sessions, but if we assume a typical day's driving is about 50 miles (80 km), then we're looking at 0.1 micromorts [http://en.wikipedia.org/wiki/Micromort#Additional] per session, as opposed to 17 for swimming or 3.1 for rock climbing. (I'm not totally sure I trust that swimming estimate. The one for rock climbing aligns with my intuition, although there's a lot of variance within the sport -- bouldering is comparatively safe, while attempting the world's highest peaks is absurdly risky by sports standards. I did know one guy who died in a shallow-water blackout and none who died climbing, for whatever that's worth.) [ETA: The estimate for swimming turns out to be bogus. See below.]
The link you gave [http://www.medicine.ox.ac.uk/bandolier/booth/risk/sports.html] puts car deaths above swimming in the second diagram. It doesn't say that the sporting numbers are measured by session (except for BASE jumping, hang-gliding, scuba diving, canoeing, and rock climbing). My own research (the first three links from Googling "risk of car accident death") puts car accidents consistently higher than swimming deaths:

  • http://www.livescience.com/3780-odds-dying.html: 1-in-100 lifetime car death, 1-in-8,942 swimming death.

  • http://www.riskcomm.com/visualaids/riskscale/datasources.php: 1 in 17,625 one-year car occupant death rate (based on 2002 data), 1 in 83,534 one-year drowning death overall, 1 in 452,738 one-year drowning death in a swimming pool.

  • http://well.blogs.nytimes.com/2007/10/31/how-scared-should-we-be/?_php=true&_type=blogs&_r=0: 1 in 84 lifetime car deaths, 1 in 1,134 swimming deaths.
I believe that's because people drive much more than they swim, and the risk communication scale uses, say, your second numbers, and the comparison the link author gave converted that from annual to per-act.
I was trying to show that the swimming estimate wasn't per session. 1 in 56,587 is close enough to 1 in 83,534 that they're probably measuring the same thing, namely yearly deaths, in which case (assuming most swimmers swim more than 20 times a year, which I think is reasonable), the per-session risk for driving is more than that for swimming.
You're right, it's not per session -- but it isn't per year either. On closer examination it looks like they're calculating the risk of death over the ten years surveyed (unless the 31 deaths reported are annualized, which I don't think they are), which is an absolutely terrible bottom line -- but fine, it makes the annual risk of death 1 in 566,000. I also notice that the population estimate is identical to that for running and cycling, so it's probably some sort of very crude estimate of Germans involved in sports. Ugh. At least the climbing stats look more reliable. Incidentally, an annual risk of death of 1 in 566,000 and a hundred sessions per year (two a week with time off for good behavior) gives us a per-act risk of 0.017 micromorts, about equal to driving four miles in a car.
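As a sanity check, the figures in this comment can be recomputed directly from the numbers quoted in the thread: 31 deaths over the 10 surveyed years, a population estimate of 1,754,182 swimmers, and the assumed 100 sessions per swimmer per year.

```python
# Recomputing the swimming risk from the figures quoted above.
deaths, years, population = 31, 10, 1_754_182
sessions_per_year = 100  # two a week with time off, as assumed above

annual_risk = (deaths / years) / population     # per person per year
per_session_risk = annual_risk / sessions_per_year
micromorts = per_session_risk * 1e6             # 1 micromort = 1e-6 chance of death

print(f"annual risk of death: 1 in {1 / annual_risk:,.0f}")
print(f"per-session risk: {micromorts:.3f} micromorts")
```

This reproduces the ~1 in 566,000 annual figure, and a per-session figure of roughly 0.017-0.018 micromorts, matching the "about equal to driving four miles" comparison to rounding.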
It's definitely not the chance of death in a year of swimming. My link already gives us all the numbers we need to calculate that -- the number of deaths overall, the number of years being examined, and an estimate of the population involved -- and it comes out to a chance of 1 in 5,658. (1,754,182 people / (31 deaths / 10 years).) This conveniently lets us infer how they're probably calculating the risk -- it looks like they're assuming one hundred sessions per year (or about two a week; fair enough) and doing a per-session estimate based on that. I also notice that the population estimate is identical to that for cycling and running, so it's probably some sort of estimate of the number of people in Germany involved in an arbitrary popular sport. Cruder than I'd like, but I was only shooting for an estimate good to within an order of magnitude.
Those numbers look like general population numbers (and since it looks like a lot of drowning deaths are due to ineptitude [http://www.cdc.gov/homeandrecreationalsafety/water-safety/waterinjuries-factsheet.html], it seems unclear to me whether the yearly risk for frequent swimmers is higher or lower than for non-frequent swimmers). Instead of 'all drowning,' the 1 in 83,534 number, one should probably use the 'in swimming pool' number, which is 1 in 452,738.
I'm not sure I trust these estimates -- or, rather, I don't think I find them useful. The main problem is that the probabilities involved are all strongly conditional. Consider swimming in a hotel swimming pool with a lifeguard watching and long-distance swimming alone in the ocean. Both are "swimming" but these two activities are radically different from the risk perspective. Similarly, you can do "climbing" in the climbing gym and you can do "climbing" in the Himalayas.
Sure, there's a lot of variance involved. But there are more and less safe driving habits, too, and I'll bet the variance is about as high. The point isn't to demonstrate that one practice is under all conditions more or less safe than another, it's to compare their average dangers as they're actually practiced. And that clearly favors driving. It's a profoundly bad idea to look at a set of statistics like this and say "oh, the ones that look inconvenient to me were probably doing something unsafe, they don't count". On the other hand, these statistics don't take health benefits from being physically active into account, which could potentially give ammunition for a much stronger critique -- though given ike's comments, I'm not sure it'd be a valid critique in the context of Jewish law.
I bet less. Yes, you can practice defensive driving, but if you're on the road in traffic there is only so much you can do to avoid the idiot who is both in a hurry and needs to send that text message right now. You don't have much control over external factors. But in swimming you often do -- it's pretty hard to drown if you are swimming in a pool with others watching. Yes. Therefore if you know you practice in a way that's different from the average, the probabilities change for you.
I wasn't thinking about defensive driving, I was thinking of driving thirty miles over the limit while not wearing a seat belt and texting your girlfriend about the awesome fight you just saw in the pub.
In pretty much any activity you can asymptotically drive your chance of surviving towards zero if you set your mind to it :-/ If we are talking about variance, the lower safety bound is often in approximately the same place, but the upper safety bound (as well as the center of the distribution) varies.
I'll bet there are more idiot drunks on the road than there are Himalayan mountaineers, even proportionally.
Yes, but if you're going climbing you can choose to go to the climbing gym and be absolutely safe from the avalanches in the Himalayas. However if you're going driving on public roads, you cannot make yourself absolutely safe from drunk drivers. You can make your climbing safer than you can make your driving. That's what makes climbing higher variance than driving.
You can make your climbing safer than summiting K2 would be, certainly. But enough safer to overcome those one and a half orders of magnitude of difference in the average? I haven't actually seen any numbers on this, but that seems optimistic to me.
I'll have to look at the methodology to believe that one and a half orders of magnitude, but regardless of that, yes, you can make your climbing safer. For example, you can do bouldering on technical routes which are all about agility and finger/arm strength. These routes rarely go more than 10 feet above thick mats -- since you're not belayed, you're expected to just jump down when/if you run into trouble. Twist your ankle, sure, possible. Die -- not very likely.
Yes, I mentioned bouldering in my original post.
I don't think there's a Lesswrong-specific take on the trolley problem, so I'm assuming shminux is just referring to the usual one [http://en.wikipedia.org/wiki/Trolley_problem].
Some high-profile physicists disagree, others agree. Very few believe in some sort of objective collapse these days, but some still do. This strange situation is possible because MWI is not a well-formed physical model but more of an inspirational ontological outlook.
Hi Avi, welcome to LessWrong! There's a big problem with upvotes and downvotes on LessWrong, namely that the two important but skew dimensions of agreement/disagreement and useful/disuseful for rating posts are collapsed into one feature. A downvote can feel like 'Your comments are bad and you should feel bad (and leave and never post again)', but this is often not the case. Downvoting comments by a person asking why the parent comment was downvoted is generally poor form. In your case, it might be because you did it for a few comments in quick succession, which might have made Recent Comments (on the sidebar) less useable for someone so they downvoted the comments. To avoid this in future, maybe add a note in your comments when you post them noting that you are a new user trying to figure out how to tailor your comments to LessWrong and requesting that downvoters explain their downvotes to help you with this. On the other hand, it's not impossible that someone was being Not Nice and mass-downvoting your comments, which wouldn't be your fault.
Is "disuseful" a synonym for "unuseful [https://en.wiktionary.org/wiki/unuseful]" here or does it mean something else? I'll add a specific way for newbies to ask why a comment was downvoted without clogging up the recent comments list: edit the original, downvoted comment, appending a little "Edit: not sure why this was downvoted, could someone explain?"-type note. (It's obvious once you think of it, but easy not to realize independently.)
It means something else. I use the dis- prefix to mean the active opposite of the thing to which it is prefixed. So 'I diswant ice cream' is a stronger statement than 'I do not want ice cream', though most people, whose language is less considered and precise, would (also) use the latter to cover the former. I guess some would say 'I don't particularly want ice cream' to disambiguate somewhat. Thanks for the suggestion.
Is that different enough from “harmful” to merit a less standard word?
I can see several possible connotations and policy suggestions underlying your comment, but not sure which one(s). Can you specify? Like, are you suggesting I update in this specific case or my general inclination to use nonstandard undefined terms or...?
I was thinking about this specific case, but now that I think about it it does generalize.
Minor point of information. In English "do not want" is not the negation of want. It actually means what you have defined "diswant" to mean. The "not" is privative here, not merely negative. People are not being less considered and precise when they use it this way. They are using the words precisely as everyone but you uses them -- that is, precisely in accordance with what they mean. You are welcome to invent a new language, just like English except that "not" always means simple negation and never means privation; but that language is not English. Neither, for that matter, would the corresponding modification of French be French. Comparing the morphology of translations of "want", "do not want", "have", and "do not have" in a further selection of languages with Google Translate suggests that the range of languages for which this is the case is large.
That is indeed often the case, though I notice I feel hesitant to agree that this is always the case, and retain a feeling that people use 'do not want' in both ways, depending on the context. Regardless, when I said: I meant (hohoho) this as a statement about my usage, not the common usage of others. Thanks for pointing me to a further point of reference (the term 'privative').

Edit: I looked at the Wikipedia article for privative [http://en.wikipedia.org/wiki/Privative]. It gives some examples: and it says: It seems like your usage of privative was excluding alpha privative, i.e. mere negation, but the examples and this summary sentence suggest 'privative' fails to distinguish (hohoho again) between mere negation and...the other thing. (Inversion? Opposition?) I'd be most amused if linguists had failed to coin a specific term for the subform of privation that is the 'active opposite' of something, and had only given a name ('alpha privative') to the subform of mere negation.

In the literal sense that I have considered these things more than they have, they are. Localised examples like this seem trivial, but when generalised to encouraging good habits of thought and communication and precision, it's not just a localised decision about 'un-' vs. 'dis-', but a more general decision about how one approaches thought, language, and communication. Also, if you just look at 'do not want'/'diswant' in a vacuum, then yes, it seems like both my usage and the common usage specify what they mean. But the broader question of using negation and 'not' in a way that cues the mental process of Thinking Like Logic is inextricable from specific uses of 'not'. I generally lean towards the position that the upper echelons of a skill like Thinking Like Logic are only achieved by those who cut through to the skill in every motion, and that less compartmentalisation leads to better adoption of the skill. And I feel like it probably intersects with other skills and habits of thought.
I don't think I understand what you mean by privative. Is it something like the difference between "na'e" and "to'e" in Lojban? For reference: {mi na'e djica} would mean "I other-than want", and {mi to'e djica} would mean "I opposite-of want".
That's pretty much it. Privative "not" would be "to'e". The English "not" covers both senses according to context, but "not want" is always privative and some lengthier phrase has to be used to express absence of wanting. Or not so lengthy, e.g. "meh".
Oh, cool. I've found the distinction to be a very useful one to make.
Welcome; one of your comments was erroneous, as you said yourself (the one you retracted), another comment reads like a restatement of a popular comment predating yours by over a year (which you acknowledged yourself), and the third makes a pretty sweeping claim about superdeterminism not being Turing-computable. Unfortunately, the proof you provide seems flawed on a couple of counts.* However, even if the proof did turn out to stand, people frown upon comments which do not give more explanations and context to sweeping statements that seemingly come out of thin air (even if they did turn out to be correct). FYI, I didn't read (until now) or vote on any of your comments.

That makes 3 plausible downvote explanations for 3 comments, two of which you mentioned yourself. I'm surprised about your surprise.

* Superdeterminism doesn't require that part of the overall program can be perfectly predicted by a much smaller program in advance, nor that the outcome of the smaller program can then be used to change the overall outcome. At least two reasons: 1) not being able to verify complete correspondence (except by fiat), given all hidden variables and their potentially unknowable context (unknowable from within the program, and the context may encompass the entire universe); 2) superdeterminism can in principle be saved simply by saying that the agent isn't able to show a contradiction; in other words, in a superdeterminist universe, a perfect prediction-machine conditional on which a contradiction can be derived cannot exist, by definition of what "superdeterminism" means. Your thought experiment would be inapplicable in a superdeterminist universe, strange as it sounds. In that light, your proof reads similar to the one that shows that a Halting problem decider cannot exist. Alternatively, the agent would be unable to use the result to show a contradiction. While such an inability would indeed seem strange, from the universe's point of view, every facet of that inabilit
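The halting-problem argument alluded to in that footnote is the standard textbook diagonalization, not anything specific to superdeterminism: given any claimed halting decider, construct a program that does the opposite of whatever the decider predicts about it. A minimal sketch (the lambda deciders at the bottom are trivial stand-ins, since a real decider cannot exist):

```python
# Sketch of the classic halting-problem diagonalization.
# `halts(prog)` is a hypothetical perfect decider: True iff prog() halts.

def make_diagonal(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def diagonal():
        if halts(diagonal):
            while True:  # predicted to halt -> loop forever
                pass
        # predicted to loop -> halt immediately
    return diagonal

def decider_is_contradicted(halts):
    """True iff `halts` is wrong about its own diagonal program."""
    diagonal = make_diagonal(halts)
    if halts(diagonal):
        # Decider says diagonal halts, but diagonal would loop forever.
        return True
    # Decider says diagonal loops, but running it returns immediately.
    diagonal()
    return True

# Whatever a fixed decider answers, the diagonal program contradicts it:
assert decider_is_contradicted(lambda prog: True)
assert decider_is_contradicted(lambda prog: False)
```

The parallel the comment draws: just as no `halts` can survive its own diagonal program, a superdeterminist universe can simply be defined so that no predictor usable to derive a contradiction exists within it.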
You're basically saying that superdeterminism doesn't require Turing computability, not that it is in principle Turing computable. Anyway, my point was that superdeterminism predicts that we will never find a practical way to compute the observed answer to a simple quantum superposition, because that would imply that we could change it. And I guess I did make a "sweeping claim", but I was still annoyed that I just got downvoted without a reply. If I had a "sweeping claim" to discuss, how should I have posted it? The AI-box one I had thought of before seeing that comment, and it's (in my opinion) stronger than the other one. (And the replies to it didn't apply to mine fully.) As an aside, would I in general be expected to read all 300+ comments on a post before commenting?
See "give more explanations and context". If you're concerned with "never find a practical way", that's an entirely different discussion than "isn't Turing computable" (in this community, if something has a strictly technical interpretation, that's the default reading). Give enough context so that a reader knows what you're concerned with (practical applications, apparently; see, I wasn't aware of that), instead of a rather theoretical-sounding claim (which you apparently meant in a more practical way) with a proof that turns out to be wrong given that strictly theoretical reading. Also, I was only pointing out shortcomings of your proof; doing so requires no stance regarding Turing computability. However, there is no reason to assume that superdeterminism would require incomputability; on the contrary, as long as the true deterministic laws of physics are computable, the universe would be as well, no?

Well, at least the top-level comments with a couple of upvotes, so you don't repeat one of the main responses? That boils it down to 35-ish comments.
Oh. I need to be "strictly technical"? I'll go back to the one about Turing computability and edit it to reflect a "strictly technical" comment.
Turing computability is a technical concept first. You don't "need" to be strictly technical (obviously), but talking about Turing computability and giving a proof by contradiction kind of sends off the vibes of a technical/theoretical point, don't you think? I was making an observation about how I interpreted your comment, and why; I wasn't telling you what you need to write about.

Hello everyone.

Consider this a just-in-case comment that I am making with very limited time before I have to run and do something else, recognizing the fact that I might fail to make one altogether if I do not do it now. How is that for acknowledging my human mental frailty?

Actually I can do one better: I just had to join the lesswrong chat to diagnose a problem with not being able to comment on an article (which was the reason I just signed up after discovering this site), and the problem turned out to stem from my misspelling my own e-mail address whe... (read more)

Hi LW, I've been a lurker for quite some time; that ended this week.

The sequences and blog (ebook compilation I found somewhere) have a comfortable text-to-speech place in my commutes and I've incorporated quite a bit of the lingo, bias definitions and concepts into my daily Anki decks. It's not that this community was that daunting but rather that I thought I could play catch up. My reluctance reminds me of a programmer asking if it's worth getting on github if he's only joining the party now. I've studied computer systems engineering (electronics, di... (read more)

Hello, Less Wrong:

I have been lurking around LW for a while after finding it from links on MIRI or FHI. I've only recently begun to learn about Bayesian probability and inference on a practical level. I'm going through school for a bachelors in game programming. For now my primary focus is on the simplified AI currently used in gaming, but I believe that more sophisticated AI technologies like natural language parsing and more realistic behavioral simulations and problem solving will be useful in games in the near future. I work as a help desk tech where I... (read more)

Hello! My name is Mackenzie, or Mack. Brought here by HPMoR, I have been reading through the sequences off and on for the past year, a little at a time. I can't say I've committed it all to memory, but I feel like I have a good context for the language this community uses. I am a mechanical engineering major in my sophomore [?] year. If I were a humanities major, I could be a senior by now, but two years ago I became fed up with the self-masturbatory nature of that field.

I've always been interested in the objective, rational approach to life. I wa... (read more)

Hello everyone.

My name is Carlos. I'm 30 years old. I was born, and still live, in Colombia.

I excelled through elementary and high school until I crashed against the hard fact that my parents could not afford my college ambitions. At that time I cycled between wanting to study psychology, but also archaeology, but also chemistry, but also cinema. I wanted to know everything.

Then came a long, dark time while I crawled through the Business Management degree my parents made me go for. Worst years of my life, absolutely. But in the meantime, I devoted my spare... (read more)

Hi. I'm a 42-year-old male from the US, and I've been aware of LessWrong for a few years now, stumbling across links to posts on LessWrong here and there in my web-surfing travels. I've always been more or less a rationalist. I've been a self-identified atheist since high school. I've been a fan of Daniel Dennett for many years. I read 'Consciousness Explained' when it first came out many years ago and I've kept up reading interesting philosophy and science books since then. I've always enjoyed books that made sense out of previously mysterious phenom... (read more)

I enjoy the analytical side of sports, too. Do you follow sabermetrics and all its many children (e.g., advanced statistics in basketball and hockey), or are you more interested in human performance optimization (powerlifting, HIT, barefoot running, etc.)? If the latter, does that connect to your reductionist approach to personal problems and concern with anxiety?
I follow sabermetrics and its children. I was really into Bill James back in the day and still have a subscription to BaseballProspectus.com (this post is half-drunk, so excuse typos please). My two favorite sports are hockey and baseball. Baseball analytics made its biggest advances years ago; now it seems like they are just refining, but hockey is in the initial stages. I've been into possession stats for hockey more than any baseball stats for the past couple of years, although I still wander onto baseballprospectus and fangraphs and read some of the posts every 2 or 3 weeks. I'm not a big hoops fan, but I really like the advanced stats they have, and footballoutsiders is great too, although I haven't really gone into depth there. I'm also interested in the performance stuff. I listen to superhumanradio regularly. He has really good interviews with scientists on a regular basis.

Hi there, I'm a Biologist turned Software Engineer, age 34. I came to Less Wrong through Overcoming Bias and HPMOR, and I'm still here because the notions of rationality appeal to me. It is nice to be among others who hold rationality as an ideal to aspire to.

Hello, all!

I'm a new user here at LessWrong, though I've been lurking for some time now. I originally found LessWrong by way of HPMOR, though I only started following the site when one of my friends strongly recommended it to me at a later date. I am currently 22 years old, fresh out of school with a BA/MA in Mathematics, and working a full-time job doing mostly computer science.

I am drawn to LessWrong because of my interests in logical thinking, self improvement, and theoretical discussions. I am slowly working my way through the sequences right now - s... (read more)

Hi there, my name is Jérémy.

I found Less Wrong via HPMoR, which I found via TVTropes. I started reading the Sequences a few months ago, and am still going through them, taking my time to let the knowledge sink in and to practice rationality methods.

I like to join the LW IRC chatroom, where I had (and witnessed) many interesting, provocative, and fruitful discussions.

I'm 22, I live in France, where, after an engineering degree in Computer Science, I'm now a PhD student in the wonderful field of Natural Language Processing. I've been interested in AI for about 10... (read more)

Welcome, Jérémy!

I haven't much to say.

Well, welcome to LessWrong anyway! Glad you decided to join the conversation, talkative or not.

Hello fellow LWers,

I'm Raythen, a 25 year old European male.

I discovered this community via HPMOR.

I'd say that the rationalist way of thinking is a natural fit for me. It just makes a lot of sense, and it surprises me when other people don't think this way. To be fair, I haven't always thought this way either, but I've had quite a few thoughts on the subject which are now complemented by LW material.

Besides rationality, I'm primarily interested in psychology and understanding human behavior.

To counter my general nonconformist tendency :), here are some of t... (read more)

It's quite interesting to have someone define himself as European instead of a nationality.
I know many Germans who do so. Incidentally, identifying as "European" rather than "German" is a quintessentially German thing to do. Heritage of THE WAR.
I'll happily call myself European. I'm not German, and I am a citizen of one of the EU's more fractious members.
The German question has a longer history than the war. At the time when the German national anthem was written, "Deutschland, Deutschland über alles" was a call to abolish interstate borders between different German states. It was cosmopolitan in nature. Wanting a united Europe is not that different from wanting a united Germany. At the same time, most Germans who identify themselves on the internet still speak of themselves as German and not primarily as European. But I'm not certain that Raythen is German. He might also have been born in one European country and live in another.
Might also have something to do with Germany being one of the few countries not getting shafted by the EU and thus not objecting to the identifier European.
That's a fairly recent development, and national self-identification runs deeper. Building nation-states and dismantling them is not a straightforward matter that you can accomplish in a few years.
Good point.