If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.


A few notes about the site mechanics

To post your first comment, you must confirm your e-mail address: when you signed up to create your account, an e-mail was sent to the address you provided, with a link you need to follow to complete the confirmation. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
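As a quick, hypothetical illustration of the most common Markdown constructs (the URL is just a placeholder, and the exact feature set depends on the comment editor, so check the "Help" primer for the authoritative list), a comment source might look like:

```markdown
You can write *italics*, **bold**, and [links](http://example.com).

> Quoting the comment you are replying to looks like this.

* Bullet lists start each line with an asterisk.
```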

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

All recent posts (from both Main and Discussion) are available here. At the same time, it's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: publishing a post to the top-level forum requires the author to have 20 karma, and any upvotes or downvotes on a top-level post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion. They are also available in a book form.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!


Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes less than 180 seconds.)

If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.

Finally, a big thank you to everyone that helped write this post via its predecessors!

273 comments

Howdy All!

I’m a post-middle-aged, impressively moustachioed dude from Texas, now living in Wisconsin. I moved up here recently, following the work, and now have a fine job in a surprising career path. See, I recently took a couple of degrees in Mathematics (which I capitalize out of love, grammar be damned!) hoping to be a teacher for the rest of my time. It turns out that was not such a good move for me, and I was fortunate to receive an offer to get back into private-sector IT. I am now happily managing UNIX systems for a biggish software company here in the tundra.

I’ve been consuming the sequences and lurking in the forum (and, more recently, the Slack chatrooms) for several weeks. I have no recollection of how I found the site; StumbleUpon would be my first guess, though the xkcd forum is nearly as likely. As I read through the LW site I am struck by the quality of discourse, which is high even among those who disagree.

I am motivated to fill in some gaps in my own thinking on various issues of interest and importance. With the exception of my atheism, I don’t have many strongly held opinions (though at times I do seem to lean quite a ways over on some of them).

So, how did I become a ration... (read more)

Be welcome, sir.

Hello from Spain! I first found out about LW after reading a post about Newcomb's problem and the basilisk last summer. A week after that I found HPMOR, and I've been reading and lurking for this whole year. It's been amazing to see that there are other people with ideas like transhumanism who are trying to become systematically better.

I decided to post here for the first time because I recently attended a CFAR workshop and realized that I could actually help in building a better community. I'm currently translating RAZ to Spanish and hope to create a rationality community in Madrid.

Some other things about me:

  • I'm currently studying Physics at Cambridge but I'm thinking of going into applied Maths and probably into computer science. (I'm very interested in AI risk)
  • I'm trying to find the best way to build healthy relationships and communities of people that help each other be better. (After my experience at CFAR I felt like I'm missing something amazing by not being in an environment like the Bay Area and want to recreate that.)

And that's it! You're all amazing for being part of something like this, hope we can make it even better all together! :)

Welcome to LW! Just a question: when do you think you will finish your translation?
Thank you! :) I'm planning on finishing the first book (The Map and the Territory) by October, but it will probably take longer, as I'm not very consistent with my work. The first sequence (Predictably Wrong) should be finished this week if I keep my current pace. I'm publishing it here: https://cognonomicon.wordpress.com/ (everything is in Spanish). I'd appreciate any comments, and if you think that someone you know would benefit from reading rationality in Spanish, it would be great if you shared it ^^
I'd gladly read and criticize your translations if you want me to, but it will have to wait until after my topology exam next week. If you want me to do it, please remind me to do so ten days from now or so, since I will most probably forget about it.
Cambridge where?

Regards from Argentina,

Great post. I had started reading through this site randomly while I got more and more into HPMOR, which a friend recommended, and having a little list of posts to start will most probably prove helpful.

I would like to mention that the thing about this community I found the most astonishing was a comment that read something like "Edit: After reading some responses I've changed my mind and this comment no longer represents my beliefs." I did not even know that it was possible for a human being to be so grateful and humble upon being proven wrong. And humility is something I most definitely need to learn, and I suspect I will be able to do so here. In fact, I already did, for I acknowledged the fact that someone outside my field (pure math, until recently) has something to teach me. Yes, I am (was?) THAT arrogant at a deep level, but here and now I just feel like a child, craving to learn the art of rationality.

Thank you all for what this site constitutes!

To me it feels easier to admit mistakes in an environment which does not punish admitting mistakes with loss of status; where people cooperate to find the truth, instead of competing for an image of infallibility. Just saying that how one reacts to being shown errors is partially a function of their personality, but also partially a function of their environment. Changing the environment can help, although sometimes bad habits remain.
I quite agree, but now I'm wondering how I could change my own environment (not by replacing it, but by changing people's reactions). It seems the responsibility to do so lies on my shoulders, since I am the one who intends to live differently. Do you believe it would be right to attempt to change people's reactions (if I knew a way), or should I acknowledge the possibility that they are just happy the way they are, and simply let them be?
They probably are. Also, even if hypothetically becoming super rational should be an improvement for everyone, your ability to change them is limited, and it's uncertain whether that degree of change you could realistically achieve would be an improvement.

Unless you have superior manipulation skills, I believe it is extremely difficult to change people, if they don't want to. You push; they welcome the challenge and push hard in the opposite direction. Unfortunately, defending your own opinion, however stupid it is, is a favorite hobby of too many otherwise intelligent people. It could be a very frustrating experience for you, and an enjoyment for them. At least my experiments in this area seem hugely negative. If people don't want to be rational, you are just giving them more clever arguments they can use in debates.

I hate to admit it, but "people never change" seems to be a very good heuristic, even if it is not literally true. (I hate it because of the outside view it provides for my own attempts at self-improvement. That's why I usually say "people never change unless they want to", but the problem is, wanting to change, and declaring that you want to change, are two different things.)

Also, I noticed that when you are trying to change, many people around you get anxious and try to bring you back to the "old you". If you want to change your own behavior, it is easier with completely new people, who don't know the "old you", and accept your new behavior as your standard one.
I know it would be hard, and most likely nearly impossible to change people without a very good idea very well executed, but perhaps a tiny possibility is reason enough to attempt to do it nonetheless. I wish to take your advice on trying to change myself among new people, and so I ask if you have any suggestion on a particular environment on which to try to do so.
The obvious new environment is the nearest LW meetup, if available. Otherwise... I don't know, maybe some public lectures. (I am not the right person to ask about meeting new people. My own social sphere is very small.)
People try to do that all the time. One of the best ways is to simply ask other people to change their reactions, and explain why - some people will listen (especially if you point out how the new environment will benefit them as well) while others won't. (Mind you, even the ones that listen will probably be slow to change their reactions... habits are not easily broken) I'd also suggest, at the same time, changing your reactions to match your preferred environment; give everyone around you an example to follow. If you have a position of authority (e.g. a university lecturer in a classroom) you could even use that authority to mandate how students are allowed to react - again, it would help to point out how the ability to change your mind is helpful to the students. I think that it can be right to attempt to change peoples' reactions, if that change is to their benefit and the means employed to effect the change are ethical (i.e. ask them to change, don't put a gun to their head and force them to change).
Just asking seems a little too plain to work, but I do know some very few people who would listen. The thing is that, by doing so, they are somewhat already reacting rationally. Now I'm thinking maybe I should gather a couple of those people and someone who is less inclined to change his mind, and try to "convert" him by providing an environment in which it is OK to be mistaken and good to be corrected... Then I just repeat this process inductively until we take over the world, don't I? I don't have it, but I will have it soon enough and see how it goes.
If the simplest solution works, then, well, it works. And if it doesn't... I don't really see any negative consequences of failure. It'll work for some people, not for others. You could try, I guess, but people change slowly so it could take a while. I think that trying to force it could have ethical problems. But inviting someone to have a chat with you and your friends shouldn't have any such problems. Good luck!

Hi, my name is Jordan Sparks, and I'm the Executive Director of Oregon Cryonics. I work very hard every day to improve cryonics technology and to attract potential cryonics clinicians.

Hi LW! My name is Yaacov, I've been lurking here for maybe 6 months but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic now but I'll learn more quickly.

Specific questions:

What can I do to reduce existential risk, especially that posed by AI? I don't have an income as of yet. What are the best investments I can make now in my future ability to reduce existential risk?

Hi Yaacov! The most active MIRIx group is at UCLA. Scott Garrabrant would be happy to talk to you if you are considering research aimed at reducing x-risk. Alternatively, some generic advice for improving your future abilities is to talk to interesting people, try to do hard things, and learn about things that people with similar goals do not know about.
Hi Yaacov, welcome! I guess that you can reduce X-risk by financing the relevant organizations, contributing to research, doing outreach or some combination of the three. You should probably decide which of these paths you expect to follow and plan accordingly.
If you choose the path of trying to make a lot of money and supporting the organizations who do the research, 80000 hours can help. If you choose to contribute by doing the research, you can start by reading what's already done.

Hello LW!

Been lurking for about three years now- it’s time to at least introduce myself. Plus, I want to share a little about my current situation (work problems), and get some feedback on that. I’ll try and give a balanced take, but remember I’m talking about myself here…

First, for background, I’m 23, graduated about a year and a half ago with degrees in finance, accounting, and economics (I can sit still and take tests), and I also played basketball in college (one thing I can definitively say I’m good at is dribbling a basketball).

Brief Intellectual Journey
I didn’t care much about anything besides sports until I got to college. Freshman year, I took a micro class and found it interesting, so I went online and discovered Marginal Revolution. I’ve been addicted to the internet ever since.

It started with the George Mason econ guys (Kling, Caplan, Roberts—that’s my bias), then I got interested in the psychology behind our beliefs and our actions (greatest hits being The Righteous Mind (Haidt), Thinking Fast and Slow (Kahneman), Mark Manson’s blog, Paul Graham’s blog). Somewhere during that time I stumbled across Lesswrong, SSC, HPMOR, and the rest of the rationality blogosphere, an... (read more)

Hi chalime,

Welcome to LW!

There are many of us here who share your views on the financial services industry, and index funds with low expense ratios have been strongly recommended in nearly all of the financial advice threads posted on LW. I once went to a career information session hosted by a boutique wealth management firm myself, and ended up not even sending them my resume for similar reasons regarding my personal fit with the field and the value of the services provided by advisers.

The 80,000 Hours blog has historically mentioned that the good done by donating a small part of one's income to excellent charities likely outweighs any harm done by a career in the financial services industry. However, if working for a wealth management company doesn't feel like a good fit to you, you certainly shouldn't feel morally obligated to stay with them for earning-to-give reasons!

Thanks for the reply, Fhuttersly. Yes, I’ll be honest: my mind is made up. There is no way I can continue to do this every day; it’s just not sustainable. It’s a little scary because this is already my second job since graduating, and even if I think I have good reasons for leaving, that stuff is not easy to explain.
Welcome! So, presumably you're familiar with companies like Vanguard, Wealthfront, and Betterment, which are much more customer-aligned than the rest of the financial services industry. But part of that is spending much less attention on individual clients--and, consequently, employing considerably fewer people, and different sorts of people. (I would expect that Wealthfront needs more web programmers than economists, for example.) You might consider applying at those places, but my suspicion is you'll end up in another field entirely.
Yep, I've actually already applied to all three of those places. Vanguard would be my first choice of the three because I could do more outside of focusing strictly on investments, and actually have an advisor-type relationship with people. You're right, though, in that I do have hesitations about being in this industry at all, because:

* I am too anti-fee (e.g. why pay a fee on an IRA account at Wealthfront/Betterment? Yes, it’s better than what most people would do on their own, but it’s still not the optimal… I go back and forth on this one, because I do put a high value on the simplicity of it).
* The business is based on meeting with lots of people and selling to them, and the people I would get along with the best are probably doing this stuff themselves.
* There’s tension between what this would be focused on (manage money effectively, accumulate wealth) vs. my desire to be more EA and act on the knowledge that I have enough, and many others do not.

I haven’t heard back from any of the applications, so it’s a moot point right now.
Maybe someone on LW could recommend you a better job. Either here (but you would have to tell us at least what country you are from) or at a local meetup.
Well, I pulled the trigger yesterday. While it felt great to actually speak my mind and have a real discussion regarding all of these issues (it was actually pretty amazing- no yelling or anyone getting upset- there was actual discourse), I will now be jobless in a month, and I really don’t know the answer to what’s next. I’m debating between staying in my current area which would be a finance/accounting/operations type of role or just scrapping that whole path and try to go the programming route (close to zero expertise as of now). I’ve spent a lot of time working towards different credentials (CPA being the main one) so it’s hard to walk away from that even though I don’t think I’m learning anything all that useful. I’ve never met anyone from these communities (Lesswrong/EA), but I spend a lot of time here, so yeah I would definitely be open to talking with anyone here about general strategy (I’ve read all of 80000 hours) or specific opportunities if someone stumbles across this and has an idea. I will use more conventional methods as well, but I wanted to at least put this out there.
You may want to post your question in an Open Thread. Maybe it would be more strategic to skip the current one, which already contains over 200 comments, and wait for the new one to appear on Wednesday the 19th, so more people will see it. Better than here, in a thread that started three weeks ago.

I know almost nothing about the situation in a "finance/accounting/operations type of role". I have mostly been a programmer, so my availability bias screams at me that "everyone is a programmer, and everything outside of IT is super rare", which is obviously nonsense. If there is a website in your country with job offers, perhaps you could try to imagine that you already have 3 years of experience, and look at how many opportunities there are for each option and how well they pay.

My experience with programming in Java was that about 50% of the jobs available are programs for some kind of financial institution. (But this may be irrelevant for you; I am describing Eastern Europe.) The companies usually need an analyst to talk with the customer and explain their needs to the programmers. If you have a good financial background, this could be the right job for you.

Programming could be risky, because it's not for everyone. You should probably try it first in your free time. (Hint: if you don't like programming in your free time, then the job probably is not the right one for you.) Also, after a few years programmers usually hit the salary ceiling and want to switch to being managers or analysts. (Again, in Eastern Europe; I don't know how universal this is.) If you could start as an analyst, you would already be ahead of me in the IT career, and I am almost twice your age with about 20 years of programming experience.

I have a friend who works in IT and makes more money than I do despite being a worse programmer, because he is a specialist: in his case it is finance and databases; also he is willing to travel to a customer in a different country whenever necessary. So the lesso
I thought you had issues with the financial services industry -- if you are an accountant, you can work as an accountant in any industry you want, including non-profits.

Hey less wrong.

I am Vicente and I am new here. I have been lurking for one or two months, and I just created an account two or three weeks ago.

And right now I am reading Rationality: From AI to Zombies by Eliezer Yudkowsky.

some facts about me:

  • I live in Quebec city, Canada
  • I am under 18 but you will never know my age
  • I love computer science, and I know PHP, a little bit of C, and HTML/CSS (but those are not real programming languages)
  • I love and use free software (free as in freedom)
  • the distro that I use is Debian GNU/Linux

and that's it!

and also:

I wanted to know how to have a bio on your user page, like Eliezer's page.


Hi Vicente!

To make a user profile, set up an account on the Wiki, with the same name as your LessWrong account. Then make a user page for it. After a day, LW will automatically use that to make your profile page.

How did you find out about Less Wrong? What's been the most interesting part about the writings so far?

I found out about LW in a French video and then I just remembered the site name. Two or three months later I came to visit the site, read some posts, and found them interesting. After that I came back and discovered that the site was powered by Reddit code; I checked the Reddit source code on GitHub and discovered it was under an FSF (Free Software Foundation) approved license, so I decided to create an account (plus I was already on Reddit). As for the reading, I am only at page 23 (I just started), but so far: Why Truth? (book 1, section 1, sub-section 3) and Feeling Rational (book 1, section 1, sub-section 2). And for the help, thanks, I will try later. NOTE: why is your username asd? Does it have something to do with autism spectrum disorder?
It's interesting that you took such note of the fact that LW is powered by Reddit. Why was that so interesting? No, not at all. It's a version of "asdf", which is the first thing you write if you start typing nonsense on your keyboard, and it doesn't have any explicit symbolism.
Because I try to avoid non-free software and sites, but I make some exceptions for sites like Google, because there is no good free alternative. If there were, I would be the first to switch. NOTE: free as in freedom
That's a very interesting code of honour! Do you have anything in mind, or any plans, for how you'd like to contribute on LW?
I think I will be contributing to the discussion section and maybe when I get enough karma I will see what I can post in the main section.
I understand the concept of libre and non-libre software, but what are "non-free websites"?
Non-free websites are websites that use non-free code (under a non-free license, or proprietary). But my philosophy is that if there isn't any free alternative, I will use the site anyway. If a good free alternative appears, I will be the first to switch. NOTE: free as in freedom

Hi everyone.

I'm about to start my second year of college in Utah. My intent is to major in math and/or computer science, although more generally I'm interested in many of the subjects that LessWrongers seem to gravitate towards (philosophy, physics, psychology, economics, etc.)

I first noticed something that Eliezer Yudkowsky posted on Facebook several months ago, and have since been quietly exploring the rationality-sphere and surrounding digital territories (although I'm no longer on FB). Joining LessWrong seemed like the obvious next step given the time I had spent on adjacent sites. I'm here solely out of curiosity and philosophical interest.

Thanks to Sarunas and predecessors for the welcome page, and the LW community more generally. I look forward to being a part of it.

Exciting! If I were in your place I would look at the growing field of causal inference, which lives at the interface of statistics, computer science, epidemiology, and economics. Good starting points are the books by Hernan and Robins (Causal Inference) and Pearl (Causality), as well as the Journal of Causal Inference, edited by Judea Pearl and Maya Petersen.
Thanks for the recommendations (esp. Hernan and Robins). I'll definitely take a look.
And if you did in fact have a secret agenda, you wouldn't reveal it.
Psst, it's way more fun to treat everyone on LW as having a secret agenda.

Hello! I'm Alex, from Maryland, but I go to college in Ithaca, NY, where I am working on my math major/computer science minor. Way back when, a few of my friends kept talking about how great HPMOR was, so I started reading and I loved it. It is one of my all-time favorite stories. As I was reading it, I was very interested by all the ways Harry knew how to think right, and then one of my friends recommended the sequences and I read them all! Except for metaethics and quantum stuff.

I really enjoyed the sequences. They changed how I think. I managed to climb out of the agnostic trap of "you can neither prove nor disprove the existence of a deity". I plan on becoming even more rational. I've heard CFAR is a good resource.

I had been reading the posts on the main page for a while when I saw the most recent census and felt guilty about taking it without an account, so I made one but haven't used it until now. I didn't feel right commenting in other places when I hadn't introduced myself, but I am finally done putting it off!

Hi LWers.

My brothers got me into HPMOR, I started reading a couple sequences, switched over to reading the full Rationality: AI to Zombies, and recently finished that. The last few days, I've been browsing around LW semi-randomly, reading posts about starting to apply the concepts and about fighting akrasia.

I'm guessing I'm atypical for an LW reader: I'm a stay-at-home mom. Any others of those on here?

I'm not a mom yet but I'm effectively a house spouse :)
There are definitely a lot of parents on LessWrong. I'm sure there are at least a few stay-at-home moms. In fact, 18.4% of the participants in the 2014 LW Survey have children, and 0.5% (8 people) describe themselves as 'homemakers.'
Thanks for the link! I made a (brief, low effort) attempt to find that post earlier, but only came across the census surveys, not the results. Heck, there's even one survey respondent who has more kids than I do. Cool beans.
Welcome! How many kids, and how old are they?
6... 7 if you count my adult step-daughter (whom I didn't really help raise). Ages 12, 11, 9, 7, 5, and 7 months.
Impressive! Both of my parents came from huge households (7 and 8), but I had the more typical upbringing with only one sibling, who was only slightly older.
My mom was one of 11, my dad one of 4; I am one of 7 myself. It definitely makes having a big family feel more natural.

Hi, I am a graduate student working on a PhD in math. My journey here started when I took a moral philosophy course as an undergrad that made me think about what I should do. I decided that I should do my best to improve the world, and I eventually decided that existential risk mitigation was the highest-priority improvement. Researching that led me here; I lurked for a few years, and now I have finally made an account.

I am hoping to get some insight here as to whether it would be most effective for me to work on the AI friendliness problem, donate money, or something else. I am also interested in learning how to manage routine aspects of my life better.

Hi, I'm Alexandra. I'm turning 18 tomorrow, and I'm slowly coming to the conclusion that I have GOT to be more rigorous in my self-improvement if I'm going to manage to reach my ambitions.

I'm not quite a new member- I've lurked a lot, and even made a post a while back that got a decent number of comments and karma.

I discovered Less Wrong through HPMOR. It was the first time I'd read a story with genuinely intelligent characters, and the things in it resonated a lot with me. This was a couple of years ago. I've spent a lot of time here and on the various other sites the rationalist community likes.

I'm mostly posting this now because I'd like to get more involved. I recently read an article that said the best way to increase competency at a subject is to join a community revolving around the subject. I live in OKC, where I've never even HEARD of another student of rationality. The closest I've gotten is introducing my boyfriend to HPMOR.

I'm a biology student at a community college near my living space. I'm very good at biology, English, philosophy, etc. I'm really, REALLY bad at chemistry/physics and math. I've done some basic research into what makes a person suck at mathematical...

Hi, Alexandra! Okay... I am one of those people who are really good at math. Of course, I cannot be certain, but I suspect that the trouble here might be that you failed to grasp some essential point way, way back in the early stages of your mathematical education. So, let's see how you handle a non-obvious problem. In answering this question, I'd like you to show me, as far as possible, your entire reasoning process, start to finish; the more information you can give, the more helpful my further responses can be. The question is as follows: John is on his way to an important meeting; he has to be there at noon. Before leaving home, he calculated what his average speed has to be to arrive at his meeting on time. When he is exactly half-way to his destination, he calculates his average speed so far, and to his dismay he finds that it is half the value that it needs to be. How fast does John need to travel on the second half of his journey in order to reach his destination on time?
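(For readers who want to check their answer afterwards, a minimal sketch of the algebra; spoiler below.)

```latex
% Spoiler: a sketch of the algebra, added for readers checking their answer.
% Let $D$ be the total distance and $V$ the required average speed, so the
% available time is $T = D/V$. The time already spent covering the first
% half at an average speed of $V/2$ is
\[
  t_{\text{used}} = \frac{D/2}{V/2} = \frac{D}{V} = T .
\]
% All of the available time is already gone, so no finite speed on the
% second half can get John to the meeting on time.
```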
Hello, Alexandra. I also struggle with the math thing. My secret to success is practicing until I'm miserable, but these things also help:

1. Read layman books about mathematical history, theory, and research. It ignites enthusiasm. I recommend James Gleick's book Chaos, and his book The Information. He has a talent for weaving compelling narratives around the science.

2. Learn a little bit of programming. While coding is frustrating in its own right, I find that it forces me to think mathematically. I can't leave steps out. I'm learning Python right now, and it's a good introductory language (I'm told).

3. Explain it to your cat. I'm only mostly kidding. I've found that tutoring lower-level math has helped my skills in calculus and statistics. Learning to walk through a problem in a coherent way, so that a moody sixth-grader can understand it, is tremendously helpful.

I'd love to work together on exploring mathematical concepts. If you'd like to collaborate, hit me up sometime. Also: if you like HPMOR, you should read Luminosity. It is a rationality-driven version of Twilight that's actually really good.
I will do that. I think I may actually have a copy of Chaos lying around. I've actually read (most of) Luminosity; I lost my place in the story at one point due to computer issues and never got back to it. I tried Codecademy once, didn't find it that interesting. I don't think it used Python, though. I'll check it out. Programming is in general very useful. If I can find someone to tutor, I'll try that. It certainly can't hurt. Thank you!

Hello. My name is Andrey; I'm a C++ programmer from Russia. I've been lurking here for about three years. Like many others, I found this site through a link from HPMOR. The biggest reasons for joining in the first place were that I believe the community is right about a lot of important things, and that the comments are of a quality that's difficult to find elsewhere on the wider Net. I've already finished reading the Sequences, and right now I'm interested in ethics; I believe I've got a few ideas to discuss.

As for my origin story as a rationalist: as often happens, it all started with a crisis of faith. Actually, the second one. The first was a turn from Christianity to a complicated New Age paradigm I'll maybe explain later. The second was prompted by the question of why I believe some of the things I believe. While I used to think there was a lot of evidence for the supernatural, I started trying to verify it, and also read religious apologetics to evaluate the best arguments they have. Yup, they were bad. The world doesn't look like there exists a powerful interventionist deity. (And even if the miracles they talk about that happen right now are true miracles, all of them are better explai...

Hi there, Andrey! I am also a former apologist (aspiring, anyway; teenage girls aren't taken very seriously by theologians). I clung to my faith so hard. It's amazing how much evidence there is against the classical notion of the supernatural. It's a snowball effect: every piece stripped away another aspect of my fundamentalism, until I was a socially liberal Christian. Then, an agnostic theist. Then, an agnostic atheist. I'm also looking forward to getting involved with the community. The high standards for conversation here are intimidating, but it's exciting, too.

Well since I'm procrastinating on important things I might as well use this time to introduce myself. Structured procrastination for the win!

Hello everyone. I have been poking around on Less Wrong, Slate Star Codex, and related places for around three to four years now, but mostly lurking. I have gradually become more and more taken with the risks of artificial intelligence orders of magnitude smarter than us Homo sapiens. In that respect, I'm glad that the topic of super-intelligent AI has taken off in the mainstream media and academia. EY isn't the lonely crank with no real academic affiliation anymore, a nerdy Cassandra of his time spewing nonsense on the internet. From what I gather, status games are so cliché here that they're not cool. But with endorsements by people like Hawking and Gates, people can't easily dismiss these ideas anymore. I feel like this is a massively good thing, because with these ideas up in the air, so to speak, even intelligent AI researchers who disagree on these topics will probably not accidentally build an AI that will turn us all into paper clips to maximize happiness. That is not to say that there don't exist numerous other failure pathways. ...

Hello all,

I found this site from a link in the comments section of an SCP Foundation post, which in turn linked to one of Eliezer's stranger allegorical pieces about the dangers of runaway AI getting the best of us. I've been hooked since.

Thanks to this site, I'm relearning university physics through Feynman, have plans to pick up a couple of textbooks from the recommended list, and plan on meeting some hopefully intellectually stimulating people in person if any of the meetups you all seem to hold regularly ever make it closer to the Massachusetts area.

I recently graduated with a B.S. in Chemistry, with the now-odd realization that I haven't really learned anything during my time at university. I hope participating here will help fill that void.

Furthermore, if I'm lucky, I might get to contribute to the plethora of useful discussions that seem to populate this site. If I'm even luckier, those contributions will be positive. Let's just hope I learn fast enough to make sure luck isn't the deciding factor for such an outcome.

I am also curious as to the level of regular activity...



I became interested in psychology at a young age, and irritated everyone around me by reading (and refusing to shut up about) the entire psych section of my local library. I had a difficult time at that age separating the "woo" from actual science, and am disappointed that I focused more on "trivia learned" and "books read" than actual retention. At any rate, I have a pretty good contextual knowledge of psychology, even if my specific knowledge is shaky. I put this knowledge to good use for seven years while I worked with developmentally delayed children.

I discovered Less Wrong in 2011 (or 2009, or 2007; I actually have three distinct memories of discovering it at different times), but was turned off by the trend of atheism. I know how ridiculous that is for an aspiring rationalist, to reject evidence because it's uncomfortable. The "quiet strain" was too much, and I found the community exclusive and hard to break into. This site was not responsible for the disintegration of my faith, but it was another nudge in that direction. I don't know how to quantify my beliefs anymore; I think the God/No-God dichotomy is irrelevant. I'm perfectly ...

Hello all!

I'm a medical student and a researcher. My interests are consciousness, the computational theory of mind, evolutionary psychology, and medical decision making. I bought Eliezer's book and found this place because of it.

I want to thank Eliezer for writing the book; it's the best writing I have read this year. Thank you.

Welcome! I'm an MD and haven't yet figured out why there are so few of us here, given the importance of rationality for medical decision making. It's interesting that at least in my country there is zero training in cognitive biases in the curriculum.
I have the Irish equivalent of an MD: "Medical Bachelor, Bachelor of Surgery, Bachelor of the Art of Obstetrics". This unwieldy degree puts me in fairly decent company on Less Wrong. I may be generalizing from a sample of one, but my impression is that medicine selects against rationalists, for the following reasons:

(1) The human body is an incompletely understood, highly complex system; the consequences of manipulating any of its components generally cannot be predicted from an understanding of the overall system. Medicine therefore necessarily has to rely heavily on memorization (at least until we get algorithms that take care of the memorization).

(2) A large component of the successful practice of medicine is the ability to play the socially expected part of a doctor.

(3) From a financial perspective, medical school is a junk investment once you consider the opportunity costs: the years in training, the number of hours worked, the high stakes and high pressure, the possibility of being sued, etc. To mainstream society this idea sounds almost contrarian, so rationalists may be more likely to recognize it.

My story may be relevant here: I was a middling medical student. I did well in those pre-clinical courses that did not rely too heavily on memorization, but barely scraped by in many of the clinical rotations. I never had any real passion for medicine, and this was certainly reflected in my performance. When I worked as an intern physician, I realized that my map of the human body was insufficiently detailed to confidently make clinical decisions; I still wonder whether my classmates had been better at absorbing knowledge that I had missed out on, or whether they were just better at exuding confidence under uncertainty. I now work in a very subspecialized area of medical research that is better aligned with rational thinking: I essentially try to apply modern ideas about causal inference to comparative effectiveness research and medical decision making.
I don't think medicine is a junk investment when you consider the opportunity cost, at least in the US. Consider my sister, a fairly median medical school graduate in the US. After 4 years of medical school (plus her undergrad) she graduated with $150k in debt (at 6% or so). She then did a residency for 3 years making $50k a year, give or take. After that she became an attending with a starting salary of $220k. At younger than 30, she was in the top 4% of salaries in the US. The opportunity cost is maybe ~$45k x 4 years = $180k, plus direct costs of $150k or so; call it $330k "lost to training." However, that buys 35+ years of making $100k a year more than some alternative version of her that didn't do medical school. Depending on investment and loan decisions, by 5 years out you've recouped your investment. Now, if you don't like medicine and hate the work, you've probably damned yourself to doing it anyway; paying back that much loan is going to be tough working in any other job. But that is a different story from opportunity cost.
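The back-of-the-envelope above can be sketched in a few lines (a minimal sketch using the commenter's figures, ignoring interest, taxes, and investment returns):

```python
# Break-even estimate for the medical-school opportunity-cost argument above.
# All numbers are the commenter's assumptions, not general data.
tuition_debt = 150_000          # med-school loans at graduation
forgone_salary = 45_000 * 4     # ~4 years of a forgone alternative salary
training_cost = tuition_debt + forgone_salary   # total "lost to training"

salary_premium = 100_000        # rough annual premium over the alternative career
years_to_break_even = training_cost / salary_premium

print(f"Training cost: ${training_cost:,}")
print(f"Years to break even: {years_to_break_even:.1f}")
```

With these inputs the break-even comes out at roughly three to four years, consistent with the "by 5 years out" claim once loan interest is factored back in.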
Thanks, hyporational! It is exactly the same here. Cognitive biases, heuristics, and even Bayes' theorem (normative decision making) are not really taught here. Also, I once argued against a pseudoscientific treatment (for mental illnesses) and my arguments were completely ignored by 200 people because of argumentum ad hominem and attribute substitution (who looks like he is right vs. looking at the actual arguments). Most people don't know what a good argument is, or how to think about the probability of a statement. Interesting points, Anders_H; I'll have to think about those a little bit.
We were taught Bayes in the form of predictive values, but this was pretty cursory. Challenging the medical professors' competence publicly isn't a smart move careerwise, unless they happen to be exceptionally rational and principled, unfortunately. There's a time to shut up and multiply, and a time to bend to the will of the elders :)
Reminds me of:
Yep :) You are definitely right careerwise. The problem for me was the 200 other people who would absorb a completely wrong idea of how the mind works if I didn't say anything. Primum non nocere. But yeah, this was 4 years ago anyway; I just wanted to mention it as an anecdote of bad general reasoning and biases :)
Huh. My experience is somewhat similar to yours, in the sense that I never was a big fan of memorization, and I'm glad that I could outsource some parts of the process to Anki. I also seem to outperform my peers in complex situations where ready-made decision algorithms are not available, and outperformed them in the few courses in med school that were not heavy on memorization. The complex situations obviously don't benefit from Bayes too much, but they benefit from understanding the relevant cognitive biases. The medical degree is a financial jackpot here in Finland, since I was actually paid to study, and landed in one of the top 3 best-paying professions in the country straight out of med school. Money attracts every type, and the selection process doesn't especially favor rationalists, who happen to be rare. It just baffles me how the need for rationality doesn't become self-evident for med students in the process of becoming a doctor, not to mention after that.
Is it just a matter of terminology? I would guess that all med students will agree that they should be able to make a correct diagnosis (where correct = corresponding to the underlying reality) and then prescribe appropriate treatment (where appropriate = effective in achieving goals set for this patient).
Whatever the terminology, they should make the connection between the process of decision making and the science of decision making, which they don't seem to do. Medicine is like this isolated bubble where every insight must come from the medical community itself. I found Overcoming Bias and became a rationalist during med school. Finding the blog was purely accidental, although I recognized the need to understand my own thinking, so I'm not sure what form this need would have taken in slightly different circumstances.

Hello from Houston, Texas! I've been following LessWrong for several years now, slowly working my way through the Sequences. I'm an aspiring fantasy/sci-fi writer, martial artist, and outdoorsman and I am overjoyed to be a part of the LW community. It's hard for me to say exactly when I first 'clicked' on rationality, but the Tsuyoku Naritai post certainly struck a chord for me.

A few months ago, I attended a LessWrong meetup in Austin. I enjoyed the meetup immensely, not least because it also happened to be a Petrov Day celebration. I'd like to attend LW meetups more frequently, but I live in Spring (north Houston) and the Austin meetup is a 3+ hour drive for me.

So, I've decided to start a Houston meetup group. According to some (admittedly old) statistics, the number of visitors to LessWrong from the Houston area is over 9000, and I think this is more than enough to create an enjoyable meetup group.

Our first meetup will be Saturday, February 20 at the Black Walnut Cafe in the Woodlands, TX. It will start at 1:00PM and go until 4:00PM (or later, if enough people show up and are interested in staying).

If you're interested, please reply below so I know who to expect!

Hi, and welcome! I'm hoping to start a Meetup group sometime this spring or summer. If you're amenable to it, I may bug you afterwards and see how your meetup went.
Gladly! Of course, if you're interested, you are also welcome to attend this one.
It'd be quite a drive, I'm in Idaho, but I'll keep that in mind next time I'm in Houston.

Hello everyone! I came to Less Wrong as a lurker something like two years ago (perhaps more; my grasp on time is... fragile at best), binged through all of HPMOR that was up then, and waited with bated breath for the rest. After a long time spent lurking, reading the blogs and then the e-book, I decided I wanted to do more than aimlessly wander through readings and sequences.

So here I am! I posted to the lounge on reddit, and now I'm posting here. The essence of why I'm posting now is simple: I want to start down a road towards aiding in the work towards FAI. I graduated a year and a half ago, and I want to start learning in a directed and purposeful way. So I'm here to ask for advice on where and how to get started, outside of standard higher education.

Welcome! MIRI created a research guide for people interested in helping with FAI.
In what discipline?

I joined lesswrong because my friends suggested it to me. I really like all the articles and the fact that the comments on the articles are useful and don't have lots of bad language. This really surprised me.

I think I've caused enough kerfuffles around here that many people know me, but I'm Cameron. I've been on the site almost a year, I think. BA and MA in Political Science. I have a standing interest in philosophy, and I found out about the site from a disparaging article on Slate.com. I'm one of the weird spiritual people on here practicing Western esoterica. In the past I've worked in media and PR. Currently, I'm a novelist in Tacoma, WA, USA and host of The Cameron Cowan Show, every Monday and Friday on YouTube (fresh shows in August!). For more information, clips, and All The News You Need To Know In 10 Minutes or Less (and why you should care about it), see me at CameronCowan.net! Thanks for reading!

Hello LW,

My name is Alex, and while I first discovered LW 2-3 years ago, I have only visited the site sporadically since then. I have always found the discussion here intriguing and insightful, but never found myself motivated enough to dedicate time to joining the community (until now!).

I'm a 26 year old Canadian with an undergraduate degree majoring in chemistry and minoring in philosophy (with a healthy dose of physics on the side). I have always been very analytical and process driven, and I have used that to fuel my creativity, and develop a more thorough understanding of the world we find ourselves a part of. I have been self-employed since graduating, with the eventual goal of returning to school for a graduate degree.

In my undergrad, my strengths and interests were in synthetic/materials chemistry, as well as organic chemistry. I spent time working for a research group that specialized (largely) in group 14 nano-material chemistry, which I enjoyed immensely. The areas of philosophy I concentrated on were philosophy of science, computing & AI, theory of mind, and existentialism. In short, I avoided the 'historical overview' philosophy courses in favour of those which we...

Hi everyone,

I'm a PhD candidate at Cornell, where I work on logic and philosophy of science. I learned about Less Wrong from Slate Star Codex, and from someone I used to date who told me she really liked it. I recently started a blog where I plan to post my thoughts on random topics: http://necpluribusimpar.net. For instance, I wrote a post (http://necpluribusimpar.net/slavery-and-capitalism/) against the widely held but false belief that much of US wealth derives from slavery and that without slavery the industrial revolution wouldn't have happened, as well ...

Hello all,

I'm a South Carolinian uni student. I've been lurking here for some time; once my desire to give input came to a boil, I decided to go ahead and make an account. Mathematics, CompSci, and various branches of biology are my intensive studies.

Less intense hobbies include music theory, politics, game theory, and cultural studies. I'm more of a 'genetics is the seed, culture is the flower' kind of guy.

The art of manipulation is fascinating to me; sometimes, when one knows one's audience, one must make non-rational appeals to persuade them. T...

Welcome! I partially agree, but I believe there is usually no clear dividing line between "those who know, and use irrational claims strategically" and "the followers who drink the Kool-Aid". First, peer pressure is a thing. Even if you consciously invent a lie, when everyone in your social group keeps repeating it, it creates an enormous emotional pressure on you to rationalize: "well, my intention was to invent a lie, but it seems like I accidentally stumbled upon an important piece of truth". Or more simply, you start believing that while the strong version of X is the lie you invented, some weaker variant of X is actually true. Second, unless there is formal conspiracy coordination among the alpha lizardmen, it is possible that leader A will create and spread a lie X without explaining to leader B what happened, and leader B will create and spread a lie Y without explaining to leader A what happened, so in the end both of them are the manipulators and the sheep at the same time.
Very good point. On a similar note: we often don't consider whether we have empirically tested what we, ourselves, believe to be true. Most often, we have not. I'd wager that we are all 'useful idiots' of a sort.
It's sheep all the way up!
Sheep all the way up, turtles all the way down, and here we are stuck in the middle!
"Or more simply, you start believing that the strong version of X is the lie you invented, but some weaker variant of X is actually true." That's true, but in most cases it is in fact the case that some weaker variant is true, and this explains why you were able to convince people of the lie. That said, this process is not in general a good way to discover the truth.
I would still expect a shift towards the group beliefs; e.g. if the actual value of some x is 5, and the enemy tribe believes it's 0, and you strategically convince your tribe that it is 10... you may find yourself slowly updating towards 6, 7, or 8... even if you keep remembering that 10 was a lie. Anyway, as long as we both agree that this is not a good way to discover truth, the specific details are less important.
I agree with that, and that is one reason why it is not a good method.

Hello from Boston. I've been reading LW since some point this summer. I like it a lot.

I'm an engineering student and willing to learn whatever it takes for me to tackle world problems like poverty, hunger and transmissible diseases. But for now I'm focusing my efforts on my degree.

Hello LessWrong!

I'm Marko, a mathematician from Germany. I like nerding around with epistemology, decision theory, statistics and the like. I've spent a few wonderful years with the Viennese rationality community and got to meet lots of other interesting and fun LessWrongians at the European Community Weekend this year. Now I'm in Zürich and want to build a similar group there.

Thanks for giving me so much food for thought!

Welcome, Marko!

Hi everyone.

I've already posted a couple of pieces - probably should have visited this page first, especially before posting my last piece. Well, such is life.

I headed over to LessWrong because I was/am a bit burned out by the high-octane conversations that go on online. I've disagreed with some things I've read here, but never wanted to beat my head (or someone else's) against a wall. So, I'm here to learn. I like the Sequences and have picked up some good points already, especially about replacing the symbol with the substance.

Question - what's ...

My guess is: it is okay, if it would be okay to post the same content here. Please provide a short summary when linking.

Hi LW Users,

I apologise in advance for not having more to say initially, but I created an account on this website for one reason: I have a proposition/idea to put forth in the discussion section.

I would prefer to wait until I have twenty karma so that I can post it there. I hope your curiosity has been sparked enough; otherwise, let me know.

Thanks so much for reading :)

Welcome. You will only accumulate karma by having people upvote your comments, so if your goal is as you describe then I'm afraid you'll have to participate in other ways too before you get to show us your idea. (Of course you could put it in a comment in the Open Thread or something if you can't wait.)
Where should I be commenting then? Right here? And where is the open thread? Thank you so much for your help and I look forward to it.
The current open thread is here: http://lesswrong.com/r/discussion/lw/nns/open_thread_may_30_june_5_2016/ A new one will be started soon.

Hi LW,

I got interested in rationality from the book Irrationality, then some others I can't remember, and later Thinking, Fast and Slow. Somehow I found HPMOR, which I loved, and through that found this site. Other influences have included growing up with quite strongly religious parents (a first win for the power of the question "but why do you believe that?", and a first loss for thinking that because something was obvious to me I could snap my fingers and make it obvious to others).

What I'm doing: I'm in my twenties, working in the energy sector because I started foll...

Hello from Canada! I study computer science and philosophy at the University of Waterloo. Above anything, I love mathematics. The certainty that comes from a mathematical proof is amazing, and it fuels my current position on epistemology (see below). My favourite mathematics courses so far have been the introductory course about proofs, and a course about formal logic (the axioms of first-order logic, deduction rules, etc.). Philosophy has always been very interesting to me: I've taken courses about epistemology, ethics, and the philosophy of language; I am...

This sounds similar to the coherence theory of truth.

Hello LW,

My name is Colton. I'm a 22 year old electrical engineering student from Missouri who found Less Wrong about a year ago through Slate Star Codex and binged most of the sequences.

I have been interested in the study of bias and how to avoid it since I read the book Predictably Irrational a few years back. I also consider myself quite academic for an engineer, with a good deal of physics, math, and computer science theory under my belt.

I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.

I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to ...

Hello all!

I'm a recently graduated International Relations student from London. I took a year off after graduation to learn how to manage my finances and invest in the stock market. Because of that, I came across my life hero, Charlie Munger, the vice-chairman of Berkshire Hathaway. He is a machine of rationality and is by far one of the wisest men (if not the wisest) alive. He wrote an essay called "The Psychology of Human Misjudgment" (http://law.indiana.edu/instruction/profession/doc/16_1.pdf), which I implore all rationality-seekers to devour. This...

Welcome! One of my primary pieces of exposure to Munger is Peter Bevelin's book, Seeking Wisdom from Darwin to Munger, which I think you might enjoy--as I recall, it draws from the same Heuristics and Biases literature as many other things (like Munger's essay) but has enough examples that don't show up in the more standard works (Thinking and Deciding, Thinking Fast and Slow, etc.) to be worthwhile on its own.
Thanks for the recommendation. I've seen Bevelin's book come up many times during my Munger-searches, but I haven't gotten around to reading it yet. I'm sure I'll more than enjoy it.

Hello LW,

I'm an aspiring rationalist from a community called PsychonautWiki. Our intent is to study and catalog all manner of altered states of consciousness in a legitimate and scientific manner. I am very interested in AGI and hope to understand the architecture and design choices for current major AGI projects.

I'll probably start a discussion for you guys tomorrow.


Hi Aleks! Have you read "Mysticism and Pattern-Matching" at Slate Star Codex? What is your opinion?
Just read it. Fascinating. https://psychonautwiki.org/wiki/Geometry You might want to look into level 8B and 8A geometry.

Hey y'all, I come here both as a friend and with an agenda. I'm scary.

See I have a crazy pet theory... (and yes it's a TOE, fancy that!)

...and I'd love to give it a small home on the Internet. Here?

I'd like to share it with you because this community seems to be the proper blend of open-minded and skeptical, which is what the damn thing needs.

Anyways, I've lurked for quite a while, and you guys have been great at opening my mind to a lot of things. I figure this might be good enough and crazy enough to give something back.

As a personal note, I'm curre... (read more)

The response to your theory, though, will depend on whether it's one of those. And the response to "should I tell you my new theory" will depend on the fact that such theories have some probability of being one of those. Ultimately, you have to tell us the theory to know how we'll react.
Well for better or for worse! Here it is! http://lesswrong.com/r/discussion/lw/mms/fragile_universe_hypothesis_and_the_continual/

Hello all,

I am just another lurker here. Most of the time, I can be found in the LW Slack group. I probably should have introduced myself earlier. I have zero karma, so I am unable to post anything at all. It would be better for me to explore how the LW website works first.


Hi. I'm Bernardo, a business student from Brazil. I came across Less Wrong through an answer to a thread on Quora (https://www.quora.com/How-would-you-estimate-the-number-of-restaurants-in-London). It got me interested in Fermi estimates, and I'm browsing Less Wrong to read about them.

I'd love to translate those articles on Fermi Estimates to Portuguese to add to the translated pages list. How do I do that?

Hello Bernardo, I'm Christiano, from Brazil too! Nice to see a Brazilian here! Did you manage to translate the article? I can help you with English-Portuguese revision or even with the translation itself.

Hi friends,

I'm Chris :D I've been lurking on and off for a few months now (after hearing about LW from some of my friends at uni, reading some SlateStarCodex, and devouring HPMOR in less than a week) and have decided it's about time to take the plunge into the scary world of commenting. (It's a bit scary being a somewhat smart person among people who are much, much smarter.)

My academic background: growing up in my family meant I picked up a lot of random stuff, but at uni I have been studying pure mathematics and a bit (pun intended) of computer science.

What mot... (read more)

The LW commentariat is indeed smart, but probably not as smart relative to you as you are suggesting.

I'm a creative writer and a virtual assistant, and I have been a freelancer for two years now. Coming from a creative educational environment, I'd like to express an interest in becoming more rational. I found Less Wrong through Intentional Insights.

Yeah, thanks! I also believe I could become more rational by practicing rational thinking.
Thanks, I also believe that becoming rational can help me achieve all of my objectives and long-term goals.
Glad you're joining LW, Beatrice! Nice to see another volunteer and part-time contractor for Intentional Insights join LW :-) For the rest of LW folks, I want to clarify that Beatrice volunteers at Intentional Insights for about 30 hours, and gets paid as a virtual assistant to help manage our social media for about 10 hours. She decided to volunteer so much of her time because of her desire to improve her thinking and grow more rational. She's been improving through InIn content, and so I am encouraging her to engage with LW.


I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line, never considering any third alternatives, and started to think about what to do about that. I am currently trying to stop my mind from always aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, ran into a problem, and thought I could get some help here. The problem is expressed in the follow... (read more)

Welcome to Less Wrong! My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective. Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.
Well, it sounds right. But which mistake in rationality was made in the described situation, and how can it be corrected? My first idea was that there are things we shouldn't doubt... But that is kind of dogmatic and feels wrong. So should it be something like "Before doubting X, think of what you will become if you succeed, and take that into consideration before actually trying to doubt X"? But this still implies "There are cases when you shouldn't doubt," which is still suspicious and doesn't sound "rational." I mean, it doesn't sound like making the map reflect the territory.
It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.
As soon as the Dark Matrix Lords can (and do) directly edit your perceptions, you've lost. (Unless they're complete idiots about it) They'll simply ensure that you cannot perceive any inconsistencies in the world, and then there's no way to tell whether or not your perceptions are, in fact, being edited. The best thing you could do is find a different proof and hope that the Dark Lord's perception-altering abilities only ever affected a single proof. At this point, John has to ask himself - why? Why does it matter what is true and what is not? Is there a simple and straightforward test for truth? As it turns out, there is. A true theory, in the absence of an antagonist who deliberately messes with things, will allow you to make accurate predictions about the world. I assume that John cares about making accurate predictions, because making accurate predictions is a prerequisite to being able to put any sort of plan in motion. Therefore, what I think John should do is come up with a number of alternative ideas on how to predict probabilities - as many as he wants - and test them against Bayesian reasoning. Whichever allows him to make the most accurate predictions will be the most correct method. (John should also take care not to bias his trials in favour of situations - like tossing a coin 100 times - in which Bayesian reasoning might be particularly good as opposed to other methods)
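The head-to-head test this comment proposes (come up with candidate ways of predicting probabilities and score them against Bayesian reasoning) can be sketched in a few lines. This is purely an illustrative toy of my own construction, not anything from the thread: it compares a Bayesian estimate (Laplace's rule of succession, the posterior mean under a uniform prior) against a naive frequency estimate, scoring each by the mean squared error of its sequential forecasts of a biased coin.

```python
import random

random.seed(0)

def bayes_estimate(heads, tosses):
    # Laplace's rule of succession: posterior mean under a uniform prior
    return (heads + 1) / (tosses + 2)

def freq_estimate(heads, tosses):
    # naive observed frequency; undefined before any data, so start at 0.5
    return heads / tosses if tosses else 0.5

def score(estimator, true_p, tosses=100, trials=500):
    # mean squared error of the estimator's sequential forecasts
    total = 0.0
    for _ in range(trials):
        heads = 0
        for t in range(tosses):
            p_hat = estimator(heads, t)
            total += (p_hat - true_p) ** 2
            if random.random() < true_p:
                heads += 1
    return total / (trials * tosses)

for p in (0.5, 0.8):
    print(p, score(bayes_estimate, p), score(freq_estimate, p))
```

Run as-is, it prints each method's average forecasting error for a fair and a biased coin; on small samples the smoothed Bayesian estimate tends to come out ahead, which is exactly the kind of empirical comparison the comment suggests John run (while heeding the warning not to test only on setups that favour one method).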

Salutations! I've been reading Less Wrong for three or four years now without registering - ever since stumbling across a supremely accessible explanation of Bayes Theorem - and suddenly felt I might have something to add. I feel significantly more cynical than most of the posters here, but endeavor to keep my pessimism grounded.

My parents raised me rationalist (not merely atheist), encouraging an environment where questions were always more important than answers and everyone was willing to admit that "I don't know." I spent the requisite few y... (read more)

How about writing an article where you explain why you hold that belief? How would reality look if the belief were wrong? What sorts of predictions can be made with it?


I'm a middle-aged computer scientist/philosopher, who specialized in artificial intelligence and machine learning back in the stone age when I was getting my degrees. Since then I've done a bit of work in probabilistic simulations and biologically inspired methods of problem solving, mostly for industry. I've recently finished writing a book about politics, although God knows if I'll ever sell a copy. Now I'm into a bit of everything. Politics. Economics.

I came here looking for input into a conlang project that I'm working on. Basically it involve... (read more)

I do have a conlang draft. A few thoughts based on my conlang thinking: Loglan/Lojban is a language where math was an afterthought. That's likely a mistake. If you look at a concept like grandfather, using the word "grand" doesn't make much sense. I think it's better to say something like father-one for grandfather, father-two for great-grandfather. In the same way, the boss of your boss should be boss-one. Having a grammar in which relationships can be expressed well is very valuable. I think that Loglan's attempt to build on existing roots of the widely spoken languages is flawed because it allows less freedom in organizing the language effectively. It would be good to have a lot of concepts with 3 letters instead of 5. In my language draft I started to take concepts from graph theory for naming relationships (the structure of the words matters, but the actual words are provisional):

bei - node in the same graph
cai - parent node
doi - child node
beiq - relative
caiq - parent
doiq - son/daughter
beiß - person employed in the same company
caiß - boss (person with authority to order)
doiß - direct report (person who can be ordered)

Once you understand that structure and learn the new word "fuiq" for sibling, you can guess that a direct coworker is called fuiß. Nodes in a graph that share the same parent node are "fui". I like grouping concepts this way, where I can go from parent to son/daughter simply by going one letter forward in the alphabet and replacing "c" with "d" and "a" with "o" ("i" gets skipped because the word ends in "i"). I used a similar principle for naming numbers:

ba - 0
ce - 1
di - 2
fo - 3
gu - 4
ha - 5
je - 6

I also gave adding a "q" to a number a meaning: it turns the number into base 16. Base-16 numbers are later quite useful if you want to make an expression like north-east. At the moment pilots use phrases based on the clock to navigate: "There's a bird at 2 o'clock." It's much better to bake numbers more centrally into the language. ---------------------------------------- In case you
Also, the topic is now up and running in the regular "discussion" area.
It sounds like you were trying to construct an a-priori conlang, in which the meaning of any word could be determined from its spelling, because the spelling is sufficient to give the word exact coordinates on a concept graph of some sort. I thought about this approach some time ago, but was never able to find a non-arbitrary concept graph to use, or a system of word formation that didn't create overly long or unpronounceable words. I was originally thinking about including non-ascii characters, but eventually compromised on retaining English capitals instead. The biggest problem that any conlang faces is getting people to use it, and anything that makes that more difficult, such as requiring changes to the standard American keyboard, needs to be avoided unless it's absolutely necessary.

Hello LW! Long time lurker here. Got here from HPMOR a few years ago now. This is one of my favourite places on the internet due to its high sanity waterline and I thought I'd sign up so I could participate here and there (plus I finally came up with a username I like!). I've got a B.Sc. in math with a concentration in psychology (apparently that is a thing you can get, I didn't know either) and my other passions are music, film, humor, and being right all the time ;)

Thanks to LW and the rest of the rationality blogosphere, I've added effective altruism to... (read more)

Hi! I'm interested in curing death or at least contributing to the cure. I'm an ok computer programmer and I'm preparing to go to school this spring to work on a bachelor's degree in Biomedical Engineering with a minor in Cognitive Science. I'd like to make friends with someone who is also at the early planning stages of pursuing a similar degree, and yeah, I do realize just how specific those requirements are, but it doesn't hurt to keep an eye out just in case. I'm in a fairly good place in my life to pursue my education, but I don't yet know how it's go... (read more)


I first heard about LW through a SomethingAwful thread. Not the most auspicious of introductions, but when I read some of your material on my own instead of receiving it through the sneerfilter, I found myself interested. Futurology and cognitive biases are two topics that are near and dear to my heart, and I hope to pick up some new ideas and perhaps even new ways of thinking here. I've also had some thoughts about Friendly AI which I haven't seen discussed yet, and I'm excited to see what holes you guys can poke in my theories!

Hi! I signed up to LessWrong, because I have the following question.

I care about the current and future state of humanity, so I think it's good to work on existential or global catastrophic risk. Since I studied computer science at a university until last year, I decided to work on AI safety. Currently I'm a research student at Kagoshima University doing exactly that. Before April this year I had only a little experience with AI or ML. Therefore, I'm slowly digging through books and articles in order to be able to do research.

I'm living off my savings. My... (read more)

Hello from NZ. So basically, I'm here to promote my... just kidding! I came across this website through a Wait But Why article I was doing research on (cryonics). The comments here are next-level awesome: people share ideas, and I feel like the moderators aren't ruled by one discourse or another. So yeah, I decided to jump on in and check it out.

I enjoy science, learning, entrepreneurial stuff, and better ways of looking at the world.

Menilik Dyer! I thought it might be you! We met at a Mum's Garage thing (I was the one wearing no shoes and a lot of grey). So cool to see you here. Welcome to the mouth of this bottomless rabbithole that is modern analytical futurism. I'd hazard you already have some sense of how deep it goes. If anyone's reading this; Menilik is a badass. He once successfully built a business by picking a random market sector he knew nothing about and asking people on the ground what they might need.

Hi. I live in Umeå, Sweden. I have been aware of Less Wrong for some time now, first through HPMoR, and more lately I have been reading posts that my friend has recommended to me. I just recently decided I want to join the discussion too, so I created this user to be able to comment.

I find it very useful to distinguish between what I call "debate" and "discussion":
"debate" = everyone involved is trying to win, where "win" usually means convincing the audience.
"discussion" = everyone involved is trying to l... (read more)

Welcome! How much programming have you done so far? In my experience physicists tend to make the transition to programming fairly well because they have lots of experience with modeling / reasoning from first principles / mathematical thinking.
Yes, that is one plan. I have not done much programming, but I have done enough to know that this is something I am capable of learning.
I might post something soon, only I am confused by all the formatting. Is there somewhere I can try it out without actually posting? I would like to try out what LaTeX code it is possible to include. I looked at the LaTeX-to-HTML for Less Wrong app, but it seems to only pick up expressions enclosed by single $, which is very limiting. Is this the only type of LaTeX code that is possible in the LessWrong formatting environment, or is it just a limitation of the app?
Never mind, I found the sandbox
Anyone know a good trick for numbering equations?
The article that I find most similar to this idea is The Scales of Justice, the Notebook of Rationality. You might call debate 'defending a side' or 'counting points', as opposed to seeking the truth.

Hey LW. I found this site about an hour ago while browsing Quora (I know, I know) and the concept is really appealing to me. Currently I'm studying for my undergrad degree in Neuroscience, not sure exactly what direction I want to take it in afterwards. Artificial neural networks and AI in general are intriguing to me. Being able to actually explain/understand concepts like consciousness and perception of reality in a material sense is sort of my (possibly idealistic) goal. Empiricism is very dear to me, but I think in order to fully explore any idea you... (read more)

Welcome to Less Wrong! I think you'll find a lot of conversation relevant to your interests here.

Hi. Just leaving a few comments about me and the research I have been doing that people here may find interesting. I joined just a couple of days ago, so I am not so sure about styles; this seems to be the proper place for a first post, and I am guessing the format and contents are free.

While I was once a normal theoretical physicist, I was always interested in the question of why we believe in some theories, and I think that for a while I felt that we were not doing everything right. As I went through my professional life, I had to start interacting... (read more)

Thanks for the link! Very nice publication!

META. LessWrong Welcome threads have changed very little since late 2011. Should something be updated?

This link shows you all new posts in both Main and Discussion, by title and vote count and so on, and is my preferred landing page for LW. I don't think there are any obvious links to it, and this thread seems like a fine place to mention it.

What about the list of users who offered to provide English assistance? If this is a useful service to members it may be worth revisiting as most of the listed members seem to be inactive (at least from looking at post/comment history): Randaly has returned to posting recently, but shokwave hasn't posted in more than a year, Barry Cotter and Normal_Anomaly's last posts were in April.
PMed all of them. Does anyone else also want to volunteer?
So far only one person (Randaly) has replied. Does any native speaker want to volunteer? Edit: two people (Randaly and Normal_Anomaly)
At the end of the "SEQUENCES:" paragraph you could add: They are also available in book form.
Done. Should I also add a link to the Slovak translation of the book?
The translation is irrelevant for 99.9% readers, so I guess no.
There's a way to see all new posts, in both main and discussion, that should be highlighted. (I'll grab the link later if someone doesn't beat me to it.)


Browsing the web I found this site. I think it will be fun to indulge a bit and read more.

I'm retired, living on a sailboat and enjoying life. At this time I can't think of any topic of interest in the context of discussions, but I like the reading and I'm sure I'll jump in somewhere to contribute more down the road.


Welcome :-) You don't live on a Macgregor 22, do you?
I live on a Nantucket Island 38. Just big enough to be roomy, and just small enough to sail about by myself. I'm just getting into the living on it part. Had the boat 4+yrs but only moved in full time this past July. Hope to start traveling on it more in 2017, targeting the Pacific Northwest for my first trips, but we'll see, I don't actually have a hard schedule, just rolling along at my own pace.
Welcome, Peter!

Hey kids. I'm a young Canadian philosophy student trying to diversify my understanding of the world-as-it-is. I'm pressing my way through Rationality: A-Z, but while doing university, progress can be slow. I've been visiting the site frequently for a few months, but typically feel too uninformed to comment. I appreciate the (surprising) lack of bias and openness to critical thinking here, which I've found mysteriously absent from my social, business and academic circles. I've gone through the process of being contrarian, then being a 'communist' (then readi... (read more)

I don't know if this is a good answer to your last question, but you could ask what "philosophy" might look like today if Aristotle had never tutored the Emperor of the known world. I tend to think it wouldn't exist - as an umbrella category - nor should it.
I see it more as the underlying theory of theory, an aspect of all things. I chose to study it with different intentions, but now I'm just capitalizing on my ability to understand theory to learn the theories important to as many different disciplines as possible. I read somewhere that philosophers have a responsibility to learn as much science as they can if they want to be relevant. I'm trying.

Rachelle is an academic consultant at a community college who specializes in helping students with their academic problems, college stress, and such. She also works part-time for an online dissertation help service at Dissertation Corp. She's also a hobbyist blogger and loves to do guest blogging on education- or college-life-related topics.

Hello from Paris, France.

As many of you did, I first discovered all of this through HPMOR (actually, its French translation). I then read Rationality: From AI to Zombies in its entirety (because, honestly, reading things in order is SO MUCH easier than having 20 tabs open with 20 links I followed on the previous pages). I thought I would finish reading this blog, or at least the Sequences, before posting, and then realized that might imply I would never post.

I have a doctorate in fundamental computer science, am an amateur writer (in French only), and am an LGBT activist who goes into s... (read more)

Hi! I've been lurking around here for a while; I'm quite the beginner and will be further lurking rather than contributing. A few months ago, I found and played a nifty little game that asks you to make guesses about statistics and set confidence intervals, is mostly about updating probabilities based on new information, and ultimately requires you to collect information to decide whether a certain savant is more likely in his cave or at the pub. I've been wanting to have another look at it, but I have been entirely unable to find it again.

Could... (read more)

Hello, I am from the Department of Psychology of the Higher School of Economics. I study problem solving, systems thinking, and help and counteraction in social interactions. Both rationality and irrationality are important here.

Web: http://www.hse.ru/en/staff/apoddiakov#sci, http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=426114

Hi. I have only ever browsed one thread on this website before. I used to like arguing a lot, but I lost my fervor when I felt that the validity of my arguments and my ability to defend myself in argument didn't and doesn't matter to most. It makes me sad. I only want to make everyone happy and able to cope with their pain, but everyone rejects me.

I don't have much of a personality beyond my liking logic a lot. All I know is logic, even if most people disagree with me. I am saddened by the fact that I feel my life only truly began in my late teens when I rand... (read more)

Hello from Beijing.

I found out about Less Wrong from Slate Star Codex. I also read HPMOR last year, but hadn't realised there was a connection between that and Less Wrong.

I am posting here because I have been thinking about morality. I get into a lot of debates that all boil down to the fact that people hold a very firm belief in a particular moral principle, to the extent that they would be happy to force others to live in accordance to that principle, without evaluating whether this principle is subjective or rational.

In response to this, I have come up ... (read more)

It seems like (a) and (c) are easily granted, but what's your definition of "non-arbitrary", and how should we determine if that definition is itself a non-arbitrary one? This topic is one I enjoy thinking about so thank you for your post :)
Thanks for your comment! My definition of non-arbitrary would be, can we derive your principle from facts on which everyone agrees? I can propose two such principles: a) liberty - in the absence of moral absolutes, the only thing you can say is live and let live, as to do otherwise is to presuppose the existence of some kind of moral authority; or b) survival of the fittest - there is no moral truth, and even liberty is arbitrary - why should I respect someone else's liberty? If I am stronger, I should feel free to take what I can. That said, I think there could also be an argument for some sort of virtue ethics - e.g. you could argue that perhaps there is absolute truth, and there are certain virtues that will help us discover it. But you'd need to be smarter than me to make a convincing argument in this line of thought.

Hi all,

I'm a 3rd year CS student at MIT interested in working with computer graphics in the future. I have way too many things I'm interested in doing (I pretty much can't find any field/hobby that I think would be completely boring after I put enough effort into finding out more about it), but things I'm actually involved in and maybe a little good at are art (digital/traditional, and some 3D) and music (singing mostly nowadays). In terms of how I spend my free time I love games and reading, and generally spending too much time on the internet.

I've always... (read more)

[This comment is no longer endorsed by its author]

Hello Less Wrong community! I study statistics at the Federal University of Rio de Janeiro in Brazil. I am oriented toward Bayesian probability philosophy because our Department of Statistical Methods focuses on Bayesian statistics. I found this website during my studies of Bayesian philosophy and error. In Rio de Janeiro we still do not have any rationality community, but in São Paulo there are meetings organized every month on Meetup. I am very excited to spend my time in this community developing and debating the philosophy of error!

Hiya! I am currently a postdoc in the neurosciences, with a computational focus. Dealing with the uncertainties and vicissitudes attendant upon one still plodding on along the path to "nowhere close to tenure-track". My core research interests include decision making, self-control/self-regulation, goal-directed behaviour, RL in the brain etc. I am quite interested in AI research, especially FAI and while I am aware of the broad picture on AI risk, I would describe myself as an optimist. On the social side of things, I am interested in understandi... (read more)

Hey everybody! My name's Trent, and I'm a computer science student and hobbyist game developer who's been following LessWrong for a while. Finished reading the sequences about a year ago (after blazing through HPMOR and loving it) and have lurked here (and on Weird Sun Twitter...!) since then. Figured I'd make an account and get more involved in the community; reading stuff here makes me more motivated in my studies, and it's pretty entertaining either way!

I'd love to be one of the first people on Mars. Not sure how realistic that goal is or what steps I s... (read more)

Hi, Trent! Have you heard of the Mars One project?
I have! Wish I'd gotten in on the initial astronaut selection, haha. Still, my money is on SpaceX beating them to the punch.

We'd love to know who you are, what you're doing: I was a high school teacher. Now I'm back to school for Honours and hopefully PhD in science (computational modelling) in Australia. I'm Chinese-Indonesian (my grammar and spelling are a mess) and I'm a theist (leaning toward Reformed Christianity).

what you value: Whatever is valuable.

how you came to identify as an aspiring rationalist or how you found us: My friend, who is now a sister in the Franciscan order of the Roman Catholic Church, recommended Harry Potter and the Methods of Rationality to me.

I think... (read more)

I am Sargin Rukevwe Oghneneruona, from Nigeria, a student studying Business Administration and Management at Delta State Polytechnic, Otefe. I am a rational person, and this has helped me a lot; I really love engaging in activities which could make me a more rational thinker and also improve my knowledge about being rational. I found out about Less Wrong by reading articles on http://intentionalinsights.org/ written by Intentional Insights personnel, which have helped me a lot to build my strength and knowledge for achieving goals and becoming more successful in life. I believe becoming a member of lesswrong.com will also help me become a more rational thinker.

Glad you're joining LW, Sargin! Nice to see another volunteer and part-time contractor for Intentional Insights join LW :-) I want to add that Sargin volunteers at Intentional Insights for about 25 hours, and gets paid as a virtual assistant to help manage our social media for about 15 hours. He decided to volunteer so much of his time because of his desire to improve his thinking and grow more rational. He's been improving through InIn content, and so I am encouraging him to engage with LW.
It would help if you, or they, or both of you wrote about what exactly was improved, and why you think they even ought to engage with LW, which is, after all, hardly the only place to be rational in.
Good point about specifics on improvement, thanks! I'll encourage them to describe their improvements in the future. Regarding LW: Intentional Insights content is a broad-version introduction to LW-style rationality. After getting that introduction, we aim to send people that are ready for more complex materials to ClearerThinking, CFAR, and LW.

Hi! I am new and don't know exactly where to ask this question, so I'm asking here...

How do you vote on articles and comments? I can't figure out how!

(I hope I'm not missing some obvious button and about to be embarrassed.)


Voting is enabled at 10+ karma. Welcome! You managed to make a post, which means you successfully verified your email address (which sometimes stops people).

Thanks! This fact should be added to the FAQ section on voting.

Hello from a lot of places! :) I'm Chinese (from Shanghai), studying in Brighton, England, and living in Vienna, Austria (moving to Prague, Czech Republic soon). How I discovered LW is not a very long story.

I have a great interest in artificial intelligence. I was reading James Barrat's 'Our Final Invention' and he mentioned the AI-Box experiment, which got me excited (because just that morning I was reading an article about the Turing Test and how unreliable it is at measuring intelligence in machines; might the AI-Box experiment be a better test in the future?). Before h... (read more)


[This comment is no longer endorsed by its author]
Do you mean financial markets?
So what kind of metrics are you interested in forecasting? Macroeconomic ones (GDP, inflation, etc.)? Industry-specific things? Interest rates?
Your mathematical models are supposed to reflect real-life features of the data. All data is not the same and the same models are not appropriate for all data.

I’ve found the Welcome thread!

Hi, I'm Alia and I live with my husband in San Jose, California. I found this site via SlateStarCodex, and having read Rationality: From AI to Zombies, I think this is a fascinating and useful set of concepts, and that using this type of reasoning more often is something to aspire to. I want to do more Bayesian calculations so I get more of a feel for them.

I’m also a fundamentalist* Christian. I’m perfectly ready to discuss and defend these beliefs, but I wouldn’t always bring up these beliefs in threads. I’m not trying to deceiv... (read more)

Welcome! I applaud your decision to embrace hostile terminology. I don't think you should feel any obligation to bring up your religious beliefs all the time.

If you're interested in the interactions between unashamedly traditionalist religion and rationalism, you might want to drop into the ongoing discussion of talking snakes. Most of it lately, though, has been discussion between people who agree that the story in question is almost certainly hopelessly wrong and disagree about exactly which bits of it offer most evidence against the religion(s) it's a part of, which you might find merely annoying... [EDITED to add: Aha, I see you've already found that. My apologies for not having noticed that you were already participating actively there.]

Just out of curiosity (and you should feel free not to answer), how "typically fundamentalist" are your positions? E.g., are you a young-earth creationist, do you believe that a large fraction of the human race is likely to spend eternity in torment, do you believe in "verbal plenary inspiration" of the Christian scriptures, etc.?

(Meta-note that in a better world would be unnecessary: it happens that one disgruntled LessWronger has taken to downvoting almost everything I post, sometimes several times by means of sockpuppets. I mention this only so that if you see this comment sitting there with a negative score you don't take it to mean that the LW community generally disapproves of my welcoming you or disagrees with what I said above.)
Fairly typically fundamentalist: I believe in young-earth creationism with a roughly estimated confidence level of 70%, a large fraction of the human race destined for eternal torment at about 85%, and verbal plenary inspiration at about 90%. I'm a little more theologically engaged than average, but (as is typical in my circles) that means I'm more theologically conservative, not less.
Are those figures derived from any sort of numerical evidence-weighing process, or are they quantifications of gut feelings? (I do not intend either of those as a value judgement. Different kinds of probability estimate are appropriate on different occasions.)
These are more gut feelings; I had already considered a lot of evidence for and against these before I found out about Bayesian updating, so the bottom line was really already written. If I tried to do a numerically rigorous calculation now, I would just end up double-counting evidence. This is just an 'if I had to make a hundred statements of this type that I was this confident about, how often would I be right?' guess.
Much though this amuses atheist-curmudgeon me, may I suggest that you might want to fix the typo?
Oops, thnks
Interesting! Thanks.
Do you believe that both Black and White people living today descend from one individual who lived around 6,000 years ago?
Welcome Alia! You sure sound like one of us. Hope you like it here.

Hi from San Diego, California. I'm an attorney with academic training in molecular biology (BS, MS, PhD). I have an intense interest in politics, specifically the cognitive biology/social science of politics. I'm currently reading The Rationalizing Voter by Lodge and Taber. I have read both of Tetlock's books, Haidt's Righteous Mind, Kahneman's Thinking, Fast and Slow, Thaler's Nudge, Achen and Bartels' Democracy for Realists, and a few others. I also took a college-level MOOC on cognitive biology and attendant analytic techniques (fMRI, etc.) and one on the ... (read more)

First impressions from skim-reading the blog: it points me in the direction of Objectivism, with all its problems. There are good reasons to be quite suspicious when someone claims that they don't have an ideology and that their views are simply "objective". Saying something like that without bringing forward a specific proposal suggests political ignorance to me. The blog also isn't spell-checked.
I have been arguing and debating politics online for over 7 years now and I am quite used to how people speak to each other. There is nothing at all politically ignorant in my comment. When I say something is obvious, it has to be taken in the context of the entire post. It's easy to cherry-pick and criticize by the well-known and popular practice of out-of-context distortion of a snippet of content in a bigger context. I have seen that tactic dozens of times and I reject it. It's a cheap shot and nothing more. You can do better. Bring it on.

My blog and all of my other online content speak directly to the American people in their own language. I do not address academics in academic language. I have tried academic language with the general public and it doesn't work. Here's a news flash: there is an astonishing number of average adult Americans who have little or no trust in most any kind of science, social and cognitive science included. As soon as one resorts to the language of science, or even mentions something as "technical" as "cognitive science", red flags go up in many people and their minds automatically switch to conscious rationalization mode. My guess is that this anti-science attitude applies to about 40-60% of adult Americans, if my online experience is a reasonably accurate indicator. (My personal experience database is based on roughly 600-1,000 people -- no, I am not so stupid as to think that is definitive; it's just my personal experience.)

I am trying to foster the spread of the idea that maybe, just maybe, politics might be rationalized at least enough to make some detectable difference for the better in the real world. My world is firmly based in messy, chaotic online retail politics, not any pristine, controlled laboratory or academic lecture room environment. Political ignorance is in the eye of the beholder. You see it in me and I see it in you.

By the way, reread the blog post you criticize as making no specific proposal. There is a specific pr
That's the problem. Most relevant political discussions that have real-world effects don't happen online. Knowing how to debate politics online and actually knowing how political processes work are two different things.

That's no specific proposal. The fact that you think it is suggests that you haven't talked seriously to people who make public policy, but only to people on the internet who are as far removed from political processes as you are. It's like people outside of mathematical academia writing proofs for important mathematical problems: they usually think that their proofs are correct because they aren't specific enough about them to see the problems that exist with them.

I read one post and gave my impression of it. The spelling errors reduce the likelihood that reading other posts would be valuable, so I stopped at that point. If you are actually interested in spreading your ideas, that's valuable information for you.
Is a short summary of your ideology or set of morals available somewhere on the 'net?
I have tried for short summaries, but it hasn't worked. Very short summary: a "rational" ideology can be based on three morals (or core ideological principles): (1) fidelity to "unbiased" facts and (2) "unbiased" logic (or maybe "common sense" is the better term), both of which are focused on (3) service to an "objectively" defined conception of the public interest.

Maybe the best online attempts to explain this are these two items:

1. An article I wrote for IVN: http://ivn.us/2015/08/21/opinion-america-needs-move-past-flawed-two-party-ideology/
2. My blog post that tries to explain what an "objective" public interest definition can be and why it is important to be broad, i.e., so as to not impose fact- and logic-distorting ideological limits on how people see issues in politics: http://dispol.blogspot.com/2015/12/serving-public-interest.html

I confess, I am struggling to articulate the concepts, at least to a lay audience and maybe to everyone. That's why I was really jazzed to come across Less Wrong -- maybe some folks here will understand what I am trying to convey. I was under the impression that I was alone in my brand of politics and thinking.
These are not particularly contentious, given that they both can be rephrased as "let's be really honest". However... is somewhat more problematic. I assume we are speaking normatively, not descriptively, by the way, since real politics is nothing like that.

Off the top of my head, there are two big issues here. One is the notion of the "public interest": how do you aggregate the very diverse desires of the public into a single "public interest", and how do you resolve conflicts between incompatible desires? The other one is what makes it "objective", even with the quotes. People have preferences (or values); some of them are pretty universal (e.g. the biologically hardwired ones), but some are not. Are you saying that some values should be uplifted into the "objective" realm, while others should be cast down into the "deviant" pit? Are there "right" values and "wrong" values?
I'm done with this weird shit arrogant, academic web site. Fuck all of you academic idiots. Your impact on the 2016 November elections: Zero. Your efforts will have zero impact on the Donald's election. Only the wisdom of American common sense can save us. LW is fucking useless. :)
Elections aren't everything. Yes, I know that I, personally, have had (and will have) absolutely zero effect on the American 2016 November elections. I am fully aware that I, personally, will have absolutely zero impact on Donald Trump's candidacy, and everything that goes into that. And I am perfectly fine with that, for a single, simple, and straightforward reason; I am not American, I live in a different country entirely. I have a (very tiny) impact on a completely different set of elections, dealing with a completely different set of politicians and political problems. And that has absolutely nothing to do with why I am here.

I've taken a (very) brief look over your blog. And I don't think I have much to say about it - it is very America-centric, in that you're not talking about an ideal political system nearly as much as you're talking about how the American system differs from an ideal political system.

Having said that, you might want to take a look over this article - it seems to cover a lot of the same ground as you're talking about. (Then note the date on that article; if you really want to change American politics, this is probably the wrong place to be doing it. If you really want to change the mind of the average American, then you need to somehow talk to the average American - I only have an outsider's view of America, but I understand that TV ads and televised political debates are the best way to do that.)

Good luck!
Oh, dear. Somebody had a meltdown and a hissy fit. Y'know, in some respects LW is like 4chan. Specifically, it's not your personal army. You seem to have taken a break from bashing your face into a brick wall. Get back to it, the bricks are waiting.
I read your article on IVN, so this is mostly a response to that.

I do think that it would be great if people thought about politics in a scientifico-rational way. And it isn't great that you really only have two options in the United States if you want to join a coalition that will actually have some effect. It's true that having two sets of positions that cannot be mismatched without signaling disloyalty results in a false-dichotomous sort of thinking. But it seems important to think about why things are in this state in the first place. Political parties can't be all bad; they must serve some function.

Think about labor unions and business leaders. Employees have some recourse if they dislike their boss. They can demand better conditions or pay, and they can also quit and go to another company. But we know that when employees do this, it usually doesn't work. They usually get fired and replaced instead. The reason is that if an employer loses one employee out of one hundred, then they will be operating at 99% productivity, while the employee that quit will be operating at 0% productivity for some time. Labor unions solve the coordination problem. Likewise, the use of a political party is that it offers bargaining power. Any scientifico-rational political platform will have to solve such a coordination problem, and they will have to use a different solution from the historical one: ideology. That's not easy. Which is not to say that it's not worth trying.

So, it's not enough that citizens be able to reveal their demand for goods and services from the government, or other centers of power; it's also necessary that officials have incentives to provide the quality and quantity of goods and services demanded. In democracy this is obtained through the voting mechanism, among other things. A politician will have a strong incentive to commit an action that obtains many votes, but barely any incentive to commit an action that will obtain few votes, even if they have d

Hi Less Wrong,

I am John Chavez from the Philippines. I'm a part-time teacher in a community college, teaching computer hardware servicing and maintenance to out-of-school youths.

As I place much value on helping others in my community and reaching out to people who need help, I came to know about Intentional Insights on Facebook, which led me here to Less Wrong. I have been here for a while reading several published articles. There are a lot of articles here that I really love to read, although I must admit that there are a few that I found confusing and I d... (read more)

Hi John! From what you have described, I think it could be a better experience for you if you start with the more structured reading, which is (at the moment) best provided by Eliezer's "From AI To Zombies". You can download it for free if you follow the link. It may seem long, but it's well worth the read.
Cool! Thank you. I will definitely read it. :)
Glad you're joining LW, John! Nice to see another volunteer and part-time contractor for Intentional Insights join LW :-) It's definitely a nice place to develop rationality, and don't be put off by the occasional roughness of the commentary here. For the rest of LW folks, I want to clarify that John volunteers at Intentional Insights for about 45 hours, and gets paid as a virtual assistant to do various administrative tasks for about 20 hours.
What exactly do you have them do?
They work on a variety of tasks, such as website management, image creation, managing social media channels such as Delicious, StumbleUpon, Twitter, Facebook, Google+, etc. Here's an image of the organizational Trello showing some of the things that they do (Trello is a platform to organize teams together). We also have a couple more who do other stuff, such as Youtube editing, Pinterest, etc.
That doesn't really tell me what "managing social media channels means". Does managing Twitter mean that the person registers a Twitter page, follows random people and repost InIn articles? Does it basically mean that the people are supposed to post links at various places?
Managing Twitter means several things. Regarding content, the person finds appropriate things to post on Twitter, which we do about 4 times a day. This includes both InIn and non-InIn materials that we curate for our audience, and most of the things we post are not InIn content - about 2/3. The latter involves reading the article and determining whether our audience would find it appropriate. Then, the person writes up Tweets with appropriate hashtags for each piece. They put it into a spreadsheet. Then it gets read over by two other people for grammar/spelling/fit. Then, these are scheduled through Hootsuite, a social media scheduling app.

Regarding managing Twitter itself, this involves managing the Twitter audience of the channel, including questions, comments, etc. (we have over 10K followers on Twitter). It also involves reTweeting interesting Tweets, and other Twitter-oriented activities. This takes place for a number of social media channels. Here's an example of a weekly social media plan for Hootsuite, if you're curious. This includes Twitter, FB, LinkedIn, and Google+. This doesn't include Pinterest, Instagram, StumbleUpon, or Delicious, since Hootsuite doesn't handle those.
Who's your target audience, such that you think a Nigerian can make a good decision about whether that audience would find an article appropriate?
What are you implying about Nigerians here?
That they are culturally different from Western people. They may very well know what's culturally appropriate to post when trying to reach a Nigerian audience, but Western culture is a bit different in lots of respects. The posts those people posted on LW look like they were written not by typical Western people, but either by people who wrote them because they are paid to do so or by people who operate under different cultural norms.
As I think I mentioned before, Intentional Insights tries to reach a global audience, and after the US, our top three countries are non-western. So it's highly valuable for us to have non-western volunteers/contractors who can figure out what would be salient to a diverse international audience.
Do you have other data about your impact in those countries besides passive reading numbers? Do you have links to the reception of InIn content by non-western audiences, besides those people you paid?
Links are hard, since most things I have are people writing to me. However, here is one relevant link. After finding out about our content, a prominent Indian secular humanist association invited me to do a guest blog for them. I was happy to oblige.
How many of those are paid and how many organic?
Five are paid as virtual assistants, but they are not paid to follow Twitter. There wouldn't be a point to having paid followers, because the goal is to distribute content widely. There are plenty of people who after reading our widely-shared articles then choose to engage with our social media.


[This comment is no longer endorsed by its author]
Please don't regret it! Welcome.

Hello Less Wrong!

My name is Bryan Faucher. I'm a 27-year-old from Edmonton (Canada), in the middle of the slow process of immigrating to Limerick (Ireland), where my wife has taken a contract with the University. I've been working in education for the past five years, but I'm looking to pursue a master's in mathematical modeling next year, rather than attempting to fight for the right to work in a crowded industry as a non-citizen.

I've been aware of LW for something like six years, having been introduced by an old roommate's SO by way of HPMOR. In that time I'... (read more)


I am new to this site but judging from HPMOR and some articles I read here, I think I have come to the right place for some help.

I am working on the early stages of a project called WikiLogic which has many aims. Here are some that may interest LW readers specifically:

-Make skills such as logical thinking, argument construction and fallacy recognition accessible to the general public

-Provide a community created database of every argument ever made along with their issues and any existing solutions

-Highlight the dependencies between different fields i... (read more)

Welcome! I've seen these sorts of argument maps before: https://wiki.lesswrong.com/wiki/Debate_tools and http://en.arguman.org/. It seems there is some overlap with your list here.

Generally what I've noticed about them is that they focus very hard on things like fallacies. One problem here is that some people are simply better debaters even though their ideas may be unsound. Because they can better follow the strict argument structure they 'win' debates, but actually remain incorrect. For example: http://commonsenseatheism.com/?p=1437 He uses mostly the same arguments debate after debate and so has a supreme advantage over his opponents. He picks apart the responses, knowing full well all of the problems with typical responses. There isn't really any discussion going on anymore. It is an exercise in saying things exactly the right way without invoking a list of problem patterns. See: http://lesswrong.com/lw/ik/one_argument_against_an_army/

Now, this should be slightly less of an issue since everyone can see what everyone's arguments are, and we should expect highly skilled people on both sides of just about every issue. That said, the standard for actual solid evidence and arguments becomes rather ridiculous. It is significantly easier to find some niggling problem with your opponent's argument than to actually address its core issues. I suppose I'm trying to describe the effects of the 'fallacy fallacy.' Thus a significant portion of manpower is spent on wording and putting the argument precisely exactly right instead of dealing with the underlying facts.

You'll also have to deal with the fact that if a majority of people believe something, then the sheer amount of manpower they can spend on shoring up their own arguments and poking holes in their opponents' will make it difficult for minority views to look like they hold water. What are we to do with equally credible citations that say opposing things?

'Every argument ever made' is a huge goal. Especially with th
Thanks for an excellent, in-depth reply! Brilliant resource! Thanks for pointing it out.

You bring up a few worries, although I think you also realize how I plan to deal with them. (Whether I am successful or not is another matter!) One part of this project is to make some positive aspects of debating skills easy for newbies using the site to pick up. Charisma and confidence are worthless in a written format, and even powerful prose is diluted to simple facts and reasoning in this particular medium.

In my mind, if a niggling issue can break an argument then it was crucial and not merely 'niggling'. If the argument was employing it but did not rely on it, then losing it won't change its status. Being aware of issues like the 'fallacy fallacy' is useful in time-limited oral debates, but in this format it's OK to attack a bad argument on an otherwise well-supported theory. The usual issue is that it allows one's bias to come into play and makes the opponent feel the whole argument is weak. But this is easily avoided when the node remains glowing green to signify it is still 'true'. Is this so bad?

We are used to being frugal with a resource like manpower because it's traditionally been limited, but I believe you can overcome that with the worldwide reach offered by the internet. People will only concentrate on what they are passionate about, which means the most contentious of arguments will also get the most attention to detail. Most people accept gravity, so it won't get or need as much attention. In the future, if a new prominent school of thought is formed attacking it, then it may require a revisit from those looking to defend it.

I think the opposite is true. In most other formats, such as a forum, the one comment can easily be drowned out. Here there will simply be two different ideas. More people working on one will help, of course, but they cannot conjure good arguments from nothing. We also have to have faith (the good kind) in people here and assume that they will be w

Hi LW! My name is Alex, a salesperson by profession. I found Less Wrong through Intentional Insights and have been here for a couple of months now. I'd like to express my interest in becoming more rational.

Nice to see you on LW, Alex! I want to add for LW folks that Alex volunteers at Intentional Insights for about 25 hours, and gets paid as a virtual assistant to help manage our social media for about 15 hours. He decided to volunteer so much of his time because of his desire to improve his thinking and grow more rational. He's been improving through InIn content, and so I am encouraging him to engage with LW.

Hello everyone,

/u/mind_bomber here from https://www.reddit.com/r/Futurology.

I've been a moderator there for over two years now and watched the community grow from several thousand futurists to over 3.5 million subscribers. As a moderator I've had the pleasure of working with Peter Diamandis, David Brin, Kevin Kelly, and others on several AMAs. I also curate the glossary and post videos, documentaries, talks, and keynotes to the site.

I hope to participate in this community, although the Less Wrong Community is exactly the type of people I would like to s... (read more)

Are you sure you have enough copies of that link there? There are only four, and two of your paragraphs don't have one. (If you're trying for some SEO thing, please note that links from LW comments get rel="nofollow" on them and therefore don't provide extra googlejuice. I wouldn't be at all surprised to find that Google gives less weight to a link when it sees several instances of it in rapid succession, because that's a thing spammers do.)
Offense intended: your subreddit mainly consists of hype-trains, please do not advertise it.
This is not an advertisement!
What, no movie and a dinner first? X-)