If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
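For example (these are standard Markdown conventions, offered as a general illustration rather than a complete spec of the editor): surrounding a word with *asterisks* makes it italic, writing [link text](http://example.com) produces a link, and starting a line with "> " marks it as a quote.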

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of the votes on all their comments and posts. This immediate, easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes the reason is unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here and checking the responses, you'll probably get a good read on what, if anything, has already been said on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about an LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address. 
Normal_Anomaly 
Randaly 
shokwave 
Barry Cotter

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Note from Clarity: MBlume and other contributors wrote the original version of this welcome post, and orthonormal edited it a fair bit. If there's anything I should add or update, please send me a private message, or make the change yourself when you create the next thread; I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.

16 comments

Hi,

I've read some of "Rationality: From AI to Zombies", and find myself worrying about unfriendly strong AI.

Reddit recently had an AMA with the OpenAI team, where "thegdb" seems to misunderstand the concerns. Another user, "AnvaMiba", provides two links (http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better and http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/) as examples of researchers who are not worried about unfriendly strong AI.

The arguments presented in the links above are really poor. However, I feel like I am attacking a straw man; quite possibly, www.popsci.com is misrepresenting a more reasonable argument.

Where can I find some precise, well thought out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to 0? I am interested in both arguments from people who believe the risk is zero, and people who do not believe this, but still attempt to "steel man" the argument.

Stuart Armstrong asked a similar question a while back. You may find the comments to his post useful.

Thank you. That was exactly what I was after.

Where can I find some precise, well thought out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to 0? I am interested in both arguments from people who believe the risk is zero, and people who do not believe this, but still attempt to "steel man" the argument.

The primary disagreement, in the steel man universe, is over urgency. If one knew that we would make AGI in 2200, then one would be less worried about solving the friendliness problem now. If one knew that we would make AGI in 2020, then one would be very worried about solving the friendliness problem now.

For many people who work on AI, it's hard to believe that it will 'just start working' at that high level of ability soon, given how over-optimistic AI proponents have been over the years and how hard it is, in their own work, to wring a bit more predictive accuracy out of their algorithms.

But if one takes the position not that one is certain it will happen soon, but that one is uncertain when it will happen, that uncertainty implies it might happen sooner or it might happen later, and that means we need to do some planning for the 'sooner' case. (That is, uncertainty does not imply it can only happen a long time from now.) This is, it seems, the most effective way to communicate with people who aren't worried about Strong AI.

There is also the question of what should this type of research actually look like.

There is also the question of what should this type of research actually look like.

I think that's an answer to "why aren't people supporting MIRI's specific research agenda?" but I see SoerenE's question as about "is there a good reason to not be worried about AI danger?"

(In the steelman universe, I think people understand that different research priorities will stem from different intuitions and skills, and think that there's space for everyone to work in the direction that suits them best.)

[anonymous]

Hello all,

I joined in late December (so my intro post is there), but I was wondering if anyone has any advice regarding LW-style blogging?

I've been ruminating a lot on the nature of motivation, and I think it would be helpful to my fellow students/friends in real life. A lot of the concepts have their basis in rationalist ideas, so I'm finding myself restating a lot of ideas that are expressed much better here.

So, I suppose my question is, "What is the typical consensus on rationalist blogs?" and "Would anyone be willing to offer their advice on this type of thing/give feedback on what I'm thinking of posting?"

What I write here is just my personal opinion; I am not speaking for LW as a whole. Obviously. For the sake of brevity, please assume that every sentence begins with "I think..." and ends with "...unless you have good reasons to do otherwise, of course." I am saying this to avoid scaring a new member, because sometimes new members report that LW culture feels scary to them. /end-of-disclaimers

If you write up your beliefs about the nature of motivation (or anything else), it is better to also explain why you believe them, so that we can estimate your level of certainty and possibly find out more about the topic. If what you write is based on research, please provide links, so curious people can look at the original papers. But don't worry, quoting research is not necessary for blogging on LW. If what you write is based on your personal observation, that's also okay. If you have some credentials, e.g. you are a motivation coach, feel free to say something brief about yourself and link to your website. But not having credentials is still okay. If your conclusions are based on some specific examples you have seen, the article will be better if you include not only the conclusions but also an example or two (properly anonymized) that led you to them. This may prevent or reduce misunderstanding.

Please start with the topic you want to write about; avoid long disclaimers and introductions. Don't write an introductory article that merely describes what you are planning to write in the following articles; instead, go ahead and post the first part. Make the first part contain its own conclusion, instead of merely opening the topic and promising to reach a conclusion in some future part. Essentially, write in a way that doesn't leave anything unfinished. If your topic is too long for a single article, choose a subset that can fit in one article, and write that first. Then choose another subset (now you are allowed to refer to the article you already wrote) and write that. Repeat until the topic is exhausted, or until you lose interest in writing more. Why? For the readers, it lets them discuss and vote on your articles immediately, instead of thinking "okay, this seems kinda interesting, but I haven't heard anything specific yet". For the writer, not promising anything means not putting pressure on yourself: there are only the articles you have already published, and the completely unconstrained future. (If you have read the Sequences, which I recommend, especially in the book form, you'll see they were written mostly this way.)

What is the typical consensus on rationalist blogs?

About what -- motivation? I remember reading about "hyperbolic discounting" and similar mathy stuff, but to me personally that always felt wrong... not technically incorrect, but avoiding the core of the matter, which is usually something emotional, not an equation. I would recommend reading PJ Eby, who also sometimes posts on LW (Spock's Dirty Little Secret, Improving The Akrasia Hypothesis).
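(For reference, if that term is Greek to you: the standard hyperbolic model values a reward of size A delayed by time D at V = A / (1 + kD), where k measures impatience, whereas classical exponential discounting would use V = A * e^(-kD). The equations fit behavioral data reasonably well; the complaint above is that they describe the pattern without touching the emotional machinery underneath.)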

Would anyone be willing to offer their advice on this type of thing/give feedback on what I'm thinking of posting?

If your post will contain interesting information, I am sure people will reply.

[anonymous]

Hello Viliam,

Thanks for the thoughts! I can see why having "self-contained" articles is helpful-- and the Sequences definitely are arranged that way.

It had occurred to me to put a few of the essays here, but I'm also considering putting up a personal blog for them-- targeted for my friends at school, who are nonrationalists.

Because of that, I've found myself writing a few basic essays to go into some core rationalist ideas (before talking about motivation), but they pale in comparison to better-explained articles here.

So I feel a little guilty about writing about both motivation and the core ideas because I don't feel "worthy", as I haven't had too much experience-- mainly just anecdotal stuff and introspection.

Every writer was a beginner once; don't worry, you grow by trying (and by learning from feedback, of course, but unless you try there is no feedback).

Hello everyone!

I am an engineering student from India and found Less Wrong through an article about 'tabooing' your words, which I was led to by Slate Star Codex's description of Motte and Bailey arguments. I had subconsciously thought of my interests in cool logic problems and in the disagreements between social groups as only tenuously related, and I find the content here very appealing and applicable to both areas.

Hi everyone,

I've been lurking for years (originally read Three Worlds Collide, then HPMOR, then started the sequences) and I guess it's about time I started interacting.

I participate in the LW Slack, the Facebook group, and local meetups. Looking forward to the new directions LW is heading, and to being a part of LW 2.0.

Hello! I've been a reader since 2012 or so, and used to comment occasionally under a different username. (I switched because I wanted to be less directly connected to my real name.)

I don't see upvote or downvote buttons anywhere. Did LW remove this feature, or is it something that's only happening to me specifically?

Edit: Also, the sort function is static in most but not all comment sections for me. I'm running Chrome.

[anonymous]

It would be nice if first-time LessWrong meetup attendees received a special welcome at the ACR Newcomer’s breakfast, and special recognition at rationalist events.