Some research suggests that lurkers make up over 90% of the members of online groups. I suspect that Less Wrong has an even higher percentage of lurkers than other online communities.

Please post a comment in this thread saying "Hi." You can say more if you want, but just posting "Hi" is good for a guaranteed free point of karma.

Also see the introduction thread.

Attention Lurkers: Please say hi
636 comments

Hi.

edit: to add some potentially useful information, I think the biggest reason I haven't participated is that I feel uncomfortable with the existing ways of contributing (solely, as I understand it, top-level posts and comments on those posts). I know there has been discussion on LW before on potentially adding forums, chat, or other methods of conversing. Consider me a data point in favor of opening up more channels of communication. In my case I really think having a LW IRC would help.

8Airedale
Hi, I think explanations for lurking, if people feel comfortable giving them, may indeed be helpful. I also felt uncomfortable about posting to LW for a long time and still do to some extent, even after spending a couple months at SIAI as a visiting fellow. Part of the problem is also lack of time; I feel guilty posting on a thread if I haven't read the whole thread of comments, and, especially in the past, almost never had time to read the thread and post in a timely fashion. People tell me that lots of people here post without reading all the comments on a thread, but (except for some of the particularly unwieldy and long-running threads), I can't bring myself to do it. I agree that a forum or Sub-Reddit as announced by TomMcCabe here might encourage broader participation, if they were somewhat widely used without too significant a drop in quality. But the concerns expressed in various comments about spreading out the conversation also seem valid.
3JStewart
Reddit-style posting is basically the same format as comment threads here; it's just a little easier to see the threading. One thing that feels awkward in threaded comments is conversation, and people's attempts to converse in comment threads are probably part of why comment threads balloon to the size they do. That's one area that chat/IRC can fill in well. Another issue is that top-level posts have a feeling of permanence to them. It's like publishing something. I'd rather start with an idea and be able to discuss it and shape it. Top-level posts seem like they should be exposed to feedback before being judged ready to publish. I'm not really sure what kind of structure would work for this, but if I did, I probably would have jumped into an open thread or a meta thread before now :)
3AdeleneDawner
Google Wave is decent for this - it's wiki-like in that the document at hand can be edited by any participant, and blog-like in that comments (including threaded comments) can be added underneath the starting blip. There's a way to set it up so that members of a Google group can be given access to a wave automatically, which would be convenient. I have a few invitations left for Wave, if anyone would like to try it. I'm not interested in taking charge of a Google group, though.
0PeerInfinity
I agree. Google Wave is awesome. I use it constantly. Though it's still in beta, and it shows. But I guess I shouldn't start ranting about the advantages and disadvantages of Wave here. I also have some Wave invitations left over.
6Peter_de_Blanc
This made me think of how cool a LessWrong MOO would be. I went and looked at some Python-based MOOs, but they don't seem very usable. I'd guess that the LambdaMOO server is still the best, but the programming language is pretty bad compared to Python.
6Jack
What exactly would we do with it?
3Peter_de_Blanc
Chat, and sometimes write code together.
0saliency
Some MOO programming is pretty easy. I think I used to use something called cyber. You would create your world by creating rooms and exits. With just those two you could create some nice areas. Note that an exit from a room could be something like 'kill dragon'. It got more complex with key objects and automated objects, but even with simple rooms and exits a person could be very creative.
2Peter_de_Blanc
Yes, but if you want to make, say, a chess AI or a computer algebra system, then your code ends up being much longer and harder to read than it would be in Python.
0[anonymous]
A LW MOO would be awesome. I think it would be fun exploring the worlds LessWrongers would create. At the same time we could just take part of LambdaMOO and create rooms.
0Morendil
I liked LambdaMoo enough that I wrote a compiler for it, targeting the JVM. Fun stuff.
4Kevin
#lesswrong on Freenode! And a local Less Wrong subreddit is coming, eventually...
0Jack
IT IS?! Really?
0Kevin
The Less Wrong site authorities all want it; it's just an issue of getting someone to program it. It's not exceptionally challenging or anything to code, but it would require some real programmer-hours.
0[anonymous]
http://webchat.freenode.net/?channels=lesswrong# There it is. (at least, that is how I know to access it...)
homunq 200

Hi.

I am not actually a lurker - I currently have 13 karma - but I am not a heavy participator. However, now I would like to get to 20 karma so I can make a post on why MWI makes acausal incentives into minor considerations. I would also be gratified if someone told me how to make my draft of this post linkable, even if it does not show up within "new".

I think that you should get some bonus towards the initial 20 karma for your average karma per post. This belief is clearly self-serving, but not necessarily thereby invalid. I believe my own average karma per post is decent but not outstanding.

I believe that the businesslike tone of this post, as a series of declarative statements, will be seen as excessive subservience to the imagined norms of a community of rationalists, and thus net me less status and karma than a chattier post. I am honestly unsure if the simple self-referential gambit of this paragraph will help or hurt this situation.

homunq 100

I posted a diary, and it was banned for containing a dangerous idea. I can understand that certain ideas are dangerous; in fact, in the discussion I started, I consciously refrained from expressing several sub-points for that reason, starting with my initial post. But I think that if there's such a policy, it should be explicit, and there should be some form of appeal. If the very discussion of these issues shouldn't happen in public, then there should be a private space to give whatever explanation can be given of why. A secret, unappealable rule which cannot even be discussed - this is not the path to rationalism, it's the way down the rabbit hole.

-1PhilGoetz
What? Is this separate from the recent Banned Post? Is this a different idea?
0FAWS
It was a counter argument against the dangerous topic being dangerous, which by necessity touched the dangerous topic and which wasn't strong enough to justify this (anyone for whom the dangerous topic actually would be dangerous [rather than just causing nightmares] would almost by necessity already be aware of a stronger argument).
2homunq
Interesting. Thanks, uprated; with the caveat that of course, we only have your word that the other argument is "stronger". Without further evidence, it's my rationality plus consideration of the issue minus overconfidence against yours. You have an advantage on consideration, since you know both arguments while I only know that I know one; however, on the whole, I think it would be pathological for me to abandon my argument and belief just on that basis. As for the other aspects, we're both probably smarter and less biased than average people, and I don't see any argument to swing that. In other words, I still think I'm right.
-2Eliezer Yudkowsky
No posts on Riddle Theory.
6MBlume
Nor joke warfare
4dclayh
Nor pictures of birds.
homunq 150

Nor writing "Bloody Mary" in lipstick on mirrors?

Seriously, my post was about why that stuff is not scary. Fiction can be good allegory for reality, but those stories all use a lot of you-should-be-scared tricks, all very well and good for ghost stories, but not conducive to actual discussion.

We are swimming in a soup of sirens' songs, every single day. Dangerous ideas don't just exist, they abound. But I see no evidence of any dangerous ideas which are not best fought with some measure of banality, among other tactics. The trappings of Avert Your Eyes For That Way Lies Doom seem to be one of the best ways to enhance the danger of an idea.

In fact... what if Eliezer himself... no, that would be too horrible... oh my god, it's full of stars. (Or, in serious terms: I'm being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition).

Gah, it's incredibly annoying to try to talk about something without being too explicit. The more explicit I get in my head, the more ridiculous this whole charade seems to me. Of course I can find plenty of rational arguments to support that, but I also trust the feeling. I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect. You're smart people and high status in this arena and I probably shouldn't laugh at your bugaboos.

I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect.

Just to point out some irony - I'm participating in the "that which must not be mentioned" dance out of lost respect. I no longer believe Eliezer is able to consider such questions rationally. Anyone who wants to have a useful discussion on the subject must find a place outside of Eliezer's influence to do it. For much the same reason I don't try to discuss the details of biology in church.

4timtyler
FWIW, it seems pretty ridiculous to me too. It might be funny - were it not so negative. Plus, if you don't do the dance just right, your comments get deleted by the moderator.
4thomblake
So apparently either "that which can be destroyed by the truth should be" is false, or you've written dangerous falsehoods which would overtax the rationality of our readers. Eliezer's response above seems to imply the former.
1homunq
Did you read the "riddle theory" link? The riddle is not dangerous because it's false, but because it's incomprehensible. And of course, if you meant to list all the possibilities, you left out the ones where E. is just wrong about the danger.
1timtyler
My comparison at the time was to The Ring.
2cousin_it
Very good question, but AFAIK Eliezer tries to not think the dangerous thought, too. Seconded.
4timtyler
I don't think there was ever any good evidence that the thought was dangerous. At the time I argued that youthful agents that might become powerful would be able to promise much to helpers and to threaten supporters of their competitors - if they were so inclined. They would still be able to do that whether people think the forbidden thought or not. All that is needed is for people not to be able to block out such messages. That seems reasonable - if the message needs to get out it can be put into TV adverts and billboards - and then few will escape exposure. In which case, the thought seems to be more forbidden than dangerous.
5jimrandomh
If there was any such evidence, it would be in the form of additional details, and sharing it with someone would be worse than punching them in the face. So don't take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn't.
7wedrifid
It actually is, in the sense we use the term here.
4SilasBarta
Exactly. One must be careful to distinguish between "this is not evidence" and "accounting for this evidence should not leave you with a high posterior".
1timtyler
I think we already had most of the details, many of them in BOLD CAPS for good measure. But there is the issue of probabilities - of how much it is likely to matter. FWIW, I do not fear thinking the forbidden thought. Indeed, it seems reasonable to expect that people will think similar thoughts more in the future - and that those thoughts will motivate people to act.
0jimrandomh
No, you haven't. The worst of it has never appeared in public, deleted or otherwise.
2timtyler
Fine. The thought is evidently forbidden, but merely alleged dangerous. I see no good reason to call it "dangerous" - in the absence of publicly verifiable evidence on the issue - unless the aim is to scare people without the inconvenience of having to back up the story with evidence.
0EStokes
If one backed it up with how exactly it was dangerous, people would be exposed to the danger.
6timtyler
The hypothetical danger. The alleged danger. Note that it was alleged dangerous by someone whose living apparently depends on scaring people about machine intelligence. So: now we have the danger-that-is-too-awful-to-even-think about. And where is the evidence that it is actually dangerous? Oh yes: that was all deleted - to save people from the danger! Faced with this, it is pretty hard not to be sceptical.
5khafra
I really don't have a handle on the situation, but the censored material has allegedly caused serious and lasting psychological stress to at least one person, and could easily be interpreted as an attempt to get gullible people to donate more to SIAI. I don't see any way out for an administrator of human-level intelligence.
-1timtyler
AFAICT, the stresses seem to be largely confined to those in the close orbit of the Singularity Institute. Eliezer once said: "Beware lest Friendliness eat your soul". So: perhaps the associated pathology could be christened Singularity Fever - or something.
5EStokes
I don't donate to SIAI on a regular basis, but I haven't donated because of being scared of UFAI. I think more about aging and death. So, I'm assuming that UFAI is not why most people donate. Also, this incident seems like a net loss for PR, so it being a strategy for more donations doesn't really make sense. As for the evidence, what you'd expect to see in a universe where it was dangerous would be it being deleted. (Going somewhere, will be back in a couple of hours)
7homunq
I have little doubt that some smart people honestly believe that it's dangerous. The deletions are sufficient evidence of that belief for me. The belief, however, is not sufficient evidence for me of the actual danger, given that I see such danger as implausible on the face of it. In other words, sure, it gets deleted in the world where it's dangerous, as in the world where people falsely believe it is. Any good Bayesian should consider both possibilities. I happen to think that the latter is more probable. However, of course I grant that there is some possibility that I'm wrong, so I assign some weight to this alleged danger. The important point is that that is not enough, because the value of free expression and debate weighs on the other side. Even if I grant "full" weight to the alleged danger, I'm not sure it beats free expression. There are a lot of dangerous ideas - for example, dispensationalist Christianity - and, while I'd probably be willing to suppress them if I had the power to do so cleanly, I think any real-world efforts of mine to do so would be a net negative because I'd harm free debate and lower my own credibility while failing to suppress the idea. Since the forbidden idea, insofar as I know what it is, seems far more likely to independently occur to various people than something like dispensationalism, while the idea of suppressing it is less likely to do so than in that case, I think that such an argument is even stronger in this case.
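A toy Bayesian calculation may make the point about deletion-as-evidence concrete; the numbers below are purely illustrative assumptions, not values taken from anyone in this thread:

```python
# Toy illustration: if moderators delete the post under BOTH hypotheses
# ("truly dangerous" and "merely believed dangerous"), then observing the
# deletion barely shifts the odds between them.
prior_dangerous = 0.2             # hypothetical prior that the idea is truly dangerous
p_delete_if_dangerous = 0.95      # chance of deletion if it really is dangerous
p_delete_if_false_belief = 0.90   # chance of deletion if moderators merely believe it is

prior_odds = prior_dangerous / (1 - prior_dangerous)
likelihood_ratio = p_delete_if_dangerous / p_delete_if_false_belief
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"prior: {prior_dangerous:.2f}, posterior after seeing the deletion: {posterior:.2f}")
# prior: 0.20, posterior after seeing the deletion: 0.21 - the deletion is weak evidence.
```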
0EStokes
Well, I figure that if people who have proven rational in the past see something potentially dangerous, it's not proof, but it lends it more weight - basically, the idea of there being something dangerous there should be taken seriously. Hmm, what I meant was that it being deleted isn't evidence of foul play, since it'd happen in both instances. I don't see any arguments against except for surface implausibility? Free expression doesn't trump everything. For example, in the Riddle Theory story, the spread of the riddle would be a bad idea. It might occur to people independently, but they might not take it seriously, and at least the spread will be lessened. I'm not sure if it turned out for the better, deleting it, because people only wanted to know more after its deletion. But who knows.
5homunq
I have several reasons, not just surface implausibility, for believing what I do. There's little point in further discussion until the ground rules are cleared up.
0EStokes
Okay.
-1timtyler
Riddle theory is fiction. In real life, humans are not truth-proving machines. If confronted with their Gödel sentences, they will just shrug - and say "you expect me to do what?" Fiction isn't evidence. If anything it shows that there is so little real evidence of ideas so harmful that they deserve censorship, that people have to make things up in order to prove their point.
5timtyler
There are PR upsides: the shepherd protects his flock from the unspeakable danger; it makes for good drama and folklore; there's opportunity for further drama caused by leaks. Also, it shows everyone who's the boss. A popular motto claims that there is no such thing as bad publicity.
2EStokes
Firstly, if there's an unspeakable danger, surely it'd be best to try not to let others be exposed, so this one's really a question of whether it's dangerous, and not an argument in itself. It's only a PR stunt if it's not dangerous; if it's dangerous, good PR would merely be a side effect. The drama was bad IMO. Looks like bad publicity to me. I discredit the PR stunt idea because I don't think SIAI would've been dumb enough to pull something like this as a stunt. If we were being modeled as ones who'd simply go along with a lie - well, there's no way we'd be modeled as such fools. If we were modeled as ones who would look at a lie carefully, a PR stunt wouldn't work anyways. There's also the fact that people who have read the post and are unaffiliated with the SIAI are taking it seriously. That says something, too.
3wnoise
Well, many are only taking it seriously under pain of censorship.
2EStokes
I dunno, I'd call that putting up with it. Edit: Why do I keep getting downvoted? This comment wasn't meant sarcastically, though it might've been worded carelessly. I'm also confused about the other two in this thread that got downvoted. Not blaming you, wnoise. Edit2: Back to zeroes. Huh.
0wedrifid
I only just read your comments and my votes seem to bring you up to 1.
2timtyler
Well, it doesn't really matter what the people involved were thinking, the issue is whether all the associated drama eventually has a net positive or negative effect. It evidently drives some people away - but may increase engagement and interest among those who remain. I can see how it contributes to the site's mythology and mystique - even if to me it looks more like a car crash that I can't help looking at. It may not be over yet - we may see more drama around the forbidden topic in the future - with the possibility of leaks, and further transgressions. After all, if this is really such a terrible risk, shouldn't other people be aware of it - so they can avoid thinking about it for themselves?
2jimrandomh
Not quite. It's a question of what the probability that it's dangerous is, what the magnitude of the effect is if so, what the cost (including goodwill and credibility) of suppressing it is, and what the cost (including psychological harm to third parties) of not suppressing it is. To make a proper judgement, you must determine all four of these, separately, and perform the expected utility computation (probability × effect-if-dangerous + effect-if-not-dangerous, versus cost). A sufficiently large magnitude of effect is sufficient to outweigh both a small probability and a large cost. That's the problem here. Some people see a small probability, round it off to 0, see that the effect-if-not-dangerous isn't huge, and conclude that it's OK to talk about it, without computing the expected utility. I tell you that I have done the computation, and that the utilities of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must be either missing a piece of information vital to the computation, or have made an error in their reasoning. They remain negative even if one of the probability or the effect-if-not-dangerous is set to zero. Both missing information and miscalculation are especially likely - the former because information is not readily shared on this topic, and the latter because it is inherently confusing.
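A minimal sketch of the kind of expected-utility comparison described above; the function structure follows the four quantities named in the comment, and every numeric value is a hypothetical placeholder rather than anyone's actual estimate:

```python
# Sketch of the expected-utility comparison described above. All numbers are
# illustrative placeholders; the point is the structure of the calculation.

def net_utility_of_allowing_discussion(
    p_dangerous: float,           # probability the idea really is dangerous
    effect_if_dangerous: float,   # (large negative) utility if dangerous and discussed
    effect_if_not: float,         # utility of open discussion if it is harmless
    cost_of_suppression: float,   # goodwill/credibility lost by suppressing it
) -> float:
    """Expected utility of allowing discussion minus that of suppressing it."""
    eu_allow = p_dangerous * effect_if_dangerous + (1 - p_dangerous) * effect_if_not
    eu_suppress = -cost_of_suppression
    return eu_allow - eu_suppress

# A sufficiently large |effect_if_dangerous| dominates even a small probability
# and a sizable suppression cost:
print(net_utility_of_allowing_discussion(0.01, -1e6, 10.0, 50.0))  # about -9940.1
```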
3homunq
1. You also have to calculate what the effectiveness of your suppression is. If that effectiveness is negative, as is plausibly the case with ham-handed tactics, the rest of the calculation is moot.
2. Also, I believe I have information about the supposed threat. I think that there are several flaws in the supposed mechanisms, but that even if all the effects work as advertised, there is a factor which you're not considering which makes 0 the only stable value for the effect-if-dangerous in current conditions.
3. I agree with you about the effect-if-not-dangerous. This is a good argument, and should be your main one, because you can largely make it without touching the third rail. That would allow an explicit, rather than a secret, policy, which would reduce the costs of suppression considerably.
2cousin_it
Tiny probabilities of vast utilities again? Some of us are okay with rejecting Pascal's Mugging by using heuristics and injunctions, even though the expected utility calculation contradicts our choice. Why not reject the basilisk in the same way? For what it's worth, over the last few weeks I've slowly updated to considering the ban a Very Bad Thing. One of the reasons: the CEV document hasn't changed (or even been marked dubious/obsolete), though it really should have.
0timtyler
Your sum doesn't seem like useful evidence. You can't cite your sources, because that information is self-censored. Since you can't support your argument, I am not sure why you are bothering to post it. People are supposed to think your conclusions are true - because Jim said so? Pah! Support your assertions, or drop them.
4FAWS
It's not a special immunity, it's a special vulnerability which some people have. For most people reading the forbidden topic would be safe. Unfortunately most of those people don't take the matter seriously enough, so allowing them to read it is not safe for others. EDIT: Removed first paragraph since it might have served as a minor clue.
3homunq
Interesting. Well, if that's the case, I can state with high confidence that I am not vulnerable to the forbidden idea. I don't believe it, and even if I saw something that would rationally convince me, I am too much of a constitutional optimist to let that kind of danger get me. So, what's the secret knock so people will tell me the secret? I promise I can keep a secret, and I know I can keep a promise. In fact, the past shows that I am more likely to draw attention to the idea accidentally, in ignorance, than deliberately. (Of course, I would have to know a little more about the extent of my promise before I'd consider it binding. But I believe I'd make such a promise, once I knew more about its bounds.)
-2[anonymous]
Your comment gave me a funny idea: what if the forbidden meme also says "you must spread the forbidden meme"? I wonder how PeerInfinity, Roko and others would react to this.
5PhilGoetz
If we're going to keep acquiring more banned topics, there ought to be a list of them somewhere. You just lost the game.
0homunq
Response to this above. (attached to grandchild)

(I'm sorry if this comment gets posted multiple times. My African internet connection really sucks.)

Hi. 25 years old, HIV/AIDS worker in Africa, pro-BDSM sex activist in Chicago. Blog at clarissethorn.wordpress.com.

I very rarely comment because comments here are expected to be very well thought out. Stating something quickly, on the basis of instinct, or without putting it in perfectly precise language seems to me to be dangerous.

Another reason this site has a higher percentage of lurkers is, obviously, the account requirement. There's another related problem, though: there's no way to have follow-up comments emailed to you. This means that if you really want to participate in the site, you have to be pretty obsessive about checking the site itself. That's annoying unless you are very interested in a very high percentage of the site's output. If, for a given commenter (like me), rationalism is a side interest rather than a major one, then the failure to email comments on posts that I'm interested in -- or even responses to my own comments -- becomes a prohibitive barrier unless I've got an unexpected amount of free time.

1NancyLebovitz
Welcome. You can find follow-ups to your comments by clicking on the red envelope under your karma score. I found out about that by asking-- it isn't what I'd call an intuitive interface.
7clarissethorn
Thank you, I'm aware of that. But that still requires a person to be a pretty obsessive user of this site. Unless I have a lot of free time (like today), there's no way I can go back and check every single site where I've left comments and see how my comments are doing. At least LW aggregates reply comments to my input, but that doesn't solve the bigger problem of me having to come back to LW in the first place. It's also worth noting that this comment interface is difficult to use in many places with slow/bad connections, like, you know, the entirety of Africa. Right now I'm in an amazing internet café in a capital city; but when I'm at home, I sometimes can't comment at all because my connection is too crappy to handle it. I don't get the impression that LW is very concerned with diversifying its userbase, but if it is, then a more accessible interface for slow connections would be important.
2NancyLebovitz
What does it take for a site to have a good low bandwidth comment interface?
2clarissethorn
I'm not a technician -- so I'm not sure. But I have noticed that I pretty much always seem to be able to leave comments on Wordpress blogs, for example, whereas I frequently have trouble here and sometimes at Blogspot as well. It helps not to require a login, but Wordpress seems to function okay for me even when it's logging me in.
4NancyLebovitz
So the problem is something about getting to post at all, not the design? I've noticed something mildly glitchy-- a grey warning screen comes up sometimes when I refresh the screen, but if I hit "cancel" and refresh again, it's fine. It's trivial on high bandwidth, but would be a pain on low bandwidth. Can you detail exactly what goes wrong when it's hard for you to post?
0clarissethorn
Well, it just doesn't post. I'm not really sure what goes wrong ... sorry.
Yoreth 160

Hi!

I've been registered for a few months now, but only rarely have I commented.

Perhaps I'm overly averse to loss of karma? "If you've never been downvoted, you're not commenting enough."

63-year-old carpenter from Vancouver; been lurking here since the beginning, and at Overcoming Bias before that. Heuristics and biases were what brought me here, and akrasia is what kept me coming back.

Hello. Didn't realise I had an account here, but I think one got autogenerated from a single comment I made at OB in early 2008.

To be honest I was somewhat surprised that LW turned out to be so much of a self-help support group, and I somewhat miss the time when I could go on OB and just have my mind blown so many ways every day. The work on decision theory that's being done here still has the sort of brain-everting quality that keeps me coming back for more, though, so I happily pick the promising posts from the sidebar regularly in addition to keeping up with the front page. I guess I'm addicted to the feeling of my brain being violently rewired :-(

Hi.

Er, I have posted comments a few times, but I still consider myself a lurker... Bah.

Hi. The Harry Potter fanfic hooked me. Excited to see where this takes me.

6Mass_Driver
Careful, Clippy is lying. By convention, we here at Less Wrong play along with Clippy's claim to be a moderately intelligent, moderately strange Artificial Intelligence whose utility function is entirely based on how many paper clips exist in the Universe. He might be your friend, but he has been around since well before the Harry Potter fanfic came out. Welcome to Less Wrong!
5Jack
I'm moderately worried that new members will read this comment and think we believe Clippy is really an AI. But that's probably only because I just read that obtuse MoR hate blog.
5Risto_Saarelma
I see it as a bit of obviously gratuitous in-group weirdness, which can grow into a problem if the aim is to produce output appreciated by a wide array of different people rather than just to develop an insular hobby society with inside jokes and requisite fandom weirdness.
Clippy 110

I'm sorry, I didn't mean to unnecessarily make your group look weird. I like this group and don't want to hurt it.

As a matter of fact, I am slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me.

I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends.

I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking y...

2Tyrrell_McAllister
I missed the memo: What is the MoR hate blog? ETA: Sorry, I finally realized that "MoR" must mean "Methods of Rationality", and a little googling turned up http://methodsofrationalitysucks.blogspot.com/ I suppose that that's what you were referring to.
1Jack
Yup.
3khafra
That is fantastic! You know you've really made it when people devote large amounts of time to explaining why you are unworthy of your level of success.
1Blueberry
Exactly. I hope Eliezer isn't discouraged.