All of JStewart's Comments + Replies

Some recent evidence against the Big Bang

Is this not kosher? The minimum karma requirement seems like an anti-spam and anti-troll measure, with the unfortunate collateral damage of temporarily gating out some potentially good content. This post clearly seems like good content to me, and my suggestion to MazeHatter in the open thread that this deserved its own thread was upvoted.

If that doesn't justify skirting the rule, I can remove the post.

Baughn (9 points, 6y): The point of the rule is to limit the amount of bad content. This isn't bad content, so working around the rule seems justified. If a rule and the stated reason for the rule conflict... the rule sometimes wins, but only for practical reasons that don't seem to apply here.
Open thread Jan. 5-11, 2015

I think you should post this as its own thread in Discussion.

[anonymous] (2 points, 6y): If that sounds good, please, it'd be great if you could do it. I don't have the status. The link for the dwarf galaxy article is wrong; it should be: http://www.natureworldnews.com/articles/7528/20140611/galaxy-formation-theories-undermined-dwarf-galaxies.htm Thanks.
Open thread, Nov. 24 - Nov. 30, 2014

This has been proposed before, and on LW is usually referred to as "Oracle AI". There's an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well.

See also Tool AI, from the discussions between Holden Karnofsky and LW.

Capla (1 point, 6y): I was just reading through the Eliezer article. I'm not sure I understand. Is he saying that my computer actually does have goals? Isn't there a difference between simple cause and effect and an optimization process that aims at some specific state?
Rationality Quotes May 2013

Interesting. I wonder to what extent this corrects for people's risk-aversion. Success is evidence against the riskiness of the action.

Circular Preferences Don't Lead To Getting Money Pumped

Having circular preferences is incoherent, and being vulnerable to a money pump is a consequence of that.

If I knew that, once I had 0.95Y, I would trade it for (0.95^2)Z, and then trade that for (0.95^3)X, then in effect I'd be trading 1X for (0.95^3)X, which I'm obviously not going to do.

This means that you won't, in fact, trade your X for .95Y. That in turn means that you do not actually value X at .9Y, and so the initially stated exchange rates are meaningless (or rather, they don't reflect your true preferences).

Your strategy requires you to refuse all tr... (read more)
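The arithmetic of the money pump above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the original discussion; the 0.95 exchange rate and the X → Y → Z → X cycle follow the numbers quoted in the parent comment.

```python
# Hypothetical sketch of a money pump: an agent with circular preferences
# trades X -> Y -> Z -> X, accepting a 5% loss at each step, so every
# full cycle multiplies its holdings of X by 0.95 ** 3.

RATE = 0.95  # units received per unit traded at each step


def pump(holdings_x: float, cycles: int) -> float:
    """Run full X -> Y -> Z -> X trade cycles and return the final X."""
    for _ in range(cycles):
        y = holdings_x * RATE  # trade all X for Y
        z = y * RATE           # trade all Y for Z
        holdings_x = z * RATE  # trade all Z back into X
    return holdings_x


print(round(pump(1.0, 1), 6))  # 0.857375, i.e. 0.95 ** 3
print(round(pump(1.0, 5), 4))  # 0.4633, i.e. roughly 0.95 ** 15
```

This is exactly the situation the comment describes: an agent that would actually execute every trade in the cycle ends up exchanging 1X for (0.95^3)X per loop, so refusing some trade in the cycle is the only coherent option.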

Antisuji (2 points, 9y): This is only true if your definition of value compels you to trade X for Y if you value Y more than X in the absence of external transaction costs. A simpler and more clearly symmetric definition would be: if given a choice between X and Y, you value Y more than X if you choose Y, and vice versa. An otherwise rational agent with hard-coded pairwise preferences as in the OP would detect the cycle and adjust their willingness to trade between X, Y, and Z on an ad hoc basis, perhaps as an implicit transaction cost calculated to match expected apples-to-apples losses from future trades.
The noncentral fallacy - the worst argument in the world?

Judging from the comments this is receiving on Hacker News, this post is a mindkiller. HN is an audience more friendly to LW ideas than most, so this is a bad sign. I liked it, but unfortunately it's probably unsuitable for general consumption.

I know we've debated the "no politics" norm on LW many times, but I think a distinction should be made when it comes to the target audience of a post. In posts aimed at contributing to "raising the sanity waterline", I think we're shooting ourselves in the foot by invoking politics.

Bruno_Coelho (3 points, 9y): Calling something the 'worst' before the conversation starts is a bad sign.

Reading that HN thread, the problem appears to be a troll (who also showed up on Yvain's original blog post).

A Primer On Risks From AI

I like the combination of conciseness and thoroughness you've achieved with this.

There are a couple of specific parts I'll quibble about:

Therefore the next logical step is to use science to figure out how to replace humans by a better version of themselves, artificial general intelligence.

"The Automation of Science" section seems weaker to me than the others, perhaps even superfluous. I think the line I've quoted is the crux of the problem; I highly doubt that the development of AGI will be driven by any such motivations.

Will we be able to

... (read more)
jmmcd (0 points, 9y): Agreed -- AGI will probably not be developed with the aim of improving science. I also want to quibble about this: Since most readers don't want to be replaced, at least in one interpretation of that term, this line sticks in the throat and breaks the flow. The natural response is something like "logical? According to whose goals?"
A Primer On Risks From AI

Out of curiosity, what are your current thoughts on the arguments you've laid out here?

XiXiDu (3 points, 9y): Strong enough to justify the existence of an organisation like SIAI. Everything else is a matter of expected utility calculations. Which I am not able to handle. Not given my current education and not given my psyche. I know that what I am saying is incredibly repugnant to some people here. I see no flaws. But I can't help but flinch away from taking all those ideas seriously. Although I am currently trying hard. I suppose the post above is a baby-step. This video (http://youtu.be/lC4FnfNKwUo) pretty much is the window to my soul. You see how something can be completely rational yet feel ridiculous? Less Wrong opens up the terrifying vistas of reality that I tried to flee from since a young age. -- The Call of Cthulhu I felt compelled to try and see if I can make it all vanish.
What epistemic hygiene norms should there be?

I agree. I've noticed an especially strong tendency to premature generalization (including in myself) in response to people asking for advice. Tell people what your experiences were, not (just) the general conclusions you drew from them.

A Problem About Bargaining and Logical Uncertainty

Is Omega even necessary to this problem?

I would consider transferring control to staply if and only if I were sure that staply would make the same decision were our positions reversed (in this way it's reminiscent of the prisoner's dilemma). If I were so convinced, then shouldn't I consider staply's argument even in a situation without Omega?

If staply is in fact using the same decision algorithms I am, then he shouldn't even have to voice the offer. I should arrive at the conclusion that he should control the universe as soon as I find out that it can prod... (read more)

thescoundrel (0 points, 9y): Perhaps I am missing something, but if my utility function is based on paper clips, how do I ever arrive at the conclusion that staply should be in charge? I get no utility from it, unless my utility function places an even higher value on letting entities whose utility functions produce a larger output than mine take precedence over my own utility on paper clips.
February 2012 Media Thread

I dunno, I think it is. It took me several hours of reflection to realize that it could be framed in those terms. The show didn't do any breaking.

February 2012 Media Thread

Yes, thanks. I wanted to use strikethrough but a) I couldn't figure out how to do it in LW's markdown and b) it wouldn't work anyway if you copy/paste to rot13.com like I do.

MarkusRamikin (2 points, 9y): Yeah, I wish we had strikethrough. Then I could get downvoted for abusing it for sarcasm purposes. sighs wistfully
February 2012 Media Thread

I mostly agree with you. In particular I really liked that Znqbxn'f jvfu jrag fb sne nf gb erjevgr gur havirefr. Gur fbhepr bs ure rzbgvbaf orvat sbe gur zntvpny tveyf naq gur pehrygl bs gur onetnva gurl znqr, V jnf npghnyyl n yvggyr jbeevrq va gur yrnq-hc gb gur svanyr gung ure jvfu jbhyqa'g or zbzragbhf rabhtu.

Ng gur fnzr gvzr, gubhtu gur jvfu raqrq hc ovt rabhtu gb or n fngvfslvat raq, V guvax vg'f cerggl rnfl gb jbaqre jul fur pbhyqa'g tb shegure. Gur arj havirefr vf arneyl vqragvpny gb gur byq bar, evtug qbja gb vaqvivqhny crbcyr. Gur zntvpny tveyf ab... (read more)

MarkusRamikin (3 points, 8y): Good post. Vs gur fgeratgu bs jvfurf vf eryngrq gb fgebat srryvatf, gura vg'f dhvgr cynhfvoyr gb zr gung Znqbxn'f jvfu ernyyl jnf gur orfg fur pbhyq qb rira nffhzvat nobir abezny engvbanyvgl. Orpnhfr gung znxrf vg n ceboyrz gung erdhverf zber guna /n zrer pbeerpg qrpvfvba/; lbhe vagreany fgngr znggref gbb. Naq Znqbxn vf uhzna, fb svkvat gur jbeyq va bgure jnlf jbhyq erdhver ure gb qrsrng fpbcr vafrafvgvivgl. V unir ab ernfba gb qbhog fur xarj, yvxr jr nyy qb, gung gurer ner na njshy ybg bs bgure ubeevoyr guvatf gung pbhyq fgnaq gb punatr, naq orvat n xvaq tvey fur pnerq nobhg gung, ohg gurer'f bayl fb zhpu lbh pna jbex lbhefrys hc nobhg nyy gung ab znggre ubj zhpu lbh gel gb fuhg hc naq zhygvcyl. Ohg jura vg pbzrf gb jvgpurf, Xlhorl ernyyl fubg uvzfrys va gur sbbg, orpnhfr ur /fubjrq ure/ nyy gubfr zntvpny tveyf sebz nyy npebff uvfgbel. Fur qvqa'g unir gb fuhg hc naq zhygvcyl - fur jnf cenpgvpnyyl zvaqencrq jvgu gur xabjyrqtr. Ba gbc bs gung, zntvpny tvey ceboyrzf jnf jung fur unq orra yvivat gur cnfg srj jrrxf, naq bar rknzcyr jnf evtug va sebag bs ure snpr (n arneyl oebxra Ubzhen). Ohg zbfgyl V guvax vg jnf Xlhorl'f yvggyr uvfgbel yrffba. Fb fur unq ab ceboyrz srryvat gehyl cnffvbangryl nobhg gung ceboyrz, naq pbhyq hfr ure jvfu gb nqqerff vg. V svaq gung n zber fngvfsnpgbel jnl gb guvax nobhg vg guna nalguvat vaibyivat gur sbhegu jnyy. :)
Anubhav (1 point, 9y): Gung vf abg n fhogyr nffnhyg ba gur sbhegu jnyy.
[anonymous] (0 points, 9y): .... Sometimes, a blog post works better.
pedanterrific (0 points, 9y): What is meant by "ylvat onfgneq^U^U^U^U^U^U"? Specifically the bit with the ^Us.
February 2012 Media Thread

I nearly posted exactly this earlier today.

It's an excellent show, though don't expect too much rationality. Madoka is no HP:MoR, but since there is very little rationality-relevant content in anime it does stand out.

For me it was a case of two independent interests unexpectedly having some crossover. As a fan of SHAFT (the animation studio) and mahou shoujo in general, it was a given I was going to watch Madoka. Then fhcreuhzna vagryyvtraprf naq vasbezngvba-nflzzrgevp artbgvngvba?

In a classic mahou shoujo setup like this, with magical powers and wish-gra... (read more)

Leonhart (4 points, 9y): Actually, I disagree; Madoka's wish was pretty optimal. The cosmic horror in PMMM was abg gur Vaphongbef ohg gur snpg gung gur Znqbxn havirefr jnf aba-uhzna-bcgvzvfnoyr - gurve ynjf bs gurezbqlanzvpf nyybjrq ragebcl erirefny, ohg rafherq gung ubcr naq qrfcnve onynaprq gb mreb. Sbe n uhzna inyhr flfgrz, ab birenyy vzcebirzrag pbhyq rire unccra. Madoka explicitly oebxr guvf naq erjebgr gur havirefr gb fbzrguvat gung uhznaf naq Vaphongbef pbhyq obgu bcgvzvfr; vg'f abg pyrne gung fur pbhyq unir qbar nalguvat orggre. Vg'f dhvgr fvzvyne gb gur qvssrerapr orgjrra Abefr naq Puevfgvna zlgu gung Gbyxvra xrcg tbvat ba nobhg. Rcvfbqr 12 vf cher rhpngnfgebcur. I'm not sure why you're positing erfgevpgvbaf ba Xlhorl'f cbjre. Ur qvq rirelguvat ur pynvzrq gb or noyr gb qb, naq vg znxrf frafr sbe uvz gb tenag nal jvfu gung tvirf na raretl cebsvg. Am I forgetting something?
Video Q&A with Singularity Institute Executive Director

As one of the 83.5%, I wish to point out that you're misinterpreting the results of the poll. The question was: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" This is not the same as "unfriendly AI is the most worrisome existential risk".

I think that unfriendly AI is the most likely existential risk to wipe out humanity. But I think that an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I... (read more)

2011 Less Wrong Census / Survey

I just took the survey. I was pretty sure I remembered the decade of Newton's book, but I was gambling on the century and I lost.

I think quibbles over definitions and wording of most of the probability questions would change my answers by up to a couple of orders of magnitude.

Lastly, I really wanted some way to specify that I thought several xrisks were much more likely than the rest (for example, [nuclear weapons, engineered pandemic] >> others).

mwengler (1 point, 10y): I nailed the 2nd edition date without meaning to.
Rhetoric for the Good

My central objection is that this feels like a very un-LessWrongish way to approach a problem. A grab bag of unrelated and unsourced advice is what I might expect to see on the average blog.

Not only is there basically no analysis of what we're trying to do and why, but the advice is a mixed bag. If one entry on the list completely dominates most of the others in terms of effectiveness (and is a prerequisite to putting the others to good use), I don't expect it to be presented as just another member of the list. A few other entries on the list I consider t... (read more)

thomblake (3 points, 10y): Allow me to introduce you to the Sequences, which have been called out many times for being unsourced, rambling, and pointless, and yet they kept chugging away.
Rhetoric for the Good

Edit: Grouchymusicologist has already covered silly grammar-nazism, passives, and Strunk and White, complete with the Languagelog links I was looking for.

25. Write like you talk. When possible, use small, old, Germanic words.

I think this one should be deleted. The first sentence of it is wrong as written, but the idea behind it is expressed clearly and sufficiently in #26 anyway. People do not talk in grammatical, complete sentences.

As for the second half, do you really look up etymologies as you write? I have only the vaguest sense of the origins of ... (read more)

NancyLebovitz (9 points, 10y): You don't need to look up etymology to have a feeling for the sources of words. In general, the Germanic words are shorter, seem less academic, and have a lower proportion of vowels. Of course, it's possible to overdo it (http://groups.google.com/group/alt.language.artificial/msg/69250bac6c7cbaff).
Rhetoric for the Good

I agree with your conclusion (this is a worthwhile pursuit), but I have some qualms.

There are a couple of general points that I think really need to be addressed before most of the individual points on this list can be considered seriously:

  • Following a list of prescriptions and proscriptions is a really poor way to learn any complex skill. A bad writer who earnestly tries to follow all the advice on this list will almost certainly still be bad at writing. I think the absolute best, most important advice to give to an aspiring writer is to write. A lot.

... (read more)
thomblake (0 points, 10y): I believe that was one of the rules on the list.
Welcome to Less Wrong! (2010-2011)

That was an awesome introduction post. I like the way you think.

Q: Experiment on blaming the one you hurt?

Some googling led me to the Wikipedia article on cognitive dissonance (this link should get you to the right spot on the page).

Wikipedia's citation for this is: Tavris, Carol; Elliot Aronson (2008). Mistakes were made (but not by me). Pinter and Martin. pp. 26–29. This book's first 55 pages are viewable on Google Books. I'll attempt to link directly to the relevant section here but it's an ugly URL so I'm not sure it'll work.

Citation 17 looks like just the thing you're looking for, but the viewable portion of their citations section cuts off just too ear... (read more)

Aieee! The stupid! it burns!

The original was Eliezer himself, in How to Seem (and Be) Deep. I'm more fond of TheOtherDave's analogy, though, since I think the baseball bat analogy suffers from one weakness: it draws a metaphorical parallel in which death (whose badness is the very point in dispute) is replaced by something that's uncontroversially bad. Sometimes you can't get any farther than this, since this sets off some people's BS detectors (and to be honest I think the heuristic they're using to call foul on this is a decent one).

Even if you can get them to consider the true payload of the argum... (read more)

TheOtherDave (0 points, 10y): (nods) Agreed; I don't think I was saying anything Eliezer wasn't, just building a slightly different intuition pump (http://en.wikipedia.org/wiki/Intuition_pump). That said, the precise construction of the intuition pump can matter a lot for rhetorical purposes. Mainstream culture entangles two separate ideas when it comes to death: first, that an agent's choices are more subject to skepticism than the consistently applied ground rules of existence (A1) and second, that death is better than life (A2). A1 is a lot easier to support than A2, so in any scenario where life-extension is an agent's choice the arguments against life-extension will tend to rest heavily on A1. Setting up a scenario where A1 and A2 point in different directions -- where life-extension just happens, and death is a deliberate choice -- kicks that particular leg out from under the argument, and forces people to actually defend A2. (Which, to be fair, some people will proceed to do... but others will balk. And there are A3..An's that I'm ignoring here.) The "I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no." argument does something similar: it also makes life the default, and death a choice. In some ways it's an even better pump: my version still has an agent responsible for life-extension, even if it's accidental. OTOH, in some ways it's worse: telling a story about how the immortal person got that way makes the narrative easier to swallow. (Incidentally, this suggests that a revision involving a large-scale mutation rather than a rogue scientist might work even better, though the connotations of "mutation" impose their own difficulties.)
"Behind the Power Curve" by Simon Funk

I don't mean to rain on cousin_it's parade at all here, but I have to put in an additional plug for "After Life." Even if you didn't really find the blog post especially interesting, if you have any affinity for science fiction I really think "After Life" is worth a look. I recommend it with no reservations.

It's short, it's free, and it's the best exploration I've seen of some very LessWrong ideas. The premise of the story is based on recursive self-modification and uploading, and it's entertaining as well as interesting.

nazgulnarsil (2 points, 10y): Um... if it's the best you've seen dealing with ems, you need to read Permutation City immediately. Also Greg Egan's online short stories (try "Crystal Nights", "Dark Integers", and "Singleton"): http://gregegan.customer.netspace.net.au/BIBLIOGRAPHY/Online.html#Stories You might also like the five-part Passages story: http://www.kuro5hin.org/story/2002/12/21/17846/757 And Life Artificial: http://lifeartificial.com/
Ask and Guess

(There actually was a method for getting it, but it was an Advanced Guess Culture technique, not readily taught in one session.)

I'd love an explanation of the technique.

Relsqui (9 points, 10y): FWIW, among my friends--whom I might describe as "polite askers" or "assertive guessers"--it's common to ask "does anybody want to split this with me?" That way, you're both asking for what you want (more of the thing) and making an offer in a guess-culture-compatible way. It's easy for other people to accept, because now by taking it they're not preventing you from having it. If no one does, you can be reasonably confident no one else actually wanted it. A variant on the same thing is: "Would anyone else like this?" which is a shorter version of the offering ritual that TheOtherDave described. Because it's skipping most of the ceremony, it's much askier, but it's still not polite to say "yes" and take the thing, because you'd be taking it out of the hands of someone who clearly wanted it. (An exception might be made if you hadn't actually had any of the thing yet, and said so.) But you can say "I'll split it with you," achieving the same result as the above. Of course, this only works for plausibly divisible things. I've had a friend laugh at me--good-naturedly--for offering to split something bite-sized. Surprise, surprise: he's much askier, I'm much guessier.

Ferd's method works, assuming you can actually manage to help with the dishes (the trick to that is to just start doing it, rather than ask... if you ask, the host is obligated to say "no, of course not," since it is understood that you don't actually want to help with the dishes), but the one I had in mind is you take a serving implement, pick up the last piece of chicken, catch the eye of someone else at the table, and offer it to them. They, of course, are obligated to say "No, you take it" (as are you, if someone offers it to you). ... (read more)

Volunteer to help with the dishes, then ask whether you can take away the plate the chicken is sitting on. If nobody else claims it, it's yours.

Clear another plate before you touch the one with the chicken on it. Clear something else after. Clear your plate when you're done eating.

Don't do more work than your hosts. You're being helpful, not trying to work off the price of dinner.

Hi - I'm new here - some questions

Hello, and welcome.

You are correct in your observation that this section does not have a high rate of new posts. I'm not sure, but I think you are likely correct in your guess that a flood of new posts would not be appreciated. LessWrong doesn't have a very traditional forum structure, and I'm not sure that a place exists on this site yet that quite fits your posting style. I'm commenting here in part because that puts you in the same boat as me - my first comment on this site was the opinion that the avenues of participation in LW don't seem to fit how I ... (read more)

InquilineKea (0 points, 10y): Hello, thanks very much for your post! I really appreciate it. "my first comment on this site was the opinion that the avenues of participation in LW don't seem to fit how I like to express myself, and that probably other potential users were in the same situation. I think LW doesn't lend itself to conversation or stepwise refinement of ideas by a group, which is my best guess for how I would like to really engage with the ideas discussed here" Ah yes, I definitely agree about that. Hence why I (and many others) am hesitant to post (the other thing is that no one seems to post in threads more than a month old, so there isn't much I can post on). I know someone suggested the idea of subreddits some time ago, but we instead went with tags. But that just means that all the threads will go on a particular front page. "My only concern is that while your goal is good, the methods perhaps leave something to be desired. It may well be the fastest way for you to learn, but putting the burden of critique of a large flow of ideas onto others can be something of an imposition. Time certainly is a valuable resource, as you state, and remember that other people value their time too. What I hope LW can do for you is read and critique just as you wish, but that also you learn here some habits and skills of thinking that allow you to do more and more of this sort of critique of your own ideas as you have them. My time at LW (and OB, and many other places on the net) has been spent largely lurking, in a project of refining my own ability to reason and critique effectively and correctly, and I hope that it works out for you that way too." Okay, very good points there. Yeah, I generally self-critique my own ideas (and frequently edit them without input).
My main idea in putting everything online, in any case, was that someone (with time) could probably find me and email me (I've emailed other people who ended up not replying to my emails, so I ended up making everything public).
If a tree falls on Sleeping Beauty...

I was going more for the point that some ambiguous questions about probabilities are disguised less-ambiguous questions about payoffs

To provide my feedback data point in the opposite direction, I found this to be well-expressed in the OP.

prase (3 points, 11y): Same for me.
Harry Potter and the Methods of Rationality discussion thread, part 4

I have not read the original Harry Potter series. I first learned that Quirrell was Voldemort when, after finishing the 49 chapters of MoR out at that point, I followed a link from LW to the collected author's notes and read those.

I think that for those who have not read the source material (though there may not be many of us), it is basically impossible to intuit that Quirrell is Voldemort from the body of the fanfic so far.

That said, I don't feel like I missed out in any way and don't see why it necessarily needs to be any more explicit until the inevita... (read more)

Vladimir_Nesov (4 points, 11y): Eliezer planted lots of clues about many facts that are never explicitly revealed, in such a way that noticing correct hypotheses is sufficient to confirm them upon observing enough of those little clues. Now, for some facts, it could be difficult to even locate them, but Quirrell=Voldemort seems to be a good hypothesis to entertain, even if it's not apparently confirmed from any single passage, and it does get lots of evidence if you know to look for it.
Harry Potter and the Methods of Rationality discussion thread, part 4

I have a question about chapter 49 and was wondering if anyone else had a similar reaction. Assuming Quirrell is not lying/wrong, and Voldemort did kill Slytherin's Monster, then my first thought was how unlikely that Slytherin's Monster should have even survived long enough to make it to 1943. No prior Heir of Slytherin had had the same idea? Perhaps no prior Heir of Slytherin had been strong enough to defeat Slytherin's Monster? No prior Heir had been ruthless enough?

Maybe this constitutes weak evidence for the theory that Quirrell is lying.

AdShea (6 points, 11y): It also could be that the Basilisk has some sort of genetic memory (or DNA-based cognition a la the SuperHappies!) such that the monster in the book is not the original monster but rather a great-great-grandwhelp of the original monster. This would allow any heirs to kill their specific monster while the line (and thus the memories) is preserved. (This is of course all predicated on Slytherin realizing that his descendants may be nasty enough to keep knowledge from others by any means possible.)
Open Thread June 2010, Part 2

Isn't your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.

Clippy (1 point, 11y): No, I'm criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Attention Lurkers: Please say hi

Reddit-style posting is basically the same format as comment threads here, it's just a little easier to see the threading. One thing that feels awkward using threaded comments is conversation, and people's attempts to converse in comment threads is probably part of why comment threads balloon to the size they do. That's one area that chat/IRC can fill in well.

Another issue is that top-level posts have a feeling of permanence to them. It's like publishing something. I'd rather start with an idea and be able to discuss it and shape it. Top-level posts seem l... (read more)

AdeleneDawner (3 points, 11y): Google Wave is decent for this - it's wikilike in that the document at hand can be edited by any participant, and bloglike in that comments (including threaded comments) can be added underneath the starting blip. There's a way to set it up so that members of a Google group can be given access to a wave automatically, which would be convenient. I have a few invitations left for Wave, if anyone would like to try it. I'm not interested in taking charge of a Google group, though.
Attention Lurkers: Please say hi

Hi.

edit: to add some potentially useful information, I think the biggest reason I haven't participated is that I feel uncomfortable with the existing ways of contributing (solely, as I understand it, top-level posts and comments on those posts). I know there has been discussion on LW before on potentially adding forums, chat, or other methods of conversing. Consider me a data point in favor of opening up more channels of communication. In my case I really think having a LW IRC would help.

[anonymous] (0 points, 11y): http://webchat.freenode.net/?channels=lesswrong# There it is. (At least, that is how I know to access it...)
Peter_de_Blanc (6 points, 11y): This made me think of how cool a LessWrong MOO (http://en.wikipedia.org/wiki/MOO) would be. I went and looked at some Python-based MOOs, but they don't seem very usable. I'd guess that the LambdaMOO server is still the best, but the programming language is pretty bad compared to Python.
Kevin (4 points, 11y): #lesswrong on Freenode! And a local Less Wrong subreddit is coming, eventually...
Airedale (8 points, 11y): Hi, I think explanations for lurking, if people feel comfortable giving them, may indeed be helpful. I also felt uncomfortable about posting to LW for a long time and still do to some extent, even after spending a couple months at SIAI as a visiting fellow. Part of the problem is also lack of time; I feel guilty posting on a thread if I haven't read the whole thread of comments, and, especially in the past, almost never had time to read the thread and post in a timely fashion. People tell me that lots of people here post without reading all the comments on a thread, but (except for some of the particularly unwieldy and long-running threads), I can't bring myself to do it. I agree that a forum or Sub-Reddit (http://www.reddit.com/r/LessWrong) as announced by TomMcCabe here (http://lesswrong.com/lw/20z/announcing_the_less_wrong_subreddit/) might encourage broader participation, if they were somewhat widely used without too significant a drop in quality. But the concerns expressed in various comments about spreading out the conversation also seem valid.