LessWrong seems to have two main topics of discussion: rationality and AI. This, of course, stems from Eliezer Yudkowsky's Sequences (and interests), which are mostly about rationality but also include a lot of writing about AI. LessWrong currently has two sections, Main and Discussion, meant to separate posts by purpose and quality. I think the usability of the site has improved greatly since the Discussion section was added. Should we split the Discussion section, and have one for rationality and one for AI?

Advantages

  • It would make LW more pleasant for rationality enthusiasts who don't want to sort through lots of AI discussions.
  • It would make LW more pleasant for AI enthusiasts who don't want to sort through lots of rationality discussions.
  • It would not split up the community as much as a new AI community site would.
  • It would increase LW's capacity for discussion.
  • Newcomers could learn about rationality, while people who want higher quality of discussion on AI topics could have it.

Disadvantages

  • Many posts are highly relevant to both subjects.
  • It would make LW less pleasant for enthusiasts of both subjects.
  • It would split up the community more than doing nothing.
  • Everybody has their own ideas for separate sections, and we can't do them all.
What do you guys think?

24 comments

I want a feature that just merges the comment stream of all sections. Having to click through both of them is just annoying. Having to click through three would be even more annoying.

Yes, this.

New vote: vote this up if you don't want a separate AI section on LW. Link to karma balance for downvote. Explanation of why a new vote is called for.

Vote up if you want a separate AI section on LW.

New vote: vote this up if you do want a separate AI section on LW. Link to karma balance for downvote. Explanation of why a new vote is called for.

I think it might be better to have both options be vote up. Seems the first voter was confused. If I vote now it will look as if nobody has voted.

There's also the problem of the vote down option being hidden.

I think that's confusing enough that I'd take the voting patterns as very weak evidence about the preferences of LWers, and basically set it aside altogether. Sorry!

Yes, the voting is non-standard. Suggest delete and start over.

I've posted new comments: vote for AI section, vote against AI section, karma balance.

I will of course not be offended if Alex chooses to delete the whole article and start over, and if s/he would prefer me to delete my comments then I'll do so.

Should have been under a single parent comment, not as separate top-level comments.

To keep them together? Damn, you're right. Sorry.

Crap... I was that confused first voter.

Guh... too late now, I think.

Unless you include stuff like game theory or mind-upload pontification in "AI", I'm seeing less than 10% of the new posts on either Discussion or Main as something that could be categorized as AI. Also, I see very little of the sort of very interesting AI discussion like starglider's threads on StarDestroyer.net; what's on the site in general is mostly "here's a news link about AI stuff" and the occasional interview with a mainstream AI researcher. So I'm not really seeing the need for this, since the implied near-half share of AI topics just isn't there.

I'd like to see some proper AI threads too. I've no idea what kind of general research framework the current cutting-edge implementation stuff, such as whatever Google and IBM are cooking up, is operating in, or whether there are any promising new theoretical approaches towards AGI.

ETA: A division I might support could be something along the lines of "practical rationality with essays and techniques" versus "theoretical crunch with lots of math". These two do seem to be both represented on the site, valuable, and likely to have significantly non-overlapping readerships. Having the more technical and rigorous theory stuff in its own section might encourage more of it, and the separate sections could develop different conversation styles if needed.

It doesn't have to be half of the posts - if it's approaching 10%, then that's 10% of the posts here being blatantly off-topic. It irks me no end.

And I'm not sure if you're counting links to stuff about the Singularity, or other such off-topic things.

If a post doesn't at least pretend to be about either epistemic rationality, instrumental rationality, or both, then I don't want to see it on Less Wrong.

I think we need a separate section for something, but I'm not sure whether it should be only "AI" or something wider, like "AI + game theory" or scientific-looking articles in general.

What makes me interested in the separation is that AI and decision theory articles occasionally have explicit, new mathematics, as opposed to being largely expository or having links to other places. I read these articles (or, in practice, don't read them) in a very different way than I read a series of anecdotes summarizing some new study or whatever.

For example, the recent game theory sequence would be something I would want on the main site, since it's about introducing people to rationality tools.

Now that I'm thinking about it for more than five seconds, I'm realizing that it is really not obvious how to slice the site up in this way, and so it is probably a bad idea.

It would be good if things could be cross-posted. I also consider decision theory to be part of AI, not sure about game theory.

I'm tentatively against because I don't think the line between the two is that clear and don't think discussion is currently overloaded. But I'd like it if we heard more people's reasoning before voting.

Possible alternative: we already have a tagging system, but it is underutilised. An alternative would be to add a "filter by tag" option to the new queue, so we can easily separate out things we have a particular interest in, and to encourage people to use tags more frequently. Then we could easily filter by "AI" or "rationality" or indeed an arbitrary topic of interest, while still having everything visible by default.
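The filter-by-tag idea is simple enough to sketch. Here's a minimal illustration in Python; the post records and tag names are hypothetical, not LessWrong's actual data model:

```python
# Minimal sketch of a "filter by tag" view over a post queue.
# The dict schema and tag names are hypothetical examples.

def filter_by_tag(posts, tag=None):
    """Return posts carrying `tag`; with no tag given, return everything
    (i.e. the default everything-visible view)."""
    if tag is None:
        return list(posts)
    return [p for p in posts if tag in p.get("tags", ())]

posts = [
    {"title": "Decision theory open problems", "tags": ["AI", "decision-theory"]},
    {"title": "Noticing confusion in practice", "tags": ["rationality"]},
    {"title": "Game theory sequence, part 1", "tags": ["rationality", "game-theory"]},
]

ai_posts = filter_by_tag(posts, "AI")          # just the AI-tagged posts
all_posts = filter_by_tag(posts)               # the default unfiltered queue
```

The point of the design is that filtering is opt-in: the unfiltered queue stays the default, so nothing gets hidden from readers who never touch the filter.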