richardbatty
The core of my argument is: try to select as much as possible on what you care about (ability and desire to contribute to and learn from LessWrong 2.0) and as little as possible on what's not so important (e.g. whether they get references to HPMOR). And do testing to work out how best to achieve this.
By 'intellectual community' I didn't mean 'high-status subculture'; I was trying to get across the idea of a community that selects on people's ability to make intellectual contributions, rather than their fit with a culture. Science is somewhat like this, although as you say there is a culture of academic science which makes it more...
"From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen."
I think a good argument against my position is that projects need to focus quite narrowly, and it makes sense to focus on the existing community given that it has already produced good stuff.
Hopefully that's the justification the project leaders have in mind, rather than their focusing on the current rationality community because they think there aren't many people outside it who could make valuable contributions.
"I think communities form because people discover they share a desire"
I agree with this, but would add that it's possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don't like.
"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"
That's something I'd like to know. But I think it's important for the rationality community to try to serve these kinds of people, both because they are important for the community's goals and because they will probably...
You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building LessWrong 2.0 for the current rationality community, rather than thinking about what kinds of people you want contributing to it and learning from it, and building it for them. So it seems important to do some user interviews with people outside the community whom you'd like to join it.
On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning...
Have you done user interviews and testing with people whom it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking of people who are important for existential risk and/or rationality, such as psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.
You should just test this empirically, but here are some vague ideas for how you could increase the site's credibility with these people:
What about asking your audience questions?
For example, you could ask questions:
* Seeking criticism, such as "I think section x is the weakest part, what are some alternative arguments?"
* Promoting understanding, such as "Can you think of 2 more examples of <concept I just introduced>?"
* Stimulating research, such as "I think this model can be applied to area y, does anyone have suggestions for how to do this?"
This might help get readers out of passive consumption mode and into thinking about something they could comment on. It would also make the writing more useful.