1-3 months doesn't seem so bad as a timeline. While it's important not to let the perfect be the enemy of the good (since projects like this can easily turn into a boondoggle where everyone quibbles endlessly about what the end-product should look like), I think it's also worth a little bit of up-front effort to create something that we can improve upon later, rather than getting stuck with a mediocre solution permanently. (I imagine it's difficult to migrate a social network to a new platform once it's already gotten off the ground, the more so the more people have joined.)
I would also like to register my opposition to using Facebook. While it might seem convenient in the short term, it makes the community more fragile by adding a centralized point of failure that's unaccountable to any of its members. Communicating on LessWrong.com has the virtue that the platform is owned by the same community it serves.
It seems to me that there's a tension at the heart of defining the "purpose" of meetups. On the one hand, the community aspect is one of the most valuable things one can get out of them - I love that I can visit dozens of cities across the US, go to a Less Wrong meetup, and instantly have stuff to talk about. On the other hand, a community cannot exist solely for its own sake. Any individual's interest in participating will naturally fluctuate over time, and if everyone quits the moment their interest touches zero, then nobody will ever feel like it's worth investing in the community's long-term health.
Personally, I do have a sense that going to meetups matters, in that it helps (however marginally) to raise the sanity waterline in one's local community, and to move important conversations about x-risk and the future of humanity into the mainstream. I myself was motivated to dive into Less Wrong again, after a hiatus of many years, by finding a lively meetup group that was discussing these ideas regularly.
In any case I think that the question of "why meetups matter" is something that we're all collectively trying to figure out over time. I don't claim to know the answer right now.
I do, however, have some concern about creating a "monoculture" among the various sub-groups. It's good that we have a wide variety of intellectual interests, ways-of-running-meetups, etc., because this allows for mistakes to be corrected and innovations to be discovered. If we are all given a directive from on high saying "We are going to mobilize all the resources of the Rationality Community towards goal X, which we will achieve by strategy Y," then it might at first seem like a lot of stuff is getting done. But what if strategy Y is ineffective, or goal X is a bad goal? Then we would have ruined our chance to discover our mistake until it was too late. This is especially important when the goals of the community are so ill-defined, as is the case now.
Of course, in order to reap these benefits of having a diverse community, a prerequisite is that there be any communication at all between groups. So, the suggestion of having meetups write up blog posts for public consumption seems like a good one. But I don't think the groups should be told which topics they must discuss, because they might be interested in something else that nobody else would've thought of. Perhaps it's enough to provide a list of topics that any meetup group can draw from if they can't think of something. And maybe, after one group publishes a writeup, another group might be inspired to discuss the same topic later and submit their own writeup in response.
 Or, more realistically, a persuasive message to the effect of "All the cool kids are doing Z and you're going to feel left out if you don't," which can feel like a compulsory directive because of Schelling points, etc.
 Caveat: The mood of a conversation is likely to change dramatically if it's known that someone is taking notes that will be posted later, since then one is not speaking merely to those in attendance, but effectively to an indefinitely large audience of all LessWrong readers. So, I would recommend that meetups have a mixture of on- and off-the-record conversations, with a clear signal of which norm is in effect at any given time.
What's the relation between religion and morality? I drew up a table to compare the two. This shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells corresponding to the greatest percentage across the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than any other religious group).
Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys" and vice-versa.
(I didn't do any statistical analysis, so be careful with the low-population groups.)
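The two-way normalization and highlighting described above can be sketched in pandas. The survey data itself isn't reproduced here, so the responses below are made up purely for illustration; only the mechanics (normalize by row, normalize by column, mark the max across the non-normalized direction, intersect) reflect the table-building procedure:

```python
import pandas as pd

# Hypothetical survey responses, for illustration only (not the real data).
df = pd.DataFrame({
    "religion": ["agnostic", "agnostic", "agnostic", "atheist",
                 "atheist", "christian", "christian", "christian"],
    "morality": ["no such thing", "no such thing", "realism", "realism",
                 "anti-realism", "realism", "realism", "anti-realism"],
})

# Absolute numbers.
counts = pd.crosstab(df["religion"], df["morality"])

# Normalize by religion: each row sums to 1.
by_religion = pd.crosstab(df["religion"], df["morality"], normalize="index")
# Normalize by morality: each column sums to 1.
by_morality = pd.crosstab(df["religion"], df["morality"], normalize="columns")

# Highlight the greatest percentage across the direction that was NOT
# normalized: in by_religion, the largest value down each column (e.g. the
# religion most likely to pick a given morality); in by_morality, the
# largest value along each row.
hi_religion = by_religion.eq(by_religion.max(axis=0), axis="columns")
hi_morality = by_morality.eq(by_morality.max(axis=1), axis="index")

# Cells highlighted both ways: "Xs are more likely to be Ys" and vice versa.
both_ways = hi_religion & hi_morality
print(both_ways)
```

With real data, a chi-squared test or similar would be needed before reading much into the small groups, per the caveat above.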
Would it be correct to say that, insofar as you would hope that the one person would be willing to sacrifice his/her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly sacrifice yourself to the death penalty (or whatever penalty there is for murder) for the same cause?
I think I may have artificially induced an Ugh Field in myself.
A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.
Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."
Now that the week's over, I don't even want to think about X any more. It just feels too weird.
And maybe that's a good thing.
I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on.
(What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)
But apparently it still wasn't enough to keep them together...
Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.
You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?
I am humbled by how poorly my own personal knowledge would fare.
Is there any philosophy worth reading?
As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.
For example: I've heard vague rumors that G. W. F. Hegel concluded that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proved" that God exists. Now, whether or not Hegel or Descartes had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.
However, at the same time I'm concerned that this heuristic leads me to read only things that reinforce the beliefs I already have. And there's little point in seeking information if it doesn't change your beliefs.
What purpose philosophy serves is a complicated question, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.