I am writing to propose an online discussion of global risk within the discussion section of Less Wrong. We might call this discussion "More Safe". In the future it could become a site where anyone interested can discuss global risks, and possibly aggregate all existing information about them. The idea comes from discussions I had at the Singularity Summit with Anna Solomon, Seth Baum, and others.
I propose labeling such discussions "More Safe": less wrong means more safe, and fewer mistakes mean fewer catastrophes.
At Seth's suggestion, we should be careful to follow safety guidelines in such discussions. For example, no technical details should be posted about topics that could be exploited by potential terrorists, especially in biology. The point of the discussion is to reduce risk, not to hold open discussion of risky ideas.
Here are some further thoughts on my idea:
Intelligence appeared as an instrument of adaptation that led to the longer survival of an individual or a small group. Unfortunately, it did not evolve as an instrument for the survival of technological civilizations, so we have to somehow update our intelligence. One way to do this is to reduce personal cognitive biases.
Another way is to make our intelligence collective. Collective intelligence is more effective at finding errors and reaching equilibrium; democracy and free markets are examples. Several people and organizations have dedicated themselves to preventing existential risks.
But we do not see a place that is accepted as the main discussion venue for existential risks.
The Lifeboat Foundation has a mailing list and a blog, but its themes are not strictly about existential risks (there is a lot about star travel), and it has no open forum.
The Future of Humanity Institute has an excellent collection of articles and a book, but it is not a place where people can meet online.
Less Wrong was not specifically dedicated to existential risks, and many risks fall outside its main theme (nuclear, climate, and so on).
The Immortality Institute has a subforum about global risks, but it is not a professional discussion venue.
J. Hughes runs a mailing list on existential risks [x-risk].
The popular blog "Extinction Protocol" is full of fear-mongering and focuses mostly on natural disasters such as earthquakes and floods.
There are several small organizations created by a single person and limited to a small website that nobody reads.
So the efforts of different researchers are scattered and lost in the noise.
In my view, collective intelligence as an instrument should consist of the following parts:
1. An openly structured forum in which everyone can post, but with a strict moderation policy to prevent general discussion of Obama, poverty, and other themes that are interesting but not related to existential risks. It should have several levels of access and several levels of proof: strict science, hypotheses, and unprovable claims. I think such a forum should be all-inclusive, but Nibiru-style material should be moved to a separate section for debunking.
A good example of such an organization is the site flutrackers.com, which is devoted to flu.
2. A wiki-based knowledge base.
3. A small and effective board of experts who genuinely take responsibility for the content (but with no paid staff and no fundraising problems). There should also be no head; otherwise it will not be an effective collective intelligence. All work on the site should be done by volunteers.
4. An open, complete library of all the literature on existential risks.
5. The site should be friendly to other risk sites and should cross-link interesting posts there.
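The moderation scheme in point 1, on-topic filtering plus separate sections by level of proof, can be sketched as a small data model. This is a minimal Python illustration only; the names `Post`, `EvidenceLevel`, `accept`, and `section` are hypothetical and do not refer to any existing forum software:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceLevel(Enum):
    """The three levels of proof proposed for the forum."""
    STRICT_SCIENCE = 1
    HYPOTHESIS = 2
    UNPROVABLE_CLAIM = 3

@dataclass
class Post:
    title: str
    level: EvidenceLevel
    on_topic: bool            # set by moderators: is it about existential risk?
    debunk_section: bool = False  # Nibiru-style claims are moved here

def accept(post: Post) -> bool:
    """Strict moderation: off-topic posts are rejected outright."""
    return post.on_topic

def section(post: Post) -> str:
    """Route an accepted post to the debunking section or to the
    section matching its declared level of proof."""
    if post.debunk_section:
        return "debunking"
    return post.level.name.lower()
```

For example, a peer-reviewed pandemic analysis would land in `strict_science`, while a Nibiru collision claim would be routed to `debunking` rather than deleted, matching the all-inclusive-but-separated policy above.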
In the future the site will become a database of knowledge on global risks.
Now I am going to start a sequence of posts about the main problems of global risk prevention, and I invite everyone to post about global risks under the 'moresafe' tag.