This post is prompted by a short discussion with Wei Dai. He says:

Perhaps we should get other causes to participate/recruit on LW? Actually, why aren't they here already? We have a bunch of individuals with non-SIAI interests, but no causes other than SIAI, despite Eliezer repeatedly saying that they would be welcome.

He has a good point!

Here's my plug, for the cause I spend lots of my time and energy on, Wikimedia/Wikipedia:

Wikipedia/Wikimedia is more analogous to a software project than an ordinary charity - your money is useful and most welcome, but the real contribution is your knowledge.

Or, more generally: create educational material under a free content licence. If it's CC-by-sa, CC-by or public domain, it can interbreed and propagate.

(Then we need to fix the things wrong with the editor experience on Wikipedia ... though the Wikimedia Foundation is paying serious attention to that as well of late. In the meantime, if you find Wikipedia too personally annoying to participate in directly, writing your own site and CC-by-sa'ing it still helps a lot.)

What good causes can you think of that are relevant to LessWrong and its community, that leverage effectiveness through rationality? (I'm thinking beyond just legally-blessed charities, right down to the "small circle of conspirators" level of trying together to get something done.) As well as SIAI, LW has previously had plugs for GiveWell. What do you spend your time, effort and/or money on?


I spend most of my free time on my FIRST Robotics team.

Dean Kamen founded FIRST (For Inspiration and Recognition of Science and Technology) to increase interest in science and technology among school children. It does this by sponsoring robotics competitions at the elementary, middle, and high school levels.

Its oldest competition is the FIRST Robotics Competition, which is the one my team participates in. Every year, FIRST releases a game, and we have six weeks to design and build a robot to compete in it.

It's basically trying to be an alternative to varsity sports teams, where instead of playing a game kids do engineering, and build technical skills before college (and it helps a lot in college admissions).

Insofar as it does that, it raises the sanity waterline, but not to a particularly high level. It does, however, raise it for a lot of people.

It's not really a rationality organization per se, but it might be of interest to the community.

Some statistics about their total support from their site:

  • 248,000+ students

  • 22,475 teams

  • 20,675 robots

  • 66,000+ mentors/adult supporters

  • 33,000+ event volunteers

Interesting things:

  • They've been very successful at getting sponsors, with more than 3,500 participating. FIRST has about $33,000,000 in operating expenses. But that doesn't reflect all of the money that it pulls in because of how many companies directly sponsor teams or provide scholarships.

  • It has a very decentralized structure, easy to scale

  • Dean Kamen does a few other things like technophilic TV shows and inventing

If anyone wants to hear more about it, just ask or something. I believe Randaly is also involved.

A good cause that I would like LW to talk about more is "competitive government", including Patri Friedman's seasteading project and Paul Romer's efforts to establish charter cities.

It would rock really hard to create new states, better than the current options, to which people could move and in which they could put down roots, although of course there are very formidable barriers to doing that.

The art of creating and maintaining effective online communities has significant overlap with the art of government. At the current time it is probably the best place to advance the art of government, because the cost of trying new things and getting feedback from reality is vastly lower there than anywhere else in the field of governance.

Like David Brin says, it is good for each of us to remind ourselves that we are members of a civilization. I take that to mean that any non-negligible hope I have of a persistent (positive) effect on reality depends on my being part of a sufficiently healthy civilization. There are some unhealthy aspects to American society, and I wish I could relocate to a society less likely to cancel out my efforts to improve the world. (According to my current understanding of the world, such a society probably does not currently exist.)

These sound interesting... but I assume they're politically controversial, and it may not be helpful for LessWrong to back a particular political view.

I think the specifics are controversial, but the idea isn't particularly so.

Like, arguing over which one is best is really political, but the idea of having a lot of states with experimental governments in order to see which ones are empirically more feasible isn't so much.

I'm tempted to say that any idea which could change the political landscape even in potential is going to wind up controversial if it gains any substantial momentum, even if it's not at the moment. Politics is proverbially full of people with a vested interest in the status quo, who'd have obvious incentives not to be friendly to experimental governments; less proverbially, though, it's also full of people so attached to one shiny ideology or another that they're more than willing to preemptively demonize anything which looks like it might disprove it.

A competitive government project would probably be dismissed as the pet project of a bunch of idealistic cranks for the first few years of its existence, before it returns any substantial results, but I'd expect it to meet violent opposition if it ever starts looking like, say, modified Trotskyism, or modified Objectivism, or $BIZARREIDEOLOGYOFCHOICE might actually be a good idea in practice. The worst opposition would be directed at the experimental implementation of the idea, of course, but the system that facilitated it would also catch a lot of flak.


Contributing to Free Software projects, especially scientific ones, could be the best use of time for a lot of rationalists. And (allowing for politics being The Mind Killer and all that) cause- or issue-based political campaigning. For example the current campaign to introduce the Alternative Vote for UK General Elections is a pretty clear example of something where there is an obvious rational and irrational side (AV allowing more information into the system, providing better feedback).
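The "more information" point can be made concrete: under the Alternative Vote (instant-runoff), voters rank candidates, and a ballot's lower preferences are consulted only when its higher choices are eliminated, so the count can use preference orderings that first-past-the-post simply discards. A minimal sketch of the counting rule (my own illustration, not from the comment; candidate names are hypothetical):

```python
from collections import Counter

def instant_runoff(ballots):
    """Count ranked ballots under the Alternative Vote (instant-runoff).

    Each ballot is a list of candidates in preference order. Each round,
    every ballot counts for its highest-ranked surviving candidate; the
    candidate with the fewest such votes is eliminated, and their ballots
    transfer, until someone holds a strict majority.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Tally each ballot toward its top surviving preference.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the weakest candidate; their ballots transfer next round.
        candidates.discard(min(tallies, key=tallies.get))

# A profile where AV and first-past-the-post disagree:
ballots = [["A", "C"]] * 8 + [["B", "C"]] * 7 + [["C", "B"]] * 6
# FPTP would elect A (8 votes); under AV, C is eliminated first and
# C's ballots transfer to B, who then holds a majority.
print(instant_runoff(ballots))  # → B
```

The second-preference information on the `["C", "B"]` ballots is exactly the feedback FPTP throws away.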

Interesting. It occurs to me that there are a variety of compatible communities out there with quite a bit of overlap with LW - Wikipedians, Open Source software geeks, even PUA. But most of them already have a community infrastructure in place - members of those communities already know where to go.

Two communities that I might be interested in joining (as a lurker, at first) might be bloggers and free educational material authors. Does anyone know whether these communities have focal websites?

One thing that LW could do is to have a resource page on our wiki providing links to resources associated with causes and service communities.

Another one I just ran into: Project Gutenberg and related ebook initiatives.

Oh, Project Gutenberg and Distributed Proofreaders are definite good causes, and the sort of thing one can do in one's spare time as one feels like it - like Wikipedia. Not sure they have fantastic leverage, though making good stuff available and (the leverage) making it seem more normal for reading materials to be free content are both good things. PGDP is probably a more productive brain relaxation than Sudoku. (Though I still pick up the Sudoku and haven't proofread anything in years.)

One thing that occurs to me is that we can approach the word "causes" on two levels: the wider movement, and a specific community that caters to it. For example, the skeptic/humanist/atheist cluster has a tremendous number of micro-communities that cater to it.

Another thing that may be a potential blocker is that LessWrong has been identified with the whole SIAI/singularity/transhumanism cluster. In order to accommodate other groups, to the extent that this is reversible, we may have to push that aside (for instance, maybe keep it in the discussion section, like Luke did with his latest advice-to-SIAI post). But I am not sure whether this community wants to downgrade certain topics in the interest of opening up to other related causes.

Another thing that may be a potential blocker is that lesswrong has been identified with the whole SIAI/singularity/transhumanism cluster.

I'd say a prime example of something else (i.e. aside from the friendly AI problem) that lesswrong is well suited for is making more real progress in the development of cryonics.

One thing that occurs to me is that we can approach the word "causes" on two levels

Yes. I quite explicitly mean everything from two people getting together to get something good done, up. I see it as part of the continuum of getting things done that charities are on.

Another thing that may be a potential blocker is that lesswrong has been identified with the whole SIAI/singularity/transhumanism cluster.

How I tell other people about LW: "Started by transhumanists. So they have these weirdy bits, but it's very good stuff."

But I am not sure whether this community wants to downgrade certain topics in the interest of opening up to other related causes.

I don't see a need to actively reduce it. Let other stuff grow normally. (This post may produce useful suggestions in that direction.)

Actually, why aren't they here already?

I think that this is because they didn't intentionally come here. SIAI is a fairly popular cause around here because of how many people came to the site to read Eliezer Yudkowsky's writing. That group of people is largely sympathetic to SIAI, and hence SIAI is popular.

Edit: The following is very likely to be inaccurate, as it's all based on first-glance impressions. I think the structure of "people don't come unless they have reason to" applies, though.

I don't think any other groups have had that clear an incentive to be here. GiveWell has its own blogs and other support areas, and though some of its people may be interested in rationality, I don't think the particulars of how politics ruins your decision-making, or how to dissolve questions, appeal to them or matter much to them.

I think the same is true of Wikimedia, where they already have some place to go and without any directed recruitment effort they wouldn't bother to come here as a group.

Every site linked to from the Future of Humanity Institute seems like it could potentially be interested, but they all seem to have slightly different interests than rationality qua rationality. Quite a few of them (practical ethics, bioethics networks) have a philosophy bent, and I'd think that our not using standard philosophy terms, or not engaging the standard philosophy arguments, is probably hurting us with them.

We have a bunch of individuals with non-SIAI interests, but no causes other than SIAI, despite Eliezer repeatedly saying that they would be welcome.

I think that this might just be the non-SIAI individuals not intentionally trying to work with similarly interested people to advocate for their goals. I'm worried that something like that might get political though.

I think the same is true of Wikimedia, where they already have some place to go and without any directed recruitment effort they wouldn't bother to come here as a group.

Sue Gardner (exec director of WMF) plugged LW on her blog: Four essays every Wikimedian should read! (exclamation mark in original.)

Oops, my bad.

Do you know how long any of them stayed?

The selection is pretty interesting (namely, How to Save the World, Defecting by Accident, Why Our Kind can't Cooperate, and Your Price for Joining) in how focused it is on practical issues, particularly how groups work.

I'm seeing them drip over here slowly.

But, y'know, when I came here, I was not entirely surprised to see a pile of names I knew from Wikipedia.

Here's my plug, for the cause I spend lots of my time and energy on, Wikimedia/Wikipedia

Hear, hear!

None of this is to imply that anything found on Wikipedia should be uncritically accepted as true, but I can't think of any better testimony to the collective win of rationalists everywhere than Wikipedia itself. For one, we benefit directly from the resources on Wikipedia; second, we benefit indirectly whenever other people benefit from it.

The more difficult matter is reclaiming Wikipedia from the barbarians :D


Would something like Gameful fit the bill? It's a community to foster the creation of games which are beneficial to humanity. I thought that Urgent Evoke was a great idea, and want to see more in that spirit.

Of course, you might call games a mechanism for your cause, but if your cause is to generally "make it fun to make smart moves in the interest of humanity", or "help people overcome irrational disincentives toward Pareto improvements", then making and spreading games like this is a good strategy. And definitely fun!

Leveraging people's idea of fun definitely counts! It's what drives Wikipedia, after all.

I think if we look at the tagline, the most obvious causes are things that either increase our understanding of rationality, in the sense of pushing back new boundaries and increasing the maximum possible rationality, or things that increase individuals' rationality so that it approaches or reaches that current maximum. The first is research; the latter is generally education or consciousness-raising.

I think there's also a question of whether we can increase rationality in key areas, ranging from academic disciplines that are likely to be important to political decision-making. I should declare an interest here: one of my own reasons for being here (possibly a retrospective justification, because I just find it interesting) is that I work in the British civil service. There's a drive for 'evidence-based policy', analysis of whether reforms are successful, etc., and I think there are interesting arguments to be had about how best to be rational as an individual cog in a policy-making machine, and whether the machine itself can be tweaked to give better results.

BTW - Wikimedia used to be my handy source of people much smarter than I was to interact with. LessWrong knocks it into the shade.

I'm not proud to admit that I thought that was a Charlie Sheen reference until I clicked through to the lesswrong wiki...

I was referring to »Rationalists should win«. Sheen is not particularly rational, but he might serve as instructive for other purposes.

Of course, I was embarrassed that the first thing to come to my mind was Sheen rather than "rationalists should win".

Sheen is not particularly rational

Well, he's definitely winning.

(Although, as always, he could be winning more.)

Depends. Assuming decent money management, the marginal utility of another $2 million per episode might be negligible, especially if he can fill up his time with movies instead, which reduces or nullifies the opportunity costs altogether.

Regarding his kids, a saner course of action might be better. But what he gains at the moment is a nice amount of press coverage. And maybe self-respect, especially if all his drug tests come out clean.

I can't help but wonder if Charlie Sheen is deliberately trying to appear crazy in order to achieve some goal; for example, he might have been trying to get CBS to break its contract with him so he could sue them and get the money for being in the television show without actually having to do the work.

And Alec Baldwin, who might know, said the lawsuit might have real grounds.

Also, he's branding himself with easily repeatable memes, drawing a distinction between winning warlocks with tiger blood and loser trolls who can't process his brain, making a lot of appearances on various programs, gathering an online following, and selling merchandise with his slogans. That really looks like political campaigning, but he would have to explain the craziness as a tactic in order to get past it. I think it's more likely he's recruiting a fan base for a future project where he can have more control. When you don't know what you want, take power.


Who's Charlie Sheen?

Well, yes. But the details would also be of interest ;-)

Winning what?

Anywhere you choose to.
