Followup to: The Thing That I Protect
Anything done with an ulterior motive has to be done with a pure heart. You cannot serve your ulterior motive without faithfully prosecuting your overt purpose as a thing in its own right, one that has its own integrity. If, for example, you're writing about rationality with the intention of recruiting people to your utilitarian Cause, then you cannot talk too much about your Cause, or you will fail to write about rationality successfully.
This doesn't mean that you never say anything about your Cause, but there's a balance to be struck. "A fanatic is someone who can't change his mind and won't change the subject."
In previous months, I've pushed this balance too far toward talking about Singularity-related things. And this was for (first-order) selfish reasons on my part; I was finally GETTING STUFF SAID that had been building up painfully in my brain for FRICKIN' YEARS. And so I just kept writing, because it was finally coming out. For those of you who have not the slightest interest, I'm sorry to have polluted your blog with that.
There are a number of reasons to now shut up about such things for a while. One of them is simply to restore the balance. Another is to make sure that a forum intended to have a more general audience doesn't narrow itself down and disappear.
But more importantly - there are certain subjects which tend to drive people crazy, even if there's truth behind them. Quantum mechanics would be the paradigmatic example; you don't have to go funny in the head, but a lot of people do. Likewise Gödel's Theorem, consciousness, Artificial Intelligence -
The concept of "Friendly AI" can be poisonous in certain ways. True or false, it carries risks to mental health. And not just the obvious liabilities of praising a Happy Thing. Something stranger and subtler that drains enthusiasm.
If there were no such problem as Friendly AI, I would probably be devoting more or less my entire life to cultivating human rationality; I would have already been doing it for years.
And though I could be mistaken - I'm guessing that I would have been much further along by now.
Partially, of course, because it's easier to tell people things that they're already prepared to hear. "Rationality" doesn't command universal respect, but it commands wide respect and recognition. There is already the New Atheist movement, and the Bayesian revolution; there are already currents flowing in that direction.
One has to be wary, in life, of substituting easy problems for hard problems. This is a form of running away. "Life is what happens to you while you are making other plans," and it takes a very strong and non-distractible focus to avoid that...
But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction. At some point you also have to say, "This isn't working the way I'm doing it," and try something different.
There are many ulterior motives behind my participation in Overcoming Bias / Less Wrong. One of the simpler ones is the idea of "First, produce rationalists - people who can shut up and multiply - and then, try to recruit some of them." Not all. You do have to care about the rationalist community for its own sake. You have to be willing not to recruit all the rationalists you create. The first rule of acting with ulterior motives is that it must be done with a pure heart, faithfully serving the overt purpose.
But more importantly - the whole thing only works if the strange intractability of the direct approach - the mysterious slowness of trying to build an organization directly around the Singularity - does not contaminate the new rationalist movement.
There's an old saw about the lawyer who works in a soup kitchen for an hour in order to purchase moral satisfaction, rather than work the same hour at the law firm and donate the money to hire 5 people to work at the soup kitchen. Personal involvement isn't just pleasurable, it keeps people involved; the lawyer is more likely to donate real money to the soup kitchen later. Research problems don't have a lot of opportunity for outsiders to get personally involved, including FAI research. (This is why scientific research isn't usually supported by individuals, I suspect; instead scientists fight over the division of money that has been block-allocated by governments and foundations. I should write about this later.)
If it were the Cause of human rationality - if that had always been the purpose I'd been pursuing - then there would have been all sorts of things people could have done to personally help out, to keep their spirits high and encourage them to stay involved. Writing letters to the editor, trying to get heuristics and biases taught in organizations and in classrooms; holding events, handing out flyers; starting a magazine, increasing the number of subscribers; students handing out copies of the "Twelve Virtues of Rationality" at campus events...
It might not be too late to start going down that road - but only if the "Friendly AI" meme doesn't take over and suck out the life and motivation.
In a purely utilitarian sense - the sort of thinking that would lead a lawyer to actually work that extra hour at the law firm and donate the money - someone who thinks that handing out flyers is important only to the Cause of human rationality should be strictly less enthusiastic than someone who thinks that handing out flyers both has direct rationality-related benefits and might help a Friendly AI project. The second benefit is strictly added on top of the first; it should result in strictly more enthusiasm...
But in practice - it's as though the idea of "Friendly AI" exerts an attraction that sucks the emotional energy out of its own subgoals.
You only press the "Run" button after you finish coding and teaching a Friendly AI; which happens after the theory has been worked out; which happens after theorists have been recruited; which happens after (a) mathematically smart people have comprehended cognitive naturalism on a deep gut level and (b) a regular flow of funding exists to support these professional specialists; which first requires that the whole project get sufficient traction; for which handing out flyers may be involved...
But something about the fascination of finally building the AI seems to make all the mere preliminaries pale in emotional appeal. Or maybe it's that the actual research takes on an aura of a sacred magisterium, and then it's impossible to scrape up enthusiasm for any work outside the sacred magisterium.
If you're handing out flyers for the Cause of human rationality... it's not about a faraway final goal that makes the mere work seem all too mundane by comparison, and there isn't a sacred magisterium that you're not part of.
And this is only a brief gloss on the mental health risks of "Friendly AI"; there are others I haven't even touched on, though the others are relatively more obvious. Import morality.crazy, import metaethics.crazy, import AI.crazy, import NobleCause.crazy, import HappyAgent.crazy, import Futurism.crazy, etcetera.
But it boils down to this: From my perspective, my participation in Overcoming Bias / Less Wrong has many different ulterior motives, many different helpful potentials, many potentially useful paths leading out of it. But the meme of Friendly AI potentially poisons many of those paths, if it interacts in the wrong way; and so the ability to shut up about the Cause is more than usually important here. Not shut up entirely - but the rationality part of it needs to have its own integrity. Part of protecting that integrity is to not inject comments about "Friendly AI" into any post that isn't directly about "Friendly AI".
I would like to see "Friendly AI" be a rationalist Cause sometimes discussed on Less Wrong, alongside other rationalist Causes whose members likewise hang out there for companionship and skill acquisition. This is as much as is necessary to recruit a fraction of the rationalists created. Anything more would poison the community, I think. Trying to find hooks to steer every arguably-related conversation toward your own Cause is not virtuous, it is dangerously and destructively greedy. All Causes represented on LW will have to bear this in mind, on pain of their clever conversational hooks being downvoted to oblivion.
And when Less Wrong starts up, its integrity will be protected in a simpler way: shut up about the Singularity entirely for two months.
...and that's it.
Back to rationality.
(This would be a great time to announce that Less Wrong is ready to go, but they're still working on it. Possibly later this week, possibly not.)