I have said a thing or two about rationality, these past months.  I have said a thing or two about how to untangle questions that have become confused, and how to tell the difference between real reasoning and fake reasoning, and the will to become stronger that leads you to try before you flee; I have said something about doing the impossible.

    And these are all techniques that I developed in the course of my own projects—which is why there is so much about cognitive reductionism, say—and your mileage may vary in trying to apply them yourself.  Still, those wandering about asking "But what good is it?" might consider rereading some of the earlier posts; knowing about e.g. the conjunction fallacy, and how to spot it in an argument, hardly seems esoteric.  Understanding why motivated skepticism is bad for you can constitute the whole difference, I suspect, between a smart person who ends up smart and a smart person who ends up stupid.  Affective death spirals consume many among the unwary...

    Yet there is, I think, more absent than present in this "art of rationality"—defeating akrasia and coordinating groups are two of the deficits I feel most keenly.  I've concentrated more heavily on epistemic rationality than instrumental rationality, in general.  And then there's training, teaching, verification, and becoming a proper experimental science based on that.  And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...

    But those small pieces of rationality that I've set out... I hope... just maybe...

    I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality.  Where by default, in the beginning, you don't have enough to build on.  Indeed so little that you don't have a clue that more exists, that there is an Art to be found.  And if you do begin to sense that more is possible—then you may just instantaneously go wrong.  As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity.  That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.

    When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence.  You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.

    It's not that the particular epistemic, fake-detecting methods I use are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.

    I hope that someone who learns the part of the Art that I've set down here will not instantaneously and automatically go wrong, if they start asking themselves, "How should people think, in order to solve new problem X that I'm working on?"  They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one.  They will get a saving throw.

    It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.

    And so instead they... go off and invent Freudian psychoanalysis.  Or a new religion.  Or something.  That's what happens by default, when people start thinking about thinking.

    I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.

    That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.

    A task which I am leaving to you.  Probably, anyway.  I make no promises as to where my attention may turn in the future.  But y'know, there are certain other things I need to do.  Even if I develop yet more Art by accident, it may be that I will not have the time to write any of it up.

    Beyond all that I have said of fake answers and traps, there are two things I would like you to keep in mind.

    The first—that I drew on multiple sources to create my Art.  I read many different authors, many different experiments, used analogies from many different fields.  You will need to draw on multiple sources to create your portion of the Art.  You should not be getting all your rationality from one author—though there might be, perhaps, a certain centralized website, where you went to post the links and papers that struck you as really important.  And a maturing Art will need to draw from multiple sources.  To the best of my knowledge there is no true science that draws its strength from only one person.  To the best of my knowledge that is strictly an idiom of cults.  A true science may have its heroes, it may even have its lonely defiant heroes, but it will have more than one.

    The second—that I created my Art in the course of trying to do some particular thing which animated all my efforts.  Maybe I'm being too idealistic—maybe thinking too much of the way the world should work—but even so, I somewhat suspect that you couldn't develop the Art just by sitting around thinking to yourself, "Now how can I fight that akrasia thingy?"  You'd develop the rest of the Art in the course of trying to do something.  Maybe even—if I'm not overgeneralizing from my own history—some task difficult enough to strain and break your old understanding and force you to reinvent a few things.  But maybe I'm wrong, and the next leg of the work will be done by direct, specific investigation of "rationality", without any need of a specific application considered more important.

    My previous attempt to describe this principle, in terms of respect bounded by a secret identity, was roundly rejected by my audience.  Maybe "leave the house" would be more appropriate?  It sounds to me like a really good, healthy idea.  Still—perhaps I am deceived.  We shall see where the next pieces of the Art do, in fact, come from.

    I have striven for a long time now to convey, pass on, share a piece of the strange thing I touched, which seems to me so precious.  And I'm not sure that I ever put the central rhythm into words.  Maybe you can find it by listening to the notes.  I can say these words but not the rule that generates them, or the rule behind the rule; one can only hope that by using the ideas, perhaps, similar machinery might be born inside you.  Remember that all human efforts at learning arcana slide by default into passwords, hymns, and floating assertions.

    I have striven for a long time now to convey my Art.  Mostly without success, before this present effort.  Earlier I made efforts only in passing, and got, perhaps, as much success as I deserved.  Like throwing pebbles in a pond, that generate a few ripples, and then fade away...  This time I put my back into it, and heaved a large rock.  Time will tell if it was large enough—if I really disturbed anyone deeply enough that the waves of the impact will continue under their own motion.  Time will tell if I have created anything that moves under its own power.

    (Not to mention that—I hope—the thing with the karma will stop the slide into virtual entropy that has destroyed every community I tried to build earlier as soon as I tried to pull back my attention a little.)

    My last essay on having a secret identity was not well-received, so let me try again:  I want people to go forth, but also to return.  Or maybe even to go forth and stay simultaneously, because this is the Internet and we can get away with that sort of thing; I've learned some interesting things on Less Wrong, lately, and if continuing motivation over years is any sort of problem, talking to others (or even seeing that others are also trying) does often help.

    But at any rate, if I have affected you at all, then I hope you will go forth and confront challenges, and achieve somewhere beyond your armchair, and create new Art; and then, remembering whence you came, radio back to tell others what you learned.

    Comments (114)

    This is strangely moving.

    I almost feel like you're about to die, Eliezer.

    Don't leave us hanging, what's going on? Are you cutting back on writing new pieces for LW? Does this mean work on the book(s)? Is that still happening? Are you cutting back to focus more on your, er, other project? Or am I misreading this post and nothing's changing?


    In particular, it sounds like the wise-old-mentor-tells-protege-he-won't-always-be-around speech from (nearly) every fantasy novel/movie ever. "Master Eliezer, you can't die!" "Strong am I with the Way; but not that strong. Fortunately cryonics will let me come back as a glowing blue ghost."

    It does! I wonder if TVTropes has given this scene a name...
    Paul Crowley:
    No doubt it does, but I daren't go and find out - I have other things to do... Edit: oops... Old Master passing the torch

    Cutting back, yes. Not cutting back to zero. Not now, not yet.

    I sense a bout of Deism coming on from our creator/sustainer.
    I have no plans to do so.
    It may be that we'll hear more about that other project after the end of April.

    I think you may be expecting too much from us. We're not mostly trained philosophers or psychologists or neurobiologists. We're not even self-trained supermen who can break holes in brick walls with sheer brainpower. We're mostly computer programmers who are looking for something else to read when we should be working.

    The main danger for LW is that it could become rationalist-porn for daydreamers.

    I suggest a pattern of counterattack:

    1. Find a nonrational aspect of your nature that is hindering you right now.

    2. Determine privately to fix it.

    3. Set a short deadline. Do the necessary work.

    4. Write it up on LW at the deadline. Whether or not it worked.

    Edit: I added this as a top level post.

    LW is largely rationalist-porn for daydreamers, and that's inevitable. However, even if we can avoid that a little bit, if we can be 95% porn and 5% actually effective, then that's still a win worth having. Better than that is worth thinking about but hard to imagine in practice.

    I think we will get more useful results if we have a thread in which we can pre-commit to writing such a thing up by the deadline, for the same reason that there should be a registrar of medical trials: a private commitment won't be as effective in avoiding publication bias.

    I am very wary of pre-commitment of this sort; it sounds like a huge temptation to play Bruce. (Woe is me, akrasia is my downfall, I told you all I'd invent quantum gravity in a month and I never even cracked open a physics textbook...) Also, it's a crutch even if it works. Burning your boats should be reserved for big problems. You shouldn't get used to coercing yourself into every fiddling little improvement!
    Paul Crowley:
    Surely not pre-committing will offset that fear to exactly the extent that it gives people permission not to post if they failed, which is something else you're keen to avoid? I'm not thinking of it as a way to fight backsliding, but as a way to gather better data.
    No, not really, at least that wasn't my intention - the deadline should be absolute. Hmm, how about: if your mind is unclouded with any attempt at self-coercion, you ought to announce in public for the sake of a more reliable strategy->win analysis.
    Require each article to be actionable and there will be no porn.
    Authors should tag actionable posts as "actionable." Then you could do a search on the tag. The recommended action should be explicitly described.
    Paul Crowley:
    I think this might rule out some valuable articles which are neither. Is, say, Argument screens off authority actionable? In what way? Is it porn?
    What does actionable even mean in context?
    An article that essentially says "please do try this at home".
    If being less irrationally credentialist counts as an action, yes.
    Paul Crowley:
    That makes "actionable" much broader than it might sound at first blush. Could you give an example of an article which is not actionable, by this definition?

    Actually, I don't think we do too badly. There are an awful lot of very smart people here, and I think we manage to have quite a few discussions at a usefully high level; people are also for the most part remarkably polite, which is no small thing.

    I don't think it's implausible to hope that this site might achieve more than entertaining its members, and I think it's worth hoping and working for.

    Trade some of your hours to those people from whom I am not expecting too much, then; and maybe even someday, in this era or another, it will not be too much to expect from you. You're right that not everyone can be my target audience for everything I try to teach - at least not in this moment of their lives.

    Start training! Now!
    Guilty as charged. It does seem more productive than tower defense, at any rate. Ciphergoth does have a point that polite discussions about rationality are an end in their own right.
    Paul Crowley:
    I don't mean that; I mean that they are a good sign that we might hope to achieve something genuinely useful.
    How much formal training does Eliezer have?
    That was my point about the "self-trained supermen who can break holes in brick walls with sheer brainpower".

    Does this article imply that I missed the Golden Age of Eliezer's writing on Less Wrong?

    Yes, a golden age of blog posts by Eliezer lasted about 2 years and ended about 17 months ago, but if you reply to an old post, many of us will see your reply (like I did just now). The best way to access the golden age BTW is by clicking on the "SEQUENCES" link, which is probably on the banner on top of every page on the site, which will take you to an overview (written by Eliezer) of Eliezer's blog posts. Eliezer is currently writing a book about rationality, but the book is probably aimed at a broader audience and probably will not go into as much technical detail as the sequences.
    No, it implies that if you lived in pre-writing times you would have missed the Golden Age of Eliezer's writing on Less Wrong. Which is a paradox, so don't worry.

    I couldn't resist reading ahead, and yes, this does indeed deserve a top-level article: David Stove, What Is Wrong With Our Thoughts.

    This article is definitely relevant - I hadn't seen anyone else dare to be honest about how most of the old philosophers' thoughts are not to be blindly revered, and are indeed highly flawed. They aren't right; they aren't even wrong. Thanks for the link.

    I strongly agree that people who try to improve on rationality usually jump off a cliff, but I strongly disagree with the claim that this is the first thing they do even if they are smart.

    Seth Roberts is a great counter-example. He eventually jumps off a cliff (hey, you have done so too on occasion, though you always recovered and he hasn't), but his criticisms of existing practice and proposals for improved practice remain valid.

    Hegel is a grossly unfair example, totally unrepresentative of the class "great thinkers in philosophy" and recognized as a fraud by very many within philosophy. Many great philosophers are more like Seth Roberts, or early Eliezer for that matter.

    I must disagree with your assessment of Hegel. Folks from the outside often see "philosophy" as something without internal divisions (like people from the outside of any culture). While it's true that 'very many' (for some values of 'very many') think Hegel is a fraud, he's still both popular and influential. I am amongst the ones who don't think very highly of 'continental philosophy' (of which Hegel is an example), but I nonetheless recommend him at times. Specifically, some folks think Marx had interesting things to say about alienation, and I have to point out that Marx pretty much just lifted those parts entirely from Hegel (though mostly reversing their spin). As continental philosophers go, I think Hegel is pretty solid (compare Heidegger). But yes, folks that use terms that lump Isaac Newton and Dan Dennett together with Hegel and Marx are clearly doing something wrong.
    Could you elaborate? It's not obvious, except in the sentence with Heidegger, that you disagree at all. "Fraud" is a bit harsh, but saying that his claim to have a new way of thinking was a pitiful jump off a cliff is not to say that "every word he says is a lie, including 'and' and 'the.'" Maybe the image of jumping off a cliff is too vivid. I take it to mean having reached a position where you can convince yourself of anything, but that doesn't mean that you'll use this new tool to actually convince yourself of everything. No discussion of Hegel is complete without posters. I certainly believe that many people will be misled by Michael Vassar's comment because they don't notice the difference between "very many" and "most."
    "Folks from the outside often see 'philosophy' as something without internal divisions (like people from the outside of any culture)." Aren't those people just straightforwardly wrong? If anything, philosophy has too many internal divisions.
    I presume you mean that non-literally, but I don't know what else you intend it to mean. How is it that he "eventually jumps off a cliff"? Does that mean you disagree with some subset of his conclusions/hypotheses? If so, which ones?
    I disagree with how far he takes his metaconclusions with which he judges his hypotheses. He rightly rejects bad scientific practices which throw away most data in a Manichean fashion and then use bad methods of analysis anyway to reach wrong conclusions but he then ends up with engaging in the massively motivated collection of confirming evidence that the scientific method is intended to prevent, the generation of cheap evolutionary just so stories, etc. His theory of holidays, for instance, is a parody of bad evolutionary psychology. Evolution is all important, ultimately, but it's too dumb to make basic competence at hiding instinctual for toddlers. That's pretty bad!

    If you don't attempt to do something while you develop your rationality, then you're not constraining yourself to be scored on your beliefs' effectiveness. And we know that this constraint makes you less likely to signal and more likely to predict accurately.

    Paul Crowley:
    * I think that for the most part, where rationality is easily assessed it is already well understood; it is in extending the art to hard-to-assess areas that the material here is most valuable.
    * For all I know, all of Eliezer's original work apart from his essays on rationality could be worthless.

    Both of these things mean that we're assessing this material on a different basis than demonstrated efficacy.
    Indeed - that seems to me like a problem. I am oft reminded of the unemployment worker in History of the World: Part I... "Occupation?" "Did you create an AI last week?" "Did you try to create an AI last week?" Of course, that's not the end of the story, and nobody would expect a job like that to be the sort of thing you can just do. But without demonstrated efficacy, what's the difference between this "Art" and pseudoscience? For an art of rationality to even mean anything, it must have predictable, demonstrable results.
    I agree with everything in your post except the last sentence. That's what a science must have to be meaningful. What I want to know is why there's so little interest in developing a science of rationality.
    This is just false. It's a good property to have, in many cases a life-saving one, but it's not always possible. If you seek an Art, but don't expect demonstrable results from the correct one just yet, how do you tell the difference from pseudoscience? How do you move from an idea in your mind to an implementation with demonstrable results? Why do you proceed with trying to demonstrate the value of one idea, but not another, before the work is done? Other cues.
    No, it's not just false. It's also very nearly true. It's merely technically false because I was talking in an unnecessarily positivist way about it. But you know what I mean, I hope. That is a good question, and one that I don't have an answer to. Is there one? (you can't?)
    You look at the sanity of the theory. How do you tell a sane academic paper from one written by a crackpot (say, you don't look at where it's published and you don't know the author)? Certainly you don't need to go check experimental results or proofs in overwhelming majority of cases. Doing this reliably where absurdity heuristic breaks is claimed as a large part of the Art of rationality. It's necessarily self-referential.
    Simple - I don't read it either way. There are plenty of papers written by folks I've heard of, or in trusted publications, to fill up all of the free time I don't have. If the ideas seem novel and useful, I expect empirical evidence. If there is none, I withhold judgement until there is.
    This doesn't sound realistic. Do you deny your ability to tell the difference?
    Between crackpots and valid academic papers, out of context? Yes, I do deny that. Take Louis Savain (please!). I actually have a lot of the same intuitions about the nature of spacetime, and the problems involving even talking about time travel outside of a fictional context. Savain is clearly a crackpot - it's practically painted all over his site. But if I had written a paper discussing some of these intuitions, and nobody noticed that it wasn't published in a peer-reviewed journal nor that I'm not a physicist, then I don't think there would be a clear way to notice that I was wrong. Of course, a physicist could happily come along and say, "No, our theory is experimentally verified, and yours doesn't even make sense. Here are references to the relevant evidence, and some engineering applications you might not have been aware of". ETA: I admit, I had "speak of the devil" in mind when I posted this. But it had to be done!
    You are not doing the categorization as a civil service, you are doing it to efficiently build on gained understanding. If you are not ready to understand, you shouldn't try to do that. But when you are, you play the role of that very physicist, you don't need to see the crackpot's experimental results.
    That's because the physicist has her own experimental results already, and the crackpot has none. If the crackpot has experimental results to back up his theory, then the physicist had bloody well better look at them! (if the paper is even being taken seriously enough to be read in the first place)
    The one crackpot I interacted with most did have experimental results, and trumpeted them loudly. The experiment turned out to be a notoriously finicky one (not quite down to Millikan experiment territory) done in slipshod fashion. This was utterly predictable, given purely theoretical considerations and examination of his style, even before it came to the observations - his theory contradicted, say, the existence of comets. Experiments can be wrong. Maybe even most attempts at experiments are wrong. What makes a scientist a scientist instead of a crackpot is the debugging and validation. Trying to exclude every way the results might not mean what it seems like they mean - not just doing control-experiment comparison and saying you've done your duty. Crackpot experiments, lacking these extra checks, are worthless.
    This wouldn't surprise me much, at least in physics. There are probably more physics students than professional physicists, and those students do lots of tabletop experiments, badly. (My own old lab books document a refractive index measurement of -19.6, a disproof of the equivalence principle, and a laser beam that travelled at (1.05±0.01)c.) Nonetheless... ...this is a bit too strong a distinction between crackpots & non-crackpots, though your basic point is right. The way I'd put it: a non-crackpot confronted with a bizarre result immediately wonders, "what did I do wrong?", but a crackpot confronted with the same result immediately gasps, "I knew it!". I guess I'm just paraphrasing Dear Leader, really: one's strength as a non-crackpot is one's ability to be more confused by bizarre, inexplicable results than predictable results.
    Eliezer Yudkowsky:
    This would make most modern professional scientists crackpots which sounds a bit noncentral - they may be no true scientists, but they seem very different from the crackpots I've met.
    There's a bit of a gap between what ordinary not-very-good scientists do to make sure the experiment is right and what they should be doing. There is a colossal gulf between what crackpots do and ordinary not-very-good scientists do.
    Certain branches of physics are nowhere near as bad as medicine when it comes to that.
    On a completely unrelated note, screw the Millikan experiment. That one lab where we had to replicate it in undergrad with the world's shittiest equipment is probably the only reason I'm a mathematician and not a physicist.
    My high school physics class took the Millikan experiment to a new level: we installed a calculator program in which oil drops were simulated by pixels moving down the screen, and you could press buttons to vary the simulated electric charge. I wonder if I can blame becoming a mathematician on that, too.
    /me stares in horror. Undergrad. Millikan. Oil. Drop. Experiment. shiver
    My question is : as well understood as it is, how much of it do any single individual here, know, understand, and is able to use on a recurring basis ? We'll want to develop more than what exists, but we'll build that upon - once we have it - a firm basis. So I wonder, how much knowledge and practice of those well understood parts of rationality, does it require of the would-be builders of the next tier ? Otherwise, we stand the risk, of being so eager as to hurriedly build sky high ivory towers on sand, with untrained hands.
    Can you clarify? Exactly which material are you referring to? What basis would you suggest that you're assessing it on?
    Paul Crowley:
    I mean the bulk of Eliezer's 300-odd OB/LW posts. To use an example I've used before, you'd be crazy to say that you think well of Argument screens off authority because you have empirically demonstrated that reading it makes you more rational. I find its argument persuasive. Obviously one must be wary of the many ways you can find something persuasive that are not related to merit, but to carry away from the study of cognitive bias the message that one should not be persuaded by any argument ever would be to give up on thinking altogether.
    I agree that the quality of the argument is an important first screening process in accepting something into the rationality canon. In addition, by truly understanding the argument, it can allow us to generalise or apply it to novel situations. This is how we progress our knowledge. But the most convincing argument means nothing if we apply it to reality and it doesn't map the territory. So I don't understand why I'd be crazy to think well of Argument screens off authority if reading it makes me demonstrably more rational? Could you point me towards the earlier comments you allude to?

    Articles longer than, say, 1000 characters should be prefaced with a tl;dr (kind of like an abstract, but not quite). One fourth into the article and I still don't know what you are talking about.


    • There is a minimum amount of rationality you need to not flunk reason.

    • Humans don't have it by default but I've given it to you now.

    • I'm going to be busy for awhile.

    • Go be rationalists.

    I like it :-) but I'd put it slightly differently myself.

    • People who try to advance the art of rationality usually get stuck in dead ends - big, twisty dead ends that you can stay lost in.
    • I've tried to set out how to know whether a given direction is a dead end
    • I'm going to be busy for a while
    • Please use my dead-end detector to improve itself.

    "And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem..."

    The issue of racial imbalance on Less Wrong has gotten considerably less attention than gender imbalance. Is this because race is largely socially constructed, and thus not considered a meaningful division? Or is the issue of racial imbalance in this community simply too sensitive to touch?

    I doubt we have good stats on race. Maybe via the Facebook group...?
    Yes I was referring to the Facebook group. I suppose its possible that the racial division is actually in the Facebook group and not in Less Wrong.
    It's also less visible. Some people are sensitive to each, but the people looking for gender differences can claim that any commenter who has a gender neutral name and stance is adding to the apparent dominance of male viewpoints. Only a few have visibly female names. (There aren't many visibly male names, but it's enough given our priors about the on-line population.) It's harder to argue that race-neutral names and viewpoints make the racial or ethnic minorities more clearly minorities, since there aren't enough self-identified of any race or ethnicity to form groupings. Or have I missed a significant discussion of this in some comment thread?
    That seems to be a false dichotomy. The first option implicitly condones a lack of concern for racial balance and implies that gender is not a social construct; the latter assumes that there is widespread sensitivity over the issue of racial balance. More likely, issues of gender interaction are more salient for members of the community than issues of racial interaction, leading us to focus on the former and overlook the latter.
    I certainly wasn't generating a dichotomy. I was merely offering two suggestions. I never said that I was certain that it was one of those two.
    Apologies for the misunderstanding. Often, when someone says, "Is it because A? or is the issue B?" they intend to be suggesting that the explanation is either A or B. I realize this is not always the case, but I (apparently incorrectly) assumed that you were suggesting those as the possible explanations.
    I should note, this explanation for why there is a disparity between how much we attend to the two issues does not make any assumptions about the degree to which we should be attending to either issue, which is a different question entirely.
    Also, race online tends to be much less visible than gender, which can be revealed in something as simple as my name.
    I'm guessing it's this, yes.
    As far as I'm concerned, yes. The impact that gender has on the way we think on the other hand is more significant and in some cases fascinating.
    Wow. I wonder why this comment was voted down, yet generated so many comments. Is it considered off-topic?
    It's because you're right about issues of race being super-sensitive.

    I'd like to know why rationality is constantly referred to as an art instead of a science.

    The defining feature which distinguishes one from the other is that arts don't have self-referential procedures to improve performance and eliminate error; there are supposedly no universal methods to guarantee the production of art.

    Why isn't this forum concerned with developing the science of rationality?

    Because that doesn't happen on blogs? As a (slightly) more serious response, the "Art" is supposed to be a technique or school of techniques with which you can accomplish something. "Toolbox" might be as good a metaphor. Other folks are already advancing logic, cognitive psychology, decision theory, and such things; what we want to do here is put it all together so people can get better.
    Art covers a whole spectrum of meanings. The 'art of rationality' is closer to 'the art of computer programming' than to 'the art of expressionist painting'.

    "And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem..."

    The issue of racial imbalance has gotten considerably less attention than gender imbalance. Is this because race is largely socially constructed, and thus not considered a meaningful division? Or is the issue of racial imbalance simply too sensitive to touch?


    All hail the glorious master! Your disciples hear you and obey!

    Assuming that "Go Forth" is a biblical reference, I think we are a bit past that stage ;-)

    You dare compare me to some pathetic little God?

    My Bible is pretty rusty, but I think it was Noah who most famously said "Go forth and multiply." In your case, it will probably not be multiplication - but rather something to do with posterior probabilities.

    He's actually just engaged in malapropisms. "Go forth, and create the Art!" "Shut up and multiply!"

    Was I trying to imply here that Noah was misquoting Eliezer? Was that supposed to be funny? Sadly, 2009!thomblake is unlikely to respond.
    Stable time loops can be funny, right?
    God is reported to say it in the first chapter of Genesis. If Noah said it too then he would be quoting. Perhaps your Bible needs a few more sacrificial anodes. ... Did you just say what I think you said? Nice!