I guess we're not disagreeing about much, at this point, though I think that you're basically more optimistic than I, and this might cause us to form different conceptions of the "overcoming bias" enterprise. I agree that we're not Eurisko (and suddenly I'm remembering Lenat's talk at IJCAI-77, explaining AM's fixed-heuristics problem that then led him to Eurisko... I was a graduate student) but my feeling is that we don't in general even have the choice of using a given heuristic less: we don't in general have the choice of becoming a less initially biased person. Sometimes we do, and it's worth a try, I'll admit that. In general, however, I don't think much of my own rationality in speech or action or even writing: it's mainly in proofreading, especially shared proofreading, that we have the chance to overcome our biases. For this purpose, it's perfectly possible to say "this is a valuation by prototype" or whatever, and then think a meta-thought about errors found in association with that heuristic. (Nor do I really believe that we commonly have heuristics that aren't associated with bias--systematic error--it's just a question of identification and of doing the best we can. Not error-free, but error-correction.)
Of course in order to do that, you need to be conscious of your heuristics, which isn't always possible either, but when you try to explain your opinions to somebody else, sometimes you notice the rule of inference you're applying, and then take a step backwards. And another. And another, until the metaphor falls off the cliff. :-) But until transhumanism actually works, or until Lenat successfully mixes Eurisko and Cyc (and, as he said in 1977: "It's our last problem. They'll handle the next one"), I think it's the best we can do, and I get the feeling you think we can do better. But I have no confidence in such feelings.
Douglas Knight: for me, thinking of "bias" (as used on this blog) as a result of heuristic processing is moderately useful 'cos (a) mainly, it just gives a general framework, a set of very concrete metaphors and therefore heuristics (and therefore biases) that I've worked with over the years; (b) it suggests that the problem of bias can be ameliorated but not solved, because you'll never get perfect heuristics and you'll never be able to do all the computing that's required to do without heuristics; (c) ah, well, I forget what (c) was gonna be. But it's useful for me, and I wanted to know if anybody here would point to a reason why I shouldn't use it. Nobody did, yet.
When you speak of the cost/value of "overcoming" heuristics, that's interesting...it jars slightly, which is good. I'm used to ideas of balancing out heuristics, of using meta-heuristics (i.e., explicit knowledge of the bias introduced by a particular heuristic) and such for overcoming the bias of a heuristic, but overcoming heuristics...strange. I'm not sure why that jars, but I thank you.
My mention of people-shredders was merely to distinguish that kind of torture, punitive/deterrent torture, from interrogative torture. The distinction matters because when I see people (say, Max Boot) defending practices which others class as torture, they aren't defending punitive/deterrent torture (or confession-inducing torture) at all; they're defending "interrogation techniques". Posts such as this one, I believe, lose some of their impact because they don't go as far as they could in achieving clarity; the people who might be criticized will, if exposed to this post, think of it as a straw-man argument: "That's not me at all". BTW, the people-shredders specifically may never have existed; if I'd remembered that as I typed, I'd have used Saddam's deterrent amputations instead:
nine Iraqi men whose right hands were amputated in 1995 on orders of Saddam Hussein as punishment for their alleged crime of dealing in American dollars.
Yes, of course Winston "survives", in a sense. He's not executed. I was remembering, quite probably misremembering (can't find my copy, I think my eldest son took it years ago) a passage in which execution is represented as too easy: first he has to Love Big Brother, and then after that it doesn't matter if he's executed or not. Something about dominance, as I tried to say. (But is the Winston at the end...hmm. Is that survival? Maybe HA would consider it so; personally I felt that Winston the person had been destroyed. The politics of personal destruction, as it were. Unless I'm misremembering quite drastically, which is always possible.)
Douglas, with regard to systematic but unexplained errors, would you agree that we can (usually) describe these as due to unidentified heuristics? I'm feeling very unsure about that, but I would like to have some fairly concrete way of thinking about this blog's subject matter, and at least this way it's something I've taught. :-) I'm not about to insist that all thought can be modeled with symbol-processing. It may even be that the most fundamental errors are those that arise without any symbol-processing -- I've just been reading a dog-trainer's book which emphasizes the errors we make in dealing with our dogs, simply because we automatically do what other primates do even when we've verbally worked out that we shouldn't. Still, by the time you get to correcting a bias found in a less-obviously-symbolic process, say a neural net or genetic learning-algorithm, I think you're describing that bias and its correction via rules on patterns of symbols -- and that's the way I've been seeing this blog. (Is everybody asleep yet? Well, I suspect this thread was drying up anyway -- as you can see, I have a bias in favor of mixed metaphors.)
Douglas and g both respond in terms of cost, apparently agreeing that the value of bias-correction (in any given context) will be limited, and it may not be the best use of whatever resources you've got...this again seems Cowenesque to me, and I'm reasonably happy with it; we have few absolutes showing except for (a) HA's survival (which I tentatively rate as a rhetorical position, unlikely to show itself in HA's real-world behavior), and (b) anti-torture.
I was going to leave the torture theme strictly alone because I've never seen it come to any good end, but maybe it's worth saying: when you say you're "against torture", you haven't actually told me much at all because too many people mean too many things by "torture". In a blog post apparently dedicated to clarity, I think that's extraordinarily important.
Let me clarify: consider a long interrogation under bright lights when you want to sleep; is that torture? Some will very sincerely say yes, some will very sincerely (I believe) say no, and some will draw the line at some specific number of hours of sleep-deprivation. If someone opposes more than 10 hours as "torture", and you oppose more than 5 hours as "torture", you can class them as "pro-torture" but it strikes me as an anti-clarity sort of rhetorical move. Is (fake? real?) menstrual blood-smearing torture? Again, apparently sincere disagreement exists. Is (some specific kind of) fraternity hazing torture? Is boot camp torture? Do I have an opinion on any of those? Not really: my primary torture stance is not anti-torture or pro-torture, it's pro-clarity.
Specifically, I would focus on interrogation -- leaving out O'Brien and Winston, just as I would leave out Saddam's people-shredders, on the grounds that the purposes involved are not obviously those of "enhanced interrogation" -- O'Brien isn't at that point looking for information, he's destroying Winston prior to execution for reasons I've never understood, but which I suspect the pyramid-of-prisoners people understood perfectly well; it has something to do with dominance. (Yes?) And I would not start out with rules that say "10 hours yes, 11 hours no" or "uncomfortable chair yes, stress position no". I would not even start out with a rule against waterboarding. I would start out with a rule that says that (1) interrogation should always be on video, that (2) any sequence of interrogation techniques (insults? lights? Madonna videos? waterboarding?) should be precisely described, that (3) any interrogator should have previously undergone any technique he or she applies on a publicly-available "licensing video", and that (4) the interrogation video should immediately begin a chain of viewings and authorizations which ends with actual publication no more than N years (10?) later. I could go on about what I mean by "precisely described" and how the authorizers would be authorized and all that, but I suspect that (a) most people who describe themselves as "anti-torture" are, by now, angry, even though (b) any such pro-clarity rule would end up ruling out whatever it is that ordinary people think of as torture, while allowing the anti-torture people a maximal opportunity to explain specifically what they meant and why all of these (or all but X, Y, and Z) should be outlawed. So... gee. Am I biased in favor of clarity? Yes, but only to a limited extent. And I'm glad to learn from people who are biased against bias more than I am, especially if I can figure out what they mean.
I'm puzzled, as usual, but perhaps more so: this post has helped clarify the lack of clarity in my understanding of "bias" as the word is used here. You see, I don't in general see an a priori distinction between "bias" and other kinds of heuristics; we are talking about computational shortcuts, ways to save on reasoning, and all of them go wrong sometimes. I'm glad to have scope insensitivity pointed out to me, and having seen the discussion on this blog may even keep me from some error at some time, but my reasoning will always be incomplete, my models will always fall short of being "isomorphic with reality." (A modeler's joke, along the lines of using a territory as its own map.) I am tempted to think of "bias" as a label for the systematic errors introduced by any given heuristic, such as valuation by prototype; is this fair?
If so, "overcoming bias" is one of those journey-rather-than-destination unattainables that at any given moment faces me with an economic sort of choice: I can spend more resources trying to overcome the bias introduced by each heuristic I use, i.e. in meta-inference, or I can spend more resources actually carrying out inferences by those heuristics -- expanding and pruning nodes in my actual current search-graph, so to speak, and coming to conclusions even though I know that some of them will turn out to have been wrong.
Is this an obviously bad way to think? Probably so -- because it leans me in Tyler's direction. Well, I dunno. This comment is an extremely imperfect representation of what I want to say... but I post it, on the grounds that I have to go listen to Dougie MacLean, a Scottish singer and songwriter, and achieving a more perfect (but never actually perfect) representation of what I want to say has a finite value, which at the margin somewhere loses out to getting on with the next thing. I thought that was true of everybody.