Note the way I speak with John Baez in the following interview, done months before the present post:
http://johncarlosbaez.wordpress.com/2011/03/25/this-weeks-finds-week-313/
In terms of what I would advocate programming a very powerful AI to actually do, the keywords are “mature folk morality” and “reflective equilibrium”...
In terms of Google keywords, my brand of metaethics is closest to analytic descriptivism or moral functionalism...
I was happy to try and phrase this interview as if it actually had something to do with philosophy.
Although I actually invented the relevant positions myself, on the fly when FAI theory needed them, and then Googled around to find the philosophical nearest neighbor.
The fact that you are skeptical about this, and suspect, I suppose, that I accidentally picked up some analytic descriptivism or mature folk morality elsewhere and then forgot I'd read about it, even though I hadn't gone anywhere remotely near that field of philosophy until I wanted to try speaking their language, well, that strikes at the heart of why all this praise of "mainstream" philosophy strikes me the wrong way. Because the versions of "mature folk morality" and "...
With this comment, I think our disagreement is resolved, at least to my satisfaction.
We agree that philosophy can be useful, and that sometimes it's desirable to speak the common language. I agree that sometimes it is easier to reinvent the wheel, but sometimes it's not.
As for whether Less Wrong is a branch of mainstream philosophy, I'm not much interested in arguing about that. There are many basic assumptions shared by Quinean philosophy and Yudkowskian philosophy in opposition to most philosophers, even down to some very specific ideas like naturalized epistemology, which to my knowledge had not been articulated very well until Quine. And both Yudkowskian philosophy and Quinean naturalism spend an awful lot of time dissolving philosophical debates into cognitive algorithms and challenging intuitionist thinking - so far, those have been the main foci of experimental philosophy, which is very Quinean, and was mostly founded by one of Quine's students, Stephen Stich. Those are the reasons I presented Yudkowskian philosophy as part of the broadly Quinean movement in philosophy.
On the other hand, I'm happy to take your word for it that you came up with most of this stuff on your own, and...
The community definitely needs to work on this whole "virtue of scholarship" thing.
It's not Quinean naturalism. It's logical empiricism with a computational twist. I don't suggest that everyone go out and read Carnap, though. One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas. Premature engagement can minimize all three.
Philosophy quote of the day:
I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.
Aaron Sloman (1978)
According to the link:
Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science.
So, we have a spectacular mis-estimation of the time frame - claiming, 33 years ago, that AI would be seen as important "within a few years". That is off by one order of magnitude (and still counting!). Do we blame his confusion on the fact that he is a philosopher, or was the over-optimism a symptom of his activity as an AI researcher? :)
ETA:
as irresponsible as giving a degree course in physics which includes no quantum theory.
I'm not sure I like the analogy. QM is foundational for physics, while AI merely shares some (as yet unknown) foundation with all those mind-oriented branches of philosophy. A better analogy might be "giving a degree course in biology which includes no exobiology".
Hmmm. I'm reasonably confident that biology degree programs will not include more than a paragraph on exobiology until we have an actual example of exobiology to talk about. So what is the argument for doing otherwise with regard to AI in philosophy?
Oh, yeah. I remember. Philosophers, unlike biologists, have never shied away from investigating things that are not known to exist.
Many mainstream philosophers were defending Less Wrong-ian positions decades before Overcoming Bias or Less Wrong existed.
When I read posts on Overcoming Bias (and sometimes also LW) discussing various human frailties and biases, especially those related to status and signaling, what often pops into my mind are observations by Friedrich Nietzsche. I've found that many of them represent typical OB insights, though expressed in a more poetic, caustic, and disorganized way. Now of course, there's a whole lot of nonsense in Nietzsche, and a frightful amount of nonsense in the subsequent philosophy inspired by him, but his insight about these matters is often first-class.
Also, how about William James and pragmatism? I read Pragmatism recently, and had been meaning to post about the many bits that sound like they could've been cut straight from the sequences -- IIRC, there was some actual discussion of making beliefs "pay" -- in precisely the same manner as the sequences speak of beliefs paying rent.
Yup.
Quinean naturalism, and especially Quine's naturalized epistemology, are merely the "fullest" accounts of Less Wrong-ian philosophy to be found in the mainstream literature. Of course particular bits come from earlier traditions.
Parts of pragmatism (Peirce & Dewey) and pre-Quinean naturalism (Sellars & Dewey and even Hume) are certainly endorsed by much of the Less Wrong community. As far as I can tell, Eliezer's theory of truth is straight-up Peircian pragmatism.
My theory of truth is explicitly Tarskian. I'm explicitly influenced by Korzybski on language and by Peirce on "making beliefs pay rent", but I do think there are meaningful and true beliefs such that we cannot experientially distinguish between them and mutually exclusive alternatives, e.g., a photon going on existing after it passes over the horizon of the expanding universe, as opposed to blinking out of existence.
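For reference, the Tarskian schema being invoked here, in its standard textbook form:

```latex
% Tarski's Convention T: for each sentence P of the object language,
% the truth predicate "disquotes" it.
\mathrm{True}(\ulcorner P \urcorner) \;\leftrightarrow\; P
% Instance: "Snow is white" is true if and only if snow is white.
```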
From my small but nontrivial knowledge of Quine, he always struck me as having a critically wrong epistemology.
LW-style epistemology looks like this: [diagram]
whereas Quine's seems more like: [diagram]
which seems to be missing most of the point.
His boat model always struck me as something confused that should be strongly modified or replaced by a Bayesian epistemology in which the posterior follows logically and non-destructively from the prior, but I may be in the minority on LW about this.
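To make "non-destructively" concrete, here is a minimal sketch of the kind of update I have in mind (a toy coin-bias example of my own; the numbers and names are illustrative, not anything from Quine or the Sequences):

```python
# A minimal non-destructive Bayesian update: the posterior is computed
# from the prior and the likelihoods; no earlier belief is thrown away.
# Toy problem: estimating a coin's bias from observed flips.

hypotheses = [0.25, 0.50, 0.75]            # candidate values of P(heads)
prior = {h: 1 / 3 for h in hypotheses}     # uniform prior

def update(belief, heads):
    """One application of Bayes' rule after observing a single flip."""
    likelihood = {h: (h if heads else 1 - h) for h in belief}
    unnormalized = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(unnormalized.values())
    return {h: mass / total for h, mass in unnormalized.items()}

posterior = prior
for flip in [True, True, False, True]:     # data: H, H, T, H
    posterior = update(posterior, flip)

print(posterior)   # most mass now sits on the 0.75-bias hypothesis
```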
If you're wondering why I'm afraid of philosophy, look no further than the fact that this discussion is assigning salience to LW posts in a completely different way than I do.
I mean, it seems to me that where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project, or the amount of confusion that it permanently and completely dissipates, all of this here is prioritizing LW posts to the extent that they happen to imply positions on famous ongoing philosophical arguments.
That's why I'm afraid to be put into any philosophical tradition, Quinean or otherwise - and why I think I'm justified in saying that their cognitive workflow is not like unto my cognitive workflow.
With this comment at least, you aren't addressing the list of 20+ useful contributions of mainstream philosophy I gave.
Almost none of the items I listed have to do with famous old "problems" like free will or reductionism.
Instead, they're stuff that (1) you're already making direct use of in building FAI, like reflective equilibrium, or (2) stuff that is almost identical to the 'coping with cognitive biases' stuff you've written about so much, like Bishop & Trout (2004), or (3) stuff that is dissolving traditional debates into the cognitive algorithms that produce them, which you seem to think is the defining hallmark of LW-style philosophy, or (4) generally useful stuff like the work on catastrophic risks coming out of FHI at Oxford.
I hope you aren't going to keep insisting that mainstream philosophy has nothing useful to offer after reading my list. On this point, it may be time for you to just say "oops" and move on.
After all, we already agree on most of the important points, like you said. We agree that philosophy is an incredibly diseased discipline. We agree that people shouldn't go out and read Quine. We agree that almost everyone should be reading s...
I can't believe how difficult it is to convince some people that some useful things come out of mainstream philosophy. To me, it's a trivial point.
If it's not immediately obvious how an argument connects to a specific implementable policy or empirical fact, the default is to covertly interpret it as being about status.
Since there are both good and bad things about philosophy, we can choose to emphasize the good (which accords philosophers and those who read them higher status) or emphasize the bad (which accords people who do their own work and ignore mainstream philosophy higher status).
If there are no consequences to this choice, it's more pleasant to dwell upon the bad: after all, the worse mainstream philosophy does, the more useful and original our community looks; the better mainstream philosophy does, the more it suggests our community is a relatively minor phenomenon within a broader movement of people with more resources and prestige than ourselves (and the more those of us whose time is worth less than Eliezer's should be reading philosophy journals instead of doing something less mind-numbing).
I think this community is smart enough to avoid many such biases if...
Personally, I'm finding that avoiding anthropomorphising humans, i.e. ignoring the noises coming out of their mouths in favour of watching their actions, pays off quite well, particularly when applied to myself ;-) I call this the "lump of lard with buttons to push" theory of human motivation. Certainly if my mind had much effect on my behaviour, I'd expect to see more evidence than I do ...
"lump of lard with buttons to push"
I take exception to that: I have a skeletal structure, dammit!
No, they just look like they're doing it; saying humans are anthropomorphizing would attribute more intentionality to humans than is justified by the data.
Okay, I read it. It's funny how Dennett's criticism of Skinner partially mirrors Luke's criticism of Eliezer. Because Skinner uses terminology that's not standard in philosophy, Dennett feels his position needs to be "spruced up".
"Thus, spruced up, Skinner's position becomes the following: don't use intentional idioms in psychology" (p. 60). It turns out that this is Quine's position and Dennett sort of suggests that Skinner should just shut up and read Quine already.
Ultimately, I can understand and at least partially agree with Dennett that Skinner goes too far in denying the value of mental vocabulary. But, happily, this doesn't significantly alter my belief in the value of Skinner type therapy. People naturally tend to err in the other direction and ascribe a more complex mental life to my daughter than is useful in optimizing her therapy. And I still think Skinner is right that objections to behaviorist training of my daughter in the name of 'freedom' or 'dignity' are misplaced.
Anyway, this was a useful thing to read - thank you, ciphergoth!
Thanks so much. I didn't know about Quine, and from what you've quoted it seems quite clearly in the same vein as LessWrong.
Also, out of curiosity, do you know if anything's been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes -- does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?
What you care about determines what your explorations learn about. An AI that didn't care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn't learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.
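A toy sketch of that claim, under the crude assumption that learning is just tallying observations and that goals enter only through where the agent points its sensors (the world, the names, and the numbers are all made up for illustration):

```python
import random

random.seed(0)

# Toy world: 1000 cells, each containing either "energy" or a "dust_speck".
# (Purely illustrative; multiples of 7 happen to hold the energy.)
world = {cell: ("energy" if cell % 7 == 0 else "dust_speck")
         for cell in range(1000)}

def explore(steps, choose_cell):
    """Observe `steps` cells picked by `choose_cell`; tally what was seen."""
    observations = [world[choose_cell()] for _ in range(steps)]
    return {kind: observations.count(kind) for kind in set(observations)}

def goal_directed():
    # An agent that cares about energy steers its attention toward
    # cells it expects to be energy-relevant.
    return random.choice(range(0, 1000, 7))

def goal_free():
    # A goal-free updater samples uniformly at random.
    return random.randrange(1000)

print("goal-directed agent learns:", explore(100, goal_directed))
print("goal-free updater learns:  ", explore(100, goal_free))
# The goal-free run is dominated by dust specks; the goal-directed run
# is dominated by the thing the agent actually cares about.
```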
Part of the sequence: Rationality and Philosophy
Despite Yudkowsky's distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.
Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called "Quinean naturalism" after Harvard's W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I'm not talking about an obscure movement in philosophy.
Let us survey the connections. Quine thought that philosophy was continuous with science - and where it wasn't, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn't try to straightforwardly solve traditional Big Questions so much as he either dissolved those questions or reframed them so that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like "knowledge". He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: knowledge comes from ordinary knowledge, as best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:
I am a physical object sitting in a physical world. Some of the forces of this physical world impinge on my surface. Light rays strike my retinas; molecules bombard my eardrums and fingertips. I strike back, emanating concentric air waves. These waves take the form of a torrent of discourse about tables, people, molecules, light rays, retinas, air waves, prime numbers, infinite classes, joy and sorrow, good and evil.
But isn't this using science to justify science? Isn't that circular? Not quite, say Quine and Yudkowsky. It is merely "reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else." Luckily, the brain is the lens that sees its flaws. And thus, says Quine:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.
Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it."
When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
Non-Quinean philosophy
But I should also mention that LW philosophy / Quinean naturalism is not the largest strain of mainstream philosophy. Most philosophy is still done in relative ignorance of cognitive science (or in deliberate disregard of it). Consider the preface to Rethinking Intuition, which recounts how empirical work on categorization and human judgment (Rips 1975; Rosch & Mervis 1975; Rosch 1978; Smith & Medin 1981; Nisbett & Ross 1980; Kahneman, Slovic & Tversky 1982) convinced a growing number of philosophers that the traditional practice of treating intuitions as evidence needs rethinking.
Conclusion
So Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science - a movement that has been active for at least two decades. Moreover, Less Wrong-style philosophy has its roots in Quinean naturalism from fifty years ago.
And I haven't even covered all the work in formal epistemology toward (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.
So: Rationalists need not dismiss or avoid philosophy.
Update: To be clear, though, I don't recommend reading Quine. Most people should not spend their time reading even Quinean philosophy; learning statistics and AI and cognitive science will be far more useful. All I'm saying is that mainstream philosophy, especially Quinean philosophy, does make some useful contributions. I've listed more than 20 of mainstream philosophy's useful contributions here, including several instances of classic LW dissolution-to-algorithm.
But maybe it's a testament to the epistemic utility of Less Wrong-ian rationality training and thinking like an AI researcher that Less Wrong got so many things right without much interaction with Quinean naturalism. As Daniel Dennett (2006) said, "AI makes philosophy honest."
Next post: Philosophy: A Diseased Discipline
References
Dennett (2006). Computers as Prostheses for the Imagination. Talk presented at the International Computers and Philosophy Conference, Laval, France, May 3, 2006.
Kahneman, Slovic, & Tversky (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.
Nisbett & Ross (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice-Hall.
Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665-681.
Rosch (1978). Principles of categorization. In Rosch & Lloyd (eds.), Cognition and Categorization (pp. 27-48). Lawrence Erlbaum Associates.
Rosch & Mervis (1975). Family resemblances: studies in the internal structure of categories. Cognitive Psychology, 7: 573-605.
Smith & Medin (1981). Categories and Concepts. Harvard University Press.