Part of the sequence: Rationality and Philosophy

Despite Yudkowsky's distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.

Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called "Quinean naturalism" after Harvard's W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I'm not talking about an obscure movement in philosophy.

Let us survey the connections. Quine thought that philosophy was continuous with science - and where it wasn't, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn't try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like knowledge. He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: knowledge comes from ordinary knowledge, as best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:

The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?

But isn't this using science to justify science? Isn't that circular? Not quite, say Quine and Yudkowsky. It is merely "reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else." Luckily, the brain is the lens that sees its flaws. And thus, says Quine:

Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.

Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it."

When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!



Non-Quinean philosophy

But I should also mention that LW philosophy / Quinean naturalism is not the largest strain of mainstream philosophy. Most philosophy is still done in relative ignorance (or ignoring) of cognitive science. Consider the preface to Rethinking Intuition:

Perhaps more than any other intellectual discipline, philosophical inquiry is driven by intuitive judgments, that is, by what "we would say" or by what seems true to the inquirer. For most of philosophical theorizing and debate, intuitions serve as something like a source of evidence that can be used to defend or attack particular philosophical positions.

One clear example of this is a traditional philosophical enterprise commonly known as conceptual analysis. Anyone familiar with Plato's dialogues knows how this type of inquiry is conducted. We see Socrates encounter someone who claims to have figured out the true essence of some abstract notion... the person puts forward a definition or analysis of the notion in the form of necessary and sufficient conditions that are thought to capture all and only instances of the concept in question. Socrates then refutes his interlocutor's definition of the concept by pointing out various counterexamples...

For example, in Book I of the Republic, when Cephalus defines justice in a way that requires the returning of property and total honesty, Socrates responds by pointing out that it would be unjust to return weapons to a person who had gone mad or to tell the whole truth to such a person. What is the status of these claims that certain behaviors would be unjust in the circumstances described? Socrates does not argue for them in any way. They seem to be no more than spontaneous judgments representing "common sense" or "what we would say." So it would seem that the proposed analysis is rejected because it fails to capture our intuitive judgments about the nature of justice.

After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...

Refutations by intuitive counterexamples figure as prominently in today's philosophical journals as they did in Plato's dialogues...

...philosophers have continued to rely heavily upon intuitive judgments in pretty much the way they always have. And they continue to use them in the absence of any well articulated, generally accepted account of intuitive judgment - in particular, an account that establishes their epistemic credentials.

However, what appear to be serious new challenges to the way intuitions are employed have recently emerged from an unexpected quarter - empirical research in cognitive psychology.

With respect to the tradition of seeking definitions or conceptual analyses that are immune to counterexample, the challenge is based on the work of psychologists studying the nature of concepts and categorization judgments. (See, e.g., Rosch 1978; Rosch and Mervis 1975; Rips 1975; Smith and Medin 1981). Psychologists working in this area have been pushed to abandon the view that we represent concepts with simple sets of necessary and sufficient conditions. The data seem to show that, except for some mathematical and geometrical concepts, it is not possible to use simple sets of conditions to capture the intuitive judgments people make regarding what falls under a given concept...

With regard to the use of intuitive judgments exemplified by reflective equilibrium, the challenge from cognitive psychology stems primarily from studies of inference strategies and belief revision. (See, e.g., Nisbett and Ross 1980; Kahneman, Slovic, and Tversky 1982.) Numerous studies of the patterns of inductive inference people use and judge to be intuitively plausible have revealed that people are prone to commit various fallacies. Moreover, they continue to find these fallacious patterns of reasoning to be intuitively acceptable upon reflection... Similarly, studies of the "intuitive" heuristics ordinary people accept reveal various gross departures from empirically correct principles...

There is a growing consensus among philosophers that there is a serious and fundamental problem here that needs to be addressed. In fact, we do not think it is an overstatement to say that Western analytic philosophy is, in many respects, undergoing a crisis where there is considerable urgency and anxiety regarding the status of intuitive analysis.
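The psychological point in the passage above - that most concepts resist definition by necessary and sufficient conditions, while graded, prototype-style representations fit the data better - can be sketched with a toy example. All feature names and examples below are invented for illustration; they are not drawn from the cited studies.

```python
# Illustrative contrast (names and features invented for this sketch):
# the classical view defines a concept by necessary and sufficient
# conditions; prototype theory (Rosch-style) treats membership as a
# graded degree of similarity to a prototype.

def classical_bird(x):
    # Necessary-and-sufficient definition; brittle at the edges.
    return x["has_feathers"] and x["flies"] and x["lays_eggs"]

def prototype_similarity(x, prototype):
    # Graded membership: fraction of prototype features the item shares.
    shared = sum(1 for k in prototype if x.get(k) == prototype[k])
    return shared / len(prototype)

BIRD_PROTOTYPE = {"has_feathers": True, "flies": True, "lays_eggs": True}
robin = {"has_feathers": True, "flies": True, "lays_eggs": True}
penguin = {"has_feathers": True, "flies": False, "lays_eggs": True}

print(classical_bird(robin))     # True
print(classical_bird(penguin))   # False: the definition wrongly excludes it
print(round(prototype_similarity(penguin, BIRD_PROTOTYPE), 2))  # 0.67
```

The counterexample game Socrates plays is, on this picture, exactly what you would expect: any short conjunction of conditions will wrongly include or exclude items that sit at a graded category's fringe.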

 

Conclusion

So Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science - a movement that has been active for at least two decades. Moreover, Less Wrong-style philosophy has its roots in Quinean naturalism from fifty years ago.

And I haven't even covered all the work in formal epistemology toward (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.

So: Rationalists need not dismiss or avoid philosophy.

Update: To be clear, though, I don't recommend reading Quine. Most people should not spend their time reading even Quinean philosophy; learning statistics and AI and cognitive science will be far more useful. All I'm saying is that mainstream philosophy, especially Quinean philosophy, does make some useful contributions. I've listed more than 20 of mainstream philosophy's useful contributions here, including several instances of classic LW dissolution-to-algorithm.

But maybe it's a testament to the epistemic utility of Less Wrong-ian rationality training and thinking like an AI researcher that Less Wrong got so many things right without much interaction with Quinean naturalism. As Daniel Dennett (2006) said, "AI makes philosophy honest."

 

Next post: Philosophy: A Diseased Discipline

 

 

References

Dennett (2006). Computers as Prostheses for the Imagination. Talk presented at the International Computers and Philosophy Conference, Laval, France, May 3, 2006.

Kahneman, Slovic, & Tversky (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.

Nisbett & Ross (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice-Hall.

Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665-681.

Rosch (1978). Principles of categorization. In Rosch & Lloyd (eds.), Cognition and Categorization (pp. 27-48). Lawrence Erlbaum Associates.

Rosch & Mervis (1975). Family resemblances: studies in the internal structure of categories. Cognitive Psychology, 7: 573-605.

Smith & Medin (1981). Categories and Concepts. Harvard University Press.

Less Wrong Rationality and Mainstream Philosophy
335 comments

Note the way I speak with John Baez in the following interview, done months before the present post:

http://johncarlosbaez.wordpress.com/2011/03/25/this-weeks-finds-week-313/

In terms of what I would advocate programming a very powerful AI to actually do, the keywords are “mature folk morality” and “reflective equilibrium”...

In terms of Google keywords, my brand of metaethics is closest to analytic descriptivism or moral functionalism...

I was happy to try and phrase this interview as if it actually had something to do with philosophy.

Although I actually invented the relevant positions myself, on the fly when FAI theory needed it, then Googled around to find the philosophical nearest neighbor.

The fact that you are skeptical about this, and suspect I suppose that I accidentally picked up some analytic descriptivism or mature folk morality elsewhere and then forgot I'd read about it, even though I hadn't gone anywhere remotely near that field of philosophy until I wanted to try speaking their language, well, that strikes at the heart of why all this praise of "mainstream" philosophy strikes me the wrong way. Because the versions of "mature folk morality" and "…

With this comment, I think our disagreement is resolved, at least to my satisfaction.

We agree that philosophy can be useful, and that sometimes it's desirable to speak the common language. I agree that sometimes it is easier to reinvent the wheel, but sometimes it's not.

As for whether Less Wrong is a branch of mainstream philosophy, I'm not much interested to argue about that. There are many basic assumptions shared by Quinean philosophy and Yudkowskian philosophy in opposition to most philosophers, even down to some very specific ideas like naturalized epistemology that to my knowledge had not been articulated very well until Quine. And both Yudkowskian philosophy and Quinean naturalism spend an awful lot of time dissolving philosophical debates into cognitive algorithms and challenging intuitionist thinking - so far, those have been the main foci of experimental philosophy, which is very Quinean, and was mostly founded by one of Quine's students, Stephen Stich. Those are the reasons I presented Yudkowskian philosophy as part of the broadly Quinean movement in philosophy.

On the other hand, I'm happy to take your word for it that you came up with most of this stuff on your own, and…

ToddStark:
On the general issue of the origin of various philosophical ideas, I had a thought. Perhaps we take a lot of our tacit knowledge for granted in our thinking about attributions. I suspect that abstract ideas become part of wider culture and then serve as part of the reasoning of other people without them explicitly realizing the role of those abstracts.

For example, Karl Popper had a concept of "World 3", which was essentially the world of artifacts that are inherited from generation to generation and become a kind of background for the thinking of each successive generation who inherits that culture. That concept of "unconscious ideas" was also found in a number of other places (and has been for as far back as we can remember) and has been incorporated into many theories and explanations of varying usefulness. Some of Freud's ideas have a similar rough feel to them, and his albeit unscientific ideas became highly influential in popular culture and influenced all sorts of things, including some productive psychology programs that emphasize influences outside of explicit awareness.

Our thinking is given shape in part by a background that we aren't explicitly aware of, and as a result we can't always make accurate attributions of intellectual history except in terms of what has been written down. Some of the influence happens outside of our awareness via various mechanisms of implicit or tacit learning. We know a lot more than we realize we know; we "stand on the shoulders of others" in a somewhat obscure sense as well as the more obvious one.

An important implication of this might be that our reasoning starts from assumptions and conceptual schemes that we don't really think about, because it is "intuitive" and appears to each of us as "commonsense." However, it may be that "commonsense" and "intuition" are forms of ubiquitous expertise that differ somewhat between people. If that is the case, then people reason from different starting points and perhaps can reas…
BobTheBob:
You say, and that you prefer to "invent all these things the correct way". From this and your preceding text I understand:

* that philosophers have identified some meta-ethical theses and concepts similar to concepts and theses you've invented all by yourself,
* that the philosophers' theses and concepts are in some way systematically defective or inadequate, and
* that the arguments used to defend the theses are different than the arguments which you would use to defend them.

(I'm not sure what you mean in saying the concepts and theses aren't optimized for Friendly-AI thinking.) You imply that you've done a comprehensive survey to arrive at these conclusions. It'd be great if you could share the details. Which discussions of these ideas have you studied, how do your concepts differ from the philosophers', and what specifically are the flaws in the philosophers' versions? I'm not familiar with these meta-ethical theses, but I see that Frank Jackson and Philip Pettit are credited with sparking the debate in philosophy - what in their thinking do you find inadequate? And what makes your method of invention (to use your term) of these things the correct one? I apologize if the answers to these questions are all contained in your sequences. I've looked at some of them, but the ones I've encountered do not answer these questions.

You disparage the value of philosophy, but it seems to me you could benefit from it. In another of your posts, 'How An Algorithm Feels From Inside', I came across the following: This is false - the claim, I mean, that when you look at a green cup, you are seeing a picture in your visual cortex. On the contrary, the thing you see is reflecting light, is on the table in front of you (say), has a mass of many grams, is made of ceramic (say), and on and on. It's a cup - it emphatically is not in your brainpan. Now, if you want to counter that I'm just quibbling over the meaning of the verb 'to see', that's fine - my point is that it is yo…
[anonymous]:

The community definitely needs to work on this whole "virtue of scholarship" thing.

Davorak:
LW community or the philosophy community?
[anonymous]:
I was talking about the LW community.
Nisan:

That's Kornblith and Stich and Bickle [...]

Those names are clearly made-up :)

djc:

It's not Quinean naturalism. It's logical empiricism with a computational twist. I don't suggest that everyone go out and read Carnap, though. One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas. Premature engagement can minimize all three.

lukeprog:
To some degree. It might be more precise to say that many AI programs in general are a computational update to Carnap's The Logical Structure of the World (1928). But logical empiricism as a movement is basically dead, while what I've called Quinean naturalism is still a major force.
Jack:
I'd actually say the central shared features that you're identifying - the dissolving of philosophical paradoxes instead of reifying them, as well as the centrality of observation and science - go back to Hume.

Philosophy quote of the day:

I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.

Aaron Sloman (1978)

According to the link:

Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science.

So, we have a spectacular mis-estimation of the time frame - claiming 33 years ago that AI would be seen as important "within a few years". That is off by one order of magnitude (and still counting!) Do we blame his confusion on the fact that he is a philosopher, or was the over-optimism a symptom of his activity as an AI researcher? :)

ETA:

as irresponsible as giving a degree course in physics which includes no quantum theory.

I'm not sure I like the analogy. QM is foundational for physics, while AI merely shares some (as yet unknown) foundation with all those mind-oriented branches of philosophy. A better analogy might be "giving a degree course in biology which includes no exobiology".

Hmmm. I'm reasonably confident that biology degree programs will not include more than a paragraph on exobiology until we have an actual example of exobiology to talk about. So what is the argument for doing otherwise with regard to AI in philosophy?

Oh, yeah. I remember. Philosophers, unlike biologists, have never shied away from investigating things that are not known to exist.

ata:
He didn't necessarily predict that AI would be seen as important in that timeframe; what he said was that if it wasn't, philosophers would have to be incompetent and their teaching irresponsible.
wedrifid:
Full marks... but let's be honest, he doesn't get too many difficulty points for making that prediction...
lukeprog:
I didn't read the whole article. Where did Sloman claim that AI would be seen as important within a few years?
Perplexed:
I inferred that he would characterize it as important in that time frame from: together with a (perhaps unjustified) assumption that philosophers refrain from calling their colleagues "professionally incompetent" unless the stakes are important. And that they generally do what is fair.

Many mainstream philosophers have been defending Less Wrong-ian positions for decades before Overcoming Bias or Less Wrong existed.

When I read posts on Overcoming Bias (and sometimes also LW) discussing various human frailties and biases, especially those related to status and signaling, what often pops into my mind are observations by Friedrich Nietzsche. I've found that many of them represent typical OB insights, though expressed in a more poetic, caustic, and disorganized way. Now of course, there's a whole lot of nonsense in Nietzsche, and a frightful amount of nonsense in the subsequent philosophy inspired by him, but his insight about these matters is often first-class.

MichaelVassar:
I agree with this actually.
pjeby:

Also, how about William James and pragmatism? I read Pragmatism recently, and had been meaning to post about the many bits that sound like they could've been cut straight from the sequences -- IIRC, there was some actual discussion of making beliefs "pay" -- in precisely the same manner as the sequences speak of beliefs paying rent.

Yup.

Quinean naturalism, and especially Quine's naturalized epistemology, are merely the "fullest" accounts of Less Wrong-ian philosophy to be found in the mainstream literature. Of course particular bits come from earlier traditions.

Parts of pragmatism (Peirce & Dewey) and pre-Quinean naturalism (Sellars & Dewey and even Hume) are certainly endorsed by much of the Less Wrong community. As far as I can tell, Eliezer's theory of truth is straight-up Peircian pragmatism.

Perplexed:
I see it as a closer match to Korzybski by way of Hayakawa.
lukeprog:
Eliezer's philosophy of language is clearly influenced by Korzybski via Hayakawa, but what is Korzybski's theory of truth? I'm just not familiar.
Perplexed:
Maybe I'm out of my depth here. But from a semantic standpoint, I thought that a theory of language pretty much is a theory of truth. At least in mathematical logic with Tarskian semantics, the meaning of a statement is given by saying what conditions make the statement true.
lukeprog:
Perplexed, Truth-conditional accounts of truth, associated with Tarski and Davidson, are popular in philosophy of language. But most approaches to language do not contain a truth-conditional account of truth. Philosophy of language is most reliably associated with a theory of meaning: How is it that words and sentences relate to reality? You might be right that Eliezer's theory of truth comes from something like Korzybski's (now defunct) theory of language, but I'm not familiar with Korzybski's theory of truth.

My theory of truth is explicitly Tarskian. I'm explicitly influenced by Korzybski on language and by Peirce on "making beliefs pay rent", but I do think there are meaningful and true beliefs such that we cannot experientally distinguish between them and mutually exclusive alternatives, i.e., a photon going on existing after it passes over the horizon of the expanding universe as opposed to it blinking out of existence.
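For readers unfamiliar with the reference: the core of a Tarskian account of truth is the T-schema, which any adequate truth definition must entail for every sentence of the language. The standard example (stated here informally; Tarski's original treatment is for formalized languages) is:

```latex
% Tarski's T-schema, standard instance:
% a sentence's truth predicate is tied to the sentence itself.
\text{``Snow is white'' is true} \iff \text{snow is white}
```

Note that the schema says nothing about whether the truth condition is experientially checkable, which is why it is compatible with the photon-beyond-the-horizon example above.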

lukeprog:
Thanks for clarifying! For the record, my own take: As a descriptive theory of how humans use language, I think truth-conditional accounts of meaning are inadequate. But that's the domain of contemporary linguistics, anyway - which tends to line up more with the "speech acts" camp in philosophy of language. But we need something like a Tarskian theory of language and truth in order to do explicit AI programming, so I'm glad we've done so much work on that. And in certain contexts, philosophers can simply adopt a Tarskian way of talking rather than a more natural-language way of talking - if they want to. And I agree about there being meaningful and true beliefs that we cannot experientially distinguish. That is one point at which you and I disagree with the logical positivists and, I think, Korzybski.
Perplexed:
I'm only familiar with it through Hayakawa. The reference you provided to support your claim that the General Semantics theory of language is "defunct" says this about the GS theory of truth: All of which sounds pretty close to Davidson and Tarski to me, though I'm not an expert. And not all that far from Yudkowsky. I made my comment mentioning Language in Thought and Action before reading your post. I now see that your point was to fit Eliezer into the mainstream of Anglophone philosophy. I agree; he fits pretty well. And in particular, I agree (and regret) that he has been strongly influenced, directly or indirectly, by W. V. O. Quine. I'm not sure why I decided to mention Hayakawa's book - since it (like the sequences) definitely is too lowbrow to be part of that mainstream. I didn't mean for my comment to be taken as disagreement with you. I only meant to contribute some of that scholarship that you are always talking about. My point is, simply speaking, that if you are curious about where Eliezer 'stole' his ideas, you will find more of them in Hayakawa than in Peirce.
lukeprog:
Probably, though Yudkowsky quotes Peirce here.
hairyfigment:
Korzybski's theory of language places the source of meaning in non-verbal reactions to 'basic' undefined terms, or terms that define each other. This has two consequences for his theory of truth. First, of course, he thinks we should determine truth using non-verbal experience. Second, he explicitly tries to make his readers adopt 'undefined terms' and the associated reactions from math and science, due to the success of these systems. Korzybski particularly likes the words "structure," "relation," and "order" -- he calls science structural knowledge and says its math has a structure similar to the world. As near as I can tell, he means by this that if b follows a in the theory then those letters should represent some B and A which have the 'same' relation out in the world. I don't know that 2011 science rejects his theory of language. His grand attempt to produce a system like Aristotle's does seem like a sad tale in that, while his verbal formulation of the "logic of probability" seems accurate, he couldn't apply it despite knowing more than enough math to do so.

From my small but nontrivial knowledge of Quine, he always struck me as having a critically wrong epistemology.

LW-style epistemology looks like this:

  1. Let's figure out how a perfectly rational being (AI) learns.
  2. Let's figure out how humans learn.
  3. Let's use that knowledge to fix humans so that they are more like AIs.

whereas Quine's seems more like

  1. Let's figure out how humans learn

which seems to be missing most of the point.

His boat model always struck me as something confused that should be strongly modified or replaced by a Bayesian epistemology in which posterior follows logically and non-destructively from prior, but I may be in the minority in LW on this.
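The update rule the comment gestures at can be sketched in a few lines. The hypothesis names and all the numbers below are arbitrary placeholders, chosen only to show that the posterior follows mechanically from prior and likelihoods:

```python
# Minimal sketch of a Bayesian update: posterior is proportional to
# prior times likelihood. Hypotheses and probabilities here are
# invented for illustration.

def bayes_update(prior, likelihood):
    """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(evidence | h)}."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())  # P(evidence), the normalizing constant
    return {h: p / z for h, p in unnormalized.items()}

prior = {"H1": 0.5, "H2": 0.5}        # start undecided
likelihood = {"H1": 0.8, "H2": 0.2}   # evidence is 4x likelier under H1

posterior = bayes_update(prior, likelihood)
print(posterior)  # {'H1': 0.8, 'H2': 0.2}
```

Nothing is thrown overboard: today's posterior simply becomes tomorrow's prior, which is the "non-destructive" sense in which the posterior follows from the prior.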

lukeprog:
It's true that Quine lacked the insights of contemporary probability theory and AI, but remember that Quine's most significant work was done before 1970. Quine was also a behaviorist. He was wrong about many things. My point was that both Quine and Yudkowsky think that recursive justification bottoms out in using the lens that sees its own flaws to figure out how humans gain knowledge, and correcting mistakes that come in. That's naturalized epistemology right there. Epistemology as cognitive science. Of course, naturalized epistemology has made a lot of progress since then thanks to the work of Kahneman and Tversky and Pearl and so on - the people that Yudkowsky learned from.

If you're wondering why I'm afraid of philosophy, look no further than the fact that this discussion assigns salience to LW posts in a completely different way than I do.

I mean, it seems to me that where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project, or the amount of confusion that it permanently and completely dissipates, all of this here is prioritizing LW posts to the extent that they happen to imply positions on famous ongoing philosophical arguments.

That's why I'm afraid to be put into any philosophical tradition, Quinean or otherwise - and why I think I'm justified in saying that their cognitive workflow is not like unto my cognitive workflow.

With this comment at least, you aren't addressing the list of 20+ useful contributions of mainstream philosophy I gave.

Almost none of the items I listed have to do with famous old "problems" like free will or reductionism.

Instead, they're stuff that (1) you're already making direct use of in building FAI, like reflective equilibrium, or (2) stuff that is almost identical to the 'coping with cognitive biases' stuff you've written about so much, like Bishop & Trout (2004), or (3) stuff that is dissolving traditional debates into the cognitive algorithms that produce them, which you seem to think is the defining hallmark of LW-style philosophy, or (4) generally useful stuff like the work on catastrophic risks coming out of FHI at Oxford.

I hope you aren't going to keep insisting that mainstream philosophy has nothing useful to offer after reading my list. On this point, it may be time for you to just say "oops" and move on.

After all, we already agree on most of the important points, like you said. We agree that philosophy is an incredibly diseased discipline. We agree that people shouldn't go out and read Quine. We agree that almost everyone should be reading s…

Emile:
I think it would be good for LessWrong to have a few more academic philosophers and students of philosophy - a slightly higher philosophers/programmers ratio (as long as it doesn't come with the expectation that everybody should understand a lot of concepts in philosophy that aren't in the sequences).
loup-vaillant:
I'm late, but… is there a substantial chain of cause and effect between the discovery of useful conclusions in mainstream philosophy and Eliezer's use of those conclusions? Counterfactually, if those conclusions had not been drawn, would Eliezer have been less likely to arrive at them anyway? Eliezer seems to deny this chain of cause and effect. I wonder to what extent you think such a denial is unjustified.
Vladimir_Nesov:
You still haven't given an actual use case for your sense of "useful", only historical priority (the qualifier "come out" is telling, for example), and haven't connected your discussion that involves the word "useful" to the use case Eliezer assumes (even where you answered that side of the discussion without using the word, by agreeing that particular use cases for mainstream philosophy are a loss). It's an argument about definition of "useful", or something hiding behind this equivocation. I suggest tabooing "useful", when applied to literature (as opposed to activity with stated purpose) on your side.
lukeprog:
Eliezer and I, over the course of our long discussion, have come to some understanding of what would constitute useful. Though, Philosophy_Tutor suggested that Eliezer taboo his sense of "useful" before trying to declare every item on my list useless. Whether or not I can provide a set of necessary and sufficient conditions for "useful", I've repeatedly pointed out that:

1. Several works from mainstream philosophy do the same things he has spent a great deal of time doing and advocating on Less Wrong, so if he thinks those works are useless then it would appear he thinks much of what he has done on Less Wrong is useless.
2. Quite a few works from mainstream philosophy have been used by him, so presumably he finds them useful.

I can't believe how difficult it is to convince some people that some useful things come out of mainstream philosophy. To me, it's a trivial point. Those resisting this truth keep trying to change the subject and make it about how philosophy is a diseased subject (agreed!), how we shouldn't read Quine (agreed!), how other subjects are more important and useful (agreed!), and so on.

I can't believe how difficult it is to convince some people that some useful things come out of mainstream philosophy. To me, it's a trivial point.

If it's not immediately obvious how an argument connects to a specific implementable policy or empirical fact, the default is to covertly interpret it as being about status.

Since there are both good and bad things about philosophy, we can choose to emphasize the good (which accords philosophers and those who read them higher status) or emphasize the bad (which accords people who do their own work and ignore mainstream philosophy higher status).

If there are no consequences to this choice, it's more pleasant to dwell upon the bad: after all, the worse mainstream philosophy does, the more useful and original this makes our community; the better mainstream philosophy does, the more it suggests our community is a relatively minor phenomenon within a broader movement of other people with more resources and prestige than ourselves (and the more those of us whose time is worth less than Eliezer's should be reading philosophy journals instead of doing something less mind-numbing).

I think this community is smart enough to avoid many such biases if...

5lukeprog
Maybe my original post gave the wrong impression of "which side I'm on." (Yay philosophy or no?) Like Quine and Yudkowsky, I've generally considered myself an "anti-philosophy philosopher." But you're right that such vague questions and categorizations are not really the point. The solution is to present specific useful insights of mainstream philosophy, and let the LW community make use of them. I've done that in brief, here, and am working on posts to elaborate some of those items in more detail. What disappoints me is the double standard being used (by some) for what counts as "useful" when presented in AI books or on Less Wrong, versus what counts as "useful" when it happens to come from mainstream philosophy.
-3Vladimir_Nesov
I don't think there is a double standard involved. There are use cases (plans) that distinguish LW from mainstream philosophy and that make philosophy less useful for those plans. There are other use cases where philosophy would be more useful. Making an overall judgment would depend on which use cases are important. The concept of "useful" that leads to a classification which marks philosophy "not useful" might be one you don't endorse, but we already discussed a few examples showing that such concepts can be natural, even if you'd prefer not to identify them with "usefulness".

A double standard would filter evidence differently when considering the things it's a double standard about. If we are talking about particular use cases, I don't think there was significant distortion of attention paid to either case. A point where evidence could be filtered in favor of LW would be a focus on particular use cases, but that charge depends on the importance of those use cases and their alternatives to the people selecting them. So far, you haven't given such a selection that favors philosophy, and in fact you've agreed on the status of the use cases named by others. So, apart from your intuition that "useful" is an applicable label, not much about the rules of reasoning and motivation behind your claim has been given.

Why is it interesting to discuss whether mainstream philosophy is "useful" in the sense you mean this concept? If we are to discuss it, what kinds of arguments would tell us more about this fact? Can you find effective arguments about other people's concepts of usefulness, given that the intuitive appeals made so far have failed? How is your choice of concept of "usefulness" related to other people's concepts, apart from the use of the same label? (Words/concepts can be wrong, but to argue that a word is wrong with a person who doesn't see it so would require a more specific argument or reasoning heuristic.) Since there seems to be no known easy way of making progress on disc
9lukeprog
I love to read and write interesting things - which is why I take to heart Eliezer's constant warning to be wary of things that are fun to argue. But interestingness was not the point of my post. Utility to FAI and other Less Wrong projects was the point. My point was that mainstream philosophy sometimes offers things of utility to Less Wrong. And I gave a long list of examples. Some of them are things (from mainstream philosophy) that Eliezer and Less Wrong are already making profitable use of. Others are things that Less Wrong had not mentioned before I arrived, but are doing very much the same sorts of things that Less Wrong values - for example dissolution-to-algorithm and strategies for overcoming biases. Had these things been written up as Less Wrong posts, it seems they'd have been well-received. And in cases where they have been written up as Less Wrong posts, they have been well-received. My continuing discussion in this thread has been to suggest that therefore, some useful things do come from mainstream philosophy, and need not be ignored simply because of the genre or industry they come from. By "useful" I just mean "possessing utility toward some goal." By "useful to Less Wrong", then, I mean "possessing utility toward a goal of Less Wrong's/Eliezer's." For example, both reflective equilibrium and Epistemology and the Psychology of Human Judgment possess that kind of utility. That's a very rough sketch, anyway. But no, I don't have time to write up a 30-page conceptual analysis of what it means for something to be "useful." But I think I still don't understand what you mean. Maybe an example would help. A good one would be this: Is there a sense in which reflective equilibrium (a theory or process that happens to come from mainstream philosophy) is not useful to Eliezer, despite the fact that it plays a central role in CEV, his plan to save humanity from unfriendly AI? Another one would be this: Is there a sense in which Eliezer's writing on how to
1Vladimir_Nesov
(I edited the grandparent comment substantially since publishing it, so your reply is probably out of date.)
1lukeprog
Okay, I updated my reply comment.
1Vaniver
Isn't the smart move there not to play? What would make that the LW move?
1Vladimir_Nesov
Sounds plausible, and if true, a useful observation.
0Will_Sawin
"Yay philosophy - yes or no?" and questions of that ilk seem like an interesting question to actually ask people. You could, for instance, make a debate team lay out the pro and con positions.
2[anonymous]
A lot of the "nay philosophy" end up doing philosophy, even while they continue to say "nay philosophy". So I have a hard time taking the opinion at face value. Moreover it's not like there is one kind of thinking, philosophy, and another kind of thinking, non-philosophy. Any kind of evidence or argument could in principle be employed by someone calling himself a philosopher - or, inversely, by someone calling himself a non-philosopher. If you suddenly have a bright idea and start developing it into an essay, I submit that you don't necessarily know whether, once the idea has fully bloomed, it will be considered philosophy or non-philosophy. I don't know whether it's true that science used to be considered a subtopic of philosophy ("natural philosophy"), but it seems entirely plausible that it was all philosophy but that at some point there was a terminological exodus, when physicists stopped calling themselves philosophers. In that older, more inclusive sense, then anyone who says "nay philosophy" is also saying "nay science". Keeping that in mind, what we now call "philosophy" might instead be called, "what's left of philosophy after the great terminological exodus". Of course "what's left" is also called "the dregs". In light of that, what we all "philosophy" might instead be called "the dregs of philosophy".
-1Vladimir_M
That is exactly true. The old term for what we nowadays call "natural science" was "natural philosophy." There are still relics of this old terminology, most notably that in English the title "doctor of philosophy" (or the Latin version thereof) is still used by physicists and other natural scientists. The "terminological exodus" you refer to happened only in the 19th century.
4CuSithBell
This is still happening, right? I once had a professor who suggested that philosophy is basically the process of creating new fields and removing them from philosophy - thence logic, mathematics, physics, and more recently linguistics.
1rabidchicken
That's an interesting definition of philosophy, but I think philosophy does far more than that.
1CuSithBell
That's true, I may have overstated his suggestion - the actual context was "why has philosophy made so little progress over the past several thousand years?" ("Because every time a philosophical question is settled, it stops being a philosophical question.")
0Will_Sawin
This provides a defense of the claim that luke was attacking earlier on the thread, that "It's totally reasonable to expect philosophy to provide several interesting/useful results [in one or a few broad subject areas] and then suddenly stop."
0CuSithBell
Possibly, yes, but I'd expect philosophy to stop working on a field only after it's recognized as its own (non-philosophy) area (if then) - which, for example, morality is not.
0Marius
Is theology a branch of philosophy?
0CuSithBell
Errr... it seems to me that theology in many ways acts like philosophy, with the addition of stuff like exegesis and apologetics... but any particular religion's theology is distinct from the set of things we'd call "philosophy" as a monolithic institution. This is far from my area of expertise, however!
9Jack
I'm worried part of this debate is just about status. When someone comes in and says "Hey, you guys should really pay more attention to what x group of people with y credentials says about z," it reminds everyone here, most of whom lack y credentials, that society doesn't recognize them as an authority on z, and so they are somehow less valuable than group x. So there is an impulse to say that z is obvious, that z doesn't matter, or that having y isn't really a good indicator of being right about z. That way, people here don't lose status relative to group x. Conversely, members of group x probably put money and effort into getting credential y and will be offended by the suggestion that what they know about doesn't matter, that it is obvious, or that their having credential y doesn't indicate they know anything more than anyone else.

Me, I have an undergraduate degree in philosophy which I value, so I'm sure I get a little defensive when philosophy is mocked or criticized around here. But most people here probably fit in the first category. Eliezer, being a human being like everybody else, is likely a little insecure about his lack of a formal education and perhaps particularly apt to deny an academic community status as domain experts in fields he's worked in (even though he is certainly right that formal credentials are overvalued). I think a lot of this argument isn't really a disagreement over what is valuable and what isn't - it's just people emphasizing or de-emphasizing different ideas and writers to make themselves look higher status.

... These statements have no content; they just say "My stuff is better than your stuff".
0lukeprog
I think such debates unavoidably include status motivations. We are status-oriented, signaling creatures. Politics mattered in our ancestral environment. Of course you know that I never said anything like either of the parody quotes provided. And I'm not trying to say Quinean philosophy is better than Less Wrong. The claim I'm making is a very weak claim: that some useful stuff comes out of mainstream philosophy, and Less Wrong shouldn't ignore it when that happens just because the source happens to be mainstream philosophy.
0Jack
Yes. But you're right, so that side had to be a strawman, didn't it?
0lukeprog
I'm sorry; what do you mean?
1Jack
Since I hold a pretty strong pro-mainstream philosophy position (relative to others here, perhaps including yourself) I was a little more creative with that parody than with the other. I was attempting to be self-deprecating, to soften my criticism (that the reluctance to embrace your position stems from status insecurities) so as to not set off tribal war instincts. Though on reflection it occurs to me that since I didn't state my position in that comment or in this thread, and have only talked about it in comments (some before you even arrived here at Less Wrong), it's pretty unlikely that you or anyone else would remember my position on the matter, in which case my attempt at self-deprecation might look like a criticism of you.
0lukeprog
Yeah... I've apparently missed something important to interpreting you. For the record, if you hold "a pretty strong pro-mainstream philosophy position" then you definitely are more in favor of mainstream philosophy than I am. :)
1Jack
It's all relative. Surround me with academics and I sound like Eliezer. But yes, once or twice I've even had the gall to suggest that some continental philosophers are valuable.
1lukeprog
And for that, two days in the slammer! :)
8Vladimir_Nesov
I agree that you've agreed on many specific things. I suggest that the sense of remaining disagreement is currently confused through refusing to taboo "useful". You use one definition, he uses a different one, and there is possibly genuine disagreement in there somewhere, but you won't be able to find it without again switching to more specific discussion. Also, taboo doesn't work by giving a definition, instead you explain whatever you wanted without using the concept explicitly (so it's always a definition in a specific context). For example: Instead of debating this point of the definition (and what constitutes "being used"), consider the questions of whether Eliezer agrees that he was influenced (in any sense) by quite a few works from mainstream philosophy (obviously), whether they provided insights that would've been unavailable otherwise (probably not), whether they happen to already contain some of the same basic insights found elsewhere (yes), whether they originate them (it depends), etc. It's a long list, not as satisfying as the simple "useful/not", but this is the way to unpack the disagreement. And even if you agree on every fact, his sense of "useful" can disagree with yours.
-1lukeprog
I'll wait to see if Eliezer really thinks we aren't on the same page about the meaning of 'useful'. If reflective equilibrium, which plays a central role in Eliezer's plan (CEV) to save humanity, isn't useful, then I will be very surprised, and we will seem to be using different definitions of the term "useful."
-1ata
Has he repudiated the usefulness of reflective equilibrium (or of the concept, or the term)? I recall that he's used it himself in some of the more recent summaries of CEV.
0Apprentice
Are you, in your view, having The Problem with Non-Philosophers again?
0wnoise
It seems to me that the disagreement might be over the adjective "mainstream". To me, that connotes what's being mentioned (not covered in detail, merely mentioned) in broad overviews such as freshman introductory classes or non-major classes at college. As an analogy, in physics both general relativity and quantum mechanics are mainstream. They get mentioned in these contexts, though not, of course, covered. Something like timeless physics does not. How much of the standard philosophy curriculum covers Quinean Naturalism?
3lukeprog
I dunno, I think Eliezer and I are clear on what mainstream philosophy is. And if anything is mainstream, it's John Rawls and Oxford University professors whose work Eliezer is already making use of.
1wnoise
Well, when I see: That does not make me think that "mainstream philosophy" as a whole is doing useful work. Localized individuals and small strains appear to be. But even when the small strains are taken seriously in mainstream philosophy, that's not the same as mainstream philosophy doing said work, and labeling any advances as "here's mainstream philosophy doing good work" seems to be misleading.
2lukeprog
No, mainstream philosophy "as a whole" is not doing useful work. That's what the central section of my original post was about: Non-Quinean philosophy, and how its entire method is fundamentally flawed. Even quite a lot of Quinean naturalistic philosophy is not doing useful work. I'm not trying to mislead anybody. But Eliezer has apparently taken the extreme position that mainstream philosophy in general is worthless, so I made a long list of useful things that have come from mainstream philosophy - and some of it is not even from the most productive strain of mainstream philosophy, what I've been calling "Quinean naturalism." Useful things sometimes come from unexpected sources.
3Davorak
In the above quote the following replacements have been made: philosophy -> religion, Quinean -> Christian. There are many ideas from religion that are not useless. It is not often the most productive source to learn from, however. Why filter ideas from religious texts when better sources are available, or when it is easier to recreate them within a better framework - a framework that actually justifies the idea? This is also important because in my experience people fail to filter consistently and end up accepting bad ideas.

I do not see EY arguing that mainstream philosophy has no useful nuggets. I see him arguing that filtering for those nuggets in general makes the process too costly. I see you arguing that "Quinean naturalism" is a rich vein of philosophy and worth mining for nuggets. If you want to prove the worth of mining "Quinean naturalism", you have to display nuggets that EY has not found through better means already.
4lukeprog
I did list such nuggets that EY has not found through other means already, including several instances of "dissolution-to-algorithm", which EY seems to think of as the hallmark of LW-style philosophy. I wouldn't call mainstream philosophy a "rich vein" that is (for most people) worth mining for nuggets. I've specifically said that people will get far more value reading statistics and AI and cognitive science. I've specifically said that EY should not be mining mainstream philosophy. What I'm saying is that if useful stuff happens to come from mainstream philosophy, why ignore it? It's people like myself who are already familiar with mainstream philosophy, and for whom it doesn't take much effort to list 20+ useful contributions of mainstream philosophy, who should bring those useful nuggets to the attention of Less Wrong. What seems strange to me is to draw an arbitrary boundary around mainstream philosophy and say, "If it happens to come from here, we don't want it." And I think Eliezer already agrees with this, since of course he is already making use of several things from mainstream philosophy. But on the other hand, he seems to be insisting that mainstream philosophy has nothing (or almost nothing) useful to offer.
4Davorak
In that post you labeled that list as "useful contributions of mainstream philosophy", which does not fit the criteria of nuggets not found by other means. Nor "here are things you have not figured out yet" or "see how this particular method is simpler and more elegant than the one you are currently using." This is similar to what I think EY is expressing in: Show me this field's power! A list of 20 topics that are similar to LW is suggestive but not compelling. Compelling would be more predictive power, or correct predictions where LW methods have been known to fail. Compelling would be just one case covered in depth fitting the above criteria. Frankly, and not meant to reflect on you, listing 20 topics that are suggestive reminds me of fast-talk manipulation and/or an infomercial. I want to see a conversation digging deep on one topic. I want depth of proof, not breadth, because breadth by itself is not compelling, only suggestive.

I see you repeating this in many places, but I have yet to see EY suggest that the useful parts of philosophy should be ignored. I see EY arguing philosophy is a field "whose poison a novice should avoid". Note that it is novices who should avoid it, not that well-grounded rationalists should ignore it. I have followed EY's conversations and I do not see him saying what you assert, though I see you repeatedly asserting that he, or LW in general, does. In theory it should not be hard to dissolve the problem if you can provide links to where you believe these assertions have been made.
6lukeprog
I don't understand.

Explanation of cognitive biases and how to battle against them on Less Wrong? "Useful." Explanation of cognitive biases and how to battle against them in a mainstream philosophy book? "Not useful." Dissolution of common (but easy) philosophical problem like free will to cognitive algorithm on Less Wrong? "Useful, impressive." Dissolution of common (but easy) philosophical problems in mainstream philosophy journals? "Not useful."

Is this seriously what is being claimed? If it's not what's being claimed, then good - we may not disagree on anything.

Also: as I stated, several of the things I listed are already in use at Less Wrong, and have been employed in depth. Is this not compelling for now? I'm planning in-depth explanations, but those take time. So far I've only done one of them: on SPRs.

As for my interpretation of Eliezer's views on mainstream philosophy, here are some quotes:

One: "It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books." But maybe this doesn't mean to avoid mainstream philosophy entirely. Maybe it just means that most people should avoid mainstream philosophy, which I agree with.

Two: "I expect [reading philosophy] to teach very bad habits of thought that will lead people to be unable to do real work."

Three: "only things of that level [dissolution to algorithm] are useful philosophy. Other things are not philosophy or more like background intros." Reflective equilibrium isn't "of that level" of dissolution to cognitive algorithm, in any way that I can tell, and yet it plays a useful role in Eliezer's CEV plan to save humanity. Epistemology and the Psychology of Human Judgment doesn't say much about dissolution to cognitive algorithm, and yet its content reads like a series of Less Wrong blog posts on overcoming cognitive biases with "ameliorative psychology." If somebody claims that those Less Wrong posts are useful but the Epistemology book isn't, I th
6Perplexed
Here is one interpretation.

* The standard sequences explanation of cognitive biases and how to battle against them on Less Wrong? "Useful."
* Yet another explanation of cognitive biases and how to battle against them in a mainstream philosophy book? "Not useful."
* Dissolution of difficult philosophical problem like free will to cognitive algorithm on Less Wrong? "Useful, impressive."
* Continuing disputation about difficult philosophical problems like free will in mainstream philosophy journals? "Not useful."
* Dissolution of common (but easy) philosophical problem arising from language misuse in mainstream philosophy journals? "Not useful."
* Explanation of how to dissolve common (but easy) philosophical problems arising from language misuse in LessWrong? "Useful".
* Good stuff of various kinds, surrounded by other good stuff on LessWrong? "Useful".
* Good stuff of various kinds, surrounded by error, confusion, and nonsense in mainstream philosophy journals? "Not useful."

I'm not sure I agree with all of this, but it is pretty much what I hear Eliezer and others saying.
3lukeprog
Yeah, if that's what's being claimed, that's the double standard stuff I was talking about. Of course there's error, confusion, and nonsense in just about any large chunk of literature. Mainstream philosophy is particularly bad, but of course what I plan to do is pluck the good bits out and share just those things on Less Wrong.
3Davorak
I no longer remember your original post. Did you get that format from Perplexed? Or did he get it from you?

You state that Perplexed's example is a double standard here. Perplexed describes what happens on LW as different from what happens in mainstream philosophy, which does not fit the standard definition of a double standard. Double standard: a rule applied differently to essentially the same thing/idea/group. Perplexed's statements imply that LW and mainstream philosophy are considerably different, which does not fit the description of a double standard. As of yet I have not interpreted anything on LW as meaning the content of the quote above.

No, it is not compelling. In science, a theory which merely reproduces previous results is not compelling, only suggestive. A new theory must have predictive power in areas the old one did not, or be simpler (i.e. more elegant), to be considered compelling. That is how you show the power of a new theory.

Your assertion was: Your quote one does not seem to support your assertion, by your own admission. My interpretation was that most people should avoid mainstream philosophy - perhaps the vast majority, and certainly novices. If one can learn from a better source, and there is a vast amount of material from better sources and a vast amount of work to be done with those sources, why focus on lesser sources?

Quote two does not support your assertion either. It only claims the methods of mainstream philosophy are bad habits for people who want to get things done.

Quote three does not seem to "draw an arbitrary boundary" either, so it does not support your assertion. Maybe a boundary, but EY then goes on to describe the boundary, so you have not supported your descriptor of "arbitrary". I think the difference between more and less useful is definitely being claimed. Having everything in one self-consistent system has many advantages. Only one set of terminology to learn. It is easier to build groups when everyone is using or familiar with the same terminology. Out of
-1lukeprog
Yeah, I just disagree with your comment from beginning to end. Yeah, and my claim is that LW content and some useful content from mainstream philosophy is not relevantly different, hence to praise one and ignore the other is to apply a double standard. Epistemology and the Psychology of Human Judgment, which reads like a sequence of LW posts, is a good example. So is much of the work I listed that dissolves traditional philosophical debates into the cognitive algorithms that produce the conflicting intuitions that philosophers use to go in circles for thousands of years. This is a change of subject. I was talking about the usefulness of certain work in mainstream philosophy already used by Less Wrong, not proposing a new scientific theory. If your point applied, it would apply to the re-use of the ideas on Less Wrong, not to their origination in mainstream philosophy. The strongest support for my interpretation of EY comes from quote #3, for reasons I explained in detail and you ignored. I suspect much of our confusion came from Eliezer's assumption that I was saying everybody should go out and read Quinean philosophy, which of course I never claimed and in fact have specifically denied. In any case, EY and I have come to common ground, so this is kinda irrelevant. I'm fine with that. What counts as a 'centralized repository' is pretty fuzzy. Quinean naturalism counts as a 'centralized repository' in my meaning, but if Eliezer means something different by 'centralized repository', then we have a disagreement in words but not in fact on that point.
1Davorak
In the mind of EY, I assume, and some others, there is a difference. If the difference is not relevant, there would be a double standard. If there is a relevant difference, no double standard exists. I did not see you point out what that difference was, and why it was not relevant, before calling it a double standard.

Not a change of subject at all. I was just letting you know what standards I use for judging something suggestive vs. compelling, and that I think EY might be using a similar standard. Just answering your question "Is this not compelling for now?" with a no, with exposition. I was giving you the method by which I often judge how useful a work is, and suggesting that EY may use a similar method. If so, it would explain some of why you were not communicating well. It is to be applied within the development of an individual's evolving beliefs. So someone holding LW beliefs and then introduced to mainstream philosophy would use this standard before adopting mainstream philosophy's beliefs.

I do not like the idea (I think it is unproductive) of having conversations with people who think they magically know what I pay attention to and what I do not. If you meant that I did not address your point, please say so, and say how, instead. I did not ignore it. I did think it supported an argument that EY draws a boundary between mainstream philosophy and LW, but it did not support the argument that he drew an arbitrary boundary.

My interpretation was that he was skeptical of the grade of the repository, not its centralness.
2wnoise
I don't understand the distinction you're making. These two statements mean the exact same thing to me: in general, mainstream philosophy is useless, though exceptions exist. Admittedly. That's not a good reason to look there, until the expected sources are exhausted.
3lukeprog
What I'm trying to say is that the vast majority of mainstream philosophy is useless, but some of it is useful, and I gave examples. I've also repeatedly agreed that most people should not be reading mainstream philosophy. Much better to learn statistics and AI and cognitive science. But for those already familiar with philosophy, for whom it's not that difficult to name 20 useful ideas from mainstream philosophy, then... why not make use of them? It makes no sense to draw an arbitrary boundary around mainstream philosophy and say "If it comes from here, I don't want it." That's silly.
5XiXiDu
I've frequently been criticized for suggesting that you hold that attitude. The usual response is that LW is not about friendly AI, or doesn't have much to do with the SIAI.
-2[anonymous]
I don't think you're being fair to a lot of philosophers. I think you're being fair to some philosophers, the ones who are sowing confusion. But you can't just wave away the sophists, the charlatans, with a magic wand. They are out there creating confusion and drawing people away from useful and promising lines of thought. There are other philosophers out there who are doing what they can to limit the damage. It's a bit like war. Think of yourself as a scientist who is trying to build a rocket that will take us to Mars. But in the meantime there is a war going on. You might say, "this war is not helpful, because a stray missile might blow up my rocket, damn those generals and their toys." But the problem is, without the generals like Dennett who are protecting your territory, your positions, the enemy generals will overrun your project and strip your rocket for parts. You may think that the philosophers don't matter, that they are just arguing in obscurity among themselves, but I don't think that's the case. I think that there is a significant amount of leakage, that ideas born and nurtured in the academy frequently spread to the wider society and infect essentially everyone's way of thinking.
9MichaelVassar
Who cares when his work was done. We want to know how to find work that helps us to understand things today. It's not about how smart he was, but about how much his ideas can help us.
0lukeprog
And my answer is "not much." Like I say, all the basics of Quinean philosophy are already assumed by Less Wrong. I don't recommend anyone read Quine. It's (some of) the stuff his followers have done in the last 30 years that is useful - both stuff that is already being used by SIAI people, and stuff that is useful but (previously) undiscovered by SIAI people. I listed some of that stuff here.
7Apprentice
What's wrong with behaviorism? I was under the impression that behaviorism was outdated but when my daughter was diagnosed as speech-delayed and borderline autistic we started researching therapy options. The people with the best results and the best studies (those doing 'applied behavior analysis') seem to be pretty much unreconstructed Skinnerists. And my daughter is making good progress now. I'll take flawed philosophy with good results over the opposite any day of the week. But I'm still curious about flaws in the philosophy.

Personally, I'm finding that avoiding anthropomorphising humans, i.e. ignoring the noises coming out of their mouths in favour of watching their actions, pays off quite well, particularly when applied to myself ;-) I call this the "lump of lard with buttons to push" theory of human motivation. Certainly if my mind had much effect on my behaviour, I'd expect to see more evidence than I do ...

"lump of lard with buttons to push"

I take exception to that: I have a skeletal structure, dammit!

6NancyLebovitz
I think the reference is to the brain rather than to the whole body.
5TheOtherDave
(blink) (nods) Yes, indeed. Exception withdrawn. Well played!
4[anonymous]
It sounds like what you are describing is rationalization, either doing it yourself or accepting people's rationalization about themselves.
0David_Gerard
Pretty much. I'm saying "mind" for effect, and because people think the bit that says "I" has much more effect than it appears to from observed behaviour.
2MichaelVassar
Yep. Anthropomorphizing humans is a disastrously wrong thing to do. Too bad everyone does it.

No, they just look like they're doing it; saying humans are anthropomorphizing would attribute more intentionality to humans than is justified by the data.

1David_Gerard
Well, the mind seems to. I'm using "mind" here to mean the bit that says "I" and could reflect on itself if it bothered to, and thinks it runs the show, and comes up with rationalisations for whatever it does. Listening to these rationalisations, promises, etc. as anything other than vague pointers to behaviour is exceedingly foolish. Occasionally you can encourage the person to use their "mind" less annoyingly. I think they anthropomorphise as some sort of default reflex. Possibly somewhere halfway down the spinal cord, certainly not around the cerebrum.
4Tyrrell_McAllister
I may be wrong, but I think that SilasBarta is pointing out, maybe with some tongue-in-cheek, that you can't accuse humans of anthropomorphizing other humans without yourself being guilty of anthropomorphizing those humans whom you accuse. Edit: Looks like this was the intended reading.
1David_Gerard
I am finding benefits from trying not to anthropomorphise myself. That is, rather than thinking of my mind as being in control of my actions, I think of myself as a blob of lard which behaves in certain ways. This has actually been a more useful model, so that my mind (which appears to be involved in typing this, though I am quite ready to be persuaded otherwise) can get the things it thinks it wants to happen happening.
2SilasBarta
I was joking. :-P
0David_Gerard
Ha ha only serious ;-p
0[anonymous]
.
-1David_Gerard
I'd watch their behaviour, which I would also have classed as an expression of the intent. Do they show they care? That being the thing you actually want.
7Paul Crowley
May I recommend Dennett's "Skinner Skinned", in Brainstorms?

Okay, I read it. It's funny how Dennett's criticism of Skinner partially mirrors Luke's criticism of Eliezer. Because Skinner uses terminology that's not standard in philosophy, Dennett feels his position needs to be "spruced up".

"Thus, spruced up, Skinner's position becomes the following: don't use intentional idioms in psychology" (p. 60). It turns out that this is Quine's position and Dennett sort of suggests that Skinner should just shut up and read Quine already.

Ultimately, I can understand and at least partially agree with Dennett that Skinner goes too far in denying the value of mental vocabulary. But, happily, this doesn't significantly alter my belief in the value of Skinner-type therapy. People naturally tend to err in the other direction and ascribe a more complex mental life to my daughter than is useful in optimizing her therapy. And I still think Skinner is right that objections to behaviorist training of my daughter in the name of 'freedom' or 'dignity' are misplaced.

Anyway, this was a useful thing to read - thank you, ciphergoth!

2Apprentice
Thank you, holding the book in my hand and reading it now.
5lukeprog
No, I'm talking about behaviorist psychology. Behaviorist psychology denied the significance (and sometimes the existence) of cognitive states. Showing that cognitive states exist and matter was what paved the way for cognitive science. Many insights from behaviorist psychology (operant conditioning) remain useful, but its central assumption is false, and it must be false for anyone to be doing cognitive science.
7Apprentice
Okay, but now I'm getting a bit confused. You seem to me to have come out with all the following positions:

* The worthwhile branch of philosophy is Quinean. (this post)
* Quine was a behaviorist. (a comment on this post)
* Behaviorism denies the possibility of cognitive science. (a comment on this post)
* The worthwhile part of philosophy is cognitive science. ("for me, philosophy basically just is cognitive science" - Lukeprog)

Those things don't seem to go well together. What am I misunderstanding?
0lukeprog
Quinean naturalism does not have an exclusive lock on useful philosophy, but it's the most productive approach because it starts from a bunch of the right assumptions (reductionism, naturalized epistemology, etc.). Like I said, Quine was wrong about lots of things. Behaviorism was one of them. But Quine still saw epistemology as a chapter of the natural sciences, about how human brains come to have knowledge - the field we now know as "cognitive science."
2Apprentice
Quine apparently said, "I consider myself as behavioristic as anyone in his right mind could be". That sounds good, can I subscribe to that?
6Will_Sawin
Bayesian inference is not a big step up from Laplace, and the idea of an optimal model that humans should try to approximate is a common philosophical position.
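(To make the Laplace connection concrete - this is my own illustration, not part of Will_Sawin's comment: Laplace's rule of succession is already a Bayesian posterior estimate, namely the posterior mean of a Bernoulli parameter under a uniform prior.)

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: the posterior mean of a Bernoulli
    parameter under a uniform prior, after observing the given data.
    Equivalent to a Beta(1, 1) -> Beta(1 + s, 1 + f) Bayesian update."""
    return Fraction(successes + 1, trials + 2)

# Laplace's sunrise example: after 100 sunrises in 100 days,
# the estimated probability of tomorrow's sunrise is 101/102.
p = rule_of_succession(100, 100)
```

Note that with no data at all the rule returns 1/2, which is just the mean of the uniform prior - the "update" and the prior are the same machinery.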
[-][anonymous]120

Thanks so much. I didn't know about Quine, and from what you've quoted it seems quite clearly in the same vein as LessWrong.

Also, out of curiosity, do you know if anything's been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes -- does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?

What you care about determines what your explorations learn about. An AI that didn't care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn't learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.

1[anonymous]
That was my intuition. Just wanted to know if there's more out there.
5Eliezer Yudkowsky
What, you mean in mainstream philosophy? I don't think mainstream philosophers think that way, even Quineans. The best ones would say gravely, "Yes, goals are important" and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.
8utilitymonster
I actually don't think this is right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment (given your goals), using decision theory.
7lukeprog
Yes, that's about right. AI research is where to look in regards to your question, SarahC. Start with chapter 2 and the chapters with 'decisions' in the title in AI: A Modern Approach.
1[anonymous]
Thank you!
5komponisto
My first exposure was his mathematical logic book. At the time, I didn't even realize he had a reputation as a philosopher per se. (I knew from the back cover of the book that he was in the philosophy department at Harvard, but I just assumed that that was where anyone who got sufficiently "foundational" about their mathematics got put.)
4[anonymous]
Ah, see, when I learned a little logic, I shuddered, muttered "That is not dead which can unsleeping lie," and moved on. I'll come back to it if it ever seems useful though.
9cousin_it
Yah, I sometimes joke that logicians are viewed by mathematicians in the same way that mathematicians are viewed by normal people. Logic makes complete sense to me, but some of my professional mathematician friends cannot understand my tastes at all. I, on the other hand, cannot understand how one can get interested in homological algebra or other such things, when there are all these really pressing logical issues to solve :-)
7Will_Sawin
That is exactly why I enjoy learning about logic.
2MichaelVassar
Will Sawin, aspiring necromancer... That should be on your business card.
1Will_Sawin
I should have a business card.
2lukeprog
Could you clarify what you mean? When I parse your second paragraph, it comes across to my mind as three or four separate questions...
6[anonymous]
Ok, this is actually an area on which I'm not well-informed, which is why I'm asking you instead of "looking it up" -- I'd like to better understand exactly what I want to look up.

Let's say we want to build a machine that can form accurate predictions and models/categories from observational data of the sort we encounter in the real world -- somewhat noisy, and mostly "uninteresting" in the sense that you have to compress or ignore some of the data in order to make sense of it. Let's say the approach is very general -- we're not trying to solve a specific problem and hard-coding in a lot of details about that problem, we're trying to make something more like an infant.

Would learning happen more effectively if the machine had some kind of positive/negative reinforcement? For example, if the goal is "find the red ball and fetch it" (which requires learning how to recognize objects and also how to associate movements in space with certain kinds of variation in the 2d visual field) would it help if there was something called "pain" which assigned a cost to bumping into walls, or something called "pleasure" which assigned a benefit to successfully fetching the ball? Is the fact that animals want food and positive social attention necessary to their ability to learn efficiently about the world?

We're evolved to narrow our attention to what's most important for survival -- we notice motion more than we notice still figures, we're better at recognizing faces than arbitrary objects. Is it possible that any process needs to have "desires" or "priorities" of this sort in order to narrow its attention enough to learn efficiently?

To some extent, most learning algorithms have cost functions associated with failure or error, even the one-line formulas. It would be a bit silly to say the Mumford-Shah functional feels pleasure and pain. So I guess there's also the issue of clarifying exactly what desires/values are.
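(A toy version of the "pain"/"pleasure" question above can be made concrete with tabular Q-learning. This is my own sketch, not anything from the thread: the corridor, the reward values, and the function name are all made up for illustration. "Pain" and "pleasure" here are nothing more than scalar reward signals.)

```python
import random

def train(episodes=1000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: states 0..4, ball at state 4.
    'Pain' is a -1 reward for bumping the left wall; 'pleasure' is a +10
    reward for fetching the ball. Actions are -1 (left) and +1 (right)."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if rng.random() < eps:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = s + a
            if s2 < 0:            # bumped the wall: pain
                r, s2 = -1.0, 0
            elif s2 == 4:         # fetched the ball: pleasure
                r = 10.0
            else:
                r = 0.0
            best_next = 0.0 if s2 == 4 else max(q[(s2, b)] for b in (-1, 1))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy moves right toward the ball from every non-terminal state. The relevant point for the question above: if both reward signals were zero, every Q-value would stay at zero forever, and the agent would have no basis for preferring one exploration over another - which is the "probability-updater would just study dust specks" worry in miniature.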
1Vladimir_Nesov
Practical importance for what purpose? Whatever that purpose is, adding heuristics that optimize the learning heuristics for better fulfillment of that purpose would be fruitful for that purpose. It would be of practical importance to the extent that the original implementation of the learning heuristics is suboptimal, and to the extent that implementable learning-heuristic-improving heuristics can work on that. If you are talking about autonomous agents, self-improvement is a necessity, because you need open-ended potential for further improvement. If you are talking about non-autonomous tools people write, it's often difficult to construct useful heuristic-improvement heuristics. But of course the partially-optimized structure of such tools is already chosen with the values they're optimized for in mind - the purposes of their designers.
0NancyLebovitz
What do you mean by a goal? Or learning?