Part of the sequence: Rationality and Philosophy

Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that Eliezer and I strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most.

That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)

Failed methods

Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' conclusions that don't actually follow from the scientific data or the equations invoked.

Analytic philosophy is clearer, more rigorous, and better with math and science, but it does only a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this reliance displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

A diseased discipline

What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don't need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.

Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:

  1. Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it's useful, then it's science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.
  2. Most philosophers don't understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don't trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone's invisible magical friend.
  3. Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
  4. Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl's triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it's best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
  5. Many naturalists aren't trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don't know how your tool works then you'll use it poorly. AI is useful because it keeps you honest: you can't write confused concepts or non-natural hypotheses in a programming language.
  6. Mainstream philosophy publishing favors the established positions and arguments. You're more likely to get published if you can write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.
  7. Even much of the useful work naturalistic philosophers do is not at the cutting-edge. Chalmers' update for I.J. Good's 'intelligence explosion' argument is the best one-stop summary available, but it doesn't get as far as the Hanson-Yudkowsky AI-Foom debate in 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just like Eliezer has so usefully done on Less Wrong, but of course this isn't cutting edge. You could always just read it in the primary literature by Kahneman and Tversky and others.
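Point 5's claim that AI keeps you honest can be made concrete: a program cannot contain an unoperationalized concept. Even an everyday category like 'sorted' must be cashed out as an explicit, checkable condition before an interpreter will accept it (a toy illustration of mine, not from the post):

```python
def is_sorted(xs):
    # A program forces 'sorted' to mean something precise:
    # every adjacent pair of elements is in non-decreasing order.
    return all(a <= b for a, b in zip(xs, xs[1:]))

print(is_sorted([1, 2, 3]))  # True
print(is_sorted([3, 1, 2]))  # False
```

A vague predicate like 'is a natural category' has no such translation, which is exactly why trying to program a concept exposes confusions that prose can hide.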

Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.

But if you're looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.

Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who have already spent several years studying it - that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn't useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.

Philosophy: the way forward

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable. How can we proceed?

First, we must remain vigilant with our rationality training. It is not easy to overcome millions of years of brain evolution, and as long as you are human there is no final victory. You will always wake up the next morning as Homo sapiens.

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality. Ask a fellow rationalist who is knowledgeable about philosophy what the standard positions and arguments in philosophy are on your topic. If any of them seem really useful, grab those particular works and read them. But again: you're probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.

However, I must say that I wish so much of Eliezer's cutting-edge work weren't spread out across hundreds of Less Wrong blog posts and long SIAI articles written with an idiosyncratic style and vocabulary. I would rather these ideas were written in standard academic form, even if they transcended the standard game of mainstream philosophy.

But it's one thing to complain; another to offer solutions. So let me tell you what I think cutting-edge philosophy should be. As you might expect, my vision is to combine what's good in LW-style philosophy with what's good in mainstream philosophy, and toss out the rest:

  1. Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It's hard to follow a hundred-page argument.
  2. Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you'll be discussing, even if you disagree with them.
  3. If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you're talking about. This requires that you research your topic so you know what the standard terms and positions are. If you're talking about a problem in cognitive science, you'll need to read cognitive science literature. If you're talking about a problem in social science, you'll need to read social science literature. If you're talking about a problem in epistemology or morality, you'll need to read philosophy.
  4. Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of 'hand-holding' sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.
  5. Always cite the relevant literature. If you can't find much work relevant to your topic, you almost certainly haven't looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.
  6. Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence. [This one is central. This is LW-style philosophy.]
  7. Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).
  8. Don't dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.
  9. Conclude with a summary of your paper, and suggest directions for future research.
  10. Ask fellow rationalists to read drafts of your article, then re-write. Then rewrite again, adding more citations and hand-holding sentences.
  11. Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Meeting journal standards is not the most important reason to follow the suggestions above. Write short articles because they're easier to follow. Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests. Use standard terms so that people already familiar with the topic aren't annoyed at having to learn a whole new vocabulary just to read your paper. Cite the relevant positions and arguments so that people have a sense of the context of what you're doing, and can look up what other people have said on the topic. Write clearly and simply and with careful organization so that your paper is not wearying to read. Write lots of hand-holding sentences because we always communicate less effectively than we think we do. Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know. Use your rationality training to remain sharp at all times. And so on.

That is what cutting-edge philosophy could look like, I think.

Next post: How You Make Judgments

Previous post: Less Wrong Rationality and Mainstream Philosophy

Philosophy: A Diseased Discipline
447 comments
djc

As a professional philosopher who's interested in some of the issues discussed in this forum, I think it's perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I'm interested in the reverse direction: if good ideas are being had here, I'd like professional philosophy to benefit from them. So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.

(The two main contributions that I'm aware of are ideas about friendly AI and timeless/updateless decision theory. I'm sure there are more, though. Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)

Yes, this is one reason I'm campaigning to have LW / SIAI / Yudkowsky ideas written in standard form!

[anonymous]

As a professional philosopher who's interested in some of the issues discussed in this forum. . .

Oh wow. The initials 'djc' match up with David (John) Chalmers. Carnap and PhilPapers are mentioned in this user's comments. Far from conclusive evidence, but my bet is that we've witnessed a major analytic philosopher contribute to LW's discussion. Awesome.

enye-word
In the comment he links to above, djc states "One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas." When asked about LessWrong in a reddit AMA, David Chalmers stated "i think having subcommunities of this sort that make their own distinctive assumptions is an important mechanism of philosophical progress" and an interest in TDT/UDT. (See also: https://slatestarcodex.com/2017/02/06/notes-from-the-asilomar-conference-on-beneficial-ai/) (Sorry to dox you, David Chalmers. Hope you're doing well these days.)
XiXiDu
Actually in one case this "forum" could benefit from the help of professional philosophers, as the founder Eliezer Yudkowsky especially asks for help on this problem: I think that if you show that professional philosophy can dissolve that problem then people here would be impressed.
Vladimir_Nesov
Do you know about the TDT paper?
radical_negative_one
Just in case you haven't seen it, here is Eliezer's Timeless Decision Theory paper. It's over a hundred pages, so I'd hope that it represents a "clear statement". (Although I can't personally comment on anything in it because I don't currently have time to read it.)
djc

That's the one. I sent it to five of the world's leading decision theorists. Those who I heard back from clearly hadn't grasped the main idea. Given the people involved, I think this indicates that the paper isn't a sufficiently clear statement.

[anonymous]
It's somewhat painful to read. I've tried to read it in the past and my eyes get sore after the first twenty pages. Doing the math, I realize it's probably irrational for Yudkowsky-san to spend time learning LaTeX or some other serious typesetting system, but I can dream, right?

Your dream has come true.

[anonymous]
Happiness is too general a term to express my current state of mind. May the karma flow through you like so many grains of sand through a sieve.
wedrifid
Not quite sure how this one works. Usually I associate sieve with "leaking like a sieve", generally a bad thing---do you want all his karma to be assassinated away as fast as it comes?
[anonymous]
Oh, no. Lukeprog is the sieve, and the grains of sand are whatever fraction of a hedon he gets from being upvoted.
gmpalmer
I hope this is corrected later in the paper, and my apologies if this is a stupid question, but could you please explain how the example of gum chewing and abscesses makes sense? That is, in the explanation you are making your decision based on evidence. Indeed, you'd be happy--or anyone would be happy--to hear you're chewing gum once the results of the second study are known. How is that causal and not evidential? I see later in the paper that gum chewing is evidence for the CGTA gene, but that doesn't make any sense. You can't change whether or not you have the gene, and the gum chewing is better for you at any rate. Still confused about the value of the gum chewing example.
Richard_Kennaway
The LaTeX to format a document like that can be learnt in an hour or two with no previous experience, assuming at least basic technically-minded smarts.
RHollerith
And the learning (and formatting of the document) does not have to be done by the author of the document.
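For readers who doubt Richard_Kennaway's estimate: a complete, compilable article skeleton of the kind under discussion is only a dozen lines (an illustrative sketch of mine, not taken from the TDT paper):

```latex
\documentclass[11pt]{article}
\usepackage{amsmath}  % standard environments for displayed math
\title{Timeless Decision Theory}
\author{An Author}
\begin{document}
\maketitle
\section{Introduction}
Inline math like $P(A \mid B)$ and displayed equations both work:
\begin{equation}
  P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.
\end{equation}
\end{document}
```

Everything beyond this (bibliographies, theorem environments) is an incremental addition to the same skeleton.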
prase

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable.

Isn't this true just because of the way philosophy is effectively defined? It's a catch-all category for poorly understood problems which have nothing in common except that they aren't properly investigated by some branch of science. Once a real question is answered, it no longer feels like a philosophical question; today, philosophers no longer investigate the motion of celestial bodies or the structure of matter.

In other words, I wonder what are the fundamentally philosophical questions. The adverb fundamentally creates the impression that those questions will be still regarded as philosophical after being uncontroversially answered, which I doubt will ever happen.

ata

Strongly agreed. I think "philosophical questions" are the ones that are fun to argue endlessly about even if we're too confused to actually solve them decisively and convincingly. Thinking that any questions are inherently philosophical (in that sense) would be mind projection; if a question's philosophicalness can go away due to changes in facts about us rather than facts about the question, then we probably shouldn't even be using that as a category.

prase
I would say that the sole thing which philosophical questions have in common is that it is only imaginable to solve them using intuition. Once a superior method exists (experiment, formal proof), the question doesn't belong to philosophy.
Vladimir_Nesov
Nice pattern.
shokwave
I think that's a good reason to keep using the category. By looking at current philosophy, we can determine what facts about us need changing. Cutting-edge philosophy (of the kind lukeprog wants) would be strongly determining what changes need to be made. To illustrate: that there is a "philosophy of the mind" and a "free will vs determinism debate" tells us there are some facts about us (specifically, what we believe about ourselves) that need changing. Cutting-edge philosophy would be demonstrating that we should change these facts to ones derived from neuroscience and causality. Diagrams like this would be cutting-edge philosophy.
Perplexed
The thing that I find attractive about logic and 'foundations of mathematics' is that no one argues endlessly about philosophical questions, even though the subject matter is full of them. Instead, people in this field simply assume the validity of some resolution of the philosophical questions and then proceed on to do the real work. What I think that most fans of philosophy fail to realize is that answers to philosophical questions are like mathematical axioms. You don't justify them. Instead, you simply assume them and then work out the consequences. Don't care for the consequences? Well then choose a different set of axioms.
robertzk
Are you suggesting that philosophy lies in the orthogonal complement to science and potential science (the questions science is believed to be capable of eventually answering)?
prase
I am suggesting that the label philosophical is usually attached to problems where we have no agreed upon methodology of investigation. Therefore whether a question belongs to philosophy or science isn't defined solely by its objective properties, but also by our knowledge, and as our knowledge grows the formerly philosophical question is more likely to move into "science" category. The point thus was that potential science isn't orthogonal to philosophy, on the contrary, I have expressed belief that those categories may be identical (when nonsensical parts of philosophy are excluded). On the other hand, I assume philosophy and actual (in contrast to potential) science are disjoint. This is just how the words are used.
quen_tin
In a sense, science is nothing but experimental philosophy (in a broad sense), and the job of non-experimental-philosophy (what we label philosophy) is to make any question become an experimental question... But I would say that philosophy remains important as the framework where science and scientific fundamental concepts (truth, reality, substance) are defined and discussed.
prase
Not universally. It's hard to find experiments in mathematics.
[anonymous]
You'd have to look inside mathematicians' heads.
Vladimir_M
In a sense, computers are nothing but devices for doing experimental mathematics.
Clippy
In a sense, apes are nothing but devices for making ape DNA.
Vladimir_M
I think Richard Dawkins made that observation a while ago at book length.
Clippy
In a sense, Richard Dawkins is nothing but a device for making books.
Friendly-HI
In a sense, a book is nothing but a device for copying memes into other brains.
Richard_Kennaway
Experimental Mathematics.
Clippy
I do a lot of that when I experiment with various strings to find preimages for a class of hashes.
Marius
Which is why mathematics isn't science.
cousin_it
I sense an argument about definitions of words. Please don't.
Marius
"what is science" is not a mere matter of definitions. It's fundamental to how we decide how certain we are of various propositions.
cousin_it
Um... no it isn't? A Bayesian processes evidence the same way whether or not it's labeled "science". If you're talking about the word "science" as some sort of FDA seal of approval, invented so people can quickly see who to trust without examining the claims in detail, then I see no reason to exclude math. Do you think math gives less reliable conclusions than empirical disciplines?
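cousin_it's claim that a Bayesian processes evidence the same way regardless of its label is just Bayes' theorem, which mentions priors and likelihoods but no 'science' flag (a toy sketch with made-up numbers):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# The same likelihoods yield the same posterior whether the evidence
# came from a lab experiment or from checking the steps of a proof.
print(posterior(0.5, 0.8, 0.2))  # ≈ 0.8
```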
Marius
A Bayesian may process probabilities the same way, but information is not evaluated the same way. Determining that a piece of information was derived scientifically does not provide a "seal of approval"; it tells us how to evaluate the likelihood of that information being true. For instance, if I know that a piece of information was derived via scientific methods, I know to look at related studies. A single study is never definitive, because science involves reproducible results based on empirical evidence. Further studies may alter my understanding of the information the first study produced. By contrast, if I know that a piece of information was derived mathematically, I need only look at a single proof. If the proof is sound, I know that the premises lead inexorably to the conclusion. However, encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian - a new proof must be created. Meanwhile, experiments may yield some useful evidence even if the study has flawed premises or methods; precisely what parts are useful requires an understanding of what science is. So this is actually important - it's not just a matter of definitions.
cousin_it
Thanks, that's a valid argument that I didn't think of. But it's sorta balanced by the fact that a lot of established math is really damn established. For example, compare Einstein's general relativity with Brouwer's fixed point theorem. Both were invented at about the same time, both are really important and have been used by lots and lots of people. Yet I think Brouwer's theorem is way more reliable and less likely to be overturned than general relativity, and I'm not sure if anyone anywhere thinks otherwise.
Dreaded_Anomaly
I'm not sure if "overturning" general relativity is the appropriate description. We may well find a broader theory which contains general relativity as a limiting case, just as general relativity has special relativity and Newtonian mechanics as limiting cases. With the plethora of experimental verifications of general relativity, however, I wouldn't expect to see it completely discarded in the way that, e.g., phlogiston theory was.
Marius
Oh, I'm not calling mathematics more or less reliable than science. I'm saying that the ways in which one would overturn an established useful theorem would be very different from the ways in which one would overturn an established scientific theory. Another way in which mathematics is more reliable is that bias is irrelevant. Scientists have to disclose their conflicts of interest because it's easy for those conflicts to interfere with their objectivity during data collection or analysis, and so others must pay special attention. Mathematicians don't need to because all their work can be contained in one location, and can be checked in a much more rigorous fashion.
JoshuaZ
This doesn't follow. If, for example, one has a single proof and encounters a hole in it, but the hole rests on plausible assumptions, then one should still increase one's confidence that the claim is true. Thus, physicists are very fond of assuming that terms in series are of lower order even when they can't actually prove it. Very often, under reasonable assumptions, their claims are correct. To use a specific example, Kempe's "proof" of the four color theorem had a hole, so a repaired version could only prove that planar maps require at most five colors. But the general thrust of the argument provided a strong plausibility heuristic for believing the claim as a whole. Similarly, from a Bayesian standpoint, seeing multiple distinct proofs of a claim should make one more confident in the claim, since even if one of the proofs has an unseen flaw, the others are likely to go through. (There are complicating factors here. No one seems to have a good theory of confidence for mathematical statements which allows for objective priors, since most standard objective priors (such as those based on some notion of computability) only make sense if one can perform arbitrary calculations correctly. Similarly, it isn't clear how one can meaningfully talk about, say, the probability that Peano arithmetic is consistent.)
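JoshuaZ's point about multiple distinct proofs can be sketched numerically. If each proof is sound with probability p, and the proofs' flaws are independent (a strong assumption; real proofs often share techniques and hence share failure modes), the claim stands so long as at least one proof goes through (illustrative numbers, mine):

```python
def confidence(n_proofs, p_sound):
    # P(at least one proof is sound) = 1 - P(every proof is flawed),
    # assuming the proofs fail independently.
    return 1.0 - (1.0 - p_sound) ** n_proofs

print(confidence(1, 0.9))  # 0.9
print(confidence(3, 0.9))  # ≈ 0.999
```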
Marius
I don't think we actually disagree at all. Your "hole" is really the introduction of additional premises. If the premises are true and the reasoning sound, the conclusions follow. If they are shown to be untrue, you can discard the conclusion. Mathematics rarely has a way to evaluating the likelihood its premises are true - usually the best it can do is to show that certain premises are or are not compatible with one another. What you are saying regarding multiple distinct proofs of a claim is true according to some informal logic, but not in any strict mathematical sense. Mathematically, you've either proven something or you haven't. Mathematicians may still be convinced by scientific, theologic, literary, financial, etc. arguments of course.
JoshuaZ
Not really. Consider for example someone who has seen Kempe's argument. They should have a higher confidence that, say, "The four color theorem is true in ZFC" than someone who has not seen Kempe's argument. There's no additional premise being added, but Kempe's argument is clearly wrong. Not sure what you mean here. It looks like the sentence was cut off?
Marius
Would you mind explaining in a little more detail why you say a person who has seen Kempe's flawed proof should have higher confidence than one who has not? Do you mean that it's so emotionally compelling that one's mind is convinced even if the math doesn't add up? Or that the required (previously-hidden) premise that allows Kempe to ignore the degree 5 vertex has some possibility of truth, so that the conclusion has an increased likelihood of truth? also: fixed the end.
JoshuaZ
Hmm, I'm not sure how to do so without just going through the whole proof. Essentially, Kempe's proof showed that a smallest counterexample graph couldn't have certain properties. One part of the proof was showing that the graph could not contain a vertex of degree 5, but this part was flawed. Kempe did show that it couldn't contain a vertex of degree 4, and moreover that any minimal counterexample must have a vertex of degree 5. This makes us more confident in the original claim, since a minimal counterexample has to have a very restricted form. Replying to the fixed end here so as to minimize confusion: Well, yes, but the claim I was addressing was your claim that "encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian", which is wrong. I agree that a flawed proof is not a proof. And yes, the logic is in any case informal. See my earlier parenthetical remark. I actually consider the problem of confidence in mathematical reasoning to be one of the great difficult open problems within Bayesianism. One reason I don't (generally) self-identify as a Bayesian is the apparent lack of such a theory. (This itself deserves a disclaimer that I'm by no means an expert in this field, so there may be work in this direction, but if so I haven't seen any that is at all satisfactory.)
Marius
I think you are assuming I count a dubious premise as an incorrect premise. Obviously, a merely dubious premise allows the conclusion to have some utility to the Bayesian. I really don't think we actually disagree.
JoshuaZ
Really? Even incorrect premises can be useful. For example, one plausibility argument for the Riemann hypothesis rests on assuming that the Mobius function behaves like a random variable. But that's a false statement. Nevertheless, it acts close enough to being a random variable that many find this argument to be evidence for RH. And there's been very good work trying to take this false statement and make true versions of it. Similarly, if one believed what you have said, one would have to conclude that, living in the 1700s, all of calculus was useless because it rested on the notion of infinitesimals, which didn't exist. The premise was incorrect, but the results were sound.
Sniffnoy
Incidentally, as more evidence, apparently this AC0 conjecture has just been proved true by Ben Green (rather, he noticed that other people had already done stuff that had this as a consequence, which the people asking the question hadn't known about).
Marius
Ok, I need to refine my description of math a bit. I'd claimed that an incorrect premise gives useless conclusions; actually as you point out if we have a close-to-correct premise instead, we can have useful conclusions. The word "instead" is important there, because otherwise we can then add in a correct contradictory premise, generating new and false conclusions. In some sense this is necessary to all math, most evidently geometry: we don't actually have any triangles in the world, but we use near-triangles all the time, pretending they're triangles, with great utility. Also, to look again at Kempe's "proof": we can see where we can construct a vertex of degree 5 where his proof does not hold up. And we can try to turn that special case back into a map. The fact that nobody's managed to construct an actual map relying on that flaw does not give any mathematical evidence that an example can't exist. Staying within the field of math, the Bayesian is not updated and we can discard his conclusion. But we can step outside math's rules and say "there's a bunch of smart mathematicians trying to find a counterexample, and Kempe shows them exactly where the counterexample would have to be, and they can't find one." That fact updates the Bayesian, but reaches outside the field of math. The behavior of mathematicians faced by a math problem looks like part of mathematics, but actually isn't.
2twanvl
That simply doesn't follow: why does involving reproducible results imply not being definitive? Empirical results are never 'definitive' in the sense of being 100.0% certain, but we can get very close. Whether this is done in a single study or with multiple studies doesn't matter at all. In practice there are good reasons to want multiple studies, but they have more to do with questions not addressed in a single study, trustworthiness of the authors, etc. Even wrong mathematical proofs have non-zero utility, because they often lead to new insights. For example, if only the last of 100 steps is wrong, then you are 99 steps closer to some goal.
2Marius
A single study can't get close to 100% certainty, because that's just not how science works. If you look at all the studies whose conclusions were established with 95% confidence, you'll find that well over 5% of those conclusions are now believed to be false. There are issues of trust, issues of data-collection errors, issues of statistical evaluation, the fact that scientific methods are designed under the assumption that studies will be repeated, etc. The steps within unsound mathematical proofs may be valuable, but their conclusions are not.
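The gap between a single study's "95% confidence" and the field-wide error rate can be made concrete with a short Bayesian calculation. The numbers below are illustrative assumptions, not data: suppose 10% of tested hypotheses are true and studies have 80% power, with the usual 5% significance threshold:

```python
# Of all single-study results "significant at p < 0.05", what fraction are false?
# Illustrative assumptions, not measured values:
prior_true = 0.10  # fraction of tested hypotheses that are actually true
power = 0.80       # P(significant result | hypothesis true)
alpha = 0.05       # P(significant result | hypothesis false)

true_positives = prior_true * power          # 0.08 of all studies
false_positives = (1 - prior_true) * alpha   # 0.045 of all studies
false_discovery_rate = false_positives / (true_positives + false_positives)

print(f"fraction of significant findings that are false: {false_discovery_rate:.0%}")
```

Under these (hypothetical) numbers, 36% of significant findings are false — "well over 5%" — which is why replication, rather than any single study's p-value, carries the real evidential weight.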
0twanvl
The current scientific method is in no way ideal. If a study were properly Bayesian, then you should be able to confidently learn from its results. That still leaves issues of trust and the possibility of human error, but there might also be ways to combat those. But in a human society, repeating studies is perhaps the best thing one can hope for. Agreed. That is the one part of an unsound proof that is useless.
0Marius
Can you describe a better, more Bayesian scientific method? The main way I would change it is to increase the number of studies that are repeated, to improve the accuracy of our knowledge. How would you propose to improve our confidence other than by showing that an experiment has reproducible results?
-1ksolez
In a recent interview on Singularity One on One http://singularityblog.singularitysymposium.com/question-everything-max-more-on-singularity-1-on-1/ (first video), Max More, one of the founders of transhumanism, talks about how important philosophy was as the starting point for the important things he has done. Philosophy provided the vantage point from which he wrote the influential papers that started transhumanism. Philosophy is not something to give up or shun; you just need to know which parts of it to ignore in pursuing important objectives.
2prase
I am not questioning the importance of philosophy, but the use of the label "philosophical" together with "fundamental". If someone drew a map of human knowledge, mathematics and biology and physics and history would form wedges starting from well-established facts near the center and reaching toward more recent and complex theories further out; philosophy, on the other hand, would be the whole ever-receding border region of uncertain conjectures still out of reach of scientific methods. To expand human knowledge these areas must indeed be explored, but once that happens, some branch of science will claim possession of them and there will be no reason to keep calling them philosophy.

Eliezer's anti-philosophy rant Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

This opening paragraph set off a huge warning klaxon in my bullshit filter. To put it generously, it is heavy on 'spin'. Specifically:

  • It sets up a comparison based on upvotes between a post written in the last month and a post written on a different blog.
  • Luke's post is presented as a contrast to controversy despite being among the most controversial posts ever to appear on the site. This can be measured by the massive series of replies and counter-replies, most of which were heavily upvoted - which is how controversy tends to present itself here. (Not that controversy is a bad thing.)
  • Upvotes for a well-written post that contains useful references are equated with support for the agenda that prompted the author to write it.
  • The first 3.5 words were "Eliezer's anti-philosophy rant". Enough said.

All of the above is unfortunate, because the remainder of this post was overwhelmingly reasonable and a promise of good things to come.

1lukeprog
Interesting, thanks. By the way, what is 'the agenda that prompted the author to write it'?
9lukeprog
I just realized that 'rant' doesn't have the usual negative connotations for me that it probably does for others. For example, here is my rant about people changing the subject in the middle of an argument. For the record, the article originally began "Eliezer's anti-philosophy rant..." but I'm going to change that.
4FAWS
Rant doesn't necessarily have negative connotations for me either, it really depends on the context. Your usage didn't look pejorative at all to me. It's sort of like a less intensive version of "vitriol" and there is no problem (implied) if the target deserves it (or is presented so).
1lessdazed
It is similar to the word "extremist": the technical definition is rarely all that people mean to invoke, and it keeps acquiring further connotations. Losing precise meaning is the road to newspeak, and it distresses me. It is sometimes the result of being uncomfortable with, or incapable of, discussing specific facts, which is harder than the inside view.

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen's Doing Science Online? It's written specifically about science, but I see no reason why it couldn't be applied to any kind of scholarly communication. He makes a good argument for including blog posts in scientific communication, which, at present, doesn't seem compatible with writing journal articles (is it kosher to cite blog posts?):

Many of the best blog posts contain material that could not easily be published in a conventional way: small, striking insights, or perhaps general thoughts on approach to a problem. These are the kinds of ideas that may be too small or incomplete to be published, but which often contain the seed of later progress.

You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes e

... (read more)

No, I agree that much science and philosophy can be done in blogs and so on. Usually, it's going to be helpful to do some back-and-forth in the blogosphere before you're ready to publish a final 'article.' But the well-honed article is still very valuable. It is much easier for people to read, it cites the relevant literature, and so on.

Articles could be, basically, very well-honed and referenced short summaries of positions and arguments that have developed over dozens of conversations and blog posts and mailing list discussions and so on.

4Dustin
I often get lost in back-and-forth on blogs because it jumps from here to there and assumes the reader has kept track of everything everyone involved has said on the subject. My point being that I agree that both the blogosphere and the article are important.
6alfredmacdonald
YeahOKButStill has an interesting take on the interaction between philosophy done in blogs and philosophy done in journals:
[-]FAWS190

Eliezer's anti-philosophy rant Against Modal Logics hovers near 0 karma points, while my recent pro-philosophy (by LW standards) post and my list of mainstream philosophy contributions were massively upvoted.

The karma of pre-LW OvercomingBias posts that were ported over should not be compared to that of LW posts proper. Most of Eliezer's old posts are massively under-voted that way, though some frequently linked-to posts less so.

5lukeprog
True, but most of Eliezer's substantive pre-LW posts seem to have karma in the low teens, and the comments section of Against Modal Logics also shows that post was highly controversial.
4Vladimir_Nesov
Not exactly. Most posts published around the same time have similar Karma levels. The earliest posts and highly linked-to posts get more Karma, but people either rarely get far in reading the archives, or their impulse to upvote atrophies by the time they've read hundreds of posts; as a result, the Karma level of a typical post from about April 2008 onward currently stands at about 0-10. The post in question currently stands at 4 Karma.
4Normal_Anomaly
Also, many users read the early posts while still in the lurker stage, at which point they can't upvote.
0David_Gerard
Do we actually know this?
0Normal_Anomaly
Well, whenever somebody starts posting and doesn't act like they've already read the sequences, they get told to go read the sequences and come back afterward. Also, in the past year or so many new users have joined the site from MoR, and the link in the MoR author's notes goes to the main sequences list. I know that I at least decided to join LW when MoR linked me to the sequences and I liked them.
6David_Gerard
I haven't seen this in several months (and I've been watching); the admonishment seems to have vanished from the local meme selection. More often, someone links to a specific apposite post, or possibly sequence. It's just entirely unclear how we'd actually measure whether people who read the sequences do so before or after logging in. (I'd suspect not, given they're a million words of text and a few million of accompanying comments, but then that's not even an anecdote ...)

Poll: If you read the sequences before opening your account, upvote this comment.

If you read the sequences before LessWrong was created upvote this comment.

Poll: If you read the sequences after opening your account, upvote this comment.

2Normal_Anomaly
You may be right. I think there has been less of that lately. I wouldn't say it's entirely unclear. I'm curious enough to start a poll.
0David_Gerard
Could also do with "Poll: If you still haven't read the sequences, upvote this comment."
0Normal_Anomaly
I'd been considering that, and since you agree I went and added it.
1Desrtopa
I think this has mainly declined after a number of posts discussing the sheer length of the sequences and the deceptive difficulty of the demand, and potential ways to make the burden easier.
1Normal_Anomaly
Poll: If you haven't read the sequences yet, upvote this comment.
2Desrtopa
Should this perhaps be made into a discussion article where it will be noticed more?