Part of the sequence: Rationality and Philosophy
Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.
If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that Eliezer and I strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most.
That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)
Failed methods
Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' things that don't actually follow from the scientific data or the equations used.
Analytic philosophy is clearer, more rigorous, and better with math and science, but only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.
A diseased discipline
What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don't need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.
Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:
- Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it's useful, then it's science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.
- Most philosophers don't understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don't trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone's invisible magical friend.
- Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
- Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl's triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it's best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
- Many naturalists aren't trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don't know how your tool works then you'll use it poorly. AI is useful because it keeps you honest: you can't write confused concepts or non-natural hypotheses in a programming language.
- Mainstream philosophy publishing favors the established positions and arguments. You're more likely to get published if you can write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.
- Even much of the useful work naturalistic philosophers do is not at the cutting-edge. Chalmers' update for I.J. Good's 'intelligence explosion' argument is the best one-stop summary available, but it doesn't get as far as the Hanson-Yudkowsky AI-Foom debate in 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just like Eliezer has so usefully done on Less Wrong, but of course this isn't cutting edge. You could always just read it in the primary literature by Kahneman and Tversky and others.
Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.
But if you're looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.
Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who already spent several years studying it - that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn't useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.
Philosophy: the way forward
Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable. How can we proceed?
First, we must remain vigilant with our rationality training. It is not easy to overcome millions of years of brain evolution, and as long as you are human there is no final victory. You will always wake up the next morning as homo sapiens.
Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality. Ask a fellow rationalist who is knowledgeable about philosophy what the standard positions and arguments in philosophy are on your topic. If any of them seem really useful, grab those particular works and read them. But again: you're probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.
However, I must say that I wish so much of Eliezer's cutting-edge work wasn't spread out across hundreds of Less Wrong blog posts and long SIAI articles written in an idiosyncratic style and vocabulary. I would rather these ideas were written in standard academic form, even if they transcended the standard game of mainstream philosophy.
But it's one thing to complain; another to offer solutions. So let me tell you what I think cutting-edge philosophy should be. As you might expect, my vision is to combine what's good in LW-style philosophy with what's good in mainstream philosophy, and toss out the rest:
- Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It's hard to follow a hundred-page argument.
- Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you'll be discussing, even if you disagree with them.
- If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you're talking about. This requires that you research your topic so you know what the standard terms and positions are. If you're talking about a problem in cognitive science, you'll need to read cognitive science literature. If you're talking about a problem in social science, you'll need to read social science literature. If you're talking about a problem in epistemology or morality, you'll need to read philosophy.
- Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of 'hand-holding' sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.
- Always cite the relevant literature. If you can't find much work relevant to your topic, you almost certainly haven't looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.
- Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence. [This one is central. This is LW-style philosophy.]
- Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).
- Don't dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.
- Conclude with a summary of your paper, and suggest directions for future research.
- Ask fellow rationalists to read drafts of your article, then re-write. Then rewrite again, adding more citations and hand-holding sentences.
- Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).
Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.
Meeting journal standards is not the most important reason to follow the suggestions above. Write short articles because they're easier to follow. Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests. Use standard terms so that people already familiar with the topic aren't annoyed at having to learn a whole new vocabulary just to read your paper. Cite the relevant positions and arguments so that people have a sense of the context of what you're doing, and can look up what other people have said on the topic. Write clearly and simply and with much organization so that your paper is not wearying to read. Write lots of hand-holding sentences because we always communicate less effectively than we think we do. Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know. Use your rationality training to remain sharp at all times. And so on.
That is what cutting-edge philosophy could look like, I think.
Next post: How You Make Judgments
Previous post: Less Wrong Rationality and Mainstream Philosophy
As a professional philosopher who's interested in some of the issues discussed in this forum, I think it's perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I'm interested in the reverse direction: if good ideas are being had here, I'd like professional philosophy to benefit from them. So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.
(The two main contributions that I'm aware of are ideas about friendly AI and timeless/updateless decision theory. I'm sure there are more, though. Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)
Yes, this is one reason I'm campaigning to have LW / SIAI / Yudkowsky ideas written in standard form!
Oh wow. The initials 'djc' match up with David (John) Chalmers. Carnap and PhilPapers are mentioned in this user's comments. Far from conclusive evidence, but my bet is that we've witnessed a major analytic philosopher contribute to LW's discussion. Awesome.
That's the one. I sent it to five of the world's leading decision theorists. Those who I heard back from clearly hadn't grasped the main idea. Given the people involved, I think this indicates that the paper isn't a sufficiently clear statement.
Your dream has come true.
Isn't this true just because of the way philosophy is effectively defined? It's a catch-all category for poorly understood problems which have nothing in common except that they aren't properly investigated by some branch of science. Once a real question is answered, it no longer feels like a philosophical question; today philosophers no longer investigate the motion of celestial bodies or the structure of matter.
In other words, I wonder what the fundamentally philosophical questions are. The adverb "fundamentally" creates the impression that those questions will still be regarded as philosophical after being uncontroversially answered, which I doubt will ever happen.
Strongly agreed. I think "philosophical questions" are the ones that are fun to argue endlessly about even if we're too confused to actually solve them decisively and convincingly. Thinking that any questions are inherently philosophical (in that sense) would be mind projection; if a question's philosophicalness can go away due to changes in facts about us rather than facts about the question, then we probably shouldn't even be using that as a category.
This opening paragraph set off a huge warning klaxon in my bullshit filter. To put it generously, it is heavy on 'spin'. Specifically:
All of the above is unfortunate because the remainder of this post was overwhelmingly reasonable and a promise of good things to come.
Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen's Doing Science Online? It's written specifically about science, but I see no reason why it couldn't be applied to any kind of scholarly communication. He makes a good argument for including blog posts in scientific communication, which, at present, doesn't seem compatible with writing journal articles (is it kosher to cite blog posts?):
...
No, I agree that much science and philosophy can be done in blogs and so on. Usually, it's going to be helpful to do some back-and-forth in the blogosphere before you're ready to publish a final 'article.' But the well-honed article is still very valuable. It is much easier for people to read, it cites the relevant literature, and so on.
Articles could be, basically, very well-honed and referenced short summaries of positions and arguments that have developed over dozens of conversations and blog posts and mailing list discussions and so on.
The karma of pre-LW Overcoming Bias posts that were ported over should not be compared to that of LW posts proper. Most of Eliezer's old posts are massively under-voted that way, though some frequently-linked-to posts less so.
Poll: If you read the sequences before opening your account, upvote this comment.
If you read the sequences before LessWrong was created, upvote this comment.
Poll: If you read the sequences after opening your account, upvote this comment.
Have you considered taking some of EY's work and jargon-translating it into journal-suitable form?
I'd love to do that if I had time, and if Yudkowsky was willing to answer lots of questions.
You could probably find other philosophers to help out. The end result, if supported properly by Eliezer, could be very helpful to SIAI's cause.
If SIAI donations could be earmarked for this purpose I would double my monthly contribution.
I'd add "Learn LaTeX" to this one; if you're publishing in a journal, that matters more than your font preferences and formatting skills (which won't be used in the published version), and if you're publishing online, it can make your paper look like a journal article, which is probably good for status. Even TeX's default Computer Modern font, which I wouldn't call beautiful, has a certain air of authority to it — maybe due to some of its visual qualities, but possibly just by reputation.
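For anyone who hasn't used it, here is a minimal sketch of what that takes (my own illustration, using only the standard article class and default settings, nothing journal-specific):

```latex
% Minimal LaTeX article skeleton (illustrative only).
\documentclass[11pt]{article}
\usepackage{amsmath}  % standard math support
\begin{document}

\title{Your Paper Title}
\author{Your Name}
\maketitle

\begin{abstract}
One-paragraph summary of the argument.
\end{abstract}

\section{Introduction}
Context, goals, and key terms go here.

\section{Conclusion}
Summary and directions for future research.

\end{document}
```

Compiling this with pdflatex already gives you the default Computer Modern look mentioned above.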
I agree with a lot of the content -- or at least the spirit -- of the post, but I worry that there is some selectivity that makes philosophy come off worse than it actually is. Just to take one example that I know something about: Pearl is praised (rightly) for excellent work on causation, but very similar work developed at the same time by philosophers at Carnegie Mellon University, especially Peter Spirtes, Clark Glymour, and Richard Scheines, isn't even mentioned.
Lots of other philosophers could be added to the list of people making interesting, useful...
I posted this on Reddit r/philosophy, if anyone would like to upvote it there.
A few points:
Philosophy is (by definition, more or less) meta to everything else. By its nature, it has to question everything, including things that here seem to be unquestionable, such as rationality and reductionism. The elevation of these into unquestionable dogma creates a somewhat cult-like environment.
Often people who dismiss philosophy end up going over the same ground philosophers trod hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so sma...
This paragraph, from Eugene Mills' 'Are Analytic Philosophers Shallow and Stupid?', made me laugh out loud:
Mills goes on to defend philosophers, with two sections entitl...
Could you please write a translation key for these?
I think it would help LWers read mainstream philosophy, and people with philosophy backgrounds read LW.
I'd welcome more quality discussion of philosophical topics such as morality here. You occasionally see people pop up and say confused things about morality, like
... that got downvoted, but I still get the impression that confused thinking like that pops up more often on the topic of morality than on others (except Friendly AI), and that Eliezer didn't do a good eno...
Seems like an appropriate article to relay a bit of wisdom from E.T. Jaynes.
Jaynes quotes a colleague: “Philosophers are free to do whatever they please, because they don’t have to do anything right.”
Is it possible to think "like an AI programmer" without being an AI programmer? If the answer is "no", as I suspect it is, then doesn't this piece of advice basically say, "don't be a philosopher, be an AI programmer instead"? If so, then it directly contradicts your point that "philosophy is not useless".
To put it in a slightly different way, is creating FAI primarily a philosophical challenge, or an engineering challenge?
What makes you think this? It's true that many philosophers recognize the genetic fallacy, and hence don't take "you judge that P because of some fact about your brain" to necessarily undermine their judgment. But it's ludicrously uncharitable to interpret this principled epistemological disagreement as a mere factual misunderstanding.
Again: We can agree on all the facts about how human psychology works. What we disagr...
Richard Chappell,
Of course, you know how intuitions are generally used in mainstream philosophy, and why I think most such arguments are undermined by facts about where our intuitions come from, which undermine the epistemic usefulness of those intuitions. (So does the cross-checking problem.)
I'll break the last part into two bits:
What I'm saying with the 'people are made of atoms' bit is that it looks like a slight majority of philosophers may now think that there is at least a component of a person that is not made of atoms - usually consciousness.
As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.
I'm sure this post must have been highly annoying to a pro such as yourself, and I appreciate the cordial tone of your reply.
The first question should really be: what does the apparent conceivability of zombies by humans imply about their possibility?
Philosophers on your side of the debate seem to take it for granted (or at least end up believing) that it implies a lot, but those of us on the other side think that the answer to the cogsci question undermines that implication considerably, since it shows how we might think zombies are conceivable even when they are not.
It's been quite a while since I was actively reading philosophy, so maybe you can tell me: are there any reasons to believe zombies are logically possible other than people's intuitions?
At least around here, "evidence (for X)" is anything which is more likely to be the case under the assumption that X is true than under the assumption that X is false. So if zombies are more likely to be conceivable if non-physicalism is true than if physicalism is true, then I for one am happy to count the conceivability of zombies as evidence for non-physicalism.
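To make that explicit (a standard Bayesian formalization, not wording from the comment above):

```latex
% E counts as evidence for X exactly when X makes E more likely than not-X does;
% by Bayes' theorem, observing such an E raises the posterior above the prior:
\[
  P(E \mid X) > P(E \mid \neg X)
  \;\Longrightarrow\;
  P(X \mid E) \;=\; \frac{P(E \mid X)\,P(X)}{P(E)} \;>\; P(X).
\]
```

On that reading, the conceivability of zombies counts as evidence for non-physicalism exactly to the extent that non-physicalism makes that conceivability more likely.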
But again, the question is: how do you know that zombies are conceivable? You say that this is a non-psychological fact; that's fine perhaps, but the only evidence for this fact that I'm aware of is psychological in nature, and this is the very psychological evidence that is undermined by cognitive science. In other words, the chain of inference still seems to be
people think zombies are conceivable => zombies are conceivable => physicalism is false
so that you still ultimately have the "work" being done by people's intuitions.
I believe he's trying to draw a distinction between two potential sources of evidence:
Richard is saying that his justification for his belief that p-zombies are conceivable lies in his successful conception of p-zombies. So what licenses him to believe that he's successfully conceived of zombies after all? His answer is that he has direct access to the contents of his conception, in the same way that he has access to the contents of his perception. You don't need to ask, "How do I know I'm really seeing blue right now, and not red?" Your justification for your belief that you're seeing blue just is your phenomenal act of noticing a real, bluish sensation. This justification is "direct" insofar as it comes directly from the sensation, and not via some intermediate process of reasoning which involve inferences (which can be valid or invalid) or premises (which can be true or false). Similarly, he thinks his justification for his belief that p-zombies are conceivable just is his p-zombie-ish conception.
A couple of things to note. One is that this e...
Kind of. I was drawing on my observations about how the karma system is used. I've generally noticed (as have others) that people with outlier views do get modded up very highly, so long as they articulate their position clearly. For example: Mitchell Porter on QM, pjeby on PCT, lukeprog on certain matters of mainstream philosophy, Alicorn on deontology and (some) feminism, byrnema on theism, XiXiDu on LW groupthink.
Given that history, I felt safe in chalking up his "insufficiently" high karma to inscrutability rather than "He's deviating from the party line -- get him!" And you don't get to ignore that factor (of controversial, well-articulated positions being voted up) by saying you "weren't referring to that speculation".
My response is that, to the extent that convoluted, error-obscuring posting is discouraged, I'm perfectly fine with such discouragement, and I don't...
Only for people who haven't properly internalized that they are brains. Just like people who haven't internalized that heat is molecular motion could imagine a cold object with molecules vibrating just as fast as in a hot object.
If you met someone who said with a straight face "Of course I can imagine something that is physically identical to a chair, but lacks the fundamental chairness that chairs in our experience partake of... and is therefore merely a fake chair, although it will pass all our physical tests of being-a-chair nevertheless," would you consider that claim sufficient evidence for the existence of a non-physical chairness?
Or would you consider other explanations for that claim more likely?
Would you change your mind if a lot of people started making that claim?
But these kinds of imagining are importantly dissimilar. Compare:
1) imagine the physical properties without imagining consciousness
2) imagine a microphysical duplicate of our world that's lacking chairs
The key phrases are: "without imagining" and "that's lacking". It is one thing to imagine one thing without imagining another, and quite another to imagine one thing that's lacking another. For example, I can imagine a ball without imagining its color (indeed, as experiments have shown, we can see a ball without seeing its color), but I may not be able to imagine a ball that's lacking color.
This is no small distinction.
To bring (2) into line with (1) we would need to change it to this:
2a) imagine a microphysical duplicate of our world without imagining chairs
And this, I submit, is possible. In fact it is possible not only to imagine a physical duplicate of our world wit...
This feels more like a style guide than a "vision of how to do philosophy".
Most, probably not all. Universal statements like this are brittle and rarely correct.
Why would it be bad for philosophy to work (primarily) with intuitions? And why would philosophy need empirical evidence? (Relating to the point in the linked post on criticism of dualists not having any evidence). Empirical evidence is not what is (primarily) used in mathematics. If everything could be solved with empirical evidence, there would be no need for philosophy. I don’t see how scientific evidence is better than intuition. Or even possible without them... In case you mean not only empirical evidence but also logical/mathematical (?) evidence: W
...
Hm, that makes a nifty quote.
One thing I mean by saying that philosophers could benefit from 'thinking like AI programmers' is that forcing yourself to think about the algorithm that would generate a certain reality can guard against superstition, because magic doesn't reduce to computer code.
I recently came across Leibniz saying much the same thing in a passage where he imagines a future language of symbolic logic that had not yet been invented:
...
This statement seems misleading, since justification doesn't actually "hit bottom", doesn't stop. For contrast, a quotation from the post:
...
What is an example of a magical category being used in philosophy? (That is, a convenient handle I can use to represent the term 'magical category' when I read it.)
As a scientist, not a philosopher, I still don't see much virtue in writing "simply". This is a particularly Anglo-Saxon tradition, whereas I (and most of the German-Russian tradition, AFAIK) have always felt that when you try writing simply you lose at least the speed of your train of thought and quite likely some of your arguments' power. "No math - no science" is a specific example, but not the only one.
Sorry, I'm not a professional philosopher but did study it at university and still retain an interest in it. I was interested to read this statement: "Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless."
I take that to mean you consider Tractatus Logico-Philosophicus to be his better work. I do too, and have been mocked for saying so when I was a student. I was taught by some very famous professors at a well-placed university, but I wasn't much of a student, not the ...
Peter Hacker is not somebody who thinks "philosophy should be useless." Of the list of "basics" that you cite Peter Hacker would agree that "things are made of atoms", "that many questions don't need to be answered but instead dissolved" and "that language is full of tricks." He also explicitly states that "Philosophical Foundations of Neuroscience" should be judged on its usefulness (which is why methodological concerns are relegated to the back pages). Indeed, it seems you equate dissolving prob...
"3. Philosophy has grown into an abnormally backward-looking discipline."
Indeed. One of the salutary roles that philosophy served until about the 18th century (think e.g. "natural philosophy") was to provide an intellectual context within which new disciplines could emerge and new problems could be formulated into coherent complexes of issues that became their own academic disciplines.
In a world where cosmology and quantum physics and neuroscience and statistics and scientific research methods and psychology and "law and whatever"...
lukeprog wrote "philosophers are 'spectacularly bad' at understanding that their intuitions are generated by cognitive algorithms." I am pretty confident that minds are physical/chemical systems, and that intuitions are generated by cognitive algorithms. (Furthermore, many of the alternatives I know of are so bizarre that given that such an alternative is the true reality of my universe, the conditional probability that rationality or philosophy is going to do me any good seems to be low.) But philosophy as often practiced values questioning ever...
It may just be my physician's bias, but "diseased" seems like a very imprecise term. The title would be more informative and more widely quoted with another word choice. In medicine you would not find that word in an article title.
There needs to be more cross-talk between philosophy and science. It is not an "either or" choice; we need an amalgam of the two. As a scientist I object strongly to your statement "Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy." Combined approaches are what is needed, not abandonment of philosophy.
Since evolution, in particular, formed our moral inclination and our reasoning ability, this statement sounds a bit unfair/one-sided.
Fix: The link in the sentence “This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes.”, namely http://el-prod.baylor.edu/certain_doubts/?p=453, is wrong; it should point to something else.
Which Less Wrong post do I need to read to find out how to do that? Also is there a hard definition of an AI programmer?
The difference between much of mainstream philosophy and LessWrongian philosophy: http://www.lulztruck.com/43901/the-thinker-and-the-doer/
This is my viewpoint as a philosophical layman. I've liked a lot of the philosophy I've read, but I'm thinking about what the counter-proposal to your post might be, and I don't know that it wouldn't result in a better state of affairs. I don't believe we'd have to stop reading writers from prior eras, or keep reinventing the wheel for "philosophical" questions. But why not just say, from here on out, the useful bits of philosophy can be categorized into other disciplines, and the general catch-all term is no longer warranted? Philosophy cov...
The traditional definition of philosophy (in Greek) implied that philosophy's purpose was not to convey information, but to produce a transformation in the individual who practices it. In that sense, it is not supposed to be "useless", but it may appear so to someone who is looking to it for "information" about reality. By this standard, very little of what goes on in academic Philosophy departments today would qualify.
Philosophy is usually negative. Change my mind.
The philosophers I study under criticise the sciences for not being rigorous enough. The problem goes both ways. The sciences often do not understand the basic concepts from which they are functioning. A good scientist will also have a rudimentary understanding of philosophy, in order to fiddle with the background epistemology of their work.
You are correct in thinking that Continental philosophy is not continuous with the sciences, because it is the core of the humanities and as such being continuous with the sciences would be unnatural for it. I still thi...
Acid test 1: Are they complaining about experimenters using arbitrary subjective "statistical significance" measures instead of Bayesian likelihood functions?
Acid test 2: Are they chiding physicists for not decisively discarding single-world interpretations of quantum mechanics?
Acid test 3: Are all of their own journals open-access?
It may be ad hominem tu quoque, but any discipline that doesn't pass the three acid tests has not impressed me with its superiority to our modern, massively flawed academic science.
What's weird is that you begin by criticizing continental philosophy. Then you say that philosophers do not understand how their brains work, and what their intuition is (linking to an article which explains that our intuition of reality is not reality). But one of the main topics of continental philosophy, long before cognitive science existed, was to argue that we are in a sense trapped inside our cognitive situation with no way out, and for that reason we cannot know what reality-in-itself is. It feels like you rediscovered Kant... I agree that continental...