Very definitely; it's easy to forget the level of knowledge necessary to work with this stuff. For example, I recently realised that in a room of competitive debaters (college-educated, well-read people) no one knew what I meant by epistemic uncertainty. And very few philosophers know anything about QM or neurology...
TL;DR Illusion of transparency is a bitch.
For example I recently realised that in a room of competitive debaters (college educated well read people) no-one knew what I meant by epistemic uncertainty.
Wait, what do you mean by "epistemic uncertainty"? The top Google results for the phrase contrast it with "aleatoric uncertainty" which is so esoteric that it's not even in LW's vocabulary (zero results for "aleatoric" on LW search).
Wikipedia says:
Aleatoric uncertainty, aka statistical uncertainty, which is unknowns that differ each time we run the same experiment. For an example, consider simulating the take-off of an airplane: even if we could exactly control the wind speeds along the runway, if we let 10 planes of the same make start, their trajectories would still differ due to fabrication differences. Similarly, if all we knew is that the average wind speed is the same, letting the same plane start 10 times would still yield different trajectories because we do not know the exact wind speed at every point of the runway, only its average. Aleatoric uncertainties are therefore something an experimenter cannot do anything about: they exist, and they cannot be suppressed by more accurate measurements.
Epistemic uncertainty, aka systematic uncertainty, which is due to things we could in principle know but don't in practice. This may be because we have not measured a quantity sufficiently accurately, or because our model neglects certain effects, or because particular data are deliberately hidden.
Aleatoric uncertainty is basically seeing randomness as a property of the universe, rather than a property of minds. Unless you verge into quantum territory, basically all randomness is actually epistemic uncertainty, and even if you verge into quantum territory, you can view quantum randomness as epistemic uncertainty.
Bayesians are comfortable viewing all uncertainties as epistemic. Non-Bayesians aren't, and all of the people I know who do professional decision-making under uncertainty dread someone even mentioning aleatoric uncertainty because it's a dead giveaway that the person mentioning it isn't Bayesian, and thus a long, unproductive philosophical discussion may be necessary before they can get anywhere.
I realize that LW collectively doesn't like unreferenced definitions, but in this case maybe it's OK... a friend of mine whose PhD is in decision theory explained aleatory uncertainty to me as the uncertainty of chance with known parameters: if you roll a normal six-sided die, you know it's going to come up with a value in the range 1-6, but you don't know what it will be. There's no chance it will come up 7. Epistemic uncertainty is the uncertainty of chance with unknown parameters: there may not be enough data to know the bounds of an event, or it may have such large and random bounds that trying to place them is not very meaningful.
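To make that distinction concrete, here is a minimal Python sketch (my own illustration, not anything from the comments above; the function names and the Beta(1, 1) prior are my choices): aleatoric uncertainty is the spread in outcomes of a die whose distribution is fully known, while epistemic uncertainty is the unknown bias of a coin, which a simple Bayesian Beta-Binomial update can shrink as data comes in.

```python
import random
from fractions import Fraction

random.seed(0)

# Aleatoric uncertainty: the die's distribution is fully known (uniform on
# 1-6), yet individual outcomes still vary. Collecting more rolls never
# shrinks this spread -- it is a property of the process, not our knowledge.
rolls = [random.randint(1, 6) for _ in range(10)]

# Epistemic uncertainty: the coin's bias p is unknown. Starting from a
# Beta(1, 1) (uniform) prior, each observed flip narrows the posterior --
# this kind of uncertainty CAN be reduced by gathering more data.
def beta_binomial_update(heads, tails, prior=(1, 1)):
    """Return the posterior (alpha, beta) after observing heads/tails."""
    a, b = prior
    return a + heads, b + tails

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution, as an exact fraction."""
    return Fraction(a, a + b)

# After seeing 7 heads and 3 tails, the posterior is Beta(8, 4),
# whose mean (our best estimate of the bias) is 8/12 = 2/3.
a, b = beta_binomial_update(heads=7, tails=3)
print(posterior_mean(a, b))  # -> 2/3
```

The point of the contrast: no amount of extra rolls changes what we know about the fair die, but every extra coin flip tightens the posterior over the unknown bias.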
Isn't it simply the extent to which one is not certain about some (piece of) knowledge? At least that was my intuition when I first read that.
After googling, the closest definition I could find was on Wikipedia under systematic uncertainty, in contrast to statistical uncertainty (aleatoric uncertainty), apparently.
QM potentially answers cool philosophical questions like, "does cut & paste transportation preserve identity?" (it looks like it does, for our universe doesn't seem to encode any identity at all).
Neurology will most probably tell us nearly everything we will ever know about how humans actually work. I expect many questions formerly considered "philosophical" will be answered by this piece of science.
Therefore, I think nearly all philosophers need to know some QM and neurology.
I agree with your first statement.
However, as for your second statement, I would really like an example, because I am not entirely sure what you mean. (I am sincerely requesting examples.)
Unfortunately, I strongly disagree with your third statement. The time it would take to learn QM with sufficient rigor to be interesting could be better spent reading the findings of experimental psychology or learning more mathematics. For the majority of philosophers, their subject matter simply does not overlap with QM in such a way that knowing rigorous QM would help them.
Further, I agree with what paper-machine seemed to imply in their post. A little QM can make a philosopher stupid.
Of course, in certain subjects, knowing QM or neurology should be mandatory.
However, as for your second statement, I would really like an example, because I am not entirely sure what you mean. (I am sincerely requesting examples.)
A few quick examples:
A lot of philosophy of mind assumes there is a singular unified self, whereas neurology might lead you to think of the mind as a group of systems, and this could resolve some dilemmas.
Lots of traditional moral theories assume people make choices in certain ways not backed by observation of their brains.
Your willingness to accept materialist explanations for the mind probably increases exponentially the more you know about the mechanics of the brain. (Are there any dualist neuroscientists?)
A lot of philosophy uses 'armchair' reflection and introspection to get foundational intuitions and make judgements. Knowing the hardware you're running that on is probably helpful. (E.g. showing how easy it is to trigger people's intuitions one way or the other changed the debate about Gettier cases massively.)
The AMA may have received comments from curious people outside of r/futurology, since there was an announcement for it on the front page. One thing about r/futurology, too, is that it recently tripled in size: only a few months ago it had around 6k subscribers. A lot of the growth came a week or two ago from a thread featured on r/bestof that got a lot of attention. Those things probably contributed to the inferential distance... If the AMA had happened a few months ago it may have been less, or indeed if it had happened a few months from now, counting on there being significant attrition of those new subscribers.
I was very pleasantly surprised to see the AMA announcement on Reddit's frontpage, given how relatively non-mainstream the S.I. is and how many page views Reddit gets (and gives).
Also, although there is a large inferential distance between Luke and most Redditors (as siodine noted), I thought Luke did a great job trying to bridge the intuition gap--with the usual abundance of links and all.
I'm sure most of us are used to just being able to badger him about things in the comments
Huh? I'm not. In my case, he either pretends he doesn't see my stuff, never again loads the page where it was made, or answers with a non-specific citation.
I'm sure most of us are used to just being able to badger him about things in the comments here on LW, but for anyone interested here's the link.