...this overwhelming evidence coming from parapsychology studies, and parapsychology studies only.
Before people did these, all we had was overwhelming anecdotal evidence in favour of parapsychology. Every culture, nay, every family is chock-full of reliable witnesses that give accounts of how they personally experienced paranormal phenomena. In the face of such persistent, recurring reports, you can hardly blame people for wanting to investigate. It is only after you do studies under laboratory conditions that you can begin to show that this anecdotal evidence is a product of selection bias.
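The selection-bias mechanism can be made concrete with a toy simulation. Everything here is a hypothetical illustration with made-up numbers, not a model of any real study: many people make pure-chance guesses, and only the hits survive as stories.

```python
import random

random.seed(0)

# Hypothetical scenario: 10,000 people each "predict" a coin flip.
# Nobody has any ability; every guess is an independent 50/50 shot.
people = 10_000
hits = sum(random.random() < 0.5 for _ in range(people))

# Selection bias: only the hits become anecdotes. The roughly 5,000
# misses are forgotten, so the surviving reports look like overwhelming
# evidence even though the underlying hit rate is exactly chance.
print(f"{hits} of {people} guessed right; only these become stories")
```

The point is that a large pile of sincere, accurate reports of "it worked for me" is exactly what chance plus selective retelling predicts.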
While I am personally quite convinced that selection bias is all that is needed to explain the phenomena, this doesn't take away from the immense cultural significance of the phenomena that were selected in this way. In this sense, parapsychology is not "wrong", it's just cultural (as opposed to supernatural). At the end of the day, science doesn't attach value to anything. It is just capable of describing what arises from what. Meaning arises from subjective choice alone, and as humans we are much more interested in meaning and made-up patterns than in a full list of all hydrogen atoms in the biosphere, no matter how "objective".
Actually, this is precisely how I would like people to discuss parapsychology.
What, are you going to defend science or rationalism using unscientific or irrational tactics just because you think that is going to work better? Even if that weren't detrimental to your own agenda in the long run, you would need to ask yourself at that point what makes you different from any politician defending any ideology at all. Parapsychology isn't "wrong" because it is obvious to the bigwigs in your camp (the "rationalists") that it is wrong. It is "wrong" (or rather, unsubstantiated) because and only because positive results do not exceed the positive results expected under the null hypothesis. If positive results DID exceed these, we WOULD need to recognize there is an effect. Actually, most people here would probably just see that as proof that we do indeed live in a simulation, and would be pretty cool with it, as they had half-hoped we did all along.
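That criterion ("positive results do not exceed what the null hypothesis predicts") is just an ordinary significance test. A minimal sketch, using an entirely hypothetical Zener-card-style experiment with numbers invented for illustration:

```python
from math import comb

# Hypothetical experiment: guessing one of 5 card symbols, so the null
# hypothesis (pure chance, no psi) gives p = 0.2 per trial.
# Suppose a subject scores 27 hits in 100 trials.
n, k, p = 100, 27, 0.2

# One-sided p-value under the null: the probability of seeing k or
# more hits by chance alone, summed from the binomial distribution.
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(at least {k} hits by pure chance) = {p_value:.3f}")
```

Parapsychology's problem, on this view, is simply that well-controlled studies keep producing hit counts whose p-values look like this or worse, not that the wrong people believe in it.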
I must agree with GabeEisenstein 100%. It is annoying to keep reading arguments against fundamentalist religion phrased as arguments "against religion".
I must also note that Gabe did not get any meaningful reply to his point "that orthogonal-to-facts religion can be valuable, and that it is not a modern phenomenon". He was told to "read all antitheism posts". Well, how about a link to a specific paragraph in a specific post that addresses the very specific issues he raised? Namely: why do people keep focussing on debunking fundamentalist religion (reinterpret the fossils, believe in talking snakes, etc.) and then pretend they have debunked "religion" or "theism", completely ignoring the deep intellectual history within religious thought dealing with exactly these questions? ("you concentrate on fundamentalist or other strange examples, never the work of thinkers like Buber, Merton, Campbell, Watts, [and].... Wittgenstein's views on religion, as found in his essay on Frazer's Golden Bough.") Where in the "antitheism posts" do I find a treatment of these aspects, and why is everything I come across always tailored to debunking fundamentalism instead of dealing with the questions that crop up if you ignore the fundamentalists and talk to religionist philosophers who are actually intelligent? And quite apart from points that may be covered in other posts I have not seen, GabeEisenstein has pointed to a number of glaring flaws in the current post standing on its own, which would merit attention in themselves: first of all the implication that religious ethics has not evolved over the centuries, and that it's a choice between the Iron Age and atheism. That's a false dichotomy if I have ever seen one.
"Estimate a 10% current AI risk"... wait, where did that come from? You say "Let A be the probability that an AI will be created", but actually your A is the probability that an AI will be created which then goes on to wipe out humanity unless precautions are taken, but which will also fail to wipe out humanity if the proper precautions are taken.
Your estimate for that is a whopping 10%? Without any sort of substantiating argument??
... Let's say I claim 0.000001% is a much more reasonable figure for this: what would be your rationale supporting that your estimate is more plausible than mine? Using my estimate, it suddenly becomes much more worthwhile in terms of lives saved per dollar to just build wells in Africa.
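The sensitivity to that unargued prior is easy to make explicit. A back-of-the-envelope sketch in which every figure (budget, population, cost per life via wells) is an assumption invented for illustration:

```python
# All numbers below are hypothetical, chosen only to show how the
# comparison flips with the probability estimate.
budget = 1_000_000_000           # dollars available (assumed)
lives_at_stake = 8_000_000_000   # everyone, in the AI-wipeout scenario
lives_per_well_dollar = 1 / 2_000  # assumed: $2,000 per life saved via wells

def expected_lives_saved_ai(p_risk):
    # Expected lives saved by funding AI precautions, given that the
    # budget suffices for the precautions and the risk estimate p_risk.
    return p_risk * lives_at_stake

ai_at_10_percent = expected_lives_saved_ai(0.10)   # the post's estimate
ai_at_tiny = expected_lives_saved_ai(1e-8)         # my 0.000001% estimate
wells = budget * lives_per_well_dollar             # independent of the prior

print(ai_at_10_percent, ai_at_tiny, wells)
```

With the 10% prior, AI precautions dominate by orders of magnitude; with the 0.000001% prior, wells dominate by orders of magnitude. The entire conclusion hangs on a number that was never argued for.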
(Addendum: in fact, I would argue that utility, to me, is not measured in lives-saved-per-dollar, or else you would need to invest in increasing fertility in Africa so you could then go on to save more lives by building wells. Instead your utility should be a stable and happy Africa (Africa because that is the most unhappy continent right now, so your payoff will tend to be greatest if you invest there) -- for which end the rational thing to do is to invest in birth control rather than wells. But that's a different story.)
I agree this is an excellent post. In fact, I just created an account and came out of lurking just to vote it up. Yes, the example came out a little forced and unnecessarily convoluted, but the point made is extremely important. Those who clamp down on the post on grounds of lack of formal rigour are missing the point entirely. You are so preoccupied with formulating your rationality in mathematically pleasing ways, applying it to matrix-magic and Knuth-arrow-quasi-infinity situations, that you are in danger of missing the real-life applications where just a modest bit of rationality will result in a substantial gain to yourself or to society.