If the "boring view" of reality is correct, then you can never predict anything irreducible because you are reducible.  You can never get Bayesian confirmation for a hypothesis of irreducibility, because any prediction you can make is, therefore, something that could also be predicted by a reducible thing, namely your brain.

    Benja Fallenstein commented:

    I think that while you can in this case never devise an empirical test whose outcome could logically prove irreducibility, there is no clear reason to believe that you cannot devise a test whose counterfactual outcome in an irreducible world would make irreducibility subjectively much more probable (given an Occamian prior).

    Without getting into reducibility/irreducibility, consider the scenario that the physical universe makes it possible to build a hypercomputer—that performs operations on arbitrary real numbers, for example—but that our brains do not actually make use of this: they can be simulated perfectly well by an ordinary Turing machine, thank you very much...

    Well, that's a very intelligent argument, Benja Fallenstein.  But I have a crushing reply to your argument, such that, once I deliver it, you will at once give up further debate with me on this particular point:

    You're right.

    Alas, I don't get modesty credit on this one, because after publishing yesterday's post I realized a similar flaw on my own—this one concerning Occam's Razor and psychic powers:

    If beliefs and desires are irreducible and ontologically basic entities, or have an ontologically basic component not covered by existing science, that would make it far more likely that there was an ontological rule governing the interaction of different minds—an interaction which bypassed ordinary "material" means of communication like sound waves, known to existing science.

    If naturalism is correct, then there exists a conjugate reductionist model that makes the same predictions as any concrete prediction that any parapsychologist can make about telepathy.

    Indeed, if naturalism is correct, the only reason we can conceive of beliefs as "fundamental" is due to lack of self-knowledge of our own neurons—that the peculiar reflective architecture of our own minds exposes the "belief" class but hides the machinery behind it.

    Nonetheless, the discovery of information transfer between brains, in the absence of any known material connection between them, is probabilistically a privileged prediction of supernatural models (those that contain ontologically basic mental entities).  Just because it is so much simpler in that case to have a new law relating beliefs between different minds, compared to the "boring" model where beliefs are complex constructs of neurons.

    The hope of psychic powers arises from treating beliefs and desires as sufficiently fundamental objects that they can have unmediated connections to reality.  If beliefs are patterns of neurons made of known material, with inputs given by organs like eyes constructed of known material, and with outputs through muscles constructed of known material, and this seems sufficient to account for all known mental powers of humans, then there's no reason to expect anything more—no reason to postulate additional connections.  This is why reductionists don't expect psychic powers.  Thus, observing psychic powers would be strong evidence for the supernatural in Richard Carrier's sense.

    We have an Occam rule that counts the number of ontologically basic classes and ontologically basic laws in the model, and penalizes the count of entities.  If naturalism is correct, then the attempt to count "belief" or the "relation between belief and reality" as a single basic entity, is simply misguided anthropomorphism; we are only tempted to it by a quirk of our brain's internal architecture.  But if you just go with that misguided view, then it assigns a much higher probability to psychic powers than does naturalism, because you can implement psychic powers using apparently simpler laws.

    Hence the actual discovery of psychic powers would imply that the human-naive Occam rule was in fact better-calibrated than the sophisticated naturalistic Occam rule.  It would argue that reductionists had been wrong all along in trying to take apart the brain; that what our minds exposed as a seemingly simple lever was in fact a simple lever.  The naive dualists would have been right from the beginning, which is why their ancient wish would have been enabled to come true.

    So telepathy, and the ability to influence events just by wishing at them, and precognition, would all, if discovered, be strong Bayesian evidence in favor of the hypothesis that beliefs are ontologically fundamental.  Not logical proof, but strong Bayesian evidence.
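    The odds-form update being described here can be sketched numerically. All of the numbers below are invented purely for illustration; none are claims from the post:

```python
# Odds-form Bayes: observing telepathy as evidence for "fundamental minds".
# Prior odds and likelihoods are illustrative assumptions only.

def posterior_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
    """Multiply prior odds by the likelihood ratio of the observation."""
    return prior_odds * (p_obs_given_h / p_obs_given_not_h)

prior = 1e-6               # assumed prior odds for ontologically basic minds
p_tel_fundamental = 0.5    # telepathy is "cheap" if beliefs are basic entities
p_tel_reductionist = 1e-4  # telepathy is a contrived surprise for naturalism

post = posterior_odds(prior, p_tel_fundamental, p_tel_reductionist)
print(post)  # ~0.005: a 5000x update toward the hypothesis, yet still long odds against
```

This is why the post can say "strong Bayesian evidence" and "not logical proof" in the same breath: a huge likelihood ratio moves the odds dramatically without driving them to certainty.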

    If reductionism is correct, then any science-fiction story containing psychic powers can be output by a system of simple elements (i.e., the story's author's brain); but if we in fact discover psychic powers, that would make it much more probable that events were occurring which could not in fact be described by reductionist models.

    Which just goes to say:  The existence of psychic powers is a privileged probabilistic assertion of non-reductionist worldviews—they own that advance prediction; they devised it and put it forth, in defiance of reductionist expectations.  So by the laws of science, if psychic powers are discovered, non-reductionism wins.

    I am therefore confident in dismissing psychic powers as a priori implausible, despite all the claimed experimental evidence in favor of them.

    89 comments

    How much could any experimental evidence whatsoever really raise your estimate of psychic powers, given the possibility of 'Matrix' type abilities in a simulation?

    If anyone here is interested in psi from a nonskeptic viewpoint, I'd sooner recommend Damien Broderick's "Outside the Gates of Science". (I haven't read it myself, but I don't want to leave you with just Matthew's recommendation.)

    If there's an online page with central references and abstracts for allegedly repeatable psi experiments, I'd be interested in glancing through that - fodder for future posts.

    Don't worry, Eliezer: no editor on this blog is getting any modesty points either.

    But if there are repeatable psi experiments, then why hasn't anyone won the million dollars? (or even passed the relatively easy first round?)

    Uh, "The Irreducible Mind" is garbage.

    I don't see how you can shrink the number of rules even in the non-reductionist case. You'd need enough rules to describe, not a simple-behaving ontologically basic psychic power (like quantum spooky action at a distance seen by a Copenhagen theorist) but a complex one (like statistically barely noticeable psychokinesis) that does a nearly-perfect imitation of a meat brain, down to the quarks. You have to model the whole of the reductionist case AND the psychic power as well. That's necessarily more entities.

    I took psi seriously back when I thought that the scientific method defined rationality. Once I learned about Bayes I realized that the sort of reports of psi that science turns up would be expected if psi isn't real, while much more blatant things would be expected if real psi inspired the investigation. I also noticed that priors matter, and psi really should be ignored without very large effects, based on low priors. Somewhat earlier, pre-Bayes, psi had blended somewhat into the category "Everything you know is wrong" and lost its specific identity as 'psi'. Post-Bayes, the "Everything you know is wrong" category itself split into a few categories, and psi went in the "reason is a mistake" extreme category.

    "If the 'boring view' of reality is correct, then you can never predict anything irreducible because you are reducible."

    Maybe I missed this yesterday, or in another reductionism post, but doesn't that imply that there is no fundamental level of reality - nothing which is not reducible to something else? It could also be that I'm just not understanding what you mean.

    Eliezer, what if psi phenomena are real, but they work through as-yet-unknown laws of physics? In this case reductionism could still be true (and probable), even if psi is real. I can't really see why psi phenomena rule out a reductionist universe (and I guess Damien Broderick agrees...).

    By the way, I don't believe in psi, and think that all effects found thus far are based on the misapplication of statistics and related errors.

    Pyramid: The point is that sure, that's possible, but we shouldn't bet on that. That is, if we do discover psi is real, without having discovered a reduction for it, then we should increase our belief that the universe has irreducible mental (or mental like) components.

    It is not absolute proof. The point is that it actually would be evidence favoring that position. It's not quite obvious to me that it would be strong evidence, but the argument does seem convincing that it would be evidence.

    Post-Bayes the "Everything you know is wrong" itself split into a few categories and psi went in the "reason is a mistake" extreme category.

    I don't quite see this one. Telepathy and telekinesis would be easy enough to implement via the Matrix or even lesser technologies. Even precognition holds out the possibility of expanding our account of causality to allow loops, which General Relativity occasionally seems to threaten. How is psi on the same order as 2 + 2 = 3, or Jehovah as the one true God of all reality?

    What's this "the Matrix" everyone in this thread is talking about? The movie? The idea that we're all in a computer simulation? Btw, as for causality loops, Feynman describes antimatter as "just like regular matter, only traveling backwards in time", which means if we allow for time travel, we've just reduced the number of types of particles in our description of reality by half =].
    The latter. If you see telepathy, it's more likely that you're in a simulation in a reductionist universe than in an irreducible universe.

    The supposed evidence consists of stigmata, hypnotic suggestion, automatic writing, multiple personality disorders, near-death experiences, out-of-body experiences, apparitions, visions, genius level creativity and ecstatic states of consciousness. Since the stated aim is:

    For an enlarged scientific picture of human mind and personality to emerge, two things need to happen: First, it must be demonstrated that the currently dominant physicalist theories of mind-brain relations are inadequate in principle; and second, an alternative theory must be found tha
    ... (read more)
    But I have a crushing reply to your argument, such that, once I deliver it, you will at once give up further debate with me on this particular point: You're right.


    [...what's that? Foul! Foul! You can't do that! Now I shall have to find a new nit to pick!]

    It's not on that level, that's the level which I respond to with the forbidden bet, e.g. p = 0, along with all the other stuff that implies strongly that our concepts of probability are simply broken.

    Reason is a mistake for less extreme reasons such as "I'm dreaming" or "I'm a Boltzmann brain" or some forms of "my life is not merely a simulation but a psychological experiment".

    The possibility that many "paranormal" or "psi" experiences are caused by undiagnosed or transient temporal lobe disorders should not be overlooked. Epilepsy is still poorly understood, underdiagnosed, and misdiagnosed. These "supernatural" things could be caused by natural but unusual brain states.

    Vassar, I don't understand why psi is on that level. Unless you're presuming that someone is telepathically influencing you to make mistakes.

    You pretty much said it. Hypotheses suggested by mind-projection priors turning out to be true pretty much refutes Occam and, consequently, science.


    I considered going anonymous for this because I know I'll be decimated here among you guys, but I decided to be bold because I think it's an argument worth making.

    I have a world view that's very similar to many of you here, with reductionism as one of the center pieces.

    So now to cue the lamentation and ridicule, which I bring willingly: I am a psychic as well.

    Many, many wishful people come at this from the fairy-tale perspective of wishing paranormal things to exist, and therefore convincing themselves that they do.

    I came from an opposite perspective.

    I be... (read more)

    Ken, I look forward to hearing about your lottery wins.


    I hear your cry. I take you seriously and have no interest in insulting you. If you think this is an issue for you, may I suggest you consider a neurologist? Have you ever had a brain scan? There are many kinds of temporal lobe events, and you may benefit from diagnosis and possibly treatment. You may find relief with Tegretol or a similar agent.

    Of course you know what your wife is imagining: you know her well and are obviously adept at reading her subconscious facial and body cues. Many of us often know what our friends are thinking, but I assure you ... (read more)

    Ken: Do the experiment with your wife repeatedly and see what happens.

    Alternately: do you right now have "visions"/guesses/whatever of say, tomorrow? Write down a list of them, say, ten of them. Tomorrow note which were accurate and which were inaccurate.

    Alt alt: I have written down on a small piece of paper a four digit number, and underneath this, drawn something. What is the number and what have I drawn? (Alternately, have your wife do that experiment with you a few times)


    A few things.

    First, I have actually been through a process of diagnosis that I submitted myself to for this very purpose -- to uncover whatever underlying neurological issue I had. They found nothing out of the ordinary, and I function perfectly well. I am well adjusted, not on medication, and otherwise "normal."

    Second, comments like Eli's about the lottery aren't fair, because I never claimed to be omniscient, only to have some sort of extra perception.

    Imagine a scenario in which the world is filled with deaf people. Human beings have never had ... (read more)

    Is your thought-reading ability equally effective against strangers, or people whose presence you're aware of but who you can't make eye contact with? If eye contact is required (or helpful), what about looking at the other person through a narrow opening, such as a mail slot, so that only their eyes are visible? Could it be used to determine the presence or absence of a person on the opposite side of an opaque, soundproofed barrier?

    In response to Ken saying a 90% win rate at Rock Paper Scissors is impossible: Rock Paper Scissors is not a very good test of the statistical significance of psychic powers. Rock Paper Scissors is something of a game of skill, especially when you are playing against someone you know well who does not intentionally try to predict the other player's thought process. Ken's wife probably had something of a predictable pattern in that game -- maybe she got bored, maybe she subconsciously played poorly to make Ken seem like more of a psychic.

    http://www.worldrps.com/ It started as a joke, but it's one of those jokes that became too serious for its own good. I would be very surprised if Ken could consistently beat any of the world's top ranked RPS players.

    It is much easier than you think to fool yourself about this sort of thing. Unless you've done a lot more experiments than the rock-paper-scissors one, and much more tightly controlled, you don't have enough evidence to believe what you believe. (Edit: grammar.)
    Paul Crowley:
    Call us back when you win the JREF $1M.
    Strangely, lots of folks are replying to a year-and-a-half-old comment imported from OB.
    There are a bunch of new users, and people keep getting told to "read the sequences".
    Paul Crowley:
    I saw the other responses and assumed the comment was new; I'll check next time. Thanks.
    Probably silly replying at this late date, but I'm going to do it anyway: Texas Holdem against strangers would be a much more compelling demonstration than RPS with your wife, and lucrative, too, if your powers are real. Surface thoughts should be sufficient to tell you when people are bluffing and when they genuinely have a strong hand, even if they don't tell you exactly what cards they hold. Better yet, they should tell you when your opponents are confident enough to call your bluff, and when they're not. That would give you a devastating advantage in the game. So I won't hold my breath for your lottery wins, but if you genuinely have the abilities you describe I would expect to hear about your World Series of Poker bracelets.

    Ah. Well, I look forward to hearing the news of your lottery wins, then.

    @Ken: I am interested in your claim. You can understand that your personal testimony is not really enough to convince, but I will assume that you are posting in good faith and are serious about (dis)proving your psychic abilities to your own satisfaction.

    You may wish to attempt the following modification on the rock-paper-scissors experiment: Your wife (or another party) will roll a six-sided die. 1-2, she will throw rock; 3-4, she will throw paper; 5-6, she will throw scissors. In this way, her throw will be entirely random (and so not predictable through... (read more)
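    As a supplement to a die-randomized protocol like this one: with the throws made uniformly random, a non-psychic player wins each round with probability 1/3, so the evidential weight of any run of wins can be checked with a binomial tail sum. The round counts below are arbitrary, chosen only for illustration:

```python
# Binomial tail: how surprising are k wins in n die-randomized rounds
# under the null hypothesis of no psychic ability (win prob = 1/3)?
from math import comb

def p_at_least(k, n, p=1/3):
    """P(wins >= k) under the null hypothesis of pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 27 wins in 30 rounds against a die-randomized opponent:
print(p_at_least(27, 30))  # on the order of 1e-10 -- vanishingly unlikely by chance
```

A result like that would be worth taking to a controlled setting; a 60% hit rate over ten rounds, by contrast, happens by chance fairly often.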

    Tim, that was fascinating. I don't know how he did it. I certainly don't have a "trick," but of course you can't know that.

    Ian: that's a great idea, I'll try it tonight if I have some time. I'll report back honestly. I think I'll be able to perform under those circumstances, but it'll be interesting to see.

    Actually it's plain old psychology in action. If you watch, every opponent repeats the last move Derren made. He starts it off by explaining the rules of the game by throwing scissors as an example. He uses a bit of fast talk to keep his opponent from thinking about what his own best move should be and instead thinking about what Derren is going to do. He also makes a very big deal about what move won, going so far as to demonstrate that rock blunts scissors and paper covers rock and scissors cut paper. This practically guarantees that his opponent will copy Derren's last move. To win, all Derren has to do is beat his last move. So it goes like this in the video: Derren explains the rules, shows scissors. Opponent throws scissors and Derren beats it with rock. Opponent throws rock and Derren beats it with paper. Opponent throws paper and Derren beats it with scissors. Now he asks the audience if they want him to win, lose, or draw. They say win, so he beats scissors with rock. Next someone in the crowd wanted a draw, so he draws rock with rock. He has several examples where he turns away, closes his eyes, but it's all child's play because he has their minds wrapped around his little finger. I doubt this works on anybody who plays RPS on a regular basis.
    Can't believe this got three upvotes on lesswrong. Derren Brown doesn't use "psychological techniques" for his tricks. They are just tricks, plain and simple. Either this was a confederate, or he repeated it until he got the result he wanted. His whole schtick is to pretend to be using "NLP" or some mind trick, when in reality it's the old-fashioned I've-got-a-camera-looking-at-your-answer trick. He's pretty upfront about this in his books. The genius of it is that precisely by not pretending to be "magic", he actually draws in a sophisticated audience who genuinely think he's using psychological mind games. Precisely by eliminating his status as an omniscient magical guru, he gains status as an intuitive social genius, which is more impressive to a modern audience.


    First, great blog.

    Second, it would be nice to hear back from Ken. I'd like to know if the experiment suggested by Ian yielded any results (even though I think that it could be done much more rigorously than what he's suggested with little additional effort).

    Third, I want to raise two points about Eliezer's post:

    a) Nothing can raise the probability of something being true if this something isn't logically/mathematically possible. No matter how much evidence we find that apparently supports the claim that there's a logical contradiction in our universe... (read more)


    The SF writer Catherine Asaro came up with a workable explanation of empathy/telepathy that doesn't require non-reductionism, though I don't think it's all that plausible; it's based around quantum entanglement between microstructures in the brains of psions in close proximity to one another (and a lot of hand-waving, of course). In her books, psi powers didn't evolve naturally, but were the result of extensive genetic tinkering by aliens with a far more advanced knowledge of genetics, neurology, and quantum physics than humans presently possess, enabling ... (read more)

    The conclusion is a rather strong one: Eliezer destroys the dreams of millions of people who are reading books about meditation, mind control, and other such stuff. But this conclusion is stated at the end of a sequence which has been preparing us all the way through - so it is good and gives a good chance to reflect on it.

    That seems naive. Why do you think this argument would convince someone who meditates and has his spiritual experiences?
    I said that with humor. But just as there are a lot of people who believe in dragons, who are on the supernatural end of the scale, there are rational people on the opposite end of the scale AND a lot of people in the middle. They are partially rational, and partially they can believe that, e.g., by practicing meditation or some other practices they may achieve SUPERNATURAL abilities. So Eliezer's post may convince some of them to abandon their "dreams" of the supernatural. Saying this, I don't mean that meditation or other practices are irrational and bad; things are not black and white :)
    I don't think the talk about ontologically basic mental entities has much bearing on the expected amount of abilities you get through meditation. It has much more to do with whether you believe that certain people who meditate a lot have gained extraordinary abilities. Whether or not those are due to ontologically basic mental entities is not that important.
    Some abilities are much easier to believe in if you already believe in ontologically basic mental entities or something very like them, just because they're hard to fit into a more modern/scientific/reductionist/naturalist understanding of the world.
    I think it's reasonable to believe that there are no ontologically basic mental entities because you don't believe that anybody has demonstrated telepathy. If you however believe that the data supports telepathy, then I find it strange to say "I defy the data, because I don't believe in ontologically basic mental entities", as your whole case for there not being ontologically basic mental entities was about there not being telepathy.
    I don't think it's true for many people that their main reason for not believing in OBMEs is that there appears to be no telepathy. If I disbelieve in OBMEs because I don't see how to fit them into a reductionist understanding of the world that has, on my view, achieved such stunning empirical success that it would need overwhelming evidence to overturn it, then defying the data when presented with apparent evidence for telepathy isn't so unreasonable. (Someone doing that should of course consider possible mechanisms for telepathy that don't involve OBMEs, and should reconsider their objection to OBMEs if enough apparent evidence for them turns up. I am not defending outright immovability.)
    Steam engines weren't built because of reductionist thinking but because of empirical experimentation. When medicine was reductionism-based instead of empirically based, it is commonly believed that it killed more people than it cured. When it comes to new drugs, 90% of those where there was reductionist reason to believe they would work turn out to be flawed. I think you very soon get into problems if you think that only things you can explain from the ground up exist. Practically, I think it's very worthwhile to have a state of non-judgement where you let experience speak for itself without committing to any deeper notion of the way things are. Of course I grant that there are people who deeply believe in the naturalist view of the world and therefore will reject telepathy on those grounds. On the other hand, I don't see why someone who has had a few spiritual experiences and seeks more spiritual experiences should have that commitment, or why he should adopt it based on the reasoning of this article.
    It sounds to me like you're arguing against a straw man. Reductionism doesn't mean believing the proposition "Nothing exists that I can't explain from the ground up". It means a commitment to trying to explain things from the ground up (or, actually, from the top down, but with the intention of getting as near as possible to whatever ground there may be), and to remaining dissatisfied with explanations in so far as they appeal to things whose properties aren't clearly specified. You say that as if "reductionist" and "empirical" are opposing ideas somehow. Of course they aren't; reductionism and empiricism are two of the key ideas that make science work. You do everything you can to find out what actually happens, and you try to build theories as detailed and bullshit-free as you can that explain what you've found, and then you look for more empirical evidence to help decide between those theories, and then you look for better theories that match what you've found, and so on. Not being empirical is a terrible mistake. It's not clear exactly what and when you're talking about, but do you have any grounds for thinking that the bad results you describe were the result of too much reductionism rather than of not enough empiricism? Most new drugs don't work, quite true. Do you have any reason to think drug discovery would work better if it were somehow driven by a less reductionist view of how drugs work? Would you, if so, like to be more specific about what you have in mind? (And ... has anyone actually done it, saved lots of lives, and got rich?) Who thinks that? (Thinking that certainly isn't what I mean by reductionism.) The article isn't claiming to make a compelling case for naturalism, so I think Eliezer would agree with the last part of that. As to the first part, it sounds (but maybe I'm misunderstanding) as if you are saying that having had "a few spiritual experiences" constitutes strong evidence against naturalism. It's probably true that having "spiritu
    The QS movement is an alternative to reductionism. As a concrete example, I believe that we should fund trials for vitamin D3 in the morning vs. vitamin D3 in the evening based on self-reports that people found vitamin D3 in the morning to be more helpful. I think those empirical experiences should drive research priorities instead of research priorities being driven by molecular-biological findings. QS profits a lot from better technical equipment. Additionally we likely want to get better at developing the phenomenological abilities of select individuals to perceive and write down what goes on in their own bodies. In addition to qualitative descriptions, those people also should make quantitative predictions over various QS metrics and calibrate their credence on those metrics. The position for which I'm arguing is empiricism: letting real-world feedback guide your actions instead of being committed to theories. I think that there are cases where commitment to naturalism leads to people making worse predictions than people who are committed to empiricism and simply letting the data speak for itself. If I take someone with a standard STEM background and put him in an environment conducive to spiritual experiences, I think that the person who's more open to updating their beliefs through data will make better predictions than one committed to his preconceived notions. In the process, updating would optimally be more about letting go of beliefs than about changing beliefs.
    I think perhaps we mean very different things by "reductionism". I see absolutely no conflict between the QS movement and reductionism. Fine with me, at least in principle. (Whether I'd actually be on board with funding those trials would depend on how much money is available, what other promising things there are to spend money on, etc.; it could be that those other things have stronger evidence that they're worth funding.) I don't see why we shouldn't have both. Research should be directed at things that, on the basis of the available evidence, have the best chance of producing the most valuable results. Some of the available evidence comes from direct observation. Some comes from theoretical analysis or modelling of molecular-bio systems. Different kinds of evidence will be differentially relevant to different kinds of desired effects. (If you want to maximize your chance of living to 100, you may do best to look at lifestyles of different communities. If you want to maximize your chance of living to 200, you probably need something -- no one has a very good idea what yet -- for which direct empirical evidence doesn't exist yet, because no one is living to anything like 200. Maybe what's needed is some kind of funky nanotech. If so, it's probably going to need those molecular biologists.) Splendid. I'm all in favour of empiricism. But again, perhaps we mean different things by that word. You speak of not being committed to theories, but the further we go in that direction the less ability we have to generalize the things we discover empirically. To make any statement that goes beyond just repeating simple empirical observations we've already made, we need theories. Our attachment to our theories shouldn't go beyond the evidence we have for them. We should be on the lookout for signs that our theories are wrong. But that doesn't mean giving up on theories; it just means being rational about them. If the evidence for (say) ghosts is good enough, I will (I hope)
    The question isn't "why shouldn't we have both"; it's rather "why don't we have both in a way that's reasonably founded". If you train calibration you can generalize without theories. Generalizing isn't something that you need to do explicitly through theories. Phenomenological investigation provides a way to have knowledge that your brain can generalize on the System 1 level. That's not the direction in which I'm arguing. I'm arguing that you should focus on predictions instead of concepts like whether or not ghosts exist. Being for empiricism is not the same thing as practicing it. Actually practicing it means valuing experience more highly than theories. The framing "things that naturalists get wrong" suggests that I think "naturalists get belief X wrong and should believe Y instead". That's not the main position that I advocate. Studies consistently show that people get things wrong by being overconfident. The key is to become more open to accepting that reality tends to unfold in ways that your theories wouldn't predict.
    That is a rather astonishing claim. What does achieving a 60% success rate on yes-no decisions when I am 60% confident have to do with extrapolation without theories?
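    For concreteness, a calibration claim like "60% of my 60%-confidence predictions come true" can be checked mechanically: bucket past yes/no predictions by stated confidence and compare each bucket's hit rate to its confidence. The sample data below is invented purely for illustration:

```python
# Minimal calibration check: group (confidence, outcome) pairs by stated
# confidence and compute each group's observed hit rate. Invented data.
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_confidence, actually_correct) pairs.
    Returns {confidence: observed hit rate}."""
    buckets = defaultdict(list)
    for conf, correct in predictions:
        buckets[conf].append(correct)
    return {c: sum(v) / len(v) for c, v in buckets.items()}

sample = [(0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
          (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True)]
print(calibration(sample))  # {0.6: 0.6, 0.9: 0.8} -- the 0.9 bucket is overconfident
```

Being well calibrated in this sense says nothing by itself about where the predictions came from, which is exactly the point of contention: calibration measures the track record, not the presence or absence of an underlying theory.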
    OK, so your response to "system 1 makes a lot of big mistakes" is not "get system 2 in charge in those situations" but "try to train system 1 to do better". Once again I have to ask: why not both? Now, let's apply some empiricism to your suggestion here. Making theories, making them precise, getting detailed predictions out of them and comparing with experiment has been at the heart of the scientific enterprise since, say, Galileo. It's worked incredibly well. Not instead of empirical investigation; not instead of well-trained Systems 1 generating intuitive predictions and ideas. What do we have on the other side? Perhaps "be more specific about some things naturalists get wrong" was the wrong challenge. But so far everything you've offered is, well, theories. Maybe you'd rather call them predictions. But what they clearly aren't is empirical evidence. First of all, if you read the sentence I wrote immediately after the one you quoted, you will see that I endorsed exactly that idea before you mentioned it: given substantial evidence for ghosts but not enough to justify a change of overall theory, I should adopt the belief that the world behaves in something like the way it would if it contained ghosts. Second: it turns out that concepts are really useful. They are especially useful when more than one person is involved. Suppose I am good at predicting the weather. If all I have is a well-trained system 1, I can't communicate my expertise to you at all; I can just demonstrate it and hope you catch on. If I have half-baked folk theories, I can say "when the sky is such-and-such a colour the weather the following day tends to be such-and-such", and you can test how well those claims hold up and use them to predict a bit yourself. If I have a full-blown scientific theory, you can put it into a big computer and take lots of measurements and use them to predict where hurricanes will make landfall. This actually works pretty well considering what a big hairy system glo
    There are multiple issues here. You can throw a bunch of weather data into a machine learning algorithm and get results even if you don't have a good scientific theory. I don't need to commit to the underlying structure of the weather or decide whether it's atoms or air/water/fire/earth. If the machine learning algorithm includes a node for which you have no reductionist reason to expect predictive value, I don't think you should cut that node when the model with the node fits the data better. Secondly, we have access to information about our bodies through perception in a way that we don't have for the weather. I can't give you experiences via this medium. When I speak about the value of experience, I mean using actual experience. More practically, I can't effectively tell you a story about a territory for which you don't have a map. You can reach for maps that you know but don't believe in, like "ghosts did it", but that doesn't help. Imagine I tell you a story of a card magician. His audience makes all sorts of predictions that turn out to be wrong. I could tell you about the experience the audience has, and how things that violate their reductionist-driven predictions constantly happen. Then you would tell me: "But the magician isn't really doing it, that example doesn't count", or "Please tell me what the magician is really doing". If I tried to explain a card trick, that wouldn't shift your underlying beliefs at all. If I tell you: "A workshop facilitator can hold the support point for the movement of the whole room", then apart from "A workshop facilitator" you would likely get a different meaning for every following word than the one that's intended, because you lack the relevant mental map to make sense of the sentence. No, it's certainly also something I practice myself.
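The point about keeping a predictive feature even without a reductionist story for it can be made concrete with a small sketch. Everything here is synthetic and purely illustrative: the `mystery` feature stands in for a node with no theoretical justification that nonetheless carries real signal, and the model that keeps it fits held-out data better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "weather" data: tomorrow's temperature depends on today's
# temperature, pressure, and a 'mystery' feature we have no theory for
# (it secretly carries real signal, e.g. an unmodelled humidity proxy).
n = 500
temp = rng.normal(15, 5, n)
pressure = rng.normal(1013, 8, n)
mystery = rng.normal(0, 1, n)
tomorrow = 0.8 * temp + 0.1 * (pressure - 1013) + 2.0 * mystery + rng.normal(0, 1, n)

def cv_mse(X, y, k=5):
    """Mean squared error of ordinary least squares under k-fold cross-validation."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[fold] @ coef - y[fold]) ** 2))
    return float(np.mean(errs))

ones = np.ones(n)
with_mystery = np.column_stack([ones, temp, pressure, mystery])
without_mystery = np.column_stack([ones, temp, pressure])

# The model that keeps the unexplained feature predicts held-out data better,
# so a pure predictor would retain it even without a causal story.
print(cv_mse(with_mystery, tomorrow) < cv_mse(without_mystery, tomorrow))  # True
```

Held-out error, not theoretical justification, decides whether the feature stays; this is just the "don't cut the node when the model fits better" criterion written down.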
    But can you get results as good as you can get with the theory? I notice that the world's major meteorological offices all seem to have big simulations rather than throwing everything into a machine learning algorithm and hoping for the best, so I'm thinking probably not. I'm not asking you to give me experiences. I'm asking you to be more specific about these allegedly better predictions you say people can make if they are less committed to naturalism. I don't think you have any idea what maps I have for what territories. But in any case I'm not asking you to tell me an effective story. I am asking you to give me some examples where being less committed to naturalism has led to better predictions. If the only examples of better predictions you can find are ones that you can't even describe without technical jargon, and whose technical jargon you are unable to explain to anyone who hasn't had the same experiences you have, I hope I will be forgiven for being a bit skeptical about these alleged better predictions. Please don't tell me what I would do unless you actually know. It's rude and it's counterproductive. I'd be interested, though, if you'd say a little more about this card magician example, because if you're suggesting that some such example would support your argument here (which I appreciate you might not be) then again I wonder whether you're using terms like "reductionist" differently from me, to denote some kind of straw-man naive reductionism that I think few people here would endorse. But maybe not; you haven't exactly made it clear what you have in mind. I regret to say that in this thread it doesn't look that way. You are making claims and dispensing advice in the name of empiricism but completely refusing to give a shred of empirical evidence supporting either the claims or the advice.
    I believe that making explicit predictions is very helpful, and that the possibility of being wrong shouldn't stop one from making them. The key here is the meaning of the word 'skeptic'. If you use the word to mean that you don't know whether the claims I'm making are true, that's completely fine. If you mean by it that you reject the claims, then I think that's the wrong conclusion to draw. I don't think that's true skepticism. If you would simply believe in less stuff, I would be okay with that outcome. You don't need to believe that the specific claim I made is true. There might be a context in the future where you have experiences that verify what I told you, but I'm okay with the fact that you haven't yet had them. When I say: "Don't reject theories because you believe them to be impossible based on reductionist reasoning, and instead be open (which is something different from accepting)", I'm advocating skepticism. I believe that empirical evidence is about actually experiencing something, and that's not something I can give you. I would prefer to live in a world where I could transfer the evidence I have for believing what I believe over the internet, but I don't believe I live in such a world. I'm also okay with pluralism, where other people don't believe what I believe. While thinking about specific examples: what do you think the average person will say when I ask them, "Is it possible to perceive the sound of silence in a way that's different from simply hearing the absence of sounds?"
    Sure. I suggest that when you make explicit predictions about someone you are in conversation with, you take the trouble to (1) make your level of confidence explicit and (2) acknowledge that you are extrapolating and could be wrong. Because otherwise you are at risk of being obnoxiously rude, and you are likely to be wrong. What I meant on this occasion is that (1) you have given me no reason to believe the confident-sounding claims you are making about better predictions, (2) I think it likely that if you had actual good support for those claims you would be showing some of it, and (3) on the whole I think it very likely that in fact those claims are false. But of course I don't know they're false. (You made some remarks earlier about mental maps I allegedly don't have. Here's something you seem to be lacking: you write as if my only options are "believe true", "believe false", and "no opinion", but in fact there are many more. If I think there's a 40% chance that you actually have something a reasonable person could regard as good evidence that less-naturalist people make better predictions in any situations it's reasonable to care about, and a 20% chance that in fact less-naturalist people do make better predictions in any situations it's reasonable to care about -- have I "rejected" your claims, or just "don't know whether the claims are true"? I suggest: not exactly either.) I'm afraid you are still failing to be clear. (Whether the problem is that you aren't expressing yourself clearly, or that you aren't thinking clearly, I don't know.) If "reject theories" and "believe them to be impossible" mean "consider them certainly false", then: that's just not a thing I do, and it's not a thing the standard-issue LW position advocates, and it's not something any good reasoner should be doing in any but the most extreme cases. If you're arguing against that then you are fighting a straw man. If those phrases mean "consider them at least a bit less likely", then:
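The graded-credence point above (a 40% chance of one thing, a 20% chance of another, rather than "believe true" / "believe false" / "no opinion") is just probability updating under Bayes' rule. A minimal sketch, with illustrative numbers that are not taken from the discussion:

```python
# Credence isn't tri-state; it's a probability that moves under evidence.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(claim | evidence) from a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Illustrative numbers only: a 20% prior that the claim is true, and
# evidence three times likelier if the claim is true than if it is false.
posterior = update(0.20, 0.6, 0.2)
print(round(posterior, 3))  # 0.429
```

A credence of 0.2 that moves to 0.43 on evidence is neither "rejected" nor "no opinion", which is exactly the middle ground the comment describes.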
    I think norms of conversation that prevent honest communication by labeling it as rude are not useful for discussions that are about learning about the world. "You should express different beliefs because your actual beliefs are rude" kills an atmosphere of learning. Of course, managing the resulting emotions with empathy is something that's much easier in person, and that might very well prevent anything positive from happening in this online conversation. The problem is that I'm referring to concepts that are likely not in your map. I know that various people have taken months of in-person teaching to get the concepts to which I'm referring, so it's not surprising to me that the ideas don't feel clear to you. If what I'm saying felt clear to you, you would ignore what I'm saying. Successfully pointing somewhere that's outside of your present map feels inherently unclear. For me it's a success that you don't feel like I mean one of those things that are inside your map. At one of the meditations I led in an LW context, I made a point of focusing on the perception of silence as something besides the simple absence of sound. Afterwards I checked with the person in the room whom I had predicted was least likely to get something from the experience, and they did experience a silence that was distinct from the absence of sound. It's no big shiny effect, but I would suspect that many committed naturalists think silence = absence of sound, and that any suggestion that it isn't is emitting deep-sounding word salad. The person developed a new phenomenological category for listening to silence that's distinct from not hearing sounds. Now, that's an experience I gave the person in a 20-minute meditation, and it wasn't the only thing I did in those 20 minutes. Over multiple days, especially with a teacher who has more skill than I have at the moment, more new experiences are possible.
    We're all empiricists here, so let's run an experiment. You've got this theory that gjm won't understand if you try to explain. How 'bout you stop rehashing that, actually try to explain some of those technical terms you mentioned earlier, and see how your theory holds up?
    Perhaps I wasn't clear; I certainly wasn't suggesting you should say things you don't believe for fear of rudeness. I was suggesting you shouldn't make baseless claims about other people for fear of rudeness. Actually, I think there are more important reasons than rudeness (making confident false statements can mislead others or even yourself), but your comments about making explicit predictions led me to suspect that you'd be unmoved by them. Perhaps that's the problem. Or perhaps the problem is that you aren't even trying to be understood. "You guys are making worse predictions than you would if you thought like me." Oh, that's interesting; what predictions? "There's no point saying; you don't have the necessary concepts." Oh, what concepts? "There's no point saying; you wouldn't understand." Well, you might be right, but how can a conversation like this possibly be any use to anyone? If indeed you know ahead of time that no one who disagrees with you is capable of understanding what you say without lengthy in-person training, what is the point of saying it? OK, so let's take a look at what's happened here. The question is, if I understand you right, whether committed LW-style naturalist reductionists make worse predictions than you do about whether there's scope for listening in a quiet room to produce something subjectively different from mere not-hearing-sound. We've got exactly two data points here. One: you. Unfortunately, you haven't told us what your prediction ahead of time actually was, but you say that the person you thought least likely to have had that experience did in fact have it, which doesn't sound like a big predictive success to me. (Though it could have been, if you thought they were 95% likely to have the experience and others in the room more like 99%.) Two: me. 
If you read what I wrote you will see that the first thing I said was "For sure there are multiple different possible experiences of not-sound", and I commented specifically that a
    I do applied empiricism in the sense that I predicted it would be worthless to try to give you a specific example, and indeed I find that it's worthless.
    What sort of response would have been evidence of its not being worthless? What are you trying to achieve here?
    As for giving you the example: the goal was that the example would help you to understand something that you haven't understood before. But generally, writing more about the purpose of this conversation would only open more issues that I can't fully explain.
    Leading to the question… A little later… It's turtles all the way down.
    Reductionism doesn't mean "is currently being explained by being reduced to simpler ideas". It's closer to "can potentially be explained by being reduced to simpler ideas". Testing hypotheses in general is neither reductionist nor anti-reductionist, although there could be anti-reductionist ways of generating the hypotheses. If you think that differences in vitamin D3 ultimately will depend on some molecular cause, you're fine. If you think differences in vitamin D3 will just depend on the time of day because there's a special physical law dealing with vitamin D3 and time of day and this physical law has no components, you're not. In other words, you're overstating what counts as anti-reductionist in order to make spiritual experiences, which actually are anti-reductionist in practice, look good.
    You are hiding behind definitions of words while ignoring why our society funds things the way it does. I care about the predictions that people who are committed to certain ideas make. I don't care about whether a position is justifiable under rationalism with definition X.
    Then let me phrase it without using definitions: You're classifying "vitamin D3 response depends on time of day" with "spiritual experiences" in order to make spiritual experiences look good. They aren't similar.
    If you think I wanted to classify taking vitamin D3 at a different time of day as a spiritual experience, then you haven't understood my position.
    You're not classifying it as a spiritual experience, but you're classifying it in the same category as a spiritual experience. You're saying that both of them are "empiric". You imply that since taking vitamin D3 at different times of day is empiric, and nobody could object to that, and spiritual experiences are empiric too, nobody should object to them either. But your category "empiric" is so broad that it includes things that aren't really very similar.
    No. There isn't something inherently empiric about taking vitamin D3 at a specific time of the day. There's something empiric about the way that advice gets generated, as opposed to theory-driven drug development that only tests drug candidates for which it has a biochemical target. Objecting to spiritual experience is an interesting choice of words. Do you mean that if people meditate in a spiritually framed setting, you think they won't have experiences? Or do you mean that you object in the sense that you think those are bad experiences and people shouldn't have them? The way people object to LSD and ban it, because it leads to objectionable experiences?
    "Object" here means "object to the use of, as a way of determining things about reality". I don't really care if you like triggering brain malfunctions, but don't expect me to believe you when you tell me the hallucinations are of real things. And that's equally true whether you triggered the brain malfunction through a drug or a "spiritual experience". Billions of people believe that when Mohammed starved himself and went into the desert, the angel Gabriel that he saw really was there. I do not.
    The question of whether the object of a hallucination is "real" is a question about having a theory about the world. I advocate against focusing on that question. I advocate focusing on whether you can make reliable predictions. Yes, that's not an easy concept to understand if you are bound up with thinking that the important and meaningful question is whether or not the angel Gabriel was really there. It's typical for the new atheist crowd to focus on those questions, and because you are emotionally invested in that question, you pattern-match me into a category that's not the position I advocate.
    Whether the angel Gabriel was really there is inherently the most important and meaningful question because how people act based on that can leave me dead. Whether something leaves me dead is pretty important. You can't just say it isn't important and make it become unimportant.
    Very few people act on whether or not the angel Gabriel was really there. A lot of people act on whether or not they think the angel Gabriel was really there. If James thinks that Gabriel was there, then James will act as if Gabriel had been there; if John thinks Gabriel wasn't there, then John will act as if Gabriel hadn't been there.
    You are replying as though "X is an important question" means "the truth value of X has important effects", but in this context it really means "knowing the truth value of X has important effects". The fact that people will act based on what they think the answer is, rather than the actual answer, is irrelevant to the latter parsing.
    Yes, you care about the question and it's very meaningful to you. At the same time, it's valuable to understand that there are other people who don't care about the question and care about different things, and that you won't understand them if you project your own values about which questions are meaningful onto them.
    That's a fully general argument--you could say it about the importance of anything. It has nothing to do specifically with hallucinations. If you just mean that it's unimportant whether something is a hallucination because everything is unimportant to someone, then I can't disagree. But you don't seem to have meant that.
    I haven't used the word hallucinations or intended to refer to that concept before you did. I also haven't said atheists but new atheists, which is a term that refers to a subgroup of atheists. If someone goes off-topic and you tell them that they are off-topic, it's indeed quite a general argument. That doesn't make it wrong.
    You don't need to use a concept in order for what you say to have implications concerning that concept. That's not a standard term, so with no way to distinguish them, anything you say about it just ends up being a statement about atheists.
    It is a standard term; the fact that you don't know it doesn't mean that it doesn't have a regular usage. It's standard in the sense that it has a Wikipedia page: https://en.wikipedia.org/wiki/New_Atheism If you can't follow me when I talk about concepts that are easily understandable and well documented on Wikipedia, there's no hope that you'll get a glimpse when I talk about things in this discussion that are not easy to understand. No hope for medium-level concepts like the nature of modern drug development and the QS. None at all for hard concepts like living knowledge, body knowledge, beginner's mind, support points, effects of ideology, and phenomenological investigation.
    Okay, I just read that page. It's odd, then, that I haven't heard of "new atheism", even though I have heard of most of the people mentioned on that page. It's also odd that nobody on that page is quoted as calling themselves a new atheist. Is this a term used by people other than their detractors? This link suggests that the term arose from "journalistic commentary on the contents and impacts of their books"--that is, they don't call themselves that and it's just a label attached by someone else. This doesn't give me confidence that the label is used for more than just "people I don't like". And while rationalwiki is untrustworthy for a lot of things, the article on new atheism there is decidedly lukewarm on it. "The term "New Atheism" is generally only used in blogs and opinion columns, and is more of a pejorative than a self-descriptor for the New Atheists".
    Do you object to the core idea, the one that Wikipedia describes? If you wanted a self-description, you could use the term 'militant atheist'. Richard Dawkins used the phrase in his TED talk, but I would expect that most people would understand it more pejoratively than "new atheism". It's quite worthwhile to distinguish the cluster of new atheists from other atheists. The average atheist in Germany simply doesn't believe in God. He doesn't go around arguing that religion should be fought in the way Dawkins et al do. The average atheist in Germany doesn't care very much about the question of whether the angel Gabriel was really there. But people like you do care about the question. It's useful to have a term for that cluster of beliefs.
    I object to the idea of someone claiming that his opponents are all part of the same group when the targets in question don't actually identify as part of the same group. Labelling other people this way is highly prone to bias. That's a self-description of one person, not an assertion about how he should be grouped with other people.
    Dude, get over yourself.
    Like when?