This is very similar to another result: at the beginning, before seeing any draws, you believe that at the end of $n$ draws, every possible number of green draws is equally likely, i.e., $\int_0^1 \binom{n}{k}x^{n-k}(1-x)^{k}\, dx = \frac{1}{n+1}$. The proof: if you draw $n+1$ IID uniform $[0,1]$ random variables, then on the one hand, the first one is equally likely to have any particular rank, so the probability it has rank $k+1$ is $\frac{1}{n+1}$. On the other hand, the probability it has rank $k+1$ is exactly the probability that $k$ of the remaining $n$ uniform random variables take a value greater than it and $n-k$ of them take a value less than it, which equals the integral.
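For concreteness, here is a quick numerical check of both the integral identity and the rank argument (a minimal Python sketch of my own, assuming numpy and scipy are available; the variable names are just for illustration):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

n = 10

# (1) The integral identity: for every k, the integral over [0, 1] of
# C(n, k) * x^(n-k) * (1-x)^k equals 1/(n+1).
for k in range(n + 1):
    value, _ = quad(lambda x: comb(n, k) * x ** (n - k) * (1 - x) ** k, 0, 1)
    assert abs(value - 1 / (n + 1)) < 1e-9

# (2) The rank argument: among n+1 IID uniforms, the rank of the first one is
# uniform on {1, ..., n+1}, so each rank has probability 1/(n+1).
rng = np.random.default_rng(0)
draws = rng.uniform(size=(200_000, n + 1))
# For each sample, count how many of the other n uniforms exceed the first one.
ranks = (draws[:, 1:] > draws[:, [0]]).sum(axis=1)
freq = np.bincount(ranks, minlength=n + 1) / len(draws)
print(freq)  # each entry should be close to 1/(n+1) ≈ 0.0909
```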

EDIT: Thanks again for the discussion. It has been very helpful, because I think I can now articulate clearly a fundamental fear I have about meditation: it might lead to a loss of the desire to become better.

Of course. You are, at the very least, technically right.

However, I think that obtaining enlightenment only makes it harder for you to change your values, because you're much more likely to be fine with who you are. For example, the man you linked to who went through stream entry seems to have spent several years doing nothing, and didn't feel particularly bad about it. Is that not scary? Is that likely to be a result of pursuing physical exercise?

On the other hand, if you spend time thinking clearly about your values, they are more likely to change for the better, because you still have a desire (craving?) to be a better person.

Thank you for this comment. Even if you don't remember exactly what happened, at the very least, your story of what happened is likely to be based on the theoretical positions you subscribe to, and it's helpful to explain these theoretical positions in a concrete example.

I guess what I don't like about what you're saying is that it's entirely amoral. You don't say how actions can be good. Even if a sense of good were to exist, it would be somehow abstract, entirely third-personal, and have no necessary connection to actual action. All intentions just arise on their own, the brain does something with them, some action is performed, and that's it. We can only be good people by accident, not by evaluating reasons and making conscious choices.

I also disagree that you can generally draw conclusions about what happens in normal states of consciousness from examining an abnormal state of consciousness.

The person whose account of stream entry you link to says (in the very next line after your quote) that he decided to sit still until he experienced a physiological drive. That seems to be a conscious decision.

EDIT: You can find another example of someone being completely amoral (in a very different way) here: https://www.youtube.com/watch?v=B9XGUpQZY38

(I am not at all endorsing anything said in the video.)

To put the point starkly, as far as I can tell, whatever you're saying (and what that video says) works just as well for a murderer as it does for you. Meditating, and obtaining enlightenment, allows a murderer to suffer less, while continuing to murder.

"I started out skeptical of many claims, dismissing them as pre-scientific folk-psychological speculation, before gradually coming to believe in them - sometimes as a result of meditation which hadn’t even been aimed at investigating those claims in particular, but where I thought I was doing something completely different."

Did you come to believe in rebirth and remembering past lives?

You say, "I wasn't sure of how long this was going to be healthy...". Was this experienced as a negative valence? If so, why did you do what this valence suggested? I thought you were saying we shouldn't necessarily make decisions based on negative valences. (From what you've been saying, I guess you did not experience the "thought of a cold shower being unhealthy" as a negative valence.)

If it wasn't experienced as a negative valence, why did you leave the shower? Doesn't leaving the shower indicate that you have a preference to leave the shower? Is it a self that has this preference? What computes this preference? Why is the result of this computation something worth following? Does the notion of an action being worthy make sense?

Thank you for your reply, which is helpful. I understand it takes time and energy to compose these responses, so please don't feel too pressured to keep responding.

1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there must be at least one case in which the "why" is not answered by positive/negative valence (or perhaps not answered at all). What is this case, and what is the answer to the "why"?

2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now, one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of thing that motivates action, in anyone. So it seems the reasoning module is somehow special, and I think there's a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?

I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we're often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you're confused? I think you would reject this kind of first-person decision making, and give a sort of third-person explanation of how the brain just does make decisions, somehow accumulating the things various subsystems say. But this provides no practical knowledge about which processes are deployed by the brains of people who end up making good (or bad) decisions.

3. This is unrelated to my main point, but the brain showing some behaviour in an 'abnormal' situation does not mean the same behaviour exists in the 'normal' situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way. Perhaps it might follow if you have a mechanistic, reductionist account of how the brain works. I'm not being merely pedantic; Merleau-Ponty takes this quite seriously in his analysis of Schneider.
