All of davidamann's Comments + Replies

I think a better way to frame this issue would be with the following method:

  1. Present your philosophical thought-experiment.
  2. Ask your subject for their response and their justification.
  3. Ask your subject what would need to change for them to revise their belief.

For example, suppose I respond to your question about the solitary traveler with "You shouldn't do it because of biological concerns." Accept the answer, and then ask: what would need to change in this situation for you to accept the killing of the traveler as moral?

I remember this method giving me ...

The happiness box is an interesting speculation, but it involves an assumption that, in my view, undermines it: "you will be completely happy." This assumes that happiness has a maximum, and that the best you can do is top up to that maximum. If that were true, then the happiness box might indeed be the peak of existence. But is it true?
This seems to nicely fix something which I felt was wrong in the "least convenient possible world" heuristic. The LCPW only serves to make us consider a possibility seriously. It may be too easy to come up with a LCPW. Asking what would change your mind helps us examine the decision boundary.

I find a similar strategy useful when I am trying to argue my point to a stubborn friend. I ask them, "What would I have to prove in order for you to change your mind?" If they answer "nothing," you know they are probably not truth-seekers.

Great, David! I love it.

Namely, the point of reversing your moral decision is that it helps identify what this particular moral position is really about. There are many factors in every decision, so it might help to vary each of them and find other conditions that compensate for the variation.

For example, you wouldn't enter the happiness box if you suspected that information about it giving the true happiness is flawed, that it's some kind of lie or misunderstanding (on anyone's part), of which the situation of leaving your family on the outside is a special case...

I believe that you have an unexamined assumption in your post: namely, that you can have any effect on what your child believes.

A book by Judith Rich Harris called "The Nurture Assumption" makes the case that it is not parents who shape a child's attitudes and beliefs, but the child's peers. Parents' impact on children tends to be primarily genetic and in the basics (no abuse, well fed and clothed, and choosing the general environment where the child is raised). For a more detailed look at Harris's argument, see Malcolm Gladwell's article at http:/...

JRH is mostly talking about the long term: parents have little effect on their children's adult beliefs and behavior, but an enormous impact on what children believe and how they act while young. One of JRH's recurring examples is which language is spoken fluently as an adult. We tend to assume kids get it from their parents, but that's a spurious correlation: nearly all children are exposed to parents and peers who speak the same language, and the children end up matching both. When parents and peers speak different languages, children end up speaking the same language as their peers once they move out of the home; while still at home, they continue speaking to their parents in their parents' language.

Similarly, the Santa question is about what your child believes during the formative years. No one continues to believe the Santa story beyond 15, so that isn't a question of peers vs. parents.
Good point - but it depends on who counts as the child's 'peers'. In a harmful environment like public schools, the child is artificially sequestered with same-age children for most of the day (and in most households, then exposed to 'age-appropriate' television for the rest of it). Of course parents wouldn't have much impact in this environment. My children will be unschooled - that may be relevant.

This "trying to believe" tactic is much more explicitly used in areas where there is randomness or unpredictability.

My business is finance. As a financial advisor, I am constantly "trying to believe" in things like regression to the mean, the long-term performance of the market vs. short-term volatility, and the efficacy of asset allocation.

But each day I am faced with evidence that causes me to doubt my rationally held beliefs about investing.

I think baseball players may have similar issues with batting. They may rationally know that it's ...
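The regression-to-the-mean intuition above lends itself to a small illustration. The sketch below simulates independent yearly returns (the mean and volatility figures are made-up placeholders, not market data) and shows that the year following an extreme year looks like an ordinary draw:

```python
import random

random.seed(0)

TRUE_MEAN = 0.07   # assumed long-run annual return (hypothetical)
VOLATILITY = 0.15  # assumed annual standard deviation (hypothetical)

# Simulate many independent pairs of consecutive years.
pairs = [(random.gauss(TRUE_MEAN, VOLATILITY),
          random.gauss(TRUE_MEAN, VOLATILITY))
         for _ in range(100_000)]

# Among top-decile first years, what does the NEXT year average?
extreme_first = sorted(pairs, key=lambda p: p[0])[-10_000:]
avg_extreme = sum(p[0] for p in extreme_first) / len(extreme_first)
avg_followup = sum(p[1] for p in extreme_first) / len(extreme_first)

print(f"average top-decile year:              {avg_extreme:.3f}")
print(f"average return in the following year: {avg_followup:.3f}")
# The follow-up year clusters near TRUE_MEAN rather than near the
# extreme value: the hot year carries no information about the next draw.
```

This is the pattern the advisor has to keep "trying to believe": a standout year predicts an ordinary next year, not another standout.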

The haunted rationalist is probably an example of a physiological response tied to a shared cultural delusion. In strange places, when we are alone, we can often feel nervous or fearful. I remember feeling this way when I was alone in our church growing up, or when I was alone in our own house for the first time as a teenager.

There's probably a physiological reason for this. Perhaps we produce more adrenaline when left alone after a period of close cohabitation with others. This would be a useful evolutionary trait allowing us to be more aware of our s...

Show him a picture of the brain. Point to the amygdala and ask him to describe how that part works. After that, you can ask him why he eats donuts given that he knows how unhealthy they are.

I don't think that the "effort" distinction is banal at all.

The "lying" scenario provides us with much more information about the "liar" than the "keeping secrets" scenario provides about the "secret keeper". Let me go into this in more detail.

An individual assumes that others have mental states, but that individual has no direct access to those mental states. An individual can only infer mental states through the physical actions of another.

For now, let's assume that an individual who can more ac...

> The person may be unsure of your willingness to receive this information. In other words, there are many reasons a person may refrain from giving you potentially helpful information and still have a mental stance of "friendliness".

I agree. For instance, suppose you know that people would over-value your evidence: what if Walrus believes that Carpenter is over-credulous? He thinks that Carpenter will take the evidence, proclaim 100% certainty that pigs can have wings, and go blow all his money trying to start a flying pig farm. Walrus believes that there will be a higher cost to Carpenter of overconfidence in the belief that pigs have wings, and a lower cost to underconfidence. Consequently, Walrus will keep his evidence to himself because he knows that Carpenter will receive it with bias.
Agreed, but for me the intuition that "lying" is less wise than "keeping secrets" doesn't fully disappear when I assume that I don't care about consequences for my reputation or other punishments, so I don't think this can be the whole story. Also, I sort of skirted over the issue by calling it "bad faith", but I don't think there's necessarily a contradiction between "lying" to someone and looking out for their best interests (consider that pigs don't in fact have wings, so the Walrus is manipulating the Carpenter toward a true conclusion), though there often is in practice. (In case anyone is wondering why I'm putting scare quotes around "lies" and "secrets", it's because I'm thinking more in terms of misleading contributions and non-contributions to an intellectual debate than in terms of more everyday examples. I don't think my comments apply well to things like privacy issues, for example.)
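The Walrus's cost asymmetry can be made concrete with a toy expected-value calculation. All credences and payoffs below are invented for illustration, and the 0.5 action threshold is likewise an assumption about when Carpenter would act:

```python
# Hypothetical numbers for the Walrus's reasoning about whether to share
# evidence with an over-credulous Carpenter.

P_WINGED_PIGS = 0.01       # Walrus's actual credence, given the evidence

credence_if_shared = 0.99  # Carpenter's over-credulous update on hearing it
credence_if_silent = 0.05  # Carpenter's modest prior if Walrus stays quiet

COST_OF_FARM = 100         # loss from starting a flying-pig farm that fails
GAIN_IF_RIGHT = 50         # payoff if pigs really do have wings

def expected_value(credence_acted_on):
    """Carpenter invests iff his credence crosses 0.5; the expected payoff
    is then computed with the Walrus's (better-calibrated) probability."""
    if credence_acted_on <= 0.5:
        return 0.0  # Carpenter does nothing; no gain, no loss
    return P_WINGED_PIGS * GAIN_IF_RIGHT - (1 - P_WINGED_PIGS) * COST_OF_FARM

print(expected_value(credence_if_shared))  # sharing pushes Carpenter into a bad bet
print(expected_value(credence_if_silent))  # silence leaves him safely inactive
```

Under these made-up numbers, sharing has a large negative expected value for Carpenter while withholding has none, which is exactly the asymmetry the Walrus is weighing.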