I have read this post before and agreed with it. But reading it again just now, I have new doubts.

I still agree that beliefs should pay rent in anticipated experiences. But I am no longer sure that the examples given here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether the tree makes a sound is to place an air-vibration detector in the vicinity of the tree and check it later: if it detected vibrations, the answer is yes. For the second person, the test is to monitor every person on earth and see whether their brains did the kind of auditory processing the falling tree would trigger. Since the first person's test comes out positive and the second person's comes out negative, they answer "yes" and "no" respectively to the question, "Did the tree make a sound?"

So the problem here doesn't seem to be an absence of rent in anticipated experiences. There is a problem, true, because there is no single anticipated experience about which the two people anticipate opposite outcomes, even though one says the tree makes a sound and the other says it doesn't. But that seems to have a different cause.

Say person A has a set of observations X, Y, and Z that he thinks are crucial for deciding whether the tree made a sound. For example, if X is positive, he concludes that the tree made a sound, and if it is negative, that it didn't; here X could be "caused air vibrations". For all other observations, A has a don't-care protocol, i.e., those observations say nothing about the sound. Similarly, person B has a set X', Y', Z' of crucial observations, and all other observations are don't-cares for him. The problem is simply that {X, Y, Z} is completely disjoint from {X', Y', Z'}. Thus, even though A and B differ in their opinions about whether the tree made a sound, there is no single observation about which they anticipate opposite experiences.
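The structural point above can be sketched in a few lines of Python. The observation labels are hypothetical stand-ins for X, Y, Z and X', Y', Z'; the only thing the sketch shows is that disjoint sets of crucial observations leave no single observation on which A and B anticipate opposite outcomes.

```python
# Hypothetical crucial observations for each person (stand-ins for X, Y, Z
# and X', Y', Z' in the text above).
a_tests = {"air vibrations detected", "detector membrane moved"}
b_tests = {"auditory processing occurred in some brain"}

# Because the two sets share no element, there is no single observation
# about which A and B would anticipate opposite outcomes.
print(a_tests.isdisjoint(b_tests))  # True
```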

How about this:

People are divided into pairs. Say A and B are in one pair. A gets a map of something that's fairly complex, but not too complex: for example, an apartment with a sufficiently large number of rooms. A's task is to describe it to B. Once A and B are both satisfied with the description, B is asked questions about the place the map represents. Here are examples of questions that could be asked:

How many left turns do you need to make to go from the master bedroom to the kitchen?

Which washroom is nearest to the game room?

You are sitting in room 1 and want to go to room 2. Some guests are sitting in room 3 and you want to avoid them. Can you still manage to reach room 2?

You can also just simulate the story about Y Combinator and Paul Graham: show a new web service to person A, ask him to describe it to person B, and finally ask B questions about the web service.

In both cases, the accuracy with which B answers the questions is directly proportional to the quality of A's description.

I think two variants can be tried. In the first, A does not know what questions B will be asked. In the second, he does, but he is prohibited from directly including the answers in his description.

Hey, I live in Waterloo too. I will join. (Perhaps not this one, but any subsequent ones after the 24th this month that are organized in Waterloo.) Please keep me posted and let me know if you need any help in organizing this.

If you have many things to do and you are wasting time, you should number those things from 1 to n, assign n+1 to wasting time, and then generate a random number between 1 and n+1 (both inclusive) to decide what to do. This adds some excitement and often works.
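The trick above is a one-liner with Python's standard `random` module; the task names are hypothetical examples.

```python
import random

# Hypothetical to-do list; "wasting time" is option n+1.
tasks = ["write report", "reply to emails", "clean desk"]
options = tasks + ["wasting time"]

def pick_activity(options):
    """Pick one option uniformly at random, numbering them 1 to n+1."""
    index = random.randint(1, len(options))  # randint includes both endpoints
    return options[index - 1]

print(pick_activity(options))
```

Note that `random.randint(a, b)` is inclusive on both ends, matching the "1 and n+1 included" rule in the comment above.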

I live in Waterloo, Ontario (Canada). Does anyone live nearby?

Consulting a dataset and counting the number of times the event occurred would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say, "Oh, I do not have the dataset with me right now; can I get back to you later?"

What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would identify some facts that could serve as evidence for or against the hypothesis. And then one would do the necessary Bayesian updates.

I know how to do this for simple cases like balls in a bin. But I get confused when it comes to forming beliefs about statements about the real world.
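For what it's worth, the mechanical part of the procedure I described can be sketched in a few lines. Every number here is invented purely for illustration, the evidence is a made-up example, and the hard part (choosing the prior and the likelihoods for real-world statements) is exactly what the sketch does not solve.

```python
# Hypothesis H: "the world is nuked before 2020".
prior = 0.01  # P(H): an arbitrary illustrative prior, not a real estimate

# One hypothetical piece of evidence E, e.g. "a new state tests a nuclear
# weapon". Likelihoods are invented for illustration only:
p_e_given_h = 0.8      # P(E | H)
p_e_given_not_h = 0.3  # P(E | not H)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 4))  # the evidence raises 0.01 to about 0.026
```

Repeating the last step for each identified piece of evidence (using the previous posterior as the new prior) is the chain of updates I had in mind.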
