Comments

That paper did help crystallize some of my thoughts. At this point I'm more interested in whether I should be modifying how I think than in how to implement AI.

I guess the distinction in my mind is that in a Bayesian approach one enumerates the various hypotheses ahead of time. This is in contrast to coming up with a single hypothesis and then adding more refined versions based on results. There are trade-offs between the two. Once you get going with a Bayesian approach you are much better protected against bias; however, if some hypothesis is missing from your prior, you will never find it.
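To make that concrete, here is a minimal Python sketch of updating over a fixed, enumerated hypothesis set (the hypothesis names and numbers are invented purely for illustration). Updating only reallocates probability among the hypotheses you wrote down, so a hypothesis left out of the prior stays at zero no matter how well it would explain the data.

```python
# Minimal sketch: Bayesian updating over a fixed, enumerated set of hypotheses.
# Hypothesis names and all numbers below are invented for illustration.

def update(prior, likelihoods):
    """Return the posterior P(H | data) given prior P(H) and likelihood P(data | H)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Three candidate hypotheses enumerated up front; H3 was effectively left out (prior = 0).
prior = {"H1": 0.5, "H2": 0.5, "H3": 0.0}

# Likelihood of the observed data under each hypothesis.
likelihoods = {"H1": 0.1, "H2": 0.6, "H3": 0.9}

print(update(prior, likelihoods))
# {'H1': 0.142..., 'H2': 0.857..., 'H3': 0.0}
# H3 fits the data best, but because it had zero prior probability,
# no amount of updating can raise its posterior above zero.
```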

Here are some specific responses to the 4 answers:

  1. If you have a problem for which it is easy to enumerate the hypotheses, and for which you have statistical data, then Bayes is great. If you also have a good prior probability distribution, you get the further advantage that it is much easier to avoid bias. However, if you find yourself having to add new hypotheses as you investigate, then I would say you are using a hybrid method.

  2. Even without Bayes, one is supposed to look specifically for alternative hypotheses and search for the best answer.
    On the Less Wrong welcome page, the link next to the Bayesian link is a reference to the 2-4-6 experiment. I'd say this is an example of a problem poorly suited to Bayesian reasoning. It's not a statistical problem, and it's really hard to enumerate a prior over all possible rules for a list of three numbers, ordered by simplicity. There's clearly a problem with confirmation bias, but I would say the thing to do is to step back and do some careful experimentation along traditional lines. Maybe Bayesian reasoning is helpful because it would encourage you to do that?

  3. I would agree that a rationalist needs to be exposed to these concepts.

  4. I wonder about this statement the most. It's hard to judge qualitative statements about probabilities. For example, I can say that I had a low prior belief in cryonics, and since reading articles here I have updated and now assign it a higher probability. I know I had some biases against the idea. However, I still don't agree, and it's difficult to tell how much progress I've made in understanding the arguments.

Hi Less Wrong. I found a link to this site a year or so ago and have been lurking off and on since. However, I've self-identified as a rationalist since around junior high school. My parents weren't religious and I was good at math and science, so it was natural for me to look to science and logic to solve everything. Many years later I realize that this is harder than I hoped.

Anyway, I've read many of the sequences and posts, generally agreeing and finding many interesting thoughts. It's fun reading about zombies and Newcomb's problem and the like.

I guess this sounds heretical, but I don't understand why Bayes' theorem is placed on such a pedestal here. I understand Bayesian statistics, both intuitively and technically. Bayesian statistics is great for a lot of problems, but I don't see it as always superior to thinking inspired by the traditional scientific method. More specifically, I would say that coming up with a prior distribution and updating on it can easily be harder than the problem at hand.
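To be concrete about why I say that (numbers invented purely for illustration): suppose a prior P(H) = 0.01, with P(E|H) = 0.9 and P(E|¬H) = 0.1. Bayes' theorem gives P(H|E) = (0.9)(0.01) / [(0.9)(0.01) + (0.1)(0.99)] = 0.009 / 0.108 ≈ 0.083. The arithmetic is the easy part; deciding that 0.01, 0.9, and 0.1 are the right numbers, and that H and ¬H carve up the possibilities correctly, is where the real work is.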

I assume the point is that there is more to what is considered Bayesian thinking than Bayes' theorem and Bayesian statistics, and I've reread some of the articles with the idea of trying to pin that down, but I've found that difficult. The closest I've come is that examining what your priors are helps you keep an open mind.