Comments

I'm being too vague with my use of the word "model". By "model" I just mean some set of possibilities that are grouped together. For instance, in machine learning a model is a parametrized function, which can be regarded as a set of functions, each indexed by a parameter value. A set of different models is also a model (just more possibilities). Maybe this is not the best word to use.
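To make the "set of functions indexed by a parameter" picture concrete, here is a minimal Python sketch (the particular function families are arbitrary examples, not anything from the original discussion):

```python
# A "model" as a parametrized function: the parameter indexes a set of functions.
def linear_model(w):
    """Return the single function picked out by the parameter w."""
    return lambda x: w * x

f1 = linear_model(2.0)   # the function x -> 2x
f2 = linear_model(-0.5)  # the function x -> -0.5x

# A set of different models is itself a model: the index is now the pair
# (family, w), i.e. just a point in a larger parameter space.
def combined_model(family, w):
    if family == "linear":
        return lambda x: w * x
    if family == "quadratic":
        return lambda x: w * x ** 2
    raise ValueError(f"unknown family: {family}")

g = combined_model("quadratic", 3.0)
print(f1(10), g(10))  # 20.0 300.0
```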

In the case of Solomonoff induction, some of those programs might contain logic that appears to simulate simple environments with 3D space containing stuff, such as chairs and cars, that interacts in the simulation as you'd expect. I'd say the stuff in such a simulation is roughly an ontology. There will be another program which runs a simulation containing objects you might call monsters, among which some snarks are boojums. To be clear, I'm using "ontology" to mean "a set of concepts and categories in a subject area or domain that shows their properties and the relations between them."
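For reference, one standard way to write the Solomonoff prior for a (prefix) universal machine $U$ weights each program $p$ by its length, which is why short world-simulating programs like those above carry most of the probability mass:

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{starts with}\ x} 2^{-|p|}$$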

I used "ontology" here to mean the definitions in your model, e.g. boojums, snarks and monsters in the examples above. If you wanted to update the model itself based on observations and remain within the Bayesian framework, you'd have to have had the foresight to anticipate doing so, and to have provided a collection of different models with a prior over them.
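A minimal sketch of what that foresight looks like, with made-up numbers (two hypothetical coin models standing in for richer models of a domain): keep a prior over the models and let the data move it.

```python
# Two hypothetical models of a coin (stand-ins for richer models of a domain).
models = {"fair": 0.5, "biased": 0.8}   # P(heads) under each model
prior  = {"fair": 0.5, "biased": 0.5}   # prior over the models themselves

def update(belief, observation):
    """One Bayes update of the belief over models, given 'H' or 'T'."""
    posterior = {}
    for m, p_heads in models.items():
        likelihood = p_heads if observation == "H" else 1 - p_heads
        posterior[m] = belief[m] * likelihood
    total = sum(posterior.values())
    return {m: w / total for m, w in posterior.items()}

belief = prior
for obs in "HHTH":
    belief = update(belief, obs)
print(belief)  # the data has shifted weight between the models
```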

You are right. Thank you for the correction, and I like your description, which I hope you don't mind me using (with credit) when I edit this post. My error was not realizing that P(boojum(x)|snark(x)) is the marginal probability for one particular row in the table. Even though the syntax is (hopefully) valid, this stuff is still confusing to think about!
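To spell out the arithmetic in a stripped-down form (a toy joint table for a single individual x, with made-up numbers, rather than the full table from the thread):

```python
# Toy joint distribution over (snark(x), boojum(x)) for a single individual x.
# The four rows are the four truth assignments; the numbers are made up.
joint = {
    (True,  True):  0.20,  # snark(x) and boojum(x)
    (True,  False): 0.30,  # snark(x), not boojum(x)
    (False, True):  0.05,
    (False, False): 0.45,
}

p_snark = sum(p for (s, _), p in joint.items() if s)  # 0.50
p_both  = joint[(True, True)]                         # 0.20
print(p_both / p_snark)  # P(boojum(x)|snark(x)) = 0.4
```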

and he similarly thinks about the dummy variable of the quantifier that way, and so how could all these different cats have the same probability?

I'm not quite sure how Chapman is interpreting these things, but what you are describing does sound like a reasonable objection for someone who interprets these probabilities as physically "real" (whatever that means). Though Chapman is the one who chose to assert that all the conditional probabilities are 0.4 in this example. I think he wants to conclude that such a "strong" logical statement as a "for-all" is nonsensical in the way you are describing, whereas something like "for 90% of x, P(boojum(x)|snark(x)) is between 0.3 and 0.5" would be more realistic.
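Here is a minimal sketch of that weaker claim: give each x its own conditional probability p_x instead of a single shared 0.4. The Beta(26, 39) parameters are an arbitrary choice that happens to put roughly 90% of the mass in [0.3, 0.5].

```python
import random

random.seed(0)

# Each x gets its own conditional probability p_x, drawn from a Beta
# distribution centred near 0.4 (Beta(26, 39) is an arbitrary choice).
p_xs = [random.betavariate(26, 39) for _ in range(100_000)]

frac = sum(0.3 <= p <= 0.5 for p in p_xs) / len(p_xs)
print(f"fraction of x with p_x in [0.3, 0.5]: {frac:.2f}")  # roughly 0.9
```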

But the statement actually means something like "If all I know about something is that it's a cat, I give it a 40% probability of being black".

Or you can just interpret this as a statement about your model, i.e. without knowing anything about particular cats, you have decided to model the probability that each cat is (independently) black as 40%. You can choose to make these probabilities different if you like.
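As a sketch, that model-level reading is just: each cat is an independent Bernoulli draw (the 0.4 and the helper names are illustrative):

```python
import random

P_BLACK = 0.4  # the modelling choice: each cat is black with probability 0.4

def sample_cats(n):
    """Sample colours for n cats, each independently black with prob P_BLACK."""
    return ["black" if random.random() < P_BLACK else "other" for _ in range(n)]

print(sample_cats(10))
# "P(black(x)) = 0.4 for every x" describes this sampling model,
# not a measured frequency among some actual population of cats.
```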

Answer by Daniel Abolafia, Jan 15, 2021

The arguments for Bayesian epistemology embodying rationality. It would be helpful to see this position elucidated all in one place.

tristanm writes "I’ve noticed a recent trend towards skepticism of Bayesian principles and philosophy ... which I have regarded with both surprise and a little bit of dismay, because I think progress within a community tends to be indicated by moving forward to new subjects and problems rather than a return to old ones that have already been extensively argued for and discussed." Those extensive arguments and discussions seem to be scattered across LW and other sites. It would help Bayesian proponents to have a standard sequence to point to, especially if they consider the issue settled.