Take all the metaphysical models of the universe that any human ever considers.

Say there are n mutually incompatible ones.  I don't have a good definition for counting these, so our estimates of n will vary by orders of magnitude.

Start with 1:1 odds, i.e. a 50% chance that one of them is correct and a 50% chance that none of them are.

0.5/n is the prior for a given individual metaphysical model.

You might have opinions about "50%".  Maybe you think humans are actually way less likely than that to ever think of the "right" one.  Maybe you think we're almost guaranteed to, for some reason.  I started with a very general, unimposing prior of 1:1 odds.  We can generalize this to p/n, for whatever you think p is.

How big is n? I don't know, but I ran a Twitter poll asking people to pick a rough order of magnitude.  I would like a number.  It's easy to lie with numbers, but it's even easier to lie without them.
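To make the arithmetic concrete, here is a minimal sketch of the p/n base rate for a few order-of-magnitude guesses at n (the numbers below are placeholders chosen for illustration, not the poll results):

```python
# Illustrative base rates p/n for the next metaphysical model you hear.
# p and the n values below are placeholders, not the poll results.
p = 0.5

for n in [10_000, 1_000_000, 100_000_000]:
    base_rate = p / n
    print(f"n = {n:>11,}  ->  base rate = {base_rate:.2e}")
```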

Updates away from the base rate can be made based on the complexity of the idea.  This mechanism is very general, and it should apply even in this domain to some extent.  Some religions are crazy complicated, so probability mass should be gouged from those.  Though, estimating "complexity" is going to be very hard.  ("The lady down the road is a witch; she made the universe" is short to say, but that doesn't make it simple.)  Still, do we have zero information about such complexity comparisons? I doubt that.
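One hypothetical way to cash out such a complexity update, sketched below with entirely made-up models and complexity scores, is to down-weight each model by something like a description-length penalty and renormalize; this illustrates the general idea, not a claim about the right penalty:

```python
# Hypothetical sketch: spread prior mass across a few toy models using a
# 2**(-complexity) penalty. The complexity scores (in "bits") are made up,
# and the 0.5 here is shared among these three toy models only.
p = 0.5
complexities = {"simple_model": 10, "middling_model": 14, "baroque_religion": 25}

weights = {name: 2.0 ** -k for name, k in complexities.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name:>16}: prior = {p * w / total:.6f}")
```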

I have seen smart people say there is no way to assign probabilities to metaphysical ideas.  They then express opinions about the many-worlds interpretation, the simulation argument, and others.  They'll do it with even less informative, more nebulous terms than a probability, such as "I am skeptical" or "seems really likely".  As a forecaster I have also seen wildly overconfident claims made by some smart people, many orders of magnitude above a cloud of reasonable base rate estimates.  I know they don't have that much information and aren't calibrated.

I thought it would help to give a simple base rate template.  So here is one.


To complement @Dagon's comment, another difficulty is that Skepticism itself is also a philosophical model, which can be taken either as merely epistemological or as a metaphysical model unto itself, so the initial 1:1 split actually gives Skepticism a 50% prior vs. all other models combined. And then we have some relatively weird models such as Nominalism, which is metaphysically skeptical except for affirming, atop a sea of complete no-rules free-formness, the absolute will of an absolute god who decides everything just because.

Fun detail: my Philosophy major followed a method called "monographic structuralism" that consisted of learning each philosopher's system as if we were devout followers of theirs (for the duration of the class). The idea was that before opining on this or that philosophical problem it was worth knowing that philosopher's arguments and reasoning as well as they themselves did. So one studied philosopher A enough to argue perfectly for his ideas, finding them perfectly self-consistent from beginning to end and from top to bottom; then studied philosopher B similarly; then philosopher C, ditto; and so on and so forth. This invariably led one to learn two philosophers who said the exact opposite of each other while still being perfectly self-consistent, at which point one threw their hands up and concluded the issue to be strictly undecidable. In the end most students, or at least those who stuck with the major long enough, became philosophical skeptics. :-)

One difficulty is that such models aren't mutually exclusive; there's a lot of overlap.  Another is that "correct" is very hard to define for elements that don't have reportable experiential predictions.  And even if we do suppose some resolution to the wager (which is what gives "probability" its meaning), many models could be partly correct (either in some of their elements, or in some contexts but not universally).

Great points.  There will be a vast cloud of models that aren't mutually exclusive, and this base rate currently fails to capture them.  We would have to somehow expand it.

Also definitely true on how models can be slightly correct, mostly-correct, etc.

I would say the wording that the resolution to the wager is "what gives the probability its meaning" is not entirely correct, though I am highly sympathetic to that sentiment.  Suppose you have a shoebox in your room, and in the shoebox is a dice.  (One dice; I'm pushing for the word "die" to die.)  You shake the shoebox to roll it, carefully take a picture of what it rolled, and preserve that somewhere, but I never see the dice, the box, nor the picture, and you never tell me the result.

I can still have a forecast for the dice roll.  Though, there are a lot more uncertainties than normal.  For example I can't be sure how many sides it even had; it could've been one of those Dungeons and Dragons dice for all I know.  In fact it could be a 3D-printed dice with a near-arbitrary number of sides.  I'd have to have a forecast distribution for the number of sides the dice had.  And that distribution would be much wider than most people would guess.  Though, I can confidently say it's got fewer sides than the number TREE(3).  I'd also have to forecast the odds you're lying about how many dice are in there, or lying some other way I haven't thought of.
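For illustration, a rough sketch of what that shoebox forecast might look like: put a made-up distribution over the number of sides, assume each face is equally likely given the side count, and marginalize:

```python
# Toy shoebox forecast: P(roll == target), marginalizing over my
# uncertainty about how many sides the dice has. The side-count
# distribution is made up purely for illustration.
sides_dist = {4: 0.10, 6: 0.60, 8: 0.10, 12: 0.05, 20: 0.10, 100: 0.05}

def prob_roll_equals(target: int) -> float:
    total = 0.0
    for sides, p_sides in sides_dist.items():
        if 1 <= target <= sides:
            total += p_sides / sides  # uniform over faces, given the side count
    return total

print(prob_roll_equals(3))   # possible on every dice in the support
print(prob_roll_equals(50))  # only possible if it was the 100-sided one
```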

In the end there is some remnant of a meaningful forecast process to be made, just as if I were participating in a forecast on AI or on COVID.  My true prior could be that n is somewhere from one to TREE(3) or whatever, and I slim it down somewhat.  But there are two major distinctions:

1) I'll be deprived of the information of the resolution.

2) I'm not able to whittle the estimate down nearly as much as in other domains.  I will end up with a cloud of estimates whose 25th and 75th percentiles span many orders of magnitude (a rough sketch of this is below).  This is uncomfortable to work with, but I don't see why I wouldn't have some sort of base rate for the next metaphysical idea I hear.  (If I felt like bothering to, of course.)
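For instance, if my uncertainty about n were roughly log-uniform across many orders of magnitude (an assumed shape and range, just for illustration), the resulting cloud of base rates is itself spread across orders of magnitude:

```python
import random

# Assumed log-uniform uncertainty over n, spanning 10**3 to 10**9
# (the range is made up), and the resulting spread of 0.5/n base rates.
random.seed(0)
samples = sorted(10 ** random.uniform(3, 9) for _ in range(100_000))

n_25, n_75 = samples[len(samples) // 4], samples[3 * len(samples) // 4]
print(f"25th percentile of n: {n_25:.2e} -> base rate {0.5 / n_25:.2e}")
print(f"75th percentile of n: {n_75:.2e} -> base rate {0.5 / n_75:.2e}")
```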

If such an estimate is meaningless, then the way I hear many smart people talk about the subject is hyper-meaningless.  They're debating the merits of specific numbers that could land in the shoebox.  They'll even get very invested in some of them, building whole religions (or in EA, perhaps "proto-religions", to be more fair) around them.  They do this without even bothering to estimate the number of sides it could have, the number of dice, or anything else.  Smart people don't agree with me yet that the forecasting process is of some relevance to this domain, but they probably will some day, except for the ones that have a religion.

You can forecast all you want, but there is no "correctness" to that forecast.  It requires some informational path from event to experience for a probability to have meaning.  That does include telling the forecast to someone who observed (even indirectly) the outcome and seeing if they laugh at you, but does not include making up a number and then forgetting about it forever.

So to help me understand your position, how do you feel comparatively when someone like Bostrom says there's, for example, maybe a 50% chance we're in a simulation? (More egregiously, Elon saying there's a one in a billion chance we're not in a simulation!).

I think they are both perfectly reasonable statements about the models they prefer to imagine.  They're using probability terminology to give a sense of how much they like the model, not as any prediction: there's no experience that will differ depending on whether it's true or false.

Probability of being in a simulation doesn't make sense without clarifying what that means for the same reasons as probability in Sleeping Beauty. In the decision-relevant sense you need to ask what you'd care about affecting, since your decisions affect both real and simulated instances.

Virtually all forecasting carries some degree of risk that the prediction resolves "ambiguous".  That risk reduces the informativeness.  While I can't say what exactly does or does not count as us being "in a simulation", there's also no particular reason I can't put a probability on it.  In the vast semantic cloud of possible interpretations, most of which is not visible to me, I have some nonzero information about what isn't a simulation, and I know a simulation-promoter has shifted probability away from those other things.  E.g. I know they are saying it's not just WYSIWYG.  It's not much, but it's also nonzero.

I also have placed many predictions on things that I will never see the resolution of, even if they are well-defined.  Things that could not possibly affect anything to do with me.

I would wholeheartedly endorse an economic argument that such predictions are of too little tangible value to us.  I do not endorse the idea that you fundamentally can't have a probability attached.  In fact it's remarkably difficult for that to be entirely true, once actual numbers are used and extremely small amounts of information or confidence are a thing.

While I can't say what exactly does or does not count as us being "in a simulation", there's also no particular reason I can't put a probability on it.

Well, I quoted Sleeping Beauty as a particular illustration for why you'd put different probabilities on something depending on what you require, and that must be more specific than "a probability". This is not a situation where you "can't have a probability attached", but illustrates that asking for "a probability" is occasionally not specific enough a question to be meaningful.

I would agree that models are generally useful, as ML demonstrates, even if it's unclear what they are saying. But in such cases interpreting them as hypotheses that assign probabilities to events can be misleading, especially when there is no way of extracting these probabilities out of the models, or no clear way of formulating the events we'd be interested in. Instead, you have an error function, and you find models that have low error on the dataset, and these models work better than models with greater error. That doesn't always have to be coerced into the language of probability.

I like this idea. Most of these probabilities become actionable if one thinks about what comes after death (and whether suicide is good). Hell? Another level of simulation? Quantum immortality? A future AI creating my copy? Nothingness?

Answers to these questions depend on one's metaphysical assumptions. If one has a probability distribution over the field of possible metaphysical ideas, one may choose the best course of action regarding death.

[anonymous]:

So maybe the error here is that humans can't really hold thousands of hypotheses in their heads. For example, if you contrast the simulation argument with "known physics is all there is", you can falsify the "known physics" argument because certain elements of the universe are impossible under known physics, or don't have an apparent underlying reason, which the simulation argument can explain. (The speed of light is explainable if the universe is made of discrete simulation cells that must finish by a deadline, and certain quantum entanglement effects could happen if the universe can write to the same memory address in one step.)

But there are thousands of other explanations that likely fit the same data, and it's not falsifiable. The simulation argument is just the one "at hand" for a tech worker who has worked on related software.

EDIT: I think I misunderstood.  Just to confirm, did you mean this removes the point of bothering with a base rate, or did you mean it helps explain why people are ending up at preposterously far distances from even a relatively generous base rate estimate?

I have placed many forecasts on things where I am incapable of holding all the possible outcomes in my head.  In fact that is extremely common for a variety of domains.  In replication markets for example, I have little comprehension of the indefinite number of theories that could in principle be made about what is being tested in the paper.  Doesn't stop me from having opinions about some ostensible result shown to me in a paper, and I'll still do better than a random dart-throwing chimp at that.

[anonymous]:

Yes, that's what I meant: if you only compare hypotheses A and B when there is a very large number of hypotheses that fit all known data, you may become unreasonably confident in B if A is false.

Cool idea. Any model we actually spend time talking about is going to be vastly above the base rate, though, because most human-considered models are very nonsensical/unlikely.

In hindsight I should've specified a time limit.  Someone pointed out to me that if something taxonomically included in "human" continued living for a very long time, then that thing could "consider" an indefinite number of ideas.  Maybe I should've said "that anyone considers up until the year 3k" or something.

I don't think that solves the problem though. There are a lot of people, and many of them believe very unlikely models. Any model we (lesswrong-ish) people spend time discussing is going to be vastly more likely than a randomly selected human-thought-about model. I realise this is getting close to reference class tennis, sorry.

I had little hope of solving much in this domain! But a base rate that is way off is still useful to me for some discussions.  What you're pointing to might offer some way to eliminate a lot of irrelevant n, or gouge probability away from them.  So with respect to discussions within smart circles, maybe the base rate ends up being much higher than 1/5million.  Maybe it's more like 1/10,000, or even higher.  I'm not a stickler; I'd take 1/1,000, if it lets certain individuals in these circles realize they have updated upward on a specific metaphysical idea way more strongly than they reasonably could.  That it's an obvious overconfidence to have updated all the way to a 50% chance on a specific one that happens to be popular in smart circles at the time.
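To put a rough number on how big such an update is: moving from a base rate like 1/10,000 to a 50% credence implies a likelihood ratio of roughly 10,000, i.e. around 13 bits of evidence for that one model. A quick check with illustrative priors:

```python
import math

# How big an update does it take to go from a base-rate prior to 50%?
# Likelihood ratio = posterior odds / prior odds; "bits" = log2 of that.
def implied_update(prior: float, posterior: float = 0.5):
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    ratio = posterior_odds / prior_odds
    return ratio, math.log2(ratio)

for prior in [1 / 5_000_000, 1 / 10_000, 1 / 1_000]:
    ratio, bits = implied_update(prior)
    print(f"prior {prior:.1e}: needs a likelihood ratio of ~{ratio:,.0f} (~{bits:.1f} bits)")
```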

I think that's how I'd use this as well.

TAG:

I have seen smart people say there is no way to assign probabilities to metaphysical ideas.

There's objective probabilities and subjective probabilities, and there's absolute probabilities and relative probabilities. So that's four quadrants.

Subjective is easier than objective, and relative is easier than absolute. So subjective+relative is the easiest quadrant. Even if you are sceptical about absolute objective probability, you are still entitled to your own subjective opinion...for what it's worth...because everyone is.

(If it's not obvious, the more difficult quadrants carry more weight).

Take all the metaphysical models of the universe that any human ever considers.

This N is huge. Approximate it with the number of strings generatable in a certain formal language over the lifetime of the human race. We're probably talking about billions even if the human race ceases to exist tomorrow. (Imagine that 1/7 of the people have had a novel metaphysical idea, and you get 1B with just the people currently on earth today. If you think that's a high estimate, remember that people get into weird states of consciousness (through fever, drugs, exertion, meditation, and other triggers), so random strings in that language are likely.)

You may want to define "metaphysical idea" (and thus that language) better. Some examples of what I mean by "metaphysical idea":

Those aren't metaphysical. Metaphysics is a well-defined philosophical research field.

TAG:

Well, it's not defined as the study of bitstrings or programmes.

That depends. Several metaphysical systems develop ontologies, with concepts such as "objects" and "properties". Couple that with the subfield of Applied Metaphysics, which informs other areas of knowledge by providing systematic means to deal with those foundations. So it's no surprise that one such application, several steps down the line, was the development of object-oriented programming with its "objects possessing properties" ordered in "ontologies" via inheritance, interfaces and the like.

TAG:

I was making a dig at Solomonoff induction. SIs essentially contain machine code.