Measuring Meta-Certainty

by Bob Jacobs · 3 min read · 5th Jul 2020


Epistemic status: I'm pretty sure I'm not the first person who thought of this concept, but I couldn't find it online, so I'm posting it here just in case.


If you are a regular user of this site then you're probably a proponent of the probabilistic model of certainty. You might even use sites like Metaculus to record your own degrees of certainty on various topics for later reference. What I don't see is people measuring their meta-certainty. (Meta-certainty is your degree of certainty about your degree of certainty, which you can test by making predictions about how accurate your predictions are going to be.)


Meta-certainty exists

I think it's pretty obvious that such a thing as "meta-certainty" exists. When I predict there is a one in six chance of my die rolling a five, but I also predict there is a one in six chance of world war three happening before 2050, that doesn't mean the two predictions are on equal footing. I feel more certain that I guessed the actual probability of the die roll correctly than I do about my probability estimate of world war three happening. In other words: my meta-certainty about the die roll is higher.
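As a rough illustration of what that pair of numbers looks like side by side, here is a minimal Python sketch; the field names and the meta-certainty values are just made up for the example:

```python
# A minimal sketch of recording a certainty together with a meta-certainty.
# The meta-certainty figures (0.99 and 0.40) are illustrative, not measured.
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str
    certainty: float       # the probability I assign to the event itself
    meta_certainty: float  # how confident I am that this probability is the right one

predictions = [
    Prediction("my die rolls a five", certainty=1/6, meta_certainty=0.99),
    Prediction("world war three before 2050", certainty=1/6, meta_certainty=0.40),
]

for p in predictions:
    print(f"{p.claim}: P = {p.certainty:.2f}, meta-certainty = {p.meta_certainty:.2f}")
```

Both predictions carry the same object-level probability; only the second number distinguishes them.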

The problem is that I find it much harder to figure out my meta-certainty estimate than my certainty estimate. This might be because human beings are inherently bad at guessing their own meta-certainty, or it might be because I have never trained myself to reflect on my meta-certainty in the same way that I've trained myself to reflect on my regular certainty.


Why care about meta-certainty?

So why should we care about meta-certainty? Well, the most obvious answer is science. By measuring meta-certainty we could learn more about the human brain: how humans learn, how we reflect on our own thoughts, etc.

But maybe non-psychologists should be interested in this too. If you have a concrete model of how certain you are about your certainty, you can more reliably decide when and where to search for more evidence. "My meta-certainty about X is very low, so maybe there is some low-hanging fruit of data that might quickly change that." "I said that I was very meta-confident about X, but that is contingent on Y being true, which I'm not very meta-confident about. Did I make a mistake or am I missing something?"

I think it could also show us some more biases. I'm willing to bet that people are more meta-confident about their political beliefs than they ought to be, but I'm not sure which other domains my brain is meta-overconfident about. This could also help us in heuristics research.


It's really hard to measure this

My friends tell me that putting a percentage on their certainty is hard/ridiculous. I've always found it doable and important, but my endeavor to do the same with my meta-certainty has certainly made me sympathize with my friends more. Maybe this is a part of certainty that is genuinely too hard for us to intuitively put an accurate percentage on. You can tell me in the comments if you don't find it more difficult, but I suspect most will agree with me. I see less reason why evolution would select for creatures that know their own meta-certainty than for creatures that know their object-level certainty. But even if it is more difficult, we can quantify the differences in a more indirect way. I've tried to use words like "almost certain", "very likely", "likely", "more likely than not", etc. to discover a posteriori what the actual probabilities of my intuitions are.
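To make that indirect bookkeeping concrete, here's roughly what it looks like in code. This is a minimal sketch: the log entries are invented, and a real log would need far more entries per phrase before the ratios mean anything:

```python
# Tag each prediction with a verbal confidence phrase, then check a posteriori
# what fraction of the predictions under each phrase actually came true.
# The example data below is invented purely for illustration.
from collections import defaultdict

# (verbal label, did the predicted event actually happen?)
log = [
    ("almost certain", True), ("almost certain", True), ("almost certain", False),
    ("very likely", True),    ("very likely", True),
    ("likely", True),         ("likely", False),
    ("more likely than not", False), ("more likely than not", True),
]

outcomes = defaultdict(list)
for label, happened in log:
    outcomes[label].append(happened)

for label, results in outcomes.items():
    hit_rate = sum(results) / len(results)
    print(f"{label!r}: {hit_rate:.0%} of {len(results)} predictions came true")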

I unfortunately can't share any insights yet, since I only started doing this recently and have been doing it pretty inconsistently. If sites like Metaculus gave the option to always register your meta-certainty, it would help people record it and would quickly give us large swaths of data to compare. I think most people's data would start out forming a nice bell curve, with certainty on one axis and meta-certainty on the other, but who knows, maybe it will turn out that meta-certainty is actually asymmetric for some reason.

Figuring out what degree of certainty was "correct" for a situation is very, very hard and requires a lot of (a posteriori) data. Figuring out the "correct" degree of meta-certainty will probably take even longer. I think that even if we get really good at measuring meta-certainty, it won't ever be as accurate as our object-level certainty. But even in a rough version (with e.g. steps of 10% instead of 1%) we could gain some interesting insights into our psyche.
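For what it's worth, the bookkeeping for the rough version is just ordinary calibration binning. Here is a sketch with invented predictions and outcomes; the same binning would be applied one level up to meta-certainty, which is where the extra data requirement bites:

```python
# Bucket predictions by stated certainty in steps of 10% and compare each
# bucket's stated probability to the observed frequency of the events.
# The predictions and outcomes below are invented for illustration.
import numpy as np

stated = np.array([0.15, 0.17, 0.62, 0.58, 0.61, 0.90, 0.88, 0.93, 0.31, 0.35])
happened = np.array([0, 0, 1, 1, 0, 1, 1, 1, 0, 1])

bins = np.arange(0.0, 1.1, 0.1)           # 0%, 10%, ..., 100%
indices = np.digitize(stated, bins) - 1   # which 10%-wide bucket each prediction falls in

for b in np.unique(indices):
    mask = indices == b
    print(f"{bins[b]:.0%}-{bins[b+1]:.0%}: stated mean {stated[mask].mean():.0%}, "
          f"observed frequency {happened[mask].mean():.0%} (n={mask.sum()})")
```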


What about meta-meta-certainty?

So does meta-meta-certainty exist? Sure! When I'm drunk I might think to myself that I should be more uncertain about my meta-certainty than my sober self would be. When I know I'm cognitively impaired, I give myself a lower meta-meta-certainty. The problem is that meta-meta-certainty might bleed into the lower levels.

I think that measuring meta-certainty is less useful than measuring object-level certainty, but still ultimately worth it in small amounts (e.g. measuring it more roughly). I think meta-meta-certainty is even less useful and might not even be worth measuring unless you're a die-hard psychologist. This process of adding levels of meta has diminishing returns not only in terms of usefulness, but also in terms of accuracy. If evolution is not particularly interested in selecting for accurate meta-certainty, then I think that meta-meta-meta-certainty is basically impossible to measure.


Conclusion

While measuring meta-certainty can help us discover more biases and help us make better predictions, it is ultimately less important than measuring regular certainty. Having a rough framework of your own meta-certainty might be useful, but I can't confidently say the same about any meta-levels above it. I would like websites like Metaculus to add the option of recording your meta-certainty, but steps of ten (0%, 10%, 20%, ...) might be enough if they want to conserve bandwidth. Meta-certainty is also useful when you want to improve your calibration. Making a distinction between easier and harder predictions helps you realize how good you really are at predicting.


Comments

Normally I think of meta-certainty in terms of parameters of an underlying model. I am "sure" that the probability of the coin is 50/50 because I have an underlying model where I know that it's hard to get more information about the coin to resolve that uncertainty. But if I expect to soon gain more information, like if I'm told that it's a heavily biased coin and I just have to flip it a few times to find out which way it's biased, then I might talk about this expected gain of information by saying that the probability is only "on average" 0.5, and that I "expect it to be" either 0.1 or 0.9.
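(For concreteness, one way to cash this framing out is with a Beta distribution over the coin's bias; the specific parameter values below are only illustrative:)

```python
# Two beliefs with the same mean probability of heads but different underlying
# models. The parameter choices are illustrative, not canonical.
from scipy.stats import beta

# "Sure" it's a fair coin: a tight distribution centred on 0.5.
fair_coin = beta(a=500, b=500)

# "Heavily biased, but I don't yet know which way": mass near 0.1 and 0.9,
# still averaging 0.5.
biased_coin = beta(a=0.3, b=0.3)

for name, model in [("fair", fair_coin), ("biased-unknown", biased_coin)]:
    print(f"{name}: mean P(heads) = {model.mean():.2f}, "
          f"std of P(heads) = {model.std():.2f}")
```

Both beliefs put the mean at 0.5; it's the spread of the distribution, not the 0.5 itself, that carries the "meta" information.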

Really, I think talking about the model is quite natural, and once you've done that there's no extra thing that is the meta-uncertainty.

To measure something you need an operationalized definition and currently you don't have such a definition. 

Your degree of certainty about your degree of certainty. That's why it's called meta-certainty.

That doesn't operationalize what it means to have a degree of certainty over a degree of certainty. 

What does it mean to have certainty over a degree of certainty? How do you go about measuring whether or not the certainty is right?

What does it mean to have certainty over a degree of certainty?

When I say "I'm 99% certain that my prediction 'the die has a 1 in 6 chance of rolling a five' is correct", I'm having a degree of certainty about my degree of certainty. I'm basically making a prediction about how good I am at predicting.

How do you go about measuring whether or not the certainty is right?

This is (like I said) very hard. You can only calibrate your meta-certainty by gathering a boatload of data. If I give a 1 in 6 probability of an event occurring (e.g. a die roll returning a five), and such an event happens a million times, you can gauge how well you did on your certainty by checking how close the observed frequency was to your 1 in 6 prediction (maybe it happened more, maybe it happened less) and calibrate yourself to be more optimistic or pessimistic. Similarly, if I give a 99% chance of my probabilities (e.g. 1 in 6) being right, I'm basically saying: if the situation (e.g. you predicting that something has a 1 in 6 chance of occurring) came up a million times, you can gauge how well you did on your meta-certainty by checking how many of those 1-in-6 predictions turned out to be wrong. So meta-certainty needs more data than regular certainty. It also means that you can only ever measure it a posteriori, unfortunately. And you can never know for certain if your meta-certainty is right (the higher meta levels still exist after all), but you can get more accurate over time.
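Here is a rough simulation of that a posteriori check; the trial counts, the Gaussian spread I give the "true" probability, and the 1-percentage-point tolerance are all arbitrary choices made purely for illustration:

```python
# Simulated data only: no part of this reflects real predictions.
import random

random.seed(0)
TRIALS = 1_000_000

# Object level: I claimed P = 1/6 for an event. How often did it actually happen?
hits = sum(random.random() < 1/6 for _ in range(TRIALS))
print(f"claimed P = {1/6:.3f}, observed frequency = {hits / TRIALS:.3f}")

# Meta level: suppose that on many separate occasions I said "1 in 6", while the
# true probability varied around that value (a made-up Gaussian spread). How
# often was my stated probability close enough (here: within 1 percentage point)?
occasions = 10_000
close_enough = 0
for _ in range(occasions):
    true_p = random.gauss(1/6, 0.02)  # the unknown-to-me true probability
    close_enough += abs(true_p - 1/6) < 0.01
print(f"stated '1 in 6' was within 1pp of the truth {close_enough / occasions:.0%} of the time")
```

Note that the meta-level check already smuggles in a definition of "my probability was right" (the tolerance), which is exactly the sticking point discussed further down this thread.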

I'm not sure how far you want me to go with trying to defend measuring as a way of finding truth. If you have a problem with the philosophical position that certainty is probabilistic, or with the position of scientific realism in general, then this might not be the best place to debate this issue. I would consider it off topic, as I just accepted them as the premises for this post. Sorry if that was the problem you were trying to get at.

Basically you are not speaking about Bayesian probability but about frequentist probability? If that's the case it's quite good to be explicit about it when you post on LessWrong where we usually mean the Bayesian thing. 

In the sense the term probability is used in scientific realism, it's defined for well-defined empirical events either happening or not happening. "Event X has probability Y" however isn't an empirical event, and thus it doesn't have a probability the same way that empirical events do.

If it were easy to define a meta-certainty metric, it would be easy for you to reference a statistician who has properly defined such a thing, or a philosopher in the tradition of scientific realism. Even when it's intuitively desirable to define such a thing, it's not easy to create it.

Probability is easy to resolve when things have clear outcomes. I don't find it trivial to apply it to probability distributions. Say that you believe a coin has a 50% chance of coming up heads and a 50% chance of coming up tails. Later it turns out that the coin has a 49.9% chance of coming up heads, a 49.9% chance of coming up tails, and a 0.2% chance of coming up on its side. Does the previous belief count as a hit or miss for the purposes of meta-certainty? If I can't say what hits and misses are, then I can't get to ratios.

One could also mean that a belief like "probability of world war" could get different odds when asked in the morning, afternoon, or night, while dice odds get more stable answers. There, "belief professed when asked" has clear outcomes. But that is harder to link to the subject matter of the belief.

It could also point to an "order of defence" kind of thing: which beliefs would be first in line to be changed. A high degree of this kind could mean something like "this belief is so important to my worldview that I would rather believe 2+2=5 than disbelieve it". "Conviction" could describe it, but I think subjective degrees of belief are not supposed to point to things like that.

Does the previous belief count as a hit or miss for the purposes of meta-certainty?

A miss. I would like to be able to quantify how far off certain predictions are. I mean, sometimes you can quantify it, but sometimes you can't. I have previously made a question post about it that got very little traction, so I'm gonna try to solve this philosophical problem myself once I have some more time.
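As a very rough sketch of the kind of quantification I mean, one (arbitrary) option is to replace the binary hit/miss with a continuous distance between the believed and the actual distribution, here total variation distance applied to the coin example above:

```python
# One possible way to score a believed distribution against the true one
# continuously, rather than as a flat hit or miss. Total variation distance
# is just one arbitrary choice of distance.
def total_variation(p, q):
    """Half the sum of absolute differences between two discrete distributions."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outcomes)

believed = {"heads": 0.5, "tails": 0.5}
actual   = {"heads": 0.499, "tails": 0.499, "side": 0.002}

print(f"distance: {total_variation(believed, actual):.3f}")
```

A distance of 0.002 registers the belief as very nearly right instead of flatly wrong.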

One could also mean that a belief like "probability for world war" could get different odds when asked in the morning, afternoon or night while dice odds get more stable answers.

This could be a possible bias in meta-certainty that could be discovered (but isn't the concept of meta-certainty itself).

"conviction" could describe it but I think subjective degrees of belief are not supposed point to things like that.

Conviction could be an adequate word for it, but I'll stick with meta-certainty to avoid confusion. You could rank your meta-certainty in "order of defense", but I would start out explaining it in the way that I did in my response to ChristianKl.

Well, it clarifies that the first of the three kinds of directions was intended.

If that is a miss, what do hits look like? If I believe a coin is 50%/50%, at what point can I say that the distribution is "confirmed"? If a true distribution of 49.9999% vs 50.0001% counts as a miss, that would make almost all beliefs misses, with hits being rare theoretical possibilities. So, within rounding error, all beliefs that reference probabilities other than 1 or 0 have meta-certainty 0.

Note that in calculating p-values the null hypothesis is never declared a clear miss; there always remains a finite possibility that noise was the source of the pattern.

I was trying to convey the same problem, although the underlying issue has much broader implications. Apparently johnswentworth is trying to solve a related problem, but I'm currently not up to date with his posts, so I can't vouch for the quality. Being able to quantify empirical differences would solve a lot of different philosophical problems in one fell swoop, so that might be something I should look into for my master's degree.

This sounds like a confused mix of Knightian uncertainty and variance in a confidence interval.