Epistemic status: I'm pretty sure I'm not the first person who has thought of this concept, but I couldn't find it online, so I'm posting it here just in case.
If you are a regular user of this site, then you're probably a proponent of the probabilistic model of certainty. You might even use sites like Metaculus to record your own degrees of certainty on various topics for later reference. What I don't see is people measuring their meta-certainty. (Meta-certainty is your degree of certainty about your degree of certainty, which you can test by making predictions about how accurate your predictions are going to be.)
I think it's pretty obvious that such a thing as "meta-certainty" exists. When I predict there is a one in six chance of my die rolling a five, and I also predict there is a one in six chance of world war three happening before 2050, that doesn't automatically mean the two predictions are equal. I feel more certain that I guessed the actual probability of the die roll correctly than I do about my probability estimate of world war three happening. In other words: my meta-certainty about the die roll is higher.
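One way to make this intuition concrete (my own framing, not a standard one) is to treat your certainty as a point estimate and your meta-certainty as how tightly concentrated your distribution over that estimate is. As a sketch, both predictions below have the same mean of 1/6, modeled as Beta distributions, but very different spreads; the numbers are made up for illustration:

```python
from math import sqrt

def beta_stats(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Same point estimate (1/6), very different meta-certainty:
# the concentration (a + b) encodes how sure I am about the estimate.
dice_mean, dice_sd = beta_stats(100, 500)  # tight: I trust the 1/6
ww3_mean, ww3_sd = beta_stats(1, 5)        # loose: 1/6 is a rough guess

print(dice_mean, ww3_mean)   # both are 1/6
print(dice_sd < ww3_sd)      # True: the die estimate is more meta-certain
```

The two means are identical, so an ordinary certainty log would record these predictions as the same; only the spread captures the difference the dice/world-war example points at.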
The problem is that I find it much harder to figure out my meta-certainty estimate than my certainty estimate. This might be because human beings are inherently bad at guessing their own meta-certainty, or it might be because I have never trained myself to reflect on my meta-certainty in the same way that I've trained myself to reflect on my regular certainty.
Why care about meta-certainty?
So why should we care about meta-certainty? Well, the most obvious answer is science. By measuring meta-certainty we could learn more about the human brain: how humans learn, how we reflect on our own thoughts, and so on.
But maybe non-psychologists should be interested in this too. If you have a concrete model of how certain you are about your certainty, you can more reliably decide when and where to search for more evidence. "My meta-certainty about X is very low, so maybe there is some low-hanging fruit of data that might quickly change that." "I said that I was very meta-confident about X, but that is contingent on Y being true, which I'm not very meta-confident about. Did I make a mistake, or am I missing something?"
I think it could also reveal some more biases. I'm willing to bet that people are overly meta-confident about their political beliefs, but I'm not sure in which other domains my brain is meta-overconfident. This could also help us in heuristics research.
It's really hard to measure this
My friends tell me that putting a percentage on their certainty is hard, even ridiculous. I've always found it doable and important, but my endeavor to do the same with my meta-certainty has certainly made me sympathize with my friends more. Maybe this is a part of certainty that is too hard for us to intuitively put an accurate percentage on. You can tell me in the comments if you don't find it more difficult, but I suspect most will agree with me. I also see less reason why evolution would select for creatures that know their own meta-certainty than for creatures that know their object-level certainty. But even if it is more difficult, we can quantify the differences in a more indirect way. I've tried to use words like "almost certain", "very likely", "likely", "more likely than not", etc., to discover a posteriori what the actual probabilities behind my intuitions are.
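The indirect approach above can be sketched in a few lines: log each prediction with its verbal label and, once outcomes are known, compute the empirical hit rate per label. The log below is entirely made-up illustrative data:

```python
from collections import defaultdict

# Hypothetical log of (verbal confidence label, did it happen?) pairs.
log = [
    ("almost certain", True), ("almost certain", True),
    ("almost certain", False), ("very likely", True),
    ("very likely", False), ("likely", True),
    ("more likely than not", False), ("more likely than not", True),
]

counts = defaultdict(lambda: [0, 0])  # label -> [hits, total]
for label, happened in log:
    counts[label][0] += happened      # True counts as 1
    counts[label][1] += 1

for label, (hits, total) in counts.items():
    print(f"{label!r}: {hits}/{total} = {hits / total:.2f}")
```

After enough entries, the observed frequencies tell you what "very likely" actually means in your mouth, without ever having to introspect a number directly.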
I unfortunately can't share any insights yet, since I only started doing this recently and have been doing it pretty inconsistently. If sites like Metaculus gave the option to always register your meta-certainty, it would help people record it and would quickly give us large swaths of data to compare. I suspect most people would start out producing a nice bell curve with certainty on one axis and meta-certainty on the other, but who knows; maybe it will turn out that meta-certainty is asymmetric for some reason.
Figuring out what degree of certainty was "correct" for a situation is very, very hard and requires a lot of (a posteriori) data. Figuring out the "correct" degree of meta-certainty will probably take even longer. I think that even if we get really good at measuring meta-certainty, it will never be as precise as our object-level certainty. But even in a rough version (with e.g. steps of 10% instead of 1%) it could give us some interesting insights into our psyche.
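The rough 10%-step version is easy to operationalize: snap each stated probability into a 10%-wide bin and compare the bin to the observed frequency. A minimal sketch, again with invented data (probabilities stored as whole percents to keep the binning exact):

```python
# Hypothetical log: (stated probability in percent, outcome).
predictions = [(95, True), (90, True), (85, False), (70, True),
               (65, True), (60, False), (30, False), (25, True), (10, False)]

bins = {}  # 10%-wide bins: key 90 covers 90-99%, etc.
for pct, outcome in predictions:
    key = pct // 10 * 10
    hits, total = bins.get(key, (0, 0))
    bins[key] = (hits + outcome, total + 1)

for key in sorted(bins):
    hits, total = bins[key]
    print(f"stated {key}-{key + 9}%: observed {hits}/{total} = {hits / total:.0%}")
```

A well-calibrated forecaster's observed frequencies should land near the middle of each bin; persistent gaps at this coarse resolution are exactly the kind of insight the rough version could already surface.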
What about meta-meta-certainty?
So does meta-meta-certainty exist? Sure! When I'm drunk, I might think to myself that I should be more uncertain about my meta-certainty than my sober self would be. When I know I'm cognitively impaired, I give myself a lower meta-meta-certainty. The problem is that meta-meta-certainty might bleed into the lower levels.
While measuring meta-certainty can help us discover more biases and make better predictions, it is ultimately less important than measuring regular certainty. Having a rough framework of your own meta-certainty might be useful, but I can't confidently say the same about any meta-levels above it. I would like websites like Metaculus to add the option of recording your meta-certainty, though steps of ten (0%, 10%, 20%, ...) might be enough if they want to conserve bandwidth. Meta-certainty is also useful when you want to improve your calibration: making a distinction between easier and harder predictions helps you realize how good you really are at predicting.
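That last point can be sketched too: score your predictions separately by meta-certainty group, using e.g. the Brier score (mean squared error between stated probability and outcome). The grouping into "high" and "low" meta-certainty and the data are hypothetical:

```python
# Hypothetical predictions: (stated probability, outcome, meta-certainty group).
preds = [
    (0.9, True, "high"), (0.8, True, "high"), (0.6, True, "high"),
    (0.7, False, "low"), (0.4, True, "low"), (0.2, True, "low"),
]

def brier(items):
    """Mean squared error between stated probability and outcome (0/1)."""
    return sum((p - o) ** 2 for p, o, _ in items) / len(items)

for group in ("high", "low"):
    subset = [x for x in preds if x[2] == group]
    print(f"{group} meta-certainty: Brier = {brier(subset):.3f}")
```

If the "high" group reliably scores better than the "low" group, your meta-certainty is tracking something real; if the two scores look the same, your sense of which predictions are the hard ones isn't adding information.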