The two states differ mathematically mainly in how they update. In the first case, one is confident in the bias of the coin, so the probability will not shift much as new evidence (e.g., coin flips) comes in. In the second case, the probability will shift as new evidence comes in.
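If it helps to see that concretely, here is a minimal Python sketch (mine, not from the post) of both updates, treating each state as a discrete prior over the bias $\theta = P(\text{heads})$ and reading the second state as a 50/50 mix of an "always heads" and an "always tails" hypothesis:

```python
# Minimal sketch: Bayes-updating a discrete prior over the coin's bias
# theta = P(heads). "Type 1" is a point mass on theta = 0.5; "Type 2" is
# read as two deterministic hypotheses, theta = 1 ("always heads") and
# theta = 0 ("always tails"), with 50% prior each.

def update(prior, flips):
    """Return the posterior over theta after observing flips ('H'/'T')."""
    post = dict(prior)
    for flip in flips:
        # Likelihood of this flip under each hypothesis about theta.
        unnorm = {theta: p * (theta if flip == "H" else 1 - theta)
                  for theta, p in post.items()}
        z = sum(unnorm.values())  # normalizer: sums over *all* hypotheses
        post = {theta: p / z for theta, p in unnorm.items()}
    return post

type1 = {0.5: 1.0}            # confident the coin is fair
type2 = {0.0: 0.5, 1.0: 0.5}  # unsure which way it lands

print(update(type1, "H"))  # {0.5: 1.0} -- the point mass never moves
print(update(type2, "H"))  # {0.0: 0.0, 1.0: 1.0} -- collapses onto "always heads"
```

The point-mass prior comes back unchanged no matter what is observed, while the two-hypothesis prior collapses after a single flip.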
As a general rule, insofar as humans are well-described as thinking probabilistically, our probabilistic models are little parts of a big world model. Those little parts don't just exist for e.g. one coin flip; they stick around after the coin is flipped and interact with the rest of the world model. So the way they update is an inherent part of their type signature; that's why little models which update differently feel different.
Another difference is the expectation for when the coin gets tossed more than once.
With "Type 1", if I toss the coin 2 times, I expect "HH", "HT", "TH", or "TT", each with 25% probability.
With "Type 2", I'd expect "HH" or "TT", each with 50% probability.
There seem to be (at least) two different types of uncertainty that feel very different from the inside:
I have a coin that I believe to be fair, so $P(\theta = 0.5) = 1$, where $\theta$ is the bias of the coin. In that case, I have 1 hypothesis in which I fully believe, and it assigns equal probabilities to the coin landing heads and landing tails.
I have a coin, and I'm unsure which way it will land, such that $P(\text{coin lands on tails}) = 0.5$ and $P(\text{coin lands on heads}) = 0.5$. In that case, I have 2 hypotheses which I am unsure about.
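One way to write these two states out, taking $\theta$ as the bias of the coin and reading the second case as an equal-weight mixture of an "always heads" coin ($\theta = 1$) and an "always tails" coin ($\theta = 0$):

$$\text{Type 1: } P(\theta = \tfrac{1}{2}) = 1 \;\Rightarrow\; P(\text{heads}) = \tfrac{1}{2}$$

$$\text{Type 2: } P(\theta = 1) = P(\theta = 0) = \tfrac{1}{2} \;\Rightarrow\; P(\text{heads}) = \tfrac{1}{2} \cdot 1 + \tfrac{1}{2} \cdot 0 = \tfrac{1}{2}$$

For a single flip, both states assign probability $\tfrac{1}{2}$ to heads.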
Being in Type 1 feels like reality containing randomness. "It could go one way, it could go the other way, whatever." In practice, we deal with it using epistemology, to get a better estimate of this probability, and expected utility maximization, to get the most out of what we know.
But being in the state of Type 2 uncertainty feels like holding two competing worldviews. It feels like two debaters, interchangeably stealing your brain hardware, each arguing for their position. And while you are in one worldview, the other one feels completely wrong and stupid and immoral, because of the perfectly sound arguments the current worldview gives you. Until you give the wheel to the other worldview, which debunks those arguments from the ground up. Each worldview argues for itself as the ultimate truth.
Now, mathematically, those two states give identical predictions for a single flip. But humans are not perfect Bayesians. We do not have immediate access to the sum of the plausibilities of all mutually exclusive hypotheses to use as a denominator for the current plausibility. To calculate $P(E \mid H)$, you only need to consider a single hypothesis, but to calculate $P(H \mid E)$ you need to consider all mutually exclusive, collectively exhaustive hypotheses. So from the inside, Type 2 feels like going back and forth between having absolute certainty in one belief and having absolute certainty in another.
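In symbols, for evidence $E$ and a set of mutually exclusive, collectively exhaustive hypotheses $H_1, \dots, H_n$:

$$P(H_i \mid E) = \frac{P(E \mid H_i)\, P(H_i)}{\sum_{j=1}^{n} P(E \mid H_j)\, P(H_j)}$$

The numerator only requires the hypothesis you are currently inhabiting; the denominator requires holding the whole partition in mind at once.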
To understand that you are actually in a state of Type 2 uncertainty, to step into the outside frame of reference, is to introduce a new debater to the table. And this debater is in a state of Type 1 uncertainty: he gives some probability to the first worldview and some to the second. We, being aspiring rationalists, could try to give that guy a bit more credence, because he does offer some benefits (epistemology, utility maximization). But those original worldviews didn't go anywhere. They will start arguing with this guy too. They will try to shake his "trying to please everyone" attitude and try to invalidate all the benefits he offers ("what even is 'truth'?", "utilitarianism is evil"). And, of course, they will ask "why are you supporting this monster?" (meaning the other worldview).
I am not sure what to make of this. Writing this post and being in a meta-meta state relative to the object-level hypotheses, unsure which type of uncertainty to use, I think that I personally would benefit from introducing the outside debater more often. But sometimes it can be harmful to play Devil's advocate: some worldviews aren't worth debating and arguing with.