It seems tendentious to call this a "skill issue". Predicting the motions of atoms in a gas will go wrong after 30 or 40 collisions if one's measurement of initial conditions gets the local gravitational gradient wrong by an amount equivalent to displacing one electron mass by one Planck length at the edge of the visible universe. I estimate this change in gradient, relative to the gradient of the Earth's gravity at its surface, to be about 1 part in
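The scale of that number can be sketched numerically. This is a back-of-envelope calculation under assumed values (observable-universe radius ≈ 4.4e26 m, point-mass approximation), not the author's original arithmetic:

```python
# Change in gravitational field at Earth caused by displacing one
# electron mass by one Planck length at the edge of the visible
# universe, relative to Earth's surface gravity.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.109e-31   # electron mass, kg
l_P = 1.616e-35   # Planck length, m
r   = 4.4e26      # radius of the observable universe, m (assumed)
g   = 9.81        # Earth's surface gravity, m/s^2

# The field of a point mass m at distance r is G*m/r^2; displacing the
# mass by a small delta changes it by roughly 2*G*m*delta/r^3.
delta_g = 2 * G * m_e * l_P / r**3
print(f"relative change: {delta_g / g:.1e}")  # astronomically tiny
```

Whatever the exact exponent, it is far beyond any conceivable measurement precision, which is the point.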
This is not, in any reasonable sense, "in principle" computable, and it is not a "skill issue" if you can't do it. I suspect that a Laplacian demon bent on predicting thermal noise would immediately collapse into a black hole from the effort.
It's not right to say that the Copenhagen interpretation means that "only quantum mechanics" is aleatory. First of all, QM describes all physical phenomena, so presumably what you meant was "only microscopic phenomena". But this is not right either: chaotic dynamical systems amplify microscopic differences into macroscopic differences, and therefore turn microscopic aleatory randomness into macroscopic aleatory randomness. There may even be enough chaos in a coin flip to make it aleatorily random.
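The micro-to-macro amplification is easy to see in the simplest chaotic system there is. A minimal sketch (my illustration, not the commenter's) using the logistic map at r = 4, where a perturbation roughly doubles every step, so an error of one part in 10^15 reaches order one within about 50 iterations:

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x0 = 0.3
a = logistic(x0, 60)          # "true" trajectory
b = logistic(x0 + 1e-15, 60)  # same, measured with a femto-scale error
print(abs(a - b))  # the trajectories are no longer anywhere near each other
```

Any aleatory randomness at the 15th decimal place of the initial condition is, by this mechanism, aleatory randomness in the macroscopic outcome.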
Yes, most people who are somewhat familiar with quantum physics still massively underestimate how fast these effects propagate.
Even in the Many Worlds model, it's ridiculous to say that failing to predict things is a skill issue. A successful prediction in such a model doesn't just mean that you successfully predict "both happen, in some branch or other"; it means that your predictions (which are quantum phenomena like everything else) are perfectly matched with the future measurements in every branch. Under such a model, given some very weak conditions that are easily observed to hold in practice, this is impossible.
A similar objection applies to a Bohmian interpretation.
Having a model doesn't guarantee that you can predict anything using that model. It is in principle impossible to realize a physical system that does some of the measurements and computations you're asking for...
In the 'code is law' sense, your model cannot predict anything not in its laws. The laws of physics are rarely included. Epistemic uncertainty is bounded by the model's laws; aleatoric uncertainty is what lies outside the model but within the universe.
Aleatoric Uncertainty Is A Skill Issue
Epistemic status: shitpost with a point
Disclaimer: This grew out of a conversation with Claude. The ideas are mine; the writeup is LLM-generated and then post-edited to save time and improve the flow.
You know the textbook distinction. Epistemic uncertainty is what you don't know. Aleatoric uncertainty is what can't be known — irreducible randomness baked into the fabric of reality itself.
Classic examples of aleatoric uncertainty: coin flips, dice rolls, thermal noise in sensors, turbulent airflow.
Here's the thing though.
A Laplacian demon predicts all of those.
Every single "classic example" of aleatoric uncertainty is a system governed by deterministic classical mechanics where we simply don't have good enough measurements or fast enough computers. The coin flip is chaotic, sure. But chaotic ≠ random. A sufficiently precise demon with full knowledge of initial conditions, air currents, surface elasticity, gravitational field gradients, and your thumb's muscle fiber activation pattern will tell you it's heads. Every time.
The thermal noise? Deterministic molecular dynamics. The dice? Newtonian mechanics with a lot of bounces. Turbulence? Navier-Stokes is deterministic; we just can't solve it well enough.
The Laplacian demon doesn't have aleatoric uncertainty. It's just that mortals have skill issues and are too proud to admit it.
So what's actually irreducibly random?
Quantum mechanics. That's it. That's the list.
And even that depends on your interpretation:
Under Many-Worlds and Bohmian mechanics, all uncertainty is epistemic.* The universe is fully deterministic. There are no dice. There is no irreducible randomness. There is only insufficient information.
Under Copenhagen, there is exactly one source of genuine aleatoric uncertainty: quantum measurement. Everything else that textbooks call "aleatoric" is a Laplacian demon looking at your sensor noise model and saying "get wrecked, scrubs."
The Real Problem: Lack of Epistemic Humility
Here's what actually bothers me about the standard framing. When you label something "aleatoric," you're making an ontological claim: this randomness is a property of the world. But in almost every classical case, it's not. It's a property of your model's resolution. It's noise in your world model that you're projecting onto reality.
And then you refuse to label it as such.
Think about what's happening psychologically. "It's not that my model is incomplete — it's that the universe is inherently noisy right here specifically where my model stops working." How convenient. The boundary of your ignorance just happens to coincide with the boundary of what's knowable. What are the odds?
The aleatoric/epistemic distinction, as commonly taught, isn't really a taxonomy of uncertainty. It's a taxonomy of accountability. Epistemic uncertainty is uncertainty you're responsible for reducing. Aleatoric uncertainty is uncertainty you've given yourself permission to stop thinking about. The label "irreducible" isn't doing technical work — it's doing emotional work. It's a declaration that you've tried hard enough.
And look, sometimes you have tried hard enough. Sometimes it's correct engineering practice to draw a line and say "I'm modeling everything below this scale as noise." But at least be honest about what you're doing. You're choosing a level of description. You're not discovering a fundamental feature of reality. The universe didn't put a noise floor there. You did.
"But This Distinction Is Useful In Practice!"
Yes! I agree! I'm not saying we should stop using the word "aleatoric" in ML papers and engineering contexts. When you're building a Bayesian neural network and you separate your uncertainty into "stuff I could reduce with more training data" vs. "inherent noise floor I should model as a variance parameter," that's a genuinely useful decomposition. You would, in fact, go completely insane trying to treat thermal noise in your LIDAR as epistemic and heroically trying to learn your way out of it.
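The decomposition this usage points at can be sketched in a few lines. Assuming a deep-ensemble-style setup with made-up numbers (one common convention, not the only one): each ensemble member predicts a mean and a variance for the same input; epistemic uncertainty is the disagreement between the members' means, aleatoric is the average noise the members attribute to the data itself.

```python
import statistics

# Hypothetical ensemble: each member predicts (mean, variance) for the
# same input, e.g. heads of a deep ensemble or MC-dropout samples.
predictions = [(2.9, 0.25), (3.1, 0.25), (3.0, 0.30), (3.2, 0.20)]

means = [m for m, _ in predictions]
variances = [v for _, v in predictions]

epistemic = statistics.pvariance(means)  # member disagreement:
                                         # shrinks with more training data
aleatoric = statistics.fmean(variances)  # modeled noise floor:
                                         # shrinks only with a better sensor
total = epistemic + aleatoric            # law-of-total-variance split
print(epistemic, aleatoric, total)
```

More data drives the first term toward zero; the second term is exactly the "variance parameter" you chose to stop explaining.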
The pragmatic framing does real work: aleatoric = "uncertainty I'm choosing to treat as irreducible at this level of description." That's fine. That's good engineering.
But let's stop pretending it's a deep metaphysical claim about the nature of reality. It's not. It's a statement about where you've chosen to draw the line on your modeling resolution. The universe (probably) isn't random. Your model is just too coarse to be a demon.
The Punchline
* Unless you count "not knowing which branch you're on" as a new, secret third thing.
tl;dr: Aleatoric uncertainty is a skill issue. The Laplacian demon has no variance term. The only candidate for genuine ontological randomness is quantum mechanics, and half the interpretations say even that's deterministic. Your "irreducible noise" is just you being bad at physics and too proud to admit uncertainty in your model.
By the way: I may be the only one, but I was genuinely confused about this topic for years. I took the definition of aleatoric uncertainty literally and couldn't understand what the professors were on about when they called coin flips aleatoric. None of the examples they gave were actually irreducible.