Epistemological status: these are speculative thoughts I had while trying to improve understanding. Not tested yet.
What differentiates understanding from non-understanding?
When you pull a door towards you, you predict it will move towards you in a particular way. You can visualize the movement in your mind's eye. Similarly, the door having been pulled, you can infer what caused it to end up there.
So, starting from a cause, you can predict its effects; starting from an effect, you can infer its cause.
Let's call that understanding. You instantiate a causal model in your mind; you see how a change in one part of the model affects the rest, and you see what changes have to occur to reach a desired state of the model. The speed and accuracy with which you can predict effects or causes, the number of changes you know how to propagate, and the number of goal states you know how to reach, are the depth of your understanding.
Conversely, non-understanding would be not being able to visualize what happens when you pull the door, or not having any idea how the pulled door got to where it stands.
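To make the door case concrete, here is a minimal sketch of the kind of causal model I mean, as a toy Python dictionary. The edges are made up purely for illustration; an actual mental model is obviously far richer than a lookup table.

```python
# A toy "causal model": a directed graph of cause -> effects, supporting
# forward prediction (cause to effects) and backward inference (effect to causes).
# The edges are invented for illustration only.
from collections import defaultdict

causes_to_effects = {
    "pull door handle": ["door swings toward you"],
    "door swings toward you": ["gap opens", "hinges rotate"],
    "push door handle": ["door swings away from you"],
}

# Reverse index, so we can also go from an observed effect back to its causes.
effects_to_causes = defaultdict(list)
for cause, effects in causes_to_effects.items():
    for effect in effects:
        effects_to_causes[effect].append(cause)

def predict_effects(cause: str) -> list[str]:
    """From a cause, list its direct effects (empty list = non-understanding)."""
    return causes_to_effects.get(cause, [])

def infer_causes(effect: str) -> list[str]:
    """From an effect, list the causes that could have produced it."""
    return effects_to_causes.get(effect, [])

print(predict_effects("pull door handle"))     # ['door swings toward you']
print(infer_causes("door swings toward you"))  # ['pull door handle']
print(predict_effects("kick door"))            # [] -> no model, no understanding
```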
So how do we go from non-understanding to understanding?
Say you don't know/understand what happens when you pull a door... Then you pull a door.
Now you understand what happens.
Why is that? Well, your brain has native hardware that understands cause-effect models on its own. You just need reality to shove the relationship in your face hard enough, and your brain will go "ok, seems legit, let's add it to our world-model".

What about the opposite? Coincidences that happen with enough regularity that a superstition or inaccurate causal model forms. At a train station, I once saw a three-year-old swiping their palm on the glass of an advertisement that had a paper loop rotating on a timer. The child thought there was a causal connection between the palm gesture, probably learned on a tablet like an iPad, and the movement of the paper. Because they kept swiping, and the advertisement rotated pretty quickly, for a while at least they thought they were controlling it. They weren't. That is not understanding, but I don't see how it is different from the door example: the sensory data of reality matches expectations or patterns.

Those who knew Miasma Theory would have been said to 'understand' the causes of disease. From a modern perspective, they didn't.

What is interesting is that we can understand things we know to be false. We can understand a fantasy story. If Carl is jokingly raising a middle finger to Andy, who is standing next to Blane, Blane may mistakenly think Carl is being rude to him, but Andy may "understand" how Blane made that mistake. We can understand how Luke Skywalker blasting womp rats gave him the confidence to bring down the Death Star. There is no Luke Skywalker and no Death Star. They are not "true" in the sense that they are not "real". But the story can be understood.
Now let's consider a mathematical proof. You follow all the logical steps, and you agree, "sure enough, it's true". But you still feel like you don't really grok it. It's not intuitive, you're not seeing it.
What's going on? Well, this is still a brand-new truth: you haven't practiced it much, so it has not become part of your native world-model. And unlike more native things, it is abstract. So even if you try to pattern-match it to existing models through analogies, to make it feel more real and more native, such as "oh, light behaves like waves", it doesn't work that well.
This usually goes away naturally the more you actually use the abstract concept: your brain starts to accept it as native and real, and eventually light behaves like light. It even feels like it was always that way.
Ok, but what can we actually do with all this?
Consider a complicated math equation. There are symbols you do not understand. However, you do know this is math.
What's the algorithm to go from non-understanding it to understanding it?
Steps:

1. Scan the equation and notice every piece you don't understand.
2. Rank those pieces: which would give you the most information for the least effort?
3. Try to figure out the top piece by yourself first.
4. If you can't, look up a brief answer, then move on.
5. Repeat until the whole equation makes sense.
An important part of this understanding algorithm is being meticulous about noticing what you don't understand. The issue is that there's probably a bunch of stuff you don't understand, and you have limited time, so you need to become really good at ranking. My hope is that with practice these two skills become second nature, and then at a glance you're able to see which pieces you don't understand, guesstimate the most important among them along with the cost of analyzing them, and prioritize from there.
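As a toy illustration of this noticing-and-ranking step, here is a small sketch; the gap names and scores are invented placeholders standing in for the quick gut estimates you would actually make.

```python
# Sketch of the ranking step: list the things you noticed you don't understand,
# guess their value and cost, and study the best value-per-cost first.
# All items and numbers below are invented placeholders.
from dataclasses import dataclass

@dataclass
class Gap:
    name: str
    expected_info: float  # rough guess of how much this would unlock (arbitrary units)
    cost: float           # rough guess of minutes needed to figure it out

gaps = [
    Gap("what the nabla symbol means", expected_info=8, cost=5),
    Gap("why there is a square root of N", expected_info=5, cost=15),
    Gap("what space x lives in", expected_info=9, cost=10),
]

# Highest expected information per minute first.
for gap in sorted(gaps, key=lambda g: g.expected_info / g.cost, reverse=True):
    print(f"{gap.name}: {gap.expected_info / gap.cost:.2f} info/min")
```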
This approach has the huge added benefit of being very active, and thus motivating. Keeping a tight feedback loop is probably a key point, and so is trying to understand by yourself before searching. As for the search part, you might want to experiment with an LLM pre-prompt so that it gives you a brief answer to any question you ask. Maybe a "no thinking" mode, or even a local LLM, is better for short latency, to tighten the feedback loop.
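For what it's worth, the pre-prompt could look something like the sketch below. The `complete` callable and the stub model are hypothetical stand-ins for whatever local or hosted LLM you actually plug in; only the prompt text is the point.

```python
# Sketch of a "brief answers only" pre-prompt. `complete` is any function that
# takes a prompt string and returns the model's answer; the stub below just
# makes the sketch runnable on its own.

BRIEF_ANSWER_PREPROMPT = (
    "Answer in at most two sentences. No preamble, no caveats, no follow-up "
    "questions. If the question is ambiguous, pick the most common "
    "interpretation and answer that."
)

def ask(question: str, complete) -> str:
    """Prepend the brevity pre-prompt and delegate to the model of your choice."""
    return complete(f"{BRIEF_ANSWER_PREPROMPT}\n\nQuestion: {question}")

# Hypothetical stub model; replace with a real completion function.
fake_model = lambda prompt: "The nabla symbol denotes the gradient operator."
print(ask("What does the nabla symbol mean in this equation?", fake_model))
```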
The key principle behind this understanding algorithm is fairly simple. Suppose you understand that A causes B and that B causes C, and either you can hold both statements in your mind, or you have practiced them enough that each stands as a compressed, second-nature pointer you can refer to compactly, or you can follow the steps one by one and accept the logical conclusion without holding the whole chain in your mind. Then you understand, to some degree, that A causes C.
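In graph terms, that chaining is just reachability: a sketch, with a made-up three-node graph.

```python
# Sketch of the chaining principle: if the graph contains A -> B and B -> C,
# then C is reachable from A, even without holding the whole chain in mind at once.
# The three-node graph is a made-up toy example.
causes = {
    "A": ["B"],
    "B": ["C"],
    "C": [],
}

def reachable_effects(start: str) -> set[str]:
    """Follow cause -> effect links step by step, collecting everything downstream."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for effect in causes.get(node, []):
            if effect not in seen:
                seen.add(effect)
                stack.append(effect)
    return seen

print(reachable_effects("A"))  # {'B', 'C'}: so, to some degree, you understand that A causes C
```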
The kind of causal understanding I am talking about is just a big graph of cause-effect relationships. To understand the graph as a whole, you need to understand enough of the individual cause-effect relationships. To learn efficiently, you need to focus on the cause-effect relationships that give you, personally, the most information for the least effort. And if you want to learn fast, you need to develop these noticing and prioritizing skills until you are good and fast at both.
I have heard that not that many concepts are needed to understand complex ideas or complex proofs. That's encouraging. It may be that by perfecting this learning technique one could learn extremely fast, and stumble across new insights as well.
Performance on this task should be measured. How many seconds does it take to learn a concept on average? One concept every 5 minutes? Can we tighten the loop and go lower? One concept a minute? One concept every 30 seconds? Maybe not: that would be 120 concepts per hour, which is apparently wildly biologically implausible, since the brain needs to consolidate memories, there can be interference issues, and so on. But investigating the limits sounds like fun anyway. Also, consider that the more concepts we learn, the higher the probability that the brain will auto-unlock a bunch of related concepts, so who knows?
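If you do want to measure it, even something as crude as this toy timer would do: hit Enter whenever a concept clicks, and look at the average seconds per concept.

```python
# Toy instrument for timing the learning loop: press Enter each time a concept
# "clicks", and print the running average of seconds per concept.
import time

timestamps = []
print("Press Enter each time you just understood a concept; Ctrl+C to stop.")
try:
    while True:
        input()
        timestamps.append(time.monotonic())
        if len(timestamps) >= 2:
            deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
            average = sum(deltas) / len(deltas)
            print(f"{len(timestamps)} concepts, {average:.0f} s/concept on average")
except KeyboardInterrupt:
    pass
```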
Learning this fast, what could one learn, and what could one do?
Should you read books?
Reading books is like getting lots of answers to questions you don't ask.
The great thing is that you get lots of data very fast, in the sense that you don't have to go through the steps of noticing what you don't understand and looking for answers. It also helps you discover unknown unknowns.
The bad thing is that the data may not be informative to you, for instance if you already know some of it, or if you don't understand it and then have to run the understanding algorithm on the book anyway. Also, since you're not the one asking the questions, it can become boring or tedious, and that sure doesn't help with absorbing data.
From that, I'd say engaging introductory books and documentaries on subjects you don't know, read to get a feel for a field, are probably the most efficient.