My immediate thought was that static friction was keeping it up. Cohesion/adhesion being a factor would come later.
How did I come up with that idea immediately? Well... because "A block is stationary on a ramp because of friction, now do some math about the situation..." is a central intro physics problem. And, it's how I keep my pen from falling when I hold it. Now, you can have something tip without friction being enough to stop it (because there's not much rubbing against the wall or support), but it does get you in the general space of "those clingy forces".
Consider Hinduism. It doesn't have much dogma. It isn't about making claims about the world the way Christianity or Islam is. It's more like a catalog of Jungian archetypes and models for thinking about the world. A Hindu "God" isn't a cause of events in the world; it's more like a manifestation of or symbol for patterns of events.
Ex-Hindu here: while my parents weren't that strong on teaching me the religion, you are straightforwardly wrong in your characterization. Probably some Hindus are like that, perhaps even at a higher rate than the chiller sorts of Christianity (though I actually expect it to be the other way around: Hindus in India are more extreme on average than Hindus who have assimilated to Western culture). There were claims like "these gods literally exist", "these things are sins", "these prayers should be said", "these rituals should be done to remove the evil spirits".
This is especially clear when you consider more extreme Hindus, e.g. in India. Part of why vegetarianism is more common there is because of Hinduism, and the women in my family who weren't born in the US are much more likely to be vegetarian (the men typically stopped after a while here). Surely such a practice is evidence that Hinduism makes plenty of factual and moral claims?
Neither G nor -G is a theorem, but (G or -G) is a theorem.
As an analogy (whose depth I don't quite know), this is like treating the logical system as a person (whom I will name Peano) who, I can determine, will never figure out whether G is true or false. Peano, however, knows that G must be either true or false. He can know (G or -G) even though he can never know G and can never know -G.
The tricky thing, I think, is to not accidentally mix the meta levels.
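The "(G or -G) without either disjunct" point can be stated mechanically. A minimal sketch in Lean (using classical logic; `G` here is just an arbitrary proposition standing in for the Gödel sentence, not a formalization of it):

```lean
-- Excluded middle proves G ∨ ¬G for any proposition G,
-- without ever deciding which disjunct holds.
example (G : Prop) : G ∨ ¬G := Classical.em G
```

Note this proof gives no way to extract which side is true, which is the object-level analogue of Peano knowing the disjunction but neither disjunct.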
I think it definitely means that the bone breaks first if you load them under equal static stress, and, less confidently, that the diamond rod breaks first if you shock it suddenly. As an example from elsewhere: you don't want to make the cores of swords out of brittle materials, because they shatter, which is why cast iron sucks for swords. You want a ductile core that is tough (so it can absorb the energy of impacts) and a hard-but-possibly-brittle exterior (so that you can cut and keep an edge).
Plausibly this means you don't want structural materials made out of diamond even if you'd want your teeth to be, at least unless you need to sustain very high static loads. It looks to me like some bone (stuff like femurs) is optimized to be flexible.
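One way to sketch the static-load vs. shock distinction: under a linear-elastic approximation, the energy per unit volume a material absorbs before fracture is sigma_f^2 / (2E), so a very stiff material can be strong yet absorb little impact energy. This is an illustrative sketch of the general principle, not a materials-grade model; the function name is mine.

```python
def strain_energy_density(sigma_f: float, E: float) -> float:
    """Energy per unit volume (J/m^3) absorbed up to fracture, assuming
    linear elasticity: the area under the stress-strain line, sigma_f**2 / (2*E).
    sigma_f: fracture stress in Pa; E: Young's modulus (stiffness) in Pa."""
    return sigma_f ** 2 / (2 * E)

# For a fixed fracture stress, a stiffer material (larger E) absorbs less
# energy before breaking. A sudden shock dumps a fixed amount of energy
# into the part, which is why it favors tough, compliant materials
# (bone, ductile steel cores) over hard, brittle ones (diamond, cast iron).
```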
"10% of the year" is not a sensible way to measure your error. If I ask "When did X happen?" and you answer 2000 CE when the real answer was 2020 CE, there's a sense in which you are more wrong than if you answered 300 BCE when the real answer was 500 BCE. Even if you don't buy that, you probably don't think the sensible target should get narrower the closer the true date is to 1 CE, which is what measuring error as a fraction of the year number implies (the zero point of our calendar is arbitrary).
Whereas you are, in a meaningful sense, about as incorrect if you say 10 km when the real answer is 11 km as if you say 1 m when the real answer is 1.1 m.
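The asymmetry between distances and calendar dates can be made concrete with a plain relative-error calculation (a sketch; the numbers just mirror the examples above):

```python
def relative_error(pred: float, true: float) -> float:
    """Fraction by which the prediction misses, relative to the true value."""
    return abs(pred - true) / abs(true)

# Distances have a natural zero, so relative error is scale-free and sensible:
relative_error(10_000, 11_000)  # 10 km vs 11 km  -> ~0.0909
relative_error(1.0, 1.1)        # 1 m  vs 1.1 m   -> ~0.0909 (same miss)

# Calendar years have an arbitrary zero, so the same metric misbehaves:
relative_error(2000, 2020)      # 20-year miss   -> ~0.0099 ("barely wrong")
relative_error(-300, -500)      # 200-year miss  -> 0.4     ("very wrong")
```

The distance pairs score identically, while the date pairs score wildly differently purely because of where year 1 happens to sit.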
"How long have you had with the current biggest issue in your life?"
What does this mean? Is it "how long have I had the issue?" or "how long have I been trying to solve it?"?
"How many X would you trade for a Y?" You should be more specific. Am I to imagine that you are generously offering me either n extra units of X or one extra unit of Y, and I need to figure out how big n needs to be for me to be indifferent? (This is how I assumed you meant it).
I think I know of the trick you are talking about, in that there does seem to be an obvious pseudoprediction place in my mind that interfaces with motor output, and it's obviously different from actually believing, or from trying to believe. However, I mostly can't manage more than twitches or small motor movements, and it gets harder the more resistant I am to doing it (and thus less useful the more I would need it). If I'm thinking of the right thing, then my occasional failure to send the pseudoprediction to my muscles seems to cause various things I experience when I essentially can't get myself to do certain things (e.g. get out of bed). (Going by how people react to my more detailed descriptions, this phenomenon appears to be something very unusual about me.)
It feels to me like the same sort of "prediction" as my Inner Sim that visualizes what happens when I throw a ball at the wall - it's clearly distinct from what "I" believe.
I have separately also experienced the thing where I think the script says I ought to feel X and so I feel X, but that feels totally different to me. Possible exception: I recently (for completely unrelated reasons) had a panic attack (which are very rare for current me), and for a while after the big spike I would come close to having another one, partly from having that sort of pseudo-expectation of the hyperventilating and then accidentally causing it to actually happen, which would then threaten to launch me back into the panic attack. This might secretly be how the script thing works, though it doesn't feel like it to me.
This is where I disagree the most. I have not particularly noticed cognitive reflectivity decreasing my passions, though it's possible that most of the reflection I do is either when I have caught myself in the grips of some feeling that will likely be bad (e.g. getting angry at someone over a triviality, or having a panic attack), or when I am already in a pretty neutral state. Thus to me it mostly feels like the only passions made more distant are the ones that I wanted distanced.
I try to follow my positive passions when they occur, with some triggers for when seriousness or distance is necessary.
You seem to also observe the thing Eliezer does here. What's it like for you, when reflection makes you less happy?
The way I think about it is it's based on what I care about. I am in fact unwilling to do certain things to save the life of someone who is threatening suicide and blaming me, because I care more about myself, and I am fundamentally okay with caring about myself in that way. If, say, my best friend made some stupid mistake that put her at risk of great harm, I would be doing the heroic responsibility thing because I care a lot about the outcome.
It's fine to care about yourself! The principle of "I am obligated not to harm you, but not obligated to help you" is a fine one. The point of heroic responsibility is to see what I could do in cases where I do want to go all out to achieve some outcome.
Oliver Sacks has now been revealed (by his own admission) to have made up many of the details in his case studies, including the titular case of the man who mistook his wife for a hat (Twitter thread by Steven Pinker about the New Yorker article linked previously, since the article is paywalled). Here's an excerpt about a supposed quote from his journal:
Now, the claim you made about certain kinds of temporal lobe damage removing your ability to recognize objects may still be true. If you have another source for this, or if it's common medical knowledge, I would be interested in it.