When I was younger...
One thing to remember when talking about distinction/defusion is that it's not a free operation: if you distinguish two things that you previously considered the same, you must store at least one more bit of information than before. That demands effort and energy, and sometimes you need to store many more bits. You cannot simply become superintelligent by defusing everything in sight.
Sometimes, making a distinction is important, but some other times, erasing distinctions is more important. Rationality is about creating and erasing distinctions to achieve a more truthful or more useful model.
This is also why I vowed never to object that something is "more complicated" unless I can offer a better model: it is always very easy to inject distinctions; the hard part is making those distinctions matter.
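The information cost of a distinction can be made concrete with a toy calculation (my own illustration, not from the comment above): refining a classification from one set of distinguishable states to a finer one costs log₂ of the ratio in extra bits per item.

```python
import math

def extra_bits(old_categories: int, new_categories: int) -> float:
    """Additional bits needed per item when a classification is refined
    from `old_categories` distinguishable states to `new_categories`."""
    return math.log2(new_categories) - math.log2(old_categories)

# Splitting one concept into two is exactly 1 extra bit...
print(extra_bits(1, 2))     # 1.0
# ...and doubling the number of categories always costs 1 bit...
print(extra_bits(8, 16))    # 1.0
# ...but a much finer distinction costs many more bits.
print(extra_bits(2, 1024))  # 9.0
```

Each defusion is cheap on its own, which is exactly why injecting distinctions is easy; the cost accumulates across every concept you refine.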
I don't think you need the concept of evidence. In Bayesian probability, the concept of evidence is equivalent to the concept of truth, in two senses: P(X|X) = 1, so whatever you consider evidence is true; and P(X) = 1 --> P(A /\ X) = P(A|X), so you can treat true sentences as evidence without changing anything else.
Add to this that good rationalist practice is to never assume that anything has P(A) = 1, so that nothing is actually true or actually evidence. You can do epistemology exclusively in the hypothetical: what happens if I consider this true? And then derive the consequences.
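Both identities can be checked mechanically on a finite sample space; here is a minimal sketch (the events and names are mine, chosen for illustration):

```python
from itertools import product

# Sample space: three fair coin flips; events are sets of outcomes.
omega = set(product("HT", repeat=3))

def P(event):
    """Probability of an event under the uniform distribution."""
    return len(event & omega) / len(omega)

def P_cond(a, x):
    """P(A | X) by the ratio definition."""
    return P(a & x) / P(x)

A = {w for w in omega if w[0] == "H"}  # first flip heads
X = omega                              # an event with P(X) = 1

# P(X | X) = 1: whatever is treated as evidence is (conditionally) true.
assert P_cond(X, X) == 1

# P(X) = 1  =>  P(A /\ X) = P(A | X): probability-1 evidence changes nothing.
assert P(A & X) == P_cond(A, X) == P(A)
print("both identities hold")
```

Conditioning on X here is exactly the "hypothetical" move: nothing in the calculation requires X to be true, only that we compute as if it were.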
Well, I share most of your points. I think that in 30 years millions of people will try to relocate to more fertile areas, and I think that not even the firing of the clathrate gun will force humans to coordinate globally. Although I am a bit more optimistic about technology, the current status quo is broken beyond repair.
The fact is surprising when coupled with the fact that particles do not have a definite spin direction before you measure it. The anti-correlation is maintained non-locally, but the directions are decided by the experiment.
A better example is: take two spheres, send them far away, then make one sphere spin around any axis that you want. How surprised would you be to learn that the other sphere spins around the same axis in the opposite direction?
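The quantum version of that situation can be simulated with the standard singlet-state joint distribution (the sampler below is my own sketch; the formula P(x, y) = (1 − xy·cos(a − b))/4 is the textbook one for spin measurements along coplanar directions a and b):

```python
import math
import random

def singlet_measure(angle_a, angle_b, rng):
    """Sample one pair of +/-1 outcomes for a spin singlet measured along
    directions at angles angle_a and angle_b (in one plane).
    Joint distribution: P(x, y) = (1 - x*y*cos(angle_a - angle_b)) / 4."""
    c = math.cos(angle_a - angle_b)
    r = rng.random()
    for x in (+1, -1):
        for y in (+1, -1):
            p = (1 - x * y * c) / 4
            if r < p:
                return x, y
            r -= p
    return -1, -1  # numerical fallback, unreachable in exact arithmetic

rng = random.Random(0)
theta = rng.uniform(0, 2 * math.pi)  # axis chosen freely, "after separation"
outcomes = [singlet_measure(theta, theta, rng) for _ in range(1000)]

# Same axis on both sides: the outcomes are always opposite,
# even though no definite spin direction existed before measurement.
assert all(x == -y for x, y in outcomes)
print("perfect anti-correlation along a freely chosen axis")
```

The surprise is that the anti-correlation holds for *any* axis chosen at measurement time, which is exactly what a pre-agreed classical spin for each sphere cannot reproduce.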
How probable is it that someone knows their internal belief structure? And how probable is it that someone who knows their internal belief structure tells you about it truthfully, instead of using a self-serving lie?
The causation order in the scenario is important. If the mother is instantly killed by the truck, then she cannot feel any sense of pleasure after the fact. But if you want to say that the mother feels the pleasure during the attempt, or before it, then I would say that the word "pleasure" here is taking on the meaning of "motivation", and the points raised by Viliam in another comment apply: it becomes just a play on words, devoid of intrinsic content.
So far, Bayesian probability has been extended to infinite sets only as a limit of continuous transfinite functions. So I'm not quite sure of the official answer to that question.
On the other hand, what I know is that even common measure theory cannot talk about the probability of a singleton if the support is continuous: no sigma-algebra on 2^ℵ0 supports the atomic elements.
And if you're willing to bite the bullet and define such an algebra through the use of a measurable cardinal, you end up with an ultrafilter that allows you to define infinitesimal quantities.
Under the paradigm of probability as extended logic, it is wrong to distinguish between empirical and demonstrative reasoning, since classical logic is just the limit of Bayesian probability with probabilities 0 and 1.
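The limit claim can be illustrated on modus ponens (a toy calculation of mine, not from the comment): the product rule gives P(B) ≥ P(A ∧ B) = P(B|A)·P(A), so as the premises approach certainty, the conclusion is forced to certainty as well.

```python
def lower_bound_P_B(p_a: float, p_b_given_a: float) -> float:
    """Modus ponens as a probability bound:
    P(B) >= P(A and B) = P(B|A) * P(A)."""
    return p_b_given_a * p_a

# With uncertain premises the conclusion is merely probable...
print(lower_bound_P_B(0.9, 0.9))  # a bound close to 0.81

# ...and in the limit of probabilities 0 and 1, classical
# modus ponens is recovered exactly.
assert lower_bound_P_B(1.0, 1.0) == 1.0
```

In this sense "demonstrative" reasoning is just the endpoint of the same machinery that handles "empirical" reasoning, not a separate faculty.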
Besides that, category theory was born more than 70 years ago! Sure, very young compared to other disciplines, but not *so* young. Also, the work of Lawvere (the first to connect categories and logic) began in the '70s, so it dates back at least forty years.
That said, I'm not saying that category theory cannot in principle be used to reason about reasoning (the effective topos is a wonderful piece of machinery); it just cannot say that much right now about Bayesian reasoning.
Yeah, my point is that they aren't truth values per se, not intuitionistic or linear or MVs or anything else
I've also dabbled in the matter, and I have two observations: