A quite different way to define a fuzzy/probabilistic subset relation:
Assume the sets $A$ and $B$ are events (sets of possible disjoint outcomes). Then $A \subseteq B$ iff $P(B \mid A) = 1$. This suggests that a probabilistic/partial/fuzzy "degree of subsethood" of $A$ in $B$ is simply equal to the probability $P(B \mid A) = \frac{P(A \cap B)}{P(A)}$.
This value is 1 if $A$ is completely inside $B$, reducing to conventional crisp subsethood, and 0 if $A$ is completely outside $B$. It is 0.5 if $A$ is "halfway" inside $B$. Which seem pretty intuitive properties for fuzzy subsethood.
Additionally, the value itself has a simple probabilistic interpretation -- the probability that an outcome is in $B$ given that it is in $A$.
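For finite sets of equally likely outcomes, this degree of subsethood reduces to counting. A minimal sketch, assuming a uniform distribution (the function name is mine, just for illustration):

```python
def subsethood(a: set, b: set) -> float:
    """Degree to which a is a subset of b, i.e. P(B | A),
    assuming all outcomes are equally likely."""
    if not a:
        raise ValueError("undefined for an empty (probability-zero) a")
    return len(a & b) / len(a)

print(subsethood({1, 2}, {1, 2, 3}))  # 1.0 -- crisp subset
print(subsethood({1, 2}, {3, 4}))     # 0.0 -- completely outside
print(subsethood({1, 2}, {2, 3}))     # 0.5 -- "halfway" inside
```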
Instead of saying "This sentence doesn't have truth value 1, nor 1/2, nor 1/4, ..." (which, even if infinitely long, would only work for countably many truth values), you could simply say "This sentence has truth value 0", which is just as paradoxical, but the paradox also works for real-valued or hyperreal truth values.
I tried to find other examples, but apparently only American English uses the American style, while all(?) other languages use the British style, which should probably be called the "Non-American" style.
In particular, I find the definitions confusing. Which one is logically correct?
I would say 1 is the least incorrect one, but it still has issues. The problem is that in the phrase "A means B", A refers to a word (a string of letters), but B doesn't refer to a string of letters. It seems to refer to the concept B is expressing, to a meaning.
The problem with 1 seems to be that quotation can either refer to a word or to the meaning of a word. Let's say double quotes refer to a word/phrase, while single quotes refer to the meaning of that phrase. Then the correct expression of the above would be this: "A" means 'B'.
That seems to make perfect logical sense.[1]
To clarify, there is a common distinction between
a) term / sign / symbol / word,
b) meaning / intension,
c) reference object / extension.
An unquoted term refers to c), a quoted term refers to either a) or b). Hence my double / single quote disambiguation.
Is there an interpretation of KL divergence which works for subjective probability (credence functions) where there is no concept of "true" or "false" distribution? And even for an objective interpretation, the term "cost" seems to be external to probability theory.
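For reference, and in my notation rather than anything from the original post: the usual "cost" reading comes from coding theory, where the divergence is the expected number of extra bits paid for encoding samples from $P$ with a code optimized for $Q$:

$$D_{\mathrm{KL}}(P \parallel Q) = \sum_x P(x)\log_2\frac{P(x)}{Q(x)} = \underbrace{\sum_x P(x)\log_2\frac{1}{Q(x)}}_{\text{expected code length using }Q} - \underbrace{\sum_x P(x)\log_2\frac{1}{P(x)}}_{\text{entropy of }P}$$

This makes the "cost" explicit, but it is indeed a coding-theoretic notion layered on top of probability theory, not something internal to it.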
From the standpoint of hedonic utilitarianism, assigning a higher value to a future with moderately happy humans than to a future with very happy AIs would indeed be a case of unjustified speciesism. However, in preference utilitarianism, specifically person-affecting preference utilitarianism, there is nothing wrong with preferring our descendants (who currently don't exist) to be human rather than AIs.
PS: It's a bit lame that this post had -27 karma without anybody providing a counterargument.
This is also why various artists don't necessarily try to make Tolkien's Orthanc, Barad-dûr, Angband, etc. look ugly, but rather imposing and impressive in some way. Even H.R. Giger's biomechanical landscapes could be described as aesthetic. Or the crooked architecture in The Cabinet of Dr. Caligari (1920). Architecture is art, and art doesn't have to be beautiful or pleasant, just interesting. But presumably nobody would like to actually live in a Caligari-like environment. (Except perhaps people in the goth subculture?)
I don't think this is a fallacy. If it was, one of the most powerful and common informal inference forms (IBE a.k.a. Inference to the Best Explanation / abduction) would be inadmissible. That would be absurd. Let me elaborate.
IBE works by listing all the potential explanations that come to mind, subjectively judging how good they are (with explanatory virtues like simplicity, fit, internal coherence, external coherence, unification, etc.), and then inferring that the best explanation is probably correct. This involves the assumption that the probability is small that the true explanation is not among those which were considered. Sometimes this assumption seems unreasonable, in which case IBE shouldn't be applied. That's mostly the case if all considered explanations seem bad.
However, in many cases the "grain of truth" assumption (the true explanation is within the set of considered explanations) seems plausible. For example, I observe the door isn't locked. By far the best (least contrived) explanation I can think of seems to be that I forgot to lock it. But of course there is a near infinitude of explanations I didn't think of, so who is to say there isn't an unknown explanation which is even better than the one about my forgetfulness? Well, it just seems unlikely that there is such an explanation.
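To make this concrete, here is a toy sketch of IBE as a scoring procedure. This is entirely my own illustration with made-up scores; the reserved probability mass stands in for the "grain of truth" caveat:

```python
# Toy sketch of IBE: candidate explanations get subjective "goodness"
# scores, and a reserved mass represents the chance that the true
# explanation was never considered at all.

def ibe(scored_explanations, p_unconsidered=0.05):
    """Return (best_explanation, probability) under a crude IBE model.

    scored_explanations: dict mapping explanation -> subjective score > 0
    p_unconsidered: mass reserved for explanations not in the list
    """
    total = sum(scored_explanations.values())
    best = max(scored_explanations, key=scored_explanations.get)
    # Normalize scores into probabilities over the considered set,
    # then scale down by the chance the true explanation is missing.
    p_best = (1 - p_unconsidered) * scored_explanations[best] / total
    return best, p_best

# Door example from the text: forgetfulness vs. more contrived stories.
explanations = {
    "I forgot to lock the door": 10.0,
    "A burglar picked the lock": 0.5,
    "The lock is broken": 1.0,
}
print(ibe(explanations))  # ('I forgot to lock the door', ~0.83)
```

On this picture, arguing about a particular application of IBE amounts to arguing about the scores, or about how large the reserved mass should be.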
And IBE isn't just applicable to common everyday explanations. For example, the most common philosophical justification that the external world exists is an IBE. The best explanation for my experience of a table in front of me seems to be that there is a table in front of me. (Which interacts with light, which hits my eyes, which I probably also have, etc.)
Of course, in other cases, applications of IBE might be more controversial. However, in practice, if Alice makes an argument based on IBE, and Bob disagrees with its conclusion, this is commonly because Bob thinks Alice made a mistake when judging which of the explanations she considered is the best. In which case Bob can present reasons which suggest that, actually, explanation x is better than explanation y, contrary to what Alice assumed. Alice might be convinced by these reasons, or not, in which case she can provide the reasons why she still believes that y is better than x, and so on.
In short, in many or even most cases where someone disagrees with a particular application of IBE, their issue is not with IBE itself, but what the best explanation is. Which suggests the "grain of truth" assumption is often reasonable.
> Most examples of bad reasoning, that are common amongst smart people, are almost good reasoning. Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen
Well, that's clearly almost always impossible (there are almost infinitely many possible explanations for almost anything), so we can't make an exhaustive list. Moreover, "should" implies "can", so, by contraposition, if we can't list them, it's not the case that we should list them.
> , or at least manage to grapple with most of the probability mass.
But that's backwards. IBE is a method which assigns probability to the best explanation based on how good it is (in terms of explanatory virtues) and based on being better than the other considered explanations. So IBE is a specific method for coming up with probabilities. It's not just stating your prior. You can't argue about purely subjective priors (that would be like arguing about taste) but you can make arguments about what makes some particular explanation good, or bad, or better than others. And if you happen to think that the "grain of truth" assumption is not plausible for a particular argument, you can also state that. (Though the fact that this is rather rarely done in practice suggests it's in general not such a bad assumption to make.)
Judging from the pictures, this could also be a quadratic fit.
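For what it's worth, a hunch like that can be checked numerically. A sketch with made-up numbers standing in for whatever the pictures show:

```python
import numpy as np

# Made-up points standing in for the data in the post's pictures
# (the actual values aren't given in the comment).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 3.9, 9.1, 15.8, 25.2, 35.9])

# Compare linear vs. quadratic least-squares fits. Note that the
# quadratic can only fit at least as well; the question is whether
# the improvement is large enough to justify the extra parameter.
for deg in (1, 2):
    coeffs = np.polyfit(x, y, deg)
    sse = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    print(f"degree {deg}: SSE = {sse:.3f}")
```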
One reason for men: Due to a general innate supply/demand difference, most men have a much lower probability of finding sexual partners than most women. This can lead to frustration even if it doesn't lead to jealousy.