A fast uncontrolled takeoff (where the AI doesn't solve successor alignment) also seems possible.
One reason for men: due to a general innate supply/demand asymmetry, most men have a much lower probability of finding sexual partners than most women do. This can lead to frustration even if it doesn't lead to jealousy.
A quite different possibility for defining a fuzzy/probabilistic subset relation:
Assume the sets $A$ and $B$ are events (sets of disjoint possible outcomes). Then $A \subseteq B$ iff $P(B \mid A) = 1$ (assuming every outcome has positive probability). This suggests that a probabilistic/partial/fuzzy "degree of subsethood" of $A$ in $B$ is simply equal to the probability $$P(B \mid A) = \frac{P(A \cap B)}{P(A)}.$$
This value is 1 if $A$ is completely inside $B$, reducing to conventional crisp subsethood, and 0 if $A$ is completely outside (disjoint from) $B$. It is 0.5 if $A$ is "halfway" inside $B$. These seem like pretty intuitive properties for fuzzy subsethood.
Additionally, the value itself has a simple probabilistic interpretation -- it is the probability that an outcome is in $B$ given that it is in $A$.
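For concreteness, here is a minimal Python sketch of this degree of subsethood on a finite outcome space (the function name and the die example are mine, purely for illustration):

```python
def subsethood(A, B, outcome_probs):
    """Degree to which A is a subset of B: P(B | A) = P(A and B) / P(A)."""
    p_A = sum(p for outcome, p in outcome_probs.items() if outcome in A)
    p_A_and_B = sum(p for outcome, p in outcome_probs.items()
                    if outcome in A and outcome in B)
    return p_A_and_B / p_A  # undefined if P(A) = 0, like conditional probability

# Example: one roll of a fair six-sided die.
outcome_probs = {i: 1 / 6 for i in range(1, 7)}
A = {1, 2, 3, 4}  # "at most four"
B = {2, 4, 6}     # "even"

print(subsethood(A, B, outcome_probs))       # 0.5 -- A is "halfway" inside B
print(subsethood({2, 4}, B, outcome_probs))  # 1.0 -- crisp subsethood
print(subsethood({1, 3}, B, outcome_probs))  # 0.0 -- completely outside
```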
Instead of saying "This sentence doesn't have truth value 1, nor 1/2, nor 1/4, ..." (which, even if infinitely long, would only work for countably many truth values), you could simply say "This sentence has truth value 0", which is just as paradoxical, but the paradox also works for real-valued or hyperreal truth values.
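Spelled out a bit more formally (the notation is mine, just to make the argument explicit):

```latex
% Let v(S) \in [0,1] be the truth value of sentence S, and let S assert
% that its own truth value is 0:
\[
  S \;:\Longleftrightarrow\; v(S) = 0
\]
% If v(S) = 0: what S asserts holds, so S is fully true, i.e. v(S) = 1. Contradiction.
% If v(S) > 0: what S asserts fails, so S is fully false, i.e. v(S) = 0. Contradiction.
% The argument only invokes the values 0 and 1, so it goes through for any
% truth-value set containing them, e.g. [0,1], the reals, or the hyperreals.
```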
I tried to find other examples, but apparently only American English uses the American style, while all(?) other languages use the British style, which should probably be called the "Non-American" style.
I find the definitions particularly confusing. Which one is logically correct?
I would say 1 is the least incorrect one, but it still has issues. The problem is that in the phrase "A means B", A refers to a word (a string of letters), but B doesn't refer to a string of letters. It seems to refer to the concept B expresses -- to a meaning.
The problem with 1 seems to be that quotation can refer either to a word or to the meaning of a word. Let's say double quotes refer to a word/phrase, while single quotes refer to the meaning of that phrase. Then the correct expression of the above would be this:

"A" means 'B'.
That seems to make perfect logical sense.[1]
To clarify, there is a common distinction between
a) term / sign / symbol / word,
b) meaning / intension,
c) reference object / extension.
An unquoted term refers to c), a quoted term refers to either a) or b). Hence my double/single quote disambiguation.
Is there an interpretation of KL divergence that works for subjective probability (credence functions), where there is no concept of a "true" or "false" distribution? And even under an objective interpretation, the term "cost" seems to be external to probability theory.
From the standpoint of hedonic utilitarianism, assigning a higher value to a future with moderately happy humans than to a future with very happy AIs would indeed be a case of unjustified speciesism. However, in preference utilitarianism, specifically person-affecting preference utilitarianism, there is nothing wrong with preferring our descendants (who currently don't exist) to be human rather than AIs.
PS: It's a bit lame that this post had -27 karma without anybody providing a counterargument.
This is also why various artists don't necessarily try to make Tolkien's Orthanc, Barad-dûr, Angband, etc. look ugly, but rather imposing and impressive in some way. Even H.R. Giger's biomechanical landscapes could be described as aesthetic. Or the crooked architecture in The Cabinet of Dr. Caligari (1920). Architecture is art, and art doesn't have to be beautiful or pleasant, just interesting. But presumably nobody would want to actually live in a Caligari-like environment. (Except perhaps people in the goth subculture?)
Here R (the square root of R²) is the Pearson correlation coefficient, which measures linear association. A better measure here would be the Spearman correlation on the original data, which detects any monotonic association. Spearman is also more principled than first transforming the data with some monotonic function (e.g. a sigmoid) and then applying Pearson.
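To illustrate the difference (my own toy example, assuming numpy and scipy are available):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# A perfectly monotonic but strongly nonlinear relationship.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.tanh(2 * x)  # strictly increasing in x

r_pearson, _ = pearsonr(x, y)
r_spearman, _ = spearmanr(x, y)
print(r_pearson)   # noticeably below 1, since the relationship is not linear
print(r_spearman)  # exactly 1.0, since Spearman only depends on rank order
```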