This is a bet at 30% probability, as 42.86/142.86 = .30001.
That is the average of Alice's probability and Bob's probability. The fair bet, in the sense of equal subjective EV, is at the average of the two probabilities; previous discussion here.
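To make the arithmetic concrete, here's a minimal sketch of how stakes map to an implied probability (the stake sizes are from the bet above; the function name is my own):

```python
def implied_probability(stake_yes: float, stake_no: float) -> float:
    """The side risking `stake_yes` to win `stake_no` is implicitly
    betting at probability stake_yes / (stake_yes + stake_no)."""
    return stake_yes / (stake_yes + stake_no)

# 42.86 staked against 100: implied probability ~0.30
print(implied_probability(42.86, 100))
```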
I don't buy the way that Spencer tried to norm the ClearerThinking test. It sounds like he just assumed that people who took their test and had a college degree as their highest level of education had the same IQ as the portion of the general population with the same educational level, and similarly for all other education levels. Then he used that to scale how scores on the ClearerThinking test correspond to IQs. That seems like a very strong and probably inaccurate assumption.
Much of what this post and Scott's post are using the ClearerThinking IQ numbers for relies on this norming.
It occurs to me that the ClearerThinking data provides a way to check this assumption. It included data from 2 different groups, crowdworkers and people in Spencer's social network. If college-degree-level crowdworkers did just as well on the ClearerThinking test as college-degree-level people in Spencer's network, then it becomes more plausible that both did about as well as college-degree-level people in the general population would have. Whereas if the college-degree-level crowdworkers and Spencer's network people scored differently, then obviously they can't both match the college-degree-level general population; that would leave an open question about how the two groups compare, and would be direct evidence against the accuracy of Spencer's method of norming the test.
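A sketch of what that check might look like, as a simple permutation test on the two groups' mean scores. The scores below are made up for illustration; I don't have the actual ClearerThinking data:

```python
import random

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in group means:
    returns the fraction of random relabelings whose mean difference
    is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[: len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical test scores for college-degree members of each group
crowdworkers = [98, 104, 101, 95, 110, 99, 103]
network      = [112, 108, 115, 109, 120, 111, 107]
p_value = permutation_test(crowdworkers, network)
# A small p-value would mean the groups scored differently, i.e. they
# can't both match the college-degree general population.
```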
I think that the way that Scott estimated IQ from SAT is flawed, in a way that underestimates IQ, for reasons given in comments like this one. This post kept that flaw.
"Can crimes be discussed literally?":
- some kinds of hypocrisy (the law and medicine examples) are normalized
- these hypocrisies are / the fact of their normalization is antimemetic (OK, I'm to some extent interpolating this one based on familiarity with Ben's ideas, but I do think it's both implied by the post, and relevant to why someone might think the post is interesting/important)
- the usage of words like 'crime' and 'lie' departs from their denotation, to exclude normalized things
- people will push back in certain predictable ways on calling normalized things 'crimes'/'lies', related to the function of those words as both description and (call for) attack
- "There is a clear conflict between the use of language to punish offenders, and the use of language to describe problems, and there is great need for a language that can describe problems. For instance, if I wanted to understand how to interpret statistics generated by the medical system, I would need a short, simple way to refer to any significant tendency to generate false reports. If the available simple terms were also attack words, the process would become much more complicated."
Does it bother you that this is not what's happening in many of the examples in the post? e.g., With "the American hospital system is built on lies."
This post reads like it's trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.
Many of these claims seem obviously false, if I take them at face value and take a moment to consider what they're claiming and whether it's true.
e.g., On the first two bullet points it's easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include patent law ("To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries") and banning lead in gasoline, as well as some others that I now see other commenters have mentioned.
In America, people shopped at Walmart instead of local mom & pop stores because it had lower prices and more selection, so Walmart and other chain stores grew and spread while lots of mom & pop stores shut down. Why didn't that happen in Wentworld?
I made a graph of this and the unemployment rate; they're correlated at r=0.66 (with one data point for each time Gallup ran the survey, taking the unemployment rate on the closest day for which there's data). You can see both lines spike with every recession.
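For anyone who wants to reproduce the matching procedure, here's a sketch. The dates and values below are placeholders, not the actual Gallup or unemployment figures:

```python
from datetime import date

def pearson_r(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def nearest(target, dated_values):
    """Pick the value whose date is closest to `target`."""
    return min(dated_values, key=lambda dv: abs((dv[0] - target).days))[1]

# One (date, value) pair per survey run -- placeholder numbers
survey = [(date(2008, 6, 1), 40), (date(2010, 3, 1), 55), (date(2014, 9, 1), 35)]
unemployment = [(date(2008, 5, 15), 5.5), (date(2010, 3, 10), 9.8),
                (date(2014, 8, 20), 6.1)]

# For each survey date, take the unemployment rate on the closest day with data
paired = [nearest(d, unemployment) for d, _ in survey]
r = pearson_r([v for _, v in survey], paired)
```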
Trying to make this more intuitive: consider a prediction market which is currently priced at x, where each share will pay out $1 if it resolves as True.
If you think it's underpriced because your probability is y, where y>x, then your subjective EV from buying a share is y-x. e.g., If it's priced at $0.70 and you think p=0.8, your subjective EV from buying a share is $0.10.
If you think it's overpriced because your probability is z, where z<x, then your subjective EV from selling a share is x-z. e.g., If it's priced at $0.70 and you think p=0.56, your subjective EV from selling a share is $0.14.
Those two will be equal if x is halfway between y and z, at their arithmetic mean.
So if two people disagree on whether the price should be y or z, then they will have equal EV by setting a price at the arithmetic mean of y & z, and trading some number of prediction market shares at that price. i.e., The fair (equal subjective EV) betting odds are at the arithmetic mean of their probabilities.
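A quick numeric check of this, reusing the probabilities from the examples above (y=0.80, z=0.56): with the price set at their arithmetic mean, the two sides' subjective EVs come out equal.

```python
def buyer_ev(p_buyer: float, price: float) -> float:
    """EV per share for a buyer who believes the event has probability p_buyer."""
    return p_buyer - price

def seller_ev(p_seller: float, price: float) -> float:
    """EV per share for a seller who believes the event has probability p_seller."""
    return price - p_seller

y, z = 0.80, 0.56
x = (y + z) / 2  # price at the arithmetic mean: 0.68
# Both EVs are 0.12 (up to floating-point precision)
ev_buy, ev_sell = buyer_ev(y, x), seller_ev(z, x)
```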