I appreciate the need for a phrase or concept to refer to instances when the “easier” thing is harder for you than the “hard” thing, so thank you for pointing me towards that idea. It reminds me of mathematicians who have gotten remarkably bad at arithmetic because it simply doesn’t come up in their studies.
That said, it’s difficult for me to disentangle the phrase “I don’t know how to count that low” from its somewhat elitist origins. It seems to bring the unworthiness of the task into focus instead of the person’s competence at the task. Indeed, the Google engineers in your story seem to treat their distance from the task as a point of pride: the numbers are so low that bothering to count them, or even knowing how to count them, is a mark of low status. Perhaps the ego-saving of looking down on the task is part of the appeal? Or perhaps I am reading too much into the Google story. Something like “I’ve forgotten how to walk” appeals to me much more, since it emphasizes my present lack of skill.
Simple answer first: If the sensitivity and specificity are estimated with data from studies with large (>1000) sample sizes, it mostly won’t matter.
Avoiding point estimates altogether will get you appropriately broader estimates of the information content of the tests, regardless of whether those point estimates would have come from Bayesian or frequentist methods.
Comparing the two methods, the Bayesian one will pull very slightly towards 50% relative to simply taking the sample rate as the true rate. Indeed, it’s equivalent to adding a single success and a single failure to the sample and then computing the rate of correct identification in that augmented sample.
The parameters of a Beta distribution can be interpreted as the total number of successes and failures, combining the prior and observed data to get you the posterior.
Re: “If the sensitivity is actually 100%, then we get a Bayes factor of 0, which is weird and unhelpful — your odds of having COVID shouldn't go to literally 0. I would interpret this as extremely strong evidence that you don't have COVID, though. I'd love to hear from people with a stronger statistics background than me if there's a better way to interpret this.”
The test doesn’t actually have 100% sensitivity. That’s an estimate based on some study they ran that had some number of true positives out of some number of tests on true cases. Apparently it got all of those right, and from that they simply took the point estimate to equal the sample rate.
The Bayesian solution to this is to assume a prior distribution (probably a Beta(1,1)), which updates in accordance with incoming evidence from the outcomes of tests. If the study had 30 tests (I haven’t read it since I’m on mobile, so feel free to replace that number with whatever the actual data are), that’d correspond to a posterior of Beta(31,1) (note that in general Betas update by adding successes to the first parameter and failures to the second, so the first parameter of 1 becomes 31 after 30 successes). Taking a point estimate from the mean of this posterior gives a sensitivity of (n+1)/(n+2). In my toy example, that’s 31/32, or ~97%. Again, replace n with the sample size of the actual experiment.
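To make that update concrete, here’s a minimal sketch using the toy numbers above (30 out of 30 true cases detected; swap in the real study counts):

```python
from fractions import Fraction

# Hypothetical study numbers (replace with the real data):
# 30 true cases tested, all 30 detected.
successes, failures = 30, 0

# A Beta(1, 1) prior updated with the study data gives a
# Beta(1 + successes, 1 + failures) posterior.
alpha = 1 + successes  # 31
beta = 1 + failures    # 1

# Posterior mean as a point estimate of sensitivity: alpha / (alpha + beta),
# i.e. (n + 1) / (n + 2) when there are no failures.
sensitivity = Fraction(alpha, alpha + beta)
print(sensitivity)         # 31/32
print(float(sensitivity))  # 0.96875, i.e. ~97%
```

Note the estimate never reaches exactly 100%, which is what rescues the Bayes factor from going to 0.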
A real Bayes factor would be slightly more complicated to compute, since taking a point estimate from the posterior involves some loss of information, but it would give very similar values in practice because a Beta is a pretty nice function.
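As a rough sketch of the point-estimate shortcut: using posterior means for both sensitivity and specificity, the likelihood ratios for positive and negative results come out finite and nonzero. All counts here are made up for illustration; replace them with the actual study data.

```python
# Hypothetical study counts (replace with the real data).
sens_hits, sens_misses = 30, 0   # true cases correctly / incorrectly flagged
spec_hits, spec_misses = 95, 5   # non-cases correctly / incorrectly cleared

# Posterior means under Beta(1, 1) priors: add one success and one failure.
sensitivity = (1 + sens_hits) / (2 + sens_hits + sens_misses)   # 31/32
specificity = (1 + spec_hits) / (2 + spec_hits + spec_misses)   # 96/102

# Likelihood ratios (approximate Bayes factors) for each result.
lr_positive = sensitivity / (1 - specificity)       # ~16.47
lr_negative = (1 - sensitivity) / specificity       # ~0.033, not 0
```

The negative-result ratio is strictly positive because the smoothed sensitivity estimate sits below 100%, so a negative test is strong but not infinitely strong evidence against having COVID.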
The Beta(1,1) is probably better known as the Uniform distribution. It’s not the only prior you can use, but it’ll probably be from the beta family for this problem.
As a test with a true 100% sensitivity accumulates more data, the point estimate of its sensitivity given this method will approach 100% (since (n+1)/(n+2) approaches 1 as n approaches infinity), which is a nice sanity check.
When the test fails to detect COVID, that failure increments the second parameter of the Beta distribution. For an intuition of what this distribution looks like for various parameter values, this website is pretty good: https://keisan.casio.com/exec/system/1180573226
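Putting both kinds of update together (detections bump the first parameter, misses bump the second), with illustrative counts:

```python
# Start from a uniform Beta(1, 1) prior and update as results on known
# cases come in: a detection bumps the first parameter, a miss bumps the
# second. (The counts here are illustrative, not from any real study.)
alpha, beta = 1, 1
results = [True] * 28 + [False] * 2  # 28 detections, 2 misses

for detected in results:
    if detected:
        alpha += 1
    else:
        beta += 1

print(alpha, beta)             # 29 3
print(alpha / (alpha + beta))  # posterior mean sensitivity = 0.90625
```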
Within this sample, years of education would be mostly fixed. Those with more/fewer would run into selection effects (e.g. getting held back a grade gets more years schooling, but presumably lower odds of getting into Oxford).
I noticed that this post isn’t in the COVID-19 sequence I can find from your profile (nor are the other September posts). Was that an intentional change? I found it useful to be able to link people who would probably not be interested in LessWrong more broadly to that sequence, as a stable location to find your posts.
Continuing the thread of splitting what are usually considered atomic players into a team:
Chess has a fun variant called Hand and Brain that lets players of disparate skill levels enjoy the game concurrently. A single chess player is broken into a team: the hand and the brain. Generally the stronger player serves as the brain, who names a chess piece type on each move (e.g. “pawn”). It is then up to the hand to play a legal move with that piece type (e.g. by moving a particular pawn to a particular square). Frequently pairs of hands and brains will play against one another, but a single hand and brain combo could play against unitary players and would be moderately stronger than the hand alone. What are the benefits of such a game mode?
The brain is forced to find as many good moves as possible, and therefore enjoys what is engaging about chess. However, the brain must also engage in metacognitive and social reasoning about the board from the perspective of their partner. If moving the rook to e4 is a blunder that the hand would need to spot a sneaky tactic to avoid, perhaps the quiet bishop move will set things up better down the road, even if another rook move would be slightly stronger. The brain can alternatively name a piece type that has only one good move and many obviously terrible ones, in the hope that the hand can successfully rule out the blunders.
From the perspective of the hand, the game gives them an opportunity to learn from the stronger player: by looking at a narrower subset of the board, they can find individual moves that are stronger than they might otherwise find. In a sense, they are increasing the amount of computation they bring to the game. If, in a regular game, they could run this process in parallel for each piece type and then choose the best result via an oracle (the brain), they would presumably play very well! By finding strong moves in the more limited case, they become more likely to find them again in future games. It can also build confidence in their ability: “you can indeed find strong moves; you just need to also take the time to find the right piece.”
Lastly, the game has a strong social component. I usually see it played in the context of coaches goofing off with their students (against other coach-student pairings), but it’s goofing off that lets the coach see where their student is misunderstanding the game. What moves do they rule out too early or simply not see at all? Can I get my student to play strong moves that are both aggressive and defensive? The hand is also usually encouraged to think aloud, which helps the brain identify both what to suggest for the current game and also what to work on in study.
Sadly this variant doesn’t translate well to something like go, since there isn’t a good way to let the stronger player narrow down the space of possible moves. I suppose they could literally narrow down the space by giving a quadrant or something, but it’s not clear to me that the weaker player would get much out of this, neither in the immediate game nor in their general understanding of go.
Safety deposit boxes are one solution to this problem: write the password down on a piece of paper and pass the job of identity verification off to the bank. This solution can also serve as an alternative to backing things up online: keep one external drive in the bank and one at home, swapping them with enough regularity that you avoid total losses.
This approach does have some downsides:
-Relies on your bank’s identity verification methods.
-Not accessible remotely (this is the primary reason it is safe).
-Requires you to physically go to a bank to make use of it (can be a large enough trivial inconvenience to prevent regularly swapping the external drives).
It also has pros:
-Can set up access for next of kin without giving them current access.
-Immune to the sorts of attacks that scale.
-Gives you physical access to something that won’t burn down in a house fire.
I used that tool for my current rent split and it worked alright, although I didn’t understand the tool well enough in advance to know that we should do more comparisons than it automatically suggests. As a result, when it proposed a distribution of rents I was in the awkward position of wishing to trade with two of my roommates, preferring their rent-room combinations to mine. The preference was weak enough (I would have paid about $25 per month to trade) that I expected to lose value by arguing for further work on this (my roommates were somewhat suspicious of the tool to begin with, so making changes at that stage would have damaged trust). Overall I expect I would have gotten slightly more value from starting with the “gut and negotiate” method. However, everyone left this negotiation fairly content and with increased trust, which has yielded fairly good value as well.
In that context I was the coordination pioneer, and it was helpful to leverage the reputation of the NYT when proposing the scheme. I think most people are rightly suspicious of those who propose novel solutions to current coordination problems, since there may be a trick to it that leaves them open to abuse; a known reputation (yours or not) is useful for soothing that concern.
Feel free to use it wholesale.
I would have found the explanation much clearer if you stated at the outset whether the length of the line is chosen by us or for us. As is, the explanation has a lot of parts that are allowed to move from the outset: the location where we are standing, the length of the line extending behind us, and the number of primes on that line. Since the goal is to impose a relationship between these three components, and ultimately our standing location is the only bit that is allowed to vary, it would have helped me make things more concrete if you started with something like "Wherever we plant our feet, the magic line extends backwards towards 0 and then whispers a number in our ear, hinting at the primes it contains."
This sets up the three key questions for the theorem: how far does the line extend backwards, what number does it whisper, and what does that number have to do with primes?