Perhaps I'm misunderstanding you somewhere, but it seems that the requirement you need on defeaters is much stronger than the No Indescribable Hellworld hypothesis.
For a bad argument to be seen to be flawed by a human, we need:

1) A flaw in the argument to be describable to the human in some suitably simplified form.
2) The human to see that this simplified description actually applies to the given argument.

Something like the NIHH seems to give you (1), not (2). I don't see any reason to think that (2) will apply in general, but you do seem to require it.
Or am I missing your point?
Agreed. To be fair to Zvi, he did make clear the sense in which he's talking about "value" (those who value them most, as measured by their willingness to pay) [ETA: "their willingness and ability to pay" may have been better], but I fully agree that it's not what most people mean intuitively by value.

I think what people intuitively mean is closer to: I value X more than you if:

1) I'd pay more for X in my situation than you'd pay for X in my situation.
2) I'd pay more for X in your situation than you'd pay for X in your situation.

(More generally, you could do some kind of summation over all situations in some domain.)

The trouble, of course, is that definitions along these lines don't particularly help in constructing efficient systems (but I don't think anyone was suggesting that they do).
If (2) and (3) were seriously considered, then I'd think you'd particularly want to avoid using only a single vaccine.
From a civilizational point of view, the largest issue isn't the expectation of the direct outcome - it's that there's a small chance of a bad outcome with very little variance across the population.
I'd be much less concerned about doing (2) or (3) with twenty different vaccines than with one.
It's also worth looking at the next table for Moderna one-dose severe-COVID-prevention efficacy:

Vaccine group: 2 / 996
Control group: 4 / 1079
Efficacy: 42.6% (-300.8, 94.8) [95% CI]

Huge error bars and little data, but it certainly doesn't support a guess of ~80% efficacy at preventing severe cases. In the end it's the transmission that matters, but I suppose there's a danger based on public perception: if one dose turns out to have under 50% efficacy for severe cases, it's not going to make anyone feel safe. If the sub-50% applies to deaths too, then you'll have many reports of "X took the vaccine, caught Covid and then died".

I assume Moderna wouldn't be crazy about this either. It's not great PR if everyone broadly remembers that vaccines stopped Covid, but specifically remembers that Moderna's failed to save their friend's granny.

While supply is short, it doesn't particularly matter if a load of people don't want to take it. Once there's a large supply, that changes - and if there's a largely baked-in misperception that the vaccine(s) suck(s), it's likely to be unhelpful.

In some sense it's analogous to the mask situation:

[Take action likely to reduce confidence in X] ---> [Free up supply of X to allow efficient targeting] ---> [Suffer consequences of longer-term low confidence in X]

Here the confidence-reducing action wouldn't be a lie, but that's not the only consideration.
Ah yes, I think you're right.

To me it seems that one-dose efficacy is approximately 80% from that table, and the two-dose figure is still the old approximately 95%. So it's more like an 80% to 95% upgrade than 87% to 97%.

Zvi's main point likely still stands, but the personal immunity question is less clear. [ETA: even on a population level it's somewhat less clear, once you consider the confidence intervals: given the 55% to 92% CI, one-shot efficacy could turn out to be below 70%, in which case things depend a lot on the homogeneity of populations, the precision of your targeting, and post-vaccination behaviour changes.]
My best guess on that table, looking at the full report (caveat: I am emphatically not an expert):

1) The VE calculations look correct: they're almost precisely what I get by dividing my naïve incidence-rate calculations. I assume the small discrepancy is due to the data's being discrete: if you have 7 cases out of 996, your best prediction of the incidence rate won't be precisely 7/996.

2) My guess is that the numbers in brackets in the first two columns aren't percentage rates at all. Rather, they are "Surveillance time in person years for given endpoint across all participants within each group at risk for the endpoint". That description appears at the bottom of the table without any asterisk or similar; I assume this is an error, and that there was supposed to be an asterisk linking it to the bracketed numbers in the first two columns.

This seems plausible for the data: the pre-14-days numbers are under half of the post-14-days numbers, and the median follow-up time was 28 days.

But it's entirely possible that I'm wrong.
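The naïve incidence-rate division can be sketched as follows, using the severe-case counts quoted earlier. Note this is only an illustration of the naïve method: the reported 42.6% figure uses surveillance person-time as the denominator, so the simple count-based version comes out slightly different.

```python
# Naive vaccine efficacy: 1 - (attack rate in vaccine group / attack rate in control group).
# Uses raw participant counts as denominators; the official calculation divides by
# surveillance person-time instead, hence the small discrepancy from 42.6%.

def naive_ve(cases_vax: int, n_vax: int, cases_ctrl: int, n_ctrl: int) -> float:
    rate_vax = cases_vax / n_vax      # attack rate in the vaccine group
    rate_ctrl = cases_ctrl / n_ctrl   # attack rate in the control group
    return 1.0 - rate_vax / rate_ctrl

ve = naive_ve(2, 996, 4, 1079)
print(f"{ve:.1%}")  # roughly 45.8%, vs the reported 42.6%
```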
Thanks again for these.

Typo: "...net negative to administer the virus".
It's a good book.
"Influence: the psychology of persuasion" has some useful ideas on identity formation too. In particular, the observation that your brain is looking for explanations for your own actions. When you do X it's likely to use "I'm the kind of person who does X" only if it can't find some strong external reason for you to have done X. The stronger the external motivation, the weaker the influence on your identity.
I think this is another reason the 2-minute approach is likely to be effective. That the 2-minute version doesn't contribute significantly to the outcome isn't a bug, and it isn't irrelevant: it's a feature.
It's denying your brain the outcome-based explanation, leaving it with the identity-building explanation.
Right, but any such trash-car-for-net-win opportunity for Bob will make Alice less likely to make the deal: from her perspective, Bob taking such a win is equivalent to accident/carelessness. In the car case, I'd imagine this is a rare scenario relative to accident/carelessness; in the general case it may not be.
Perhaps a reasonable approach would be to split bills evenly, with each paying 50% and burning an extra k%, where k is given by some increasing function of the total repair cost so far.
I think this gives better incentives overall: with an increasing function, it's dangerous for Alice to hide problems, given that she doesn't know Bob will be careful. It's dangerous for Bob to be careless (or even to drive through swamps for rewards) when he doesn't know whether there are other hidden problems.
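As a concrete sketch of this scheme (the particular burn schedule below is a hypothetical choice; any increasing function of cumulative repair cost would do):

```python
# Sketch of the even-split-plus-burn scheme: each party pays 50% of the bill
# and additionally burns k% of their share, where k increases with the total
# repair cost to date. The linear-capped schedule here is a hypothetical choice.

def burn_rate(total_repairs_so_far: float) -> float:
    """Increasing burn fraction: 1% per 1000 of prior repair cost, capped at 50%."""
    return min(0.5, 0.01 * (total_repairs_so_far / 1000.0))

def settle_bill(bill: float, total_repairs_so_far: float) -> tuple[float, float]:
    """Return (amount each party pays, extra amount each party burns)."""
    share = bill / 2.0
    burned = share * burn_rate(total_repairs_so_far)
    return share, burned

share, burned = settle_bill(1000.0, 5000.0)  # a 1000 bill, after 5000 of prior repairs
# each pays 500 and burns roughly an extra 25 (5% of their share)
```

The cap keeps the mechanism from becoming pure money-burning on badly damaged cars, while the increasing slope preserves the deterrent: the more damage has already surfaced, the more costly further carelessness or concealment becomes for both parties.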
I don't think you can use the "Or donates to a third party they don’t especially like" version: if trust doesn't exist, you can't trust Alice/Bob to tell the truth about which third parties they don't especially like.You do seem to need to burn the money (and to hope that Alice doesn't enjoy watching piles of money burn).
Thanks, particularly for the aerosol FAQ link.
Mostly harmless typo: ...because ‘they don’t expect to test negative.’