Presumably, a more complete statement would be "If you are vaccinated and worried about this virus, go get your booster.  If (or once) you have your booster, if you're still worried, increase ventilation and limit social contact, particularly in poorly ventilated spaces.  Also, lose some weight, fatty."

The link for "Ivermectin studies looking more and more like outright fraud" seems to go to a discussion of mix and match routines.

Thanks again for doing these - I look forward to them all week!

I was planning to say this too.

IIRC, covid risk budgets arose in a group housing context - the idea was that it was an equitable way to balance the risk that your activities were presenting to your housemates, and to prevent the more risk-loving housemates from unfairly exposing risk-averse housemates to undue dangers.

If you're not concerned about the risk to people close to you, or if they're defectors who are doing whatever they want anyway, then a covid risk budget makes less sense, and OP's cost-benefit analysis makes more sense.  Of course, as pointed out upthread, if you don't trust yourself to make reasonable long-term decisions in the moment, then committing to a budget is a pretty good way of lowering overall risk.

Hah! That is definitely a weakness of my "What does Gelman have to say" strategy.

Thanks for this engagement, it's great to see.

Stepping back to a philosophical point, a lot of scientific debates come down to study design, which is at a level of expertise in statistics that is (a) over my head and (b) an area where reasonable experts apparently can often disagree.

My normal strategy is to wait for Andrew Gelman to chime in, but (a) that doesn't apply in all cases, and (b) philosophically, I can't really justify even that except in kind of a brute Bayesian sense.

I'd love to get a good sense of how confident we can be that masks work - but since I'm not competent on the stats, I guess I'll wait for the RCTs and stick with "can't hurt, may well help" until then.  (Like Vitamin D, but with a higher percentage of "may well help.")

I'm not good at expressing it formally, but I was thinking more:

  1. Expected total utility to my friend of going to a bar with her granddaughters > expected total utility to her of staying home,* but the expected utility to society of her going to the bar with her grandkids is negative.
  2. As long as enough other people stay home, on the other hand, the social costs of her going out are not as high as they would be if more people went out.
  3. On the third hand, even if a bunch of people going out increases the cost (to both herself and society) of her going out, watching other people defect makes her feel like a sucker.

*She's in her late 70s, and my feeling is she's done the math and figures she may not have that many good years left, so she didn't want to miss out on one, even if staying home would have increased her life expectancy.

My intuition is that people get confused about whether they're measuring the risk to themselves or the risk to society from their Covid decisions.

  • It seems like a lot of people I know have decided that they'd rather accept the increased risk of serious injury or death than have a substantially reduced quality of life for a year or two.  Ok, fine. (Although it's hard to measure small risks.)
  • On the other hand, even if a person is accepting the risk for themselves, I'm not sure they're processing the risk that somebody else gets seriously ill or dies.

It might be that public pressure focusing on the risk to each of us is obscuring some of the risk to other people, or it might just be a collective action problem - if a mayor encourages other people to be vigilant about Covid from his beach vacation in Mexico, well, that does much less damage as long as everybody else follows his advice.

Not sure of all the costs, but my wife and daughter are in one of the trials, and they're each getting paid $1,600 on completion. They also have regular testing visits (not sure how often, though).  Depending on your assumption of how many trials are successful, that might get you a decent part of the way to $10,000 per participant in a successful trial.
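The back-of-the-envelope logic above can be sketched out. All the numbers besides the $1,600 completion payment are my own placeholder assumptions, not figures from any actual trial budget:

```python
# Rough sketch: how per-participant trial costs compare to a
# "$10,000 per participant in a successful trial" figure.

payment_per_participant = 1_600      # completion payment (from the trials my family is in)
other_cost_per_participant = 1_700   # hypothetical: testing visits, staff, overhead

cost_per_participant = payment_per_participant + other_cost_per_participant

# Hypothetical fraction of trials that succeed; the costs of failed
# trials get spread over the participants in successful ones.
p_success = 0.33

cost_per_successful_trial_participant = cost_per_participant / p_success
print(round(cost_per_successful_trial_participant))
```

Under these made-up assumptions the payments and testing alone get you to roughly $10,000 per successful-trial participant, so the headline number isn't implausible.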

I think the "strictly implies" may be stealing a base.

Yes, being convinced of the existence of the AI would make the man rethink the aspects of his religion that he believes render an AI impossible, but he could update that and keep the rest. From his perspective, he'd have the same religion, but updated to account for the belief in AIs.

I wrote a long post saying what several people had already said years ago, then shortened it. Still, because this post has made me mad for years:

1) Of COURSE people can agree to disagree! If not, EY is telling this guy that no two rationalists currently disagree about anything. If THAT were true, it would be so fascinating that it should have derailed the whole conversation!

(Leaving aside, for the moment, the question of whether Aumann's agreement theorem "requires" a rationalist to agree with a random party goer. If it really did, then the party goer could convince EY by simply refusing to change his mind.)

2) Presumably, if EY did produce an AI to the party-goer's satisfaction, he would be most likely to update his religious beliefs to include the existence of AIs.
