Microcovid.org was a vital tool to many of us during the pandemic; I made a whole speech about it back at summer solstice. The back and forth over how finished covid is, plus a dependence entirely on volunteers, has pushed microcovid into something of a limbo. It's not clear what the best next step for it is. One option would be to update microcovid for new problems, but that’s a lot of work and I have a lot of uncertainty about how valuable any given improvement is. So I’d like to collect some data.

  1. How are you using microcovid now?
  2. What is the minimum viable change that would create value for you, and what would that value be? The more explicit the better here: comments like “feature X would be worth $n to me” or “it enabled me to find a collaborator, which then enabled a project” are more useful than “I like it a lot”
  3. What’s your dream microcovid, and what value would that create for you?
  4. Anything else you’d like to share on this topic?

I’ve asked LW to enable the experimental agree/disagree feature for this post. The benefit of this is that you can boost particular data points without writing anything. The risk is that an individual's preferences get counted repeatedly: 5 people with the same opinion who each write a comment and agree with all of the others’ identical points should be counted as 5 people, not 25. So I ask that you:

  1. Not agree with comments substantially overlapping with a comment you write
  2. If multiple comments make the same point, only click agree for one of them

This isn’t an exact science because comments will sometimes contain more than one point or make very similar but not totally identical points, but please do your best.



I used microcovid a lot but less so in recent months, because it is not useful for omicron and because it does not include rapid tests. Also I have become less careful after vaccination. I’ve also had the feeling microCOVID was a bit conservative. Does the risk estimate take into account that a part of infectious people deliberately stay at home and you are less likely to meet them?

Updates I would suggest (in addition to other updates mentioned in other comments)

  • adjustment of the risk for people who tested negative on a rapid test. Rapid tests are popular in my social circles. I would use this feature for 1-2 social activities per month for as long as the pandemic lasts.
  • estimate the risk of catching long covid for vaccinated people. I don’t care about having symptoms of a breakthrough infection for a few days. I care about how it affects my wellbeing and productivity in the long run. Would be worth one time $20-$400 for me depending on how reliable it is and how much new information it provides me.
  • (related) a suggested risk budget if you are vaccinated.
  • (lower prio) Instead of filling in only one mask for “their mask“ specify how many people around you have which type of mask. Especially useful for public transportation where half of the people don’t wear their mask properly.

One thing I'm surprised I haven't seen listed yet: Adjustment for boosters (and generally updating the vaccine adjustments, which I think are almost certainly too generous for 1- and 2-dose vaccination, once Omicron is circulating.)

Lately, I have mostly been using Microcovid as a guide for training my intuition about how important the various factors are. I don't have a lot of confidence in the actual output of the model overall right now, since it doesn't account for boosters, or for Omicron. I also have a general distrust of some of the model's simplifying assumptions about how factors interact, although I don't have anything better to substitute, other than my own intuitive judgement.

Good point, agreed! Here is my (not very good) current attempt to adjust manually: https://www.lesswrong.com/posts/YtWqgzLDqxSvJDkCi/how-should-we-adjust-microcovid-estimates-for-omicron

(lower prio) Instead of filling in only one mask for “their mask“ specify how many people around you have which type of mask. Especially useful for public transportation where half of the people don’t wear their mask properly.

One way of bounding the risk would be to estimate the risk from the maskless and masked independently and then add their risk together. For instance, if you're using their "going to work" scenario, you could decompose that profile into the various sub-activities that it's made up of, which might be "going to work" with 1 person within 15 feet wearing an N95 and silent, 2 people within 15 feet wearing a cloth mask snugly, and 1 person within 15 feet wearing no mask and yelling at the conductor. That gives 3.5 + 14 + 450 microcovids, for an approximate total of 470.

This particular example is designed to illustrate a way of arriving at a reasonable estimate of this risk without going through each individual component, which is that the largest risk tends to dominate. Zvi summarizes this as "Risks Follow Power Laws", which is just as true when evaluating the decomposed risks of a single activity as it is for evaluating a set of distinct activities. Not all activities will follow this pattern of a single dominant risk component, since it's very possible to have many components which each contribute a fairly inconsequential risk but in aggregate the risk is enough to matter. However, starting with the biggest risk factor lets you come to a decent estimate quickly. This is especially convenient when you have a clear decision criterion (e.g. "I'll take the bus if the risk is 10 microcovids or less, and drive otherwise"), since if the highest risk factor is above this then you're done. If it's quite a bit below, you're also done, since the other factors are unlikely to get you there (e.g. if you mentally decompose the activity into 3 parts, and the one you expect to be the biggest risk is 1 microcovid, then you'd also be done unless your mental model of risk is way off).
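Concretely, that decomposition might look like the following sketch; the per-component figures (3.5, 14, and 450 microcovids) are taken from the example scenario above, not from a live microcovid calculation:

```python
# Decompose one activity into sub-exposures and sum them.
# Figures are the illustrative ones from the example above.
components = {
    "1 person, N95, silent": 3.5,
    "2 people, snug cloth masks": 14.0,
    "1 person, no mask, yelling": 450.0,
}

total = sum(components.values())
largest = max(components.values())

# The single largest component carries almost all of the risk.
print(f"total: {total:.1f} uCov, dominant share: {largest / total:.0%}")
# → total: 467.5 uCov, dominant share: 96%
```

Which is the power-law point in miniature: evaluating only the maskless yeller already gets you within a few percent of the full answer.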

Agree with adjustment for rapid tests. Otherwise, the results are overly conservative.

My willingness to pay for incorporating rapid tests: $100 (relatively low because I think I can just apply this post manually pretty easily). If there is significant variation across available tests then my WTP would rise to $200+. 

I continue to get a lot of value from microcovid.org, just as is. Partly using it myself and partly using it with friends/family who want help evaluating particular actions. Very grateful for this site.

The main additional feature that would be great for me would be help modeling how much of an update to make from Covid tests (e.g., how much does it help if everyone takes a rapid test before a gathering).

I don't use microCOVID much. Two things I'd like from the site:

  • A simple, reasonable, user-friendly tool for non-rationalists I know who are more worried about COVID than me (e.g., family).
  • A tool I can use if a future strain arises that's a lot more scary. Something fast and early, that updates regularly as new information comes in.

The latter goal seems more useful in general, and my sense is that microCOVID isn't currently set up to do that kind of thing -- the site currently says "Not yet updated for the Omicron variant", over a month in.

For the latter goal, updating fast matters more than meticulously citing sources and documenting all your reasoning. I see less need for a 'GiveWell of microCOVID' (that carefully defends every claim), and more value in a sort of Bayesian approach where you take the individuals with the best forecasting track record on COVID-related things, ask for their take on all the uncertain parameters, and then let people pick their favorite forecaster (or favorite aggregation method) from a dropdown menu.

A thing that sticks out is that I don't actually know who has a good forecasting track record – I know some people have made predictions, but those predictions aren't aggregated anywhere I know of that makes it easy to check.

I voted disagree, because at this point there have been plenty of COVID forecasting tournaments hosted by Good Judgement, Metaculus and several 3rd parties. Metaculus alone has 400 questions in the COVID category, a lot of which have 100+ predictions. I personally would find it quite easy to put together a group of forecasters with legibly good track record on COVID, but from working in this space I also do have a sense of where to start looking and who to ask.

Ah, good to know.

I also tried and failed to get my family to use it :( Among other things, I think they bounced off particularly hard on the massive drop-down of 10 different risk categories of ppl and various levels of being in a bubble.

I don't think the blocker here was fundamentally quantitative -- they think a bunch about personal finance and budgeting, so that metaphor made sense to them (and I actually expect this to be true for a lot of non-STEM ppl). Instead, I think UX improvements could go a long way.

My guess is that the focus on bubbles no longer makes sense, since almost no one is doing that now. Beyond that, I struggle to know what trade-offs could make the microcovid UI more approachable without making it not microcovid. A number of people (including me) already complain it's too restrictive, and cutting down on options makes that worse. It's really not obvious to me that the value generated by doing existing microcovid, but simpler, outweighs the loss of configurability. Also, I literally don't know how to make it simpler or more inviting beyond tossing out options. I don't mean it can't be done, I mean I'm a terrible UI designer who literally can't think of anything.

So I'd be really interested in:

  • arguments that microcovid is at the wrong place on the pareto frontier
  • ways to improve usability that don't trade off against specificity for power users
  • other numerical tools that could be useful to your family that aren't microcovid

For the last one: raemon has suggested a unitless assessment of "how risky is today compared to other days?" (and maybe location comparisons as well), created using microcovid, local prevalence numbers, and a single default human.

This will be a bit of a disappointing answer (sorry in advance), but I indeed think UI-space is pretty high-dimensional and that there are many things you can do that aren't just "remove options for all users". Sadly, the best way I know of to implement this is to just do it myself and show the result, and I cannot find the time for that this week.

What kind of simplifications would you like to see, while keeping the product something that's still fundamentally microcovid? 

The main thing I've gotten out of microcovid is reduced search costs. Having ballpark figures for the relative effects of situations and interventions, gathered in one place, by a source I consider reasonably trustworthy, makes it much cheaper to estimate which risks are worth taking and which interventions are worth bothering with.

"Trustworthy" in this context means "someone systematically looking for the correct numbers, as opposed to targeting a bottom line chosen for reasons other than correctness." As with most politicized information, the problem isn't that the information is unavailable, but that most sources are acting in bad faith. Noise drowns out the signal until search costs become prohibitive. Your whitepaper in particular is excellent in this respect. Showing your work goes a long way towards demonstrating good faith; sharing your sources is better; sharing how and why you're using them is best of all.

Even just having the available options collected together helped. I didn't know P100s were a thing until I read your whitepaper. I use one to attend otherwise-riskier-than-I'd-like events in relative safety and comfort.

It's a shame that Microcovid's numbers haven't been updated for omicron -- that would be the first item on my wishlist, along with the booster and test numbers mentioned in other comments -- but that doesn't diminish the work you've already done. Your team provided an identifiable signal in the noise, and I love it for that.

[edit: since you ask for dollar values, I'd be willing to contribute say $1k towards getting the numbers updated for omicron, tests, and boosters -- provided that it was done to similar standards and preferably by the same team. That's less because I'd derive that much value out of it personally (at this point I know what I need to) and more that I think this sort of work is an underfunded public good.]

The thing I'd get most value from for microcovid would be good information on how much (in dollars) a microcovid "costs". Yes this is personal, but you could have users enter in info about various person-specific parameters, like how much they value life, and help in answering questions like that. I'm not sure how much I'd pay for it. $100? $250?

More specific information about how risky activities are probably isn't that useful to me. I just need a rough sense.

Making your risk tolerance more customizable would be helpful. My policy with microcovid from the very beginning has been to look at the numbers and basically ignore the "high risk / medium risk" designations, because they don't match my risk tolerance and that divergence has only increased over the course of the pandemic (now that I'm vaccinated, 1 uCov is less costly because it carries less risk with it).

My policy with microcovid from the very beginning has been to look at the numbers and basically ignore the "high risk / medium risk" designations, because they don't match my risk tolerance and that divergence has only increased over the course of the pandemic (now that I'm vaccinated, 1 uCov is less costly because it carries less risk with it).


Not from the very beginning, but for most of this year this is how I've used microcovid.

There were periods of time when I would look at the site two or three times a week, but often just to check the 'adjusted prevalence' numbers, which are hidden by default under the "Details" dropdown. Oh, yeah, prevalence hasn't changed much, no need to update cached models, cool. Probably worth ~$20 aggregated.

The biggest update for me was realizing that going to the dentist was much less risky than it intuitively felt. This was great, probably worth at least $100 (or, I dunno, maybe worth negative money if the dentist is actually net harmful to me, which is possible 😂)

I'd agree that quantification of risks for Long Covid: probably at least $200.

Updates for future variants: probably at least $50.

My willingness to pay for a quantification of risks for long covid: $500

Have you seen this estimate? It's not a full calculator but lists sources enough that you can do your own math.

Thanks for sharing that! I guess I'd be willing to pay $500 (per year? maybe more than that per year?) for someone to do the math for me and keep it updated as new data comes in. (For example, the findings I mentioned here). I think part of it is that I'd just prefer to spend my free time doing other things; part of it is that I'm not very good at evaluating studies, so I don't trust that my attempts to update on new information would necessarily be valid.

(I did read your post back when you wrote it, and Zvi, Scott, and Matt Bell's posts around the same time, and kind of hand-waved my way to bumping my weekly budget to ~400 microCOVIDs, then roughly 1000 before Omicron kicked up, but am at a loss for how to update it as new findings come in.)

I've used microcovid occasionally, to make sure my intuitive feelings about risk were not completely crazy (and that did cause some updates; notably, putting numbers to staying outdoors had an influence.) I'm not a heavy user, but I do appreciate the work you've done!

I'd basically like to see more of the same - update microcovid.org for omicron and keep it going.

(FWIW, I'm in the Netherlands, where we just entered a new lockdown for omicron. So COVID unfortunately isn't "over".)

Agreed - I'm struggling to figure out how to apply microcovid estimates in the wake of omicron.  Without an adjustment for that, it seems like no other improvements would matter. I would be willing to pay $1000 if microcovid were updated to reflect omicron. (Please agree with my post if you have a similar willingness to pay, and agree with JoachimSchipper's post if you just generally support updates for omicron but don't have a similar willingness to pay).

(I'm a little confused as to why it's not clear that this is the best next step for microcovid, and if anyone has suggestions for making ad-hoc adjustments to use microcovid given omicron, would appreciate them!)

My current shitty attempts to adjust manually are here: https://www.lesswrong.com/posts/YtWqgzLDqxSvJDkCi/how-should-we-adjust-microcovid-estimates-for-omicron

I appreciate and regularly use microcovid to estimate the risks of social gatherings so I can decide how cautious to be socially. 

It would be nice if microcovid was updated to take omicron (and future variants) into account. An omicron update would be worth >$10 to me personally (though probably <$100), since it saves me the time of estimating the changing risk myself. 

If you feel reasonably confident in your ability to estimate the changing risk yourself with what you currently know, I'd be grateful for your input here

microCOVID has been a game changer for me and many people around me: the ability to get quantitative risk assessments radically improved our ability to efficiently spend risk. We recently stopped using it because of Omicron, and I'm very sad about it.

To me, one of the coolest things about microCOVID has been the proof of concept that a group of smart civilians can put together a useful tool that significantly shifts the efficient frontier for navigating Covid. That alone seems valuable to me, and I'd love to see the project keep going as a testbed for how to make similar projects succeed in future.

But, like most volunteer projects, it seems to be slowly sinking beneath the waves. I don't know what, if anything, could change that. A $50,000 grant from ACX? Providing an easier on-ramp for new volunteers? Some kind of Y Combinator for rationalist projects?

My use of MicroCovid.org so far has probably been very different to most LWers, as I'm based in Sydney, Australia. So far, I've mostly been content to follow public health guidelines and have used MicroCovid.org about 4 times a year for the last 2 years. Each time I used it, I found it very useful for thinking about risk and improving my implicit understanding for how risky different activities were. My usage pattern looks set to change pretty dramatically though, as Australia is in the early stages of Omicron going exponential.

Of all the current features, my favourite is the ability to share the link to a custom scenario you've created. I've used that a few times in group chats to discuss whether an event should go ahead, and if so, how risky the participants should think it to be. It would be nice if the link-share feature provided shortened links, but I understand the backend implementation of this is probably inconvenient. Given the incoming Omicron burst here, improvement of this feature would be worth $100 to me. This post is my official commitment to donating an extra $100 to MicroCovid.org if this suggestion is implemented (but don't go out of your way to do more than $100 of work if I'm the only one who wants this).

I'd also highly value model data for Omicron and future strains, but I trust that the team is already weighing the pros and cons of adding the information currently known vs. waiting for higher accuracy.

A massive thank you to anyone involved in MicroCovid.org if you're reading this. It has been very useful to so many people, and has collectively helped us all to reduce, manage, and feel comfortable at our chosen level of risk.


In the past, I mostly used the microcovid research page as a reference for me to get numbers that I put into my own Guesstimate model. The reason I didn't use microcovid directly is that I find it easiest to think independently about the numbers "how likely is it that the people I am coming in contact with have COVID", "how many people am I coming into contact with during this activity", and "how likely is a contagious person doing this activity to transmit COVID to me". I like to have each of those in my head and multiply them by myself.
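A minimal sketch of that three-number mental model; every value below is a made-up placeholder for illustration, not one of microcovid's actual parameters:

```python
# Three independent estimates, multiplied by hand as described above.
# All values are illustrative placeholders.
p_infected = 0.01    # chance a given contact currently has COVID
n_contacts = 5       # people I come into contact with during the activity
p_transmit = 0.06    # chance a contagious contact transmits to me

approx = p_infected * n_contacts * p_transmit            # fine while small
exact = 1 - (1 - p_infected * p_transmit) ** n_contacts  # avoids overshoot

print(f"approx: {approx * 1e6:.0f} uCov, exact: {exact * 1e6:.0f} uCov")
# → approx: 3000 uCov, exact: 2996 uCov
```

For small probabilities the straight multiplication and the exact 1 − (1 − p)ⁿ form barely differ, which is why doing it in your head works.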

This isn't really an answer to the question at hand, but I'd really like to see something similar for other risks like driving. If it was good I could see myself paying $1,000 for it.

For what it's worth, I hastily made a spreadsheet and found that regular heavy exercise was by far the largest improvement I could make to my life expectancy. Everything else paled in comparison. That said, I only evaluated interventions that were relevant to me. If you smoke, I imagine quitting would score high as well.

Good to know, thanks! My understanding is that with exercise, going from nothing to something has a huge benefit, but after that the returns diminish pretty rapidly. I'm being very qualitative here, but maybe, e.g., going from something to solid exercise is decent, and then solid to intense is small. Does that match what you found?

My analysis was from no exercise to regular high intensity exercise. There's probably an 80/20 in between, but I did not look into it.

Gotcha, thanks.

I might have come across it in the past but I don't remember. Thanks!

That last row, which adjusts for things like impairment, is particularly useful. I would still be willing to pay some good money for something a) more detailed (e.g. driving speed is something I've come across that seems important and would be cool to see info on) and b) where more time was invested. At less than 1.5 hours, I feel worried about the reliability.

I'm curious what changes you would make, based on the information? The things that affect driving risk are generally well known and Josh took a stab at quantifying them; what would you do differently if you found certain numbers were off by 20%?

Not strictly what you asked for but you might be interested in Dan Luu's write-up on car safety, which bears on my answer to the above question: safer cars cost more money, and a 20% increase in risk ups the amount I'm willing to pay for more safety. I could also see it being useful for weather conditions, which weren't an issue considered in the original report.

UPDATE: I made a Guesstimate model and it turns out that if you're already a basically safe driver, the safest car has to be really stupidly safe compared to a decent car to affect your risk of death much. The safety number is made up right now; I have a request out to Dan for his estimate of the safety increase, but otherwise I'm not planning on pursuing it, since it makes so little difference to my life.

UPDATE2: Dan says that among late model cars, the difference in safety is pretty negligible (but late model matters a lot)

I'm curious what changes you would make, based on the information? The things that affect driving risk are generally well known and Josh took a stab at quantifying them; what would you do differently if you found certain numbers were off by 20%?

In general I don't care too much about being off by 20%. There are some caveats/comments though.

  1. There are a lot of variables, and it feels to me like if each of them could be off by ~20%, the overall calculation could be off by, idk, a factor of 1-2? That matters somewhat to me, but still not too too much. I'm more interested in order-of-magnitude differences, or at least factors of more like 3-5.
  2. I value life a lot more highly than others. And with a higher value on life, differences like 20% start to matter more. Still not too much, and if I'm being honest they probably still aren't the types of differences that would actually change my behavior.
  3. I suppose the things that affect driving risks are well known, but are their magnitudes well known? I have two rationalist friends in particular I'm thinking of who believe/suspect that being a safe driver can have orders of magnitude differences. On the other hand, I don't share that impression, and it looks like you along with Josh Jacobson don't either. But none of us have spent much time investigating this question, so I'd think our confidences are all relatively low. Another example is driving speed. I did a quick investigation and it seems like the sort of thing that could have orders of magnitude impact. If so, that could actually be pretty influential for me, making local trips at low speed limits something I'm ok with. And maybe there are other big impact things we are overlooking that would show up in a closer investigation. That's part of the value I see in a "microcovid for cars/other things": knowing that others have investigated it thoroughly, I can feel comfortable that we're not missing anything important.

Not strictly what you asked for but you might be interested in Dan Luu's write-up on car safety

I am interested in the question of how much the car you're in affects your risk of death, but I'm not really getting that from his article.

UPDATE: I made a Guesstimate model and it turns out that if you're already a basically safe driver, the safest car has to be really stupidly safe compared to a decent car to affect your risk of death much. The safety number is made up right now; I have a request out to Dan for his estimate of the safety increase, but otherwise I'm not planning on pursuing it, since it makes so little difference to my life.

If you use the typical $10M valuation for life, then a micromort costs $10. You arrived at 40 micromorts/year, so $400/year. If your ballpark of the safety of a car affecting mortality by a factor of 1 is accurate, and if you own a car for, say, 5 years, then you might save something like $400/year * 5 years = $2,000 by choosing a safer car, but this probably isn't worth the investment of time or money. If you 10x the value you place on life it starts to matter though. I don't get this impression, but it's also possible that the safety of the car affects mortality by a factor of 5-10 instead of 0.5-2.
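Writing that arithmetic out explicitly (the $10M value of a statistical life, the 40 micromorts/year, and the 5-year ownership period are all the comment's own assumptions):

```python
# Micromort arithmetic from the comment above; all inputs are its
# stated assumptions, not independently sourced figures.
VALUE_OF_LIFE = 10_000_000                           # dollars
DOLLARS_PER_MICROMORT = VALUE_OF_LIFE / 1_000_000    # $10 per micromort

annual_micromorts = 40                               # estimated driving risk
annual_cost = annual_micromorts * DOLLARS_PER_MICROMORT   # $400/year

years_of_ownership = 5
total_savings = annual_cost * years_of_ownership     # $2,000 over ownership
print(f"${DOLLARS_PER_MICROMORT:.0f}/micromort -> ${total_savings:,.0f} total")
# → $10/micromort -> $2,000 total
```

Note the conclusion scales linearly in the value-of-life input, so a 10x higher valuation turns $2,000 into $20,000 and changes the decision.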

Most urgently, I'd like the bugs around missing vaccination data for SF and NYC (and possibly other locales) to be fixed.  https://github.com/microCOVID/microCOVID/issues/1280

I hope the team is not holding off on fixing critical functionality that is obviously broken / missing (e.g. this bug, adjustment for omicron) while they wait for data in response to this post. 

If the team is resource constrained in some way (money, people with particular skillsets), would love to know how the community can help!

I have not used microcovid much because I am not confident in its predictions and modeling assumptions, or I don't feel they are clearly enough defined to make the tool useful. The change that would be valuable to me (which I have difficulty operationalizing) would be if Microcovid were improved such that I could be much more confident in its modeling assumptions and could use it without having to try to make lots of guesses about which scenarios are well modeled. Maybe it would be sufficient just to explain which types of assumptions make for robust modeling outcomes (maybe this is already somewhere in the documentation). Otherwise, I will continue not to use it.

I think that in general maybe Microcovid works well in low-risk situations but breaks down in high-risk situations.

Prior to the recent Omicron surge and post-vaccination, I tended to estimate my covid risks by looking at reported covid case rates in my area, and assuming that as a fully vaccinated person, my risk of getting covid was likely lower than the average person in my area (Ohio), many of whom are not vaccinated, even if I went to restaurants and bars at about the same rate as I did in 2019.

Some examples of my confusion about microcovid's modeling assumptions...

Looking at the risk profiles for hypothetical other people, for fully vaccinated people in my state (Ohio):
Average person in your area: 11,000 microcovids
Has 4 close contacts whose risk profile you don't know, in an otherwise closed pod: 6,400
Has 10 close contacts whose risk profile you don't know, in an otherwise closed pod: 19,000

What is the definition of a close contact here? Does this mean somebody who they live with or something like that or just somebody who they regularly hang out with closer than 6 feet? It seems to me that the average person in my area (the mean-risk person since this mean is largely determined by the riskiest people, maybe not the median-risk person) has more or less gone back to normal and would have more than 10 close contacts if you're counting the people they live with, work with, or hang out with regularly. Or at least closer to 10 close contacts than 4.

Microcovid currently predicts that a fully mRNA-vaccinated person with a cloth mask who spends 8 hours in a bar acquires 380,000 microCovids (a 38% chance of getting covid), assuming that the average person in the bar went to a bar within the last 10 days. (This is reduced to 240,000 if the average person is 10+ feet away rather than 6+ feet, which seems more likely. But why doesn't the model care at all about people 20 feet away?) (As a side note, the default assumption was that most of the people in the bar had the risk of "an average person in your area", which doesn't seem right for a typical bar.)

And furthermore, the risk after 8 hours is equal to the risk after 4 hours, huh? I'd think that in 8 hours more people would be coming in and out, you'd be exposed to more possible infected people.

If this assumption were correct, then over the next week we'd see basically all the bartenders and bar workers here in Ohio getting covid simultaneously. (Or does it just max out at 4 hours so that the covid risk of working at a bar for a week is the same as for 4 hours? That just doesn't make sense.) Even if half or so of these cases were asymptomatic, it would probably be enough that many of the bars would shut down. Seems unlikely, but I guess we could see if it happens.
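To sanity-check that intuition: taking the 38% per-shift figure at face value and (unrealistically) treating shifts as independent, a full-time bar worker's weekly risk would compound as follows:

```python
# Back-of-envelope compounding check; the 0.38 figure is the model output
# quoted above, taken at face value, and independence is assumed.
p_per_shift = 0.38
shifts_per_week = 5

p_per_week = 1 - (1 - p_per_shift) ** shifts_per_week
print(f"weekly risk: {p_per_week:.0%}")
# → weekly risk: 91%
```

A ~91%-per-week infection rate among bar staff is the kind of prediction that would be visibly falsified fast, which is the point the comment is making.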

Likely one of the missing parameters here is the protection from recent infection. I could imagine that the majority of bar workers who haven't had covid in the last 3 months will get it over the next month or so, which wouldn't be enough to shut down many bars.

A one-night stand with somebody who has covid (modeled as kissing for 10 hours) my risk is only 100,000 microcovids. It seems bizarre to me that this risk would be about 1/2 to 1/4 the risk of going to a bar for 2 hours with 15 random people who had been in bars in the last 10 days. Maybe my intuitions are just way off. I suppose at the bar there could be multiple people near me with covid, and one of them might be much more infectious than the average person with covid. But I would think that all of them together wouldn't transmit as many viral particles to me as a single person with covid who I am kissing for 10 hours.

I want a way to translate microcovids into expected hours of my life lost, taking into account death, long covid, short term illness and all other effects of covid, if there are any.
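One rough way such a conversion could be sketched; every input below is an assumption for illustration (the personal IFR, long covid probability, and severity are exactly the contested numbers a real tool would need to source carefully):

```python
# Translate microcovids into expected hours of life lost.
# Every number here is an illustrative assumption, not a vetted estimate.
remaining_hours = 50 * 365 * 24            # assume ~50 years of life left

p_death = 0.002                            # assumed personal IFR
p_long = 0.05                              # assumed chance of lasting effects
long_covid_equiv = 0.10 * remaining_hours  # lasting effects ~ losing 10% of it
acute_hours = 5 * 16                       # ~5 unpleasant days, waking hours

hours_per_covid = (p_death * remaining_hours
                   + p_long * long_covid_equiv
                   + acute_hours)
hours_per_microcovid = hours_per_covid / 1_000_000

print(f"{hours_per_microcovid * 3600:.0f} seconds of expected life per microcovid")
```

With these made-up inputs, one microcovid comes out to roughly ten seconds of expected life, i.e. a 1,000-microcovid activity costs a few expected hours; the whole answer is dominated by the long covid and IFR assumptions.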

Given that the early data I've seen suggests that efficacy of 3 doses vs. omicron is similar to that of 2 doses vs. delta -- probably a bit lower, but at least in the same universe -- I've been using it largely as is, multiplying the final output by 2 to 3 based on what I've seen about the household transmission rate of Omicron relative to Delta. I know some other boosted people who have used it in a similar fashion. There's so much uncertainty in the model assumptions that its best use in my view is to get a very broad-strokes, order-of-magnitude idea of the risk, which has been extremely useful for friends and relatives who have just wanted a baseline idea of whether the risk of getting COVID when participating in a particular activity is more like .01% or .1% or 1% or 10%. (Note: I doubt that said friends and relatives would have been able to use it in this way without my help, since it requires a little math and they're not math types.) So I guess my main recommendations would be:

- don't get rid of it even if you aren't confident in the Omicron data - if you can produce results that are probably in the right order of magnitude, it's still useful! If you aren't up for a full Omicron overhaul, but you think there's some back-of-the-envelope adjustment that could give results that are probably the correct order of magnitude, I think applying that -- with suitable caveats about accuracy -- would be preferable to taking the site down or leaving it as is.

- It's easy to forget how many people are not math people whatsoever. Best practice in risk communication is often considered to be communicating numbers as percentages, as well as contextualized frequencies -- not just 'X-in-a-million', but something like "X out of Y people (for context, Y is roughly the number of people living in Z)" -- as there are a lot of people who don't really understand percentages and need a little context to understand frequencies. In my ideal world the output would make the chance of getting COVID from this specific activity clear as a percentage and as a contextualized frequency, as well as the chance of getting COVID from this activity in a year under the assumption that you do this activity every N weeks, where N can be entered by the user.
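A sketch of what that output format could look like; the function and its phrasing are invented for illustration, and a real tool would also add the "roughly the population of Z" context for the frequency:

```python
# Hypothetical risk-communication helper: one number rendered three ways
# (percentage, 1-in-N frequency, and annualized risk for a repeated activity).
def describe_risk(microcovids: float, times_per_year: int = 1) -> str:
    p = microcovids / 1_000_000
    p_year = 1 - (1 - p) ** times_per_year   # compounded over repetitions
    one_in = round(1 / p)
    return (f"{p:.2%} per activity (about 1 in {one_in:,}); "
            f"{p_year:.1%} over a year at {times_per_year}x/year")

print(describe_risk(500, times_per_year=26))
# → 0.05% per activity (about 1 in 2,000); 1.3% over a year at 26x/year
```

The annualized framing does a lot of the communicative work here: 500 microcovids sounds negligible per event, but repeating it biweekly makes the yearly risk legible.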