I have a hypothesis that seems to fit the data. These numbers are not given out for the purpose of collecting data on vaccine side effects (that's what VAERS is for). They are intended to provide specialized medical care directed at those who have recently gotten vaccines.
One commenter reported calling a Walgreens number. If this is representative, these are local pharmacy/medical practice numbers that people are calling, not some national reporting service.
Reassurance is one of the jobs of anyone providing medical care. "Even though you aren't feeling well after the treatment, you have nothing to worry about; the treatment is safe." is exactly what I would want someone to say if there was nothing either of us could do to help matters, especially if I was worried enough to call. You are especially likely to say this if you personally believe the vaccine is safe (which is very likely for someone answering such a number). If I were simply recording side effects, I wouldn't bother with that.
If you already believe the side effect is caused by the vaccine and think it's a very big deal, then when they offer that reassurance during the call, you will instead distrust them, and also want to report their untrustworthiness to your friends.
If you never call the number because you are not worried, or you do trust them, you have nothing notable to report. This would explain why every report looks like a reassurance that fell flat: your sample is biased strongly toward looking exactly that way, regardless of how common side effects, or the "there are no side effects" line, actually are.
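The selection effect above can be sketched as a toy simulation. All the probabilities here (30% of affected people call, half of callers distrust the reassurance) are invented for illustration, not taken from any data; the point is only that the composition of the reports that reach the friend group is the same no matter what the true side-effect rate is.

```python
import random

def simulate(true_rate, n=100_000, seed=0):
    """Toy model: only worried callers who distrust the reassurance
    ever produce a notable report. All probabilities are made up."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n):
        side_effect = rng.random() < true_rate
        worried = side_effect and rng.random() < 0.3  # assumed: 30% of affected call
        if not worried:
            continue  # no call, nothing notable to report
        # The line on the call is always reassurance ("nothing to worry about").
        distrusts = rng.random() < 0.5                # assumed: half distrust it
        if distrusts:
            reports.append("reassurance fell flat")
        # Reassured callers have nothing notable to pass on to friends.
    return reports

for rate in (0.01, 0.05, 0.20):
    reports = simulate(rate)
    flat = sum(r == "reassurance fell flat" for r in reports)
    print(f"true rate {rate:.0%}: {len(reports)} reports reach friends, "
          f"{flat / len(reports):.0%} look like a failed reassurance")
```

By construction, 100% of surviving reports look like a failed reassurance at every true rate; only the *count* of reports varies, and a small friend group can't distinguish the counts.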
And all that assumes that this game of telephone, chaining between the medical establishment, the people taking the calls, your friends reporting the call, and then your fuzzy recollection, didn't distort any of the data.
Currently, this "explains" your data for me. As in, I am no longer confused about your reports about your friends; I think I understand what happened. There is no refusal to collect data involved, at least not related to these calls.
Do you doubt this hypothesis? If so, what evidence could you provide against it? What evidence would we need to collect to figure out whether the hypothesis is true?
I would expect that if one called such a number, one could confirm that the other person is doing no data collection about the likelihood of side effects, that the line in context is intended for reassurance if it comes up, and the entire call will otherwise be completely in line with providing post-vaccine medical care. Averaging across multiple calls, of course.
If I'm wrong, I would expect that getting a full description of an entire call would show that the line in question is used as a shutdown, side effects are not being recorded (but they are supposed to be recorded every time according to the rules of the job), there is no reasonable medical triage going on, and the numbers in question are intended purely to advocate for vaccine safety. Also averaging across multiple calls.
Suppose 50% of vaccinated people would attend this event, and so would 50% of unvaccinated people, after considering the risks (ergo, there is no risk compensation). However, only vaccinated people are allowed to go to the event. Then the vaccinated people could have increased rates of Covid compared to unvaccinated people because of being more likely to attend superspreader events, even though they did not increase their level of risk compared to the unvaccinated population.
Whether this is the actual reason for the apparent negative effectiveness would depend on the actual percentages, and how common/dangerous superspreader events really are.
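To make the arithmetic above concrete, here is a sketch with invented numbers: an assumed 50% true per-exposure efficacy, an assumed 5% background infection risk, an assumed 30% attack rate at a superspreader event, and the 50% attendance figure from the scenario (with the unvaccinated barred). None of these parameters come from real data; they only show that apparent negative effectiveness can fall out of the event restriction alone.

```python
# All parameters are hypothetical assumptions for illustration.
EFFICACY = 0.50          # assumed true vaccine efficacy per exposure
P_BASE_UNVAX = 0.05      # assumed background infection risk, unvaccinated
P_EVENT_UNVAX = 0.30     # assumed attack rate at a superspreader event
ATTEND_FRAC = 0.50       # vaccinated attendance; unvaccinated are barred

p_base_vax = P_BASE_UNVAX * (1 - EFFICACY)
p_event_vax = P_EVENT_UNVAX * (1 - EFFICACY)

# A vaccinated attendee can be infected via background exposure OR the event.
p_vax_attendee = 1 - (1 - p_base_vax) * (1 - p_event_vax)

p_vax = ATTEND_FRAC * p_vax_attendee + (1 - ATTEND_FRAC) * p_base_vax
p_unvax = P_BASE_UNVAX  # the unvaccinated cannot attend

apparent_effectiveness = 1 - p_vax / p_unvax
print(f"vaccinated rate:        {p_vax:.3f}")    # 0.098
print(f"unvaccinated rate:      {p_unvax:.3f}")  # 0.050
print(f"apparent effectiveness: {apparent_effectiveness:+.0%}")  # -96%
```

With these numbers the vaccinated group's infection rate is roughly double the unvaccinated group's, for an apparent effectiveness of about -96%, even though the vaccine halves risk per exposure and nobody changed their risk tolerance. Whether the effect is this large in reality depends entirely on the actual percentages.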
I searched the CDC's Vaccine Adverse Event Reporting System (VAERS) and there are 474 reported cases of abnormal blood pressure following COVID-19 vaccination. Looking further in the Google search, I found a study (n = 113) which indicated increased risk of high blood pressure after vaccination, especially after previous infection.
Plainly, not everyone in the healthcare system is on the same page about side effects. I'd err on the side of the Walgreens person you talked to being more accurate, given that high blood pressure is a known side effect. Not known by that Nebraska Medicine doctor, apparently.
I'm wondering what the details of your friends' reporting attempts are. Who exactly did they talk to? VAERS is the official U.S. reporting system; what were their experiences with that? If there is an underreporting problem, we need as many specifics as we can get to combat it. Given that some vaccines do have well-known side effects among certain demographics, lots of people have been able to report their side effects successfully. We would need to figure out why your friend group has been far less successful to correct the issue.
Without an explicit probability calculation, how exactly are we supposed to determine the actual level of side effects, versus what the collected and reported medical data suggest, versus what the average person believes is true? Perhaps all three are biased and/or untrustworthy. I'm not sure where we can go from there. Has personal testimony from our own social groups become the best we can do?
What does it mean to Left-box, exactly? As in, under what specific scenarios are you making a choice between boxes, and choosing the Left box?
If you compare deaths to harms, you can end up scared of vaccines or Covid, depending on which you compare. If no one died of a vaccine in your group but one or two people were hurt by Covid, you will be scared of Covid. The question is, where does the framing come from? If no one died of Covid or a vaccine in your group (which seems to be the most likely case for a given group), which do you become scared of, and why?
Perhaps such probabilities are based on intuition, and happen to be roughly accurate because the intuition has formed as a causal result of factors influencing the event? In order to be explicitly justified, one would need an explicit justification of intuition, or at least intuition within the field of knowledge in question.
I would say that such intuitions in many fields are too error-prone to justify any kind of accurate probability assessment. My personal answer then would be to discard probability assessments that cannot be justified, unless you have sufficient trust in your intuition about the statement in question.
What is your thinking on this prong of the dilemma (retracting your assessment of reasonableness on these probability assessments for which you have no justification)?
My approach was not helpful at all, which I can clearly see now. I'll take another stab at your question.
You think it is reasonable to assign probabilities, but you also cannot explain how you do so or justify it. You are looking for such an explanation or justification, so that your assessment of reasonableness is backed by actual reason.
Are you unable to justify any probability assessments at all? Or is there some specific subset that you're having trouble with? Or have I failed to understand your question properly?
A vaccination requirement could result in lower apparent effectiveness; so could risk compensation. In order to determine how much risk compensation occurred, we have to determine how much the vaccination requirement lowered the effectiveness. Without that analysis, concluding that risk compensation has a big enough effect to cause or contribute significantly to negative effectiveness is premature.
I am otherwise unsure of what you are trying to get at. The unvaccinated were prevented from doing a risky activity, and the vaccinated were allowed to do the activity (with a lower risk due to their status), yes.