Must the frequentist refuse to assign probabilities to one-off events? Consider the question 'will it be sunny tomorrow?'. The frequentist can define some abstract class of events, say the class of possible weathers, and assume that every day the actual weather is randomly sampled from this imaginary population. She can then take some past weather records and treat them as a random sample from this hypothetical population. Suppose that in this large sample 30% of days were sunny; we can then say that approximately 30% of the hypothetical weathers in this population are sunny, and hence the probability of drawing a sunny day tomorrow is approximately 30%.
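The estimation step described above can be sketched in a few lines. Everything here is illustrative: the weather record is simulated, and the 30% figure is just the number used in the example.

```python
import random

random.seed(0)

# Hypothetical weather record: 1 = sunny, 0 = not sunny.
# Simulated so that roughly 30% of recorded days are sunny,
# matching the figure in the example above.
records = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]

# Under the "draw from an abstract population" view, the empirical
# frequency in the sample estimates the probability that tomorrow's
# draw from that population is sunny.
p_sunny = sum(records) / len(records)
print(round(p_sunny, 2))
```

The frequentist interpretation does all its work in the comments: the number computed is just a sample frequency, and calling it "the probability of sun tomorrow" relies on the assumption that tomorrow is another independent draw from the same hypothetical population.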

Obviously, the answer she gets hinges on the modelling assumptions she makes. She can, for instance, model the weather as a stationary autoregressive process (then the actual weather is sampled from an abstract population of weather time series), run her regression, compute the estimates, and arrive at a completely different answer. The same is true for Bayesians, though: they also have to specify priors and models, and their answers depend on how they do so. My point is only that the above line of thought allows a frequentist to make statements about the probabilities of one-off events.
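To make the "different model, different answer" point concrete, here is a minimal sketch of the autoregressive alternative. The series, the AR(1) form, and the coefficient values are all invented for illustration; the fit is plain least squares on lagged values rather than any particular library routine.

```python
import random

random.seed(1)

# Simulate a stationary AR(1) series of daily "sunniness" scores:
#   x_t = c + phi * x_{t-1} + noise
# phi_true and c_true are illustrative, not estimated from real data.
phi_true, c_true = 0.6, 1.0
x = [c_true / (1 - phi_true)]  # start at the stationary mean
for _ in range(5_000):
    x.append(c_true + phi_true * x[-1] + random.gauss(0, 1))

# Fit phi and c by ordinary least squares on (x_{t-1}, x_t) pairs.
lagged, current = x[:-1], x[1:]
n = len(lagged)
mx, my = sum(lagged) / n, sum(current) / n
phi_hat = sum((a - mx) * (b - my) for a, b in zip(lagged, current)) \
          / sum((a - mx) ** 2 for a in lagged)
c_hat = my - phi_hat * mx

# The forecast for tomorrow now conditions on today's value, so it
# will generally differ from the unconditional 30% frequency.
forecast = c_hat + phi_hat * x[-1]
```

The point of the sketch is the last line: once the abstract population is a population of time series rather than of independent days, the probability assigned to tomorrow depends on today, and the two models can disagree sharply about the same one-off event.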

It seems to me that this kind of philosophy is often employed in social science. When political scientists estimate the effect of democracy on GDP, what they are trying to find, statistically speaking, is the expected difference in GDP between a democratic and a non-democratic country drawn from their respective populations, all else equal. Those populations are not the *real-world* populations of democratic and non-democratic countries, but abstract populations from which the real-world countries are assumed to be drawn. I have never seen this logic explicitly spelled out, but it seems to be implicitly assumed, and it is required for applying frequentist techniques to social science questions.
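The estimand described above can be illustrated with simulated data. Everything here is made up: the "true" democracy effect, the GDP units, and the sample of countries are all hypothetical, and covariates ("all else equal") are ignored for simplicity.

```python
import random

random.seed(2)

# Simulated "abstract populations": each country is a draw with a
# democracy indicator d and a GDP score. The true effect of +5 GDP
# units is an arbitrary illustrative value.
true_effect = 5.0
countries = [(d, 20 + true_effect * d + random.gauss(0, 3))
             for d in (random.randint(0, 1) for _ in range(2_000))]

# The frequentist estimand: the expected GDP difference between a
# democratic and a non-democratic country drawn from their respective
# abstract populations, estimated here by a difference in sample means.
dem = [g for d, g in countries if d == 1]
non = [g for d, g in countries if d == 0]
effect_hat = sum(dem) / len(dem) - sum(non) / len(non)
```

The frequentist machinery (standard errors, confidence intervals) then treats the observed real-world countries as one realized sample from those abstract populations, which is exactly the implicit assumption the paragraph above describes.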