For the record, the popular interpretation of "Popperian falsificationism" is *not* what Karl Popper actually believed. (According to Wikipedia, he did not even like the word "falsificationism" and preferred "critical rationalism" instead.) What most people know as "Popperian falsificationism" is a simplification optimized for memetic power, and it is quite simple to disprove. Then we can play *motte and bailey* with it: the *motte* being the set of books Karl Popper actually wrote, and the *bailey* being the argument of a clever internet wannabe meta-scientist about how this or that isn't scientific because it does not follow some narrow definition of falsifiability.

I have not read Popper's books, so I am only commenting here on the traditional internet usage of "Popperian falsificationism".

The good part is noticing that beliefs should pay rent in anticipated consequences. A theory that explains everything predicts nothing. In the "Popperian" version, beliefs pay rent by saying which states of the world are *impossible*. As long as they are right, you keep them. The moment they are wrong *once*, you mercilessly kick them out.

An obvious problem: How does this work with *probabilistic* beliefs? Suppose we flip a fair coin; one person believes there is a 50% chance of heads and 50% of tails, and another person believes it is 99% heads and 1% tails. How exactly is each of these hypotheses falsifiable? How many times exactly do I have to flip the coin, and what results exactly do I need to get, in order to declare each of the hypotheses falsified? Or are they both unfalsifiable, and therefore both equally unscientific, neither of them better than the other?
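For contrast, the Bayesian treatment of this coin takes a few lines of arithmetic: neither hypothesis assigns probability zero to any outcome, so neither can ever be strictly falsified, yet every flip shifts the odds between them. A toy sketch (the function name and the "H"/"T" encoding are mine):

```python
from math import log10

def log10_odds_shift(flips, p_heads_a=0.5, p_heads_b=0.99):
    """Accumulated log10 likelihood ratio favoring hypothesis A
    (fair coin) over hypothesis B (99% heads), given observed flips.
    Positive = evidence for A; negative = evidence for B."""
    total = 0.0
    for f in flips:
        p_a = p_heads_a if f == "H" else 1 - p_heads_a
        p_b = p_heads_b if f == "H" else 1 - p_heads_b
        total += log10(p_a / p_b)
    return total

# Ten flips of a fair coin, five heads and five tails:
print(round(log10_odds_shift("HTHTHTHTHT"), 1))  # 7.0
```

Ten unremarkable flips already put the odds at about ten million to one in favor of the fair coin; no flip count ever makes either hypothesis *impossible*.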

That is, "Popperianism" feels a bit like Bayesianism for mathematically challenged people. Its probability theory contains only three values: yes, maybe, no. Assigning "yes" to any scientific hypothesis is taboo (Bayesians agree), so we are left with "maybe" and "no", the latter for falsified hypotheses, the former for everything else. And we need to set the rules of the social game so that the "maybe" of science does *not* become completely worthless (i.e. equivalent to any other "maybe").

This is confusing again. Suppose you have two competing hypotheses, such as "there is a finite number of primes" and "there is an infinite number of primes". To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved. Wait, what?! How exactly would you falsify one of them *without* automatically proving the other?

I suppose Popper's answer might be a combination of the following:

- mathematics is a special case, because it is *not* about the real world -- that is, whenever we apply math to the real world, we have two problems: whether the math itself is correct, and whether we chose the right model for the real world, and the concept of "falsifiability" only applies to the latter;
- there is always a chance that we *left out something* -- for example, it *might* turn out that the concept of "primes" or "infinity" is somehow ill-defined (self-contradictory or arbitrary or whatever), therefore one hypothesis being wrong does not necessarily imply the other being right.

Yet another problem is that scientific hypotheses actually get disproved all the time. Like, I am pretty sure there were at least a dozen popular-science articles about experimental refutations of the theory of relativity upvoted to the front page of Hacker News. The proper reaction is to ignore the news and wait a few days until someone explains why the experiment was set up wrong, or why the numbers were calculated incorrectly. That is business as usual for a scientist, but it would pose a philosophical problem for a "Popperian": how do you justify believing in the scientific result during the interval *between* the publication of the experiment and the publication of its refutation? How long is the interval allowed to be: a day? a month? a century?

The underlying problem is that experimental outcomes are actually *not* clearly separated from hypotheses. Like, you get the raw data ("the machine X beeped today at 14:09"), but you need to combine it with some assumptions in order to get the conclusion ("therefore, the signal travelled faster than light, and the theory of relativity is wrong"). So the end result is that "data + some assumptions" disagree with "other assumptions". There are assumptions on both sides; either side could be wrong; there is no such thing as pure falsification.
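To make this concrete, here is a toy calculation, loosely inspired by the faster-than-light neutrino story; all the numbers are invented for illustration. The same raw measurement yields opposite conclusions depending on an auxiliary timing assumption:

```python
C = 299_792_458.0  # speed of light, m/s

def inferred_speed(distance_m, raw_time_s, assumed_delay_s):
    # The "experimental result" is raw data (raw_time_s) combined with
    # an assumption (assumed_delay_s). Change the assumption and the
    # conclusion flips, while the raw data stay exactly the same.
    return distance_m / (raw_time_s - assumed_delay_s)

distance = 730_000.0      # assumed baseline, metres
raw_time = distance / C   # pretend the signal moved at exactly c

v_wrong = inferred_speed(distance, raw_time, 60e-9)  # bogus 60 ns correction
v_right = inferred_speed(distance, raw_time, 0.0)    # correction removed

print(v_wrong > C)  # True: "relativity is falsified!"
print(abs(v_right - C) < 1.0)  # True: nothing to see here
```

What got "falsified" by the first calculation was not relativity but the assumed cable delay.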

Sorry, I got carried away...

It's been known for two thousand years that there are infinitely many primes.

https://primes.utm.edu/notes/proofs/infinite/euclids.html
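For what it's worth, Euclid's argument is short enough to run: multiply any finite list of primes, add one, and the smallest divisor greater than 1 of the result is a prime that cannot be on the list. A minimal sketch (the function name is mine):

```python
from math import prod

def missing_prime(primes):
    """Given a finite list of primes, return a prime not in the list.
    The smallest divisor > 1 of prod(primes) + 1 is itself prime, and
    it cannot divide prod(primes), so it is not in `primes`."""
    n = prod(primes) + 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no divisor up to its square root, so n is prime

print(missing_prime([2, 3, 5]))  # 31  (2*3*5 + 1 is prime)
print(missing_prime([2, 7]))     # 3   (2*7 + 1 = 15 = 3 * 5)
```

Note that the product plus one is not always prime itself; the proof only needs its smallest prime factor to be new.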