[ Question ]

In plain English - in what ways are Bayes' Rule and Popperian falsificationism conflicting epistemologies?

by Sandro P.
2nd Apr 2021
1 min read

Bayes' Rule dictates how much credence you should put in a given proposition in light of prior conditions/evidence. It answers the question: How probable is this proposition?

Popperian falsificationism dictates whether a given proposition, construed as a theory, is epistemically justifiable, if only tentatively. But it doesn't say anything about how much credence you should put in an unfalsified theory (right?). It answers the question: Is this proposition demonstrably false (and if not, let's hold on to it, for now)?

I gather that the tension has something to do with inductive reasoning/generalizing, which Popperians reject as not only false, but imaginary. But I don't see where inductive reasoning even comes into Bayes' Rule. In Arbital's waterfall example, it just is the case that "the bottom pool has 3 parts of red water to 4 parts of blue water" - which means that there just is a roughly 43% probability that a randomly sampled water molecule from that pool is red. How could a Popperian disagree?

What am I missing?

Thanks!

5 Answers, sorted by top scoring

Viliam

Apr 03, 2021


For the record, the popular interpretation of "Popperian falsificationism" is not what Karl Popper actually believed. (According to Wikipedia, he did not even like the word "falsificationism" and preferred "critical rationalism" instead.) What most people know as "Popperian falsificationism" is a simplification optimized for memetic power, and it is quite simple to disprove. Then we can play motte and bailey with it: the motte being the set of books Karl Popper actually wrote, and the bailey being the argument of a clever internet wannabe meta-scientist about how this or that isn't scientific because it does not follow some narrow definition of falsifiability.

I have not read Popper's books, therefore I am only commenting here on the traditional internet usage of "Popperian falsificationism".

The good part is noticing that beliefs should pay rent in anticipated experiences. A theory that explains everything predicts nothing. In the "Popperian" version, beliefs pay rent by saying which states of the world are impossible. As long as they are right, you keep them. The moment they are wrong once, you mercilessly kick them out.

An obvious problem: How does this work with probabilistic beliefs? Suppose we flip a fair coin; one person believes there is a 50% chance of heads/tails, and the other person believes it is 99% heads and 1% tails. How exactly is each of these hypotheses falsifiable? How many times exactly do I have to flip the coin, and what results exactly do I need to get, in order to declare each of the hypotheses falsified? Or are they both unfalsifiable, and therefore both equally unscientific, neither of them better than the other?
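To make the contrast concrete, here is a minimal sketch (the flip data is hypothetical) of how a Bayesian handles the two coin hypotheses: neither is ever "falsified" outright, but the posterior odds shift with every flip.

```python
def posterior_fair(flips, prior_fair=0.5, p_biased_heads=0.99):
    """Posterior probability that the coin is fair, given a flip
    string like 'HTHH', starting from the given prior."""
    like_fair, like_biased = 1.0, 1.0
    for f in flips:
        like_fair *= 0.5  # fair coin: P(H) = P(T) = 0.5
        like_biased *= p_biased_heads if f == "H" else 1 - p_biased_heads
    numerator = prior_fair * like_fair
    return numerator / (numerator + (1 - prior_fair) * like_biased)

print(posterior_fair("HTHHTTHT"))  # ~0.999998: four tails all but rule out 99%-heads
```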

That is, "Popperianism" feels a bit like Bayesianism for mathematically challenged people. Its probability theory only contains three values: yes, maybe, no. Assigning "yes" to any scientific hypothesis is a taboo (Bayesians agree), therefore we are left with "maybe" and "no", the latter for falsified hypotheses, the former for everything else. And we need to set the rules of the social game so that the "maybe" of science does not become completely worthless (i.e. equivalent to any other "maybe").

This is confusing again. Suppose you have two competing hypotheses, such as "there is a finite number of primes" and "there is an infinite number of primes". To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved. Wait, what?! How exactly would you falsify one of them without automatically proving the other?

I suppose the answer by Popper might be a combination of the following:

  • mathematics is a special case, because it is not about the real world -- that is, whenever we apply math to the real world, we have two problems: whether the math itself is correct, and whether we chose the right model for the real world, and the concept of "falsifiability" only applies to the latter;
  • there is always a chance that we left out something -- for example, it might turn out that the concept of "primes" or "infinity" is somehow ill-defined (self-contradictory or arbitrary or whatever), therefore one hypothesis being wrong does not necessarily imply the other being right.

Yet another problem is that scientific hypotheses actually get disproved all the time. Like, I am pretty sure there were at least a dozen popular-science articles about experimental refutations of the theory of relativity upvoted to the front page of Hacker News. The proper reaction is to ignore the news, and wait a few days until someone provides an explanation of why the experiment was set up wrong, or the numbers were calculated incorrectly. That is business as usual for a scientist, but it would pose a philosophical problem for a "Popperian": how do you justify believing in the scientific result during the interval between when the experiment and when its refutation were published? How long is the interval allowed to be: a day? a month? a century?

The underlying problem is that experimental outcomes are actually not clearly separated from hypotheses. Like, you get the raw data ("the machine X beeped today at 14:09"), but you need to combine it with some assumptions in order to get the conclusion ("therefore, the signal travelled faster than light, and the theory of relativity is wrong"). So the end result is that "data + some assumptions" disagree with "other assumptions". There are assumptions on both sides; either of them could be wrong; there is no such thing as pure falsification.

Sorry, I got carried away...

[-]TAG 4y

This is confusing again. Suppose you have two competing hypotheses, such as “there is a finite number of primes” and “there is an infinite number of primes”. To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved.

It's been known for two thousand years that there are infinitely many primes.

https://primes.utm.edu/notes/proofs/infinite/euclids.html

[-]dr_s 11d

To go with the coin example. Suppose I need to experimentally investigate the extent to which the coin is fair, calling p the probability of it coming up heads. I then have a continuum of possible hypotheses, which I may consider all a priori equally likely, p∈[0,1]. I then perform experiments (tossing the coin).

Bayesian updating tells me that my belief in each of the hypotheses after H heads and T tails should be

P(p|H,T) = p^H (1−p)^T / B(H,T)

(B is just the normalization factor, don't mind it).
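A quick numerical sketch of this update, assuming a uniform prior over p (so that B(H,T) is the Beta function B(H+1, T+1), and the posterior is the Beta(H+1, T+1) distribution):

```python
from math import comb

H, T = 7, 3  # hypothetical flip counts

# Posterior mean under a uniform prior: Laplace's rule of succession.
posterior_mean = (H + 1) / (H + T + 2)
print(posterior_mean)  # ~0.667

def posterior_density(p, H=H, T=T):
    """Beta(H+1, T+1) density: p^H (1-p)^T / B(H+1, T+1)."""
    inv_beta = (H + T + 1) * comb(H + T, H)  # 1/B(H+1, T+1) for integer H, T
    return inv_beta * p**H * (1 - p)**T

print(posterior_density(0.7))  # density peaks at the observed frequency H/(H+T)
```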

Popperian updating tells me something about this: that if I've... (read more)

[-]Sandro P. 4y

Thanks for your generous reply. Maybe I understand the bailey and would need to acquaint myself with the motte to begin to understand what is meant by those who say it's being 'dethroned by the Bayesian revolution'.

[-]Viliam 4y

Sorry for the jargon. But it's a useful concept, so here is the explanation:

  • Motte and Bailey Doctrines
  • All In All, Another Brick In The Motte

The latter also contains a few examples.

tkpwaeub

Apr 04, 2021


I'm not sure Bayes' Rule dictates anything beyond its plain mathematical content, which isn't terribly controversial:

P(A|B) = P(B|A) · P(A) / P(B)

When people speak of Bayesian inference, they are talking about a mode of reasoning that uses Bayes' Rule a lot, but it's mainly motivated by a different "ontology" of probability. 
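A quick arithmetic sketch of the rule above, using hypothetical numbers for a diagnostic test:

```python
# Hypothetical numbers: 1% base rate, 95% sensitivity, 5% false-positive rate.
p_a = 0.01              # P(A): prior probability of the disease
p_b_given_a = 0.95      # P(B|A): positive test given disease
p_b_given_not_a = 0.05  # false-positive rate
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # P(B), by total probability
p_a_given_b = p_b_given_a * p_a / p_b                  # Bayes' Rule
print(p_a_given_b)  # ~0.16: the positive test lifts 1% to about 16%
```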

As to whether Bayesian inference and Popperian falsificationism are in conflict - I'd imagine that depends very much on the subject of investigation (does it involve a need to make immediate decisions based on limited information?) and the temperaments of the human beings trying to reach a consensus. 

[-]Charlie Steiner 4y

Hm. I don't think people who talk about "Bayesianism" in the broad sense are using a different ontology of probability than most people. I think what makes "Bayesians" different is their willingness to use probability at all, rather than some other conception of knowledge.

Like, consider the weird world of the "justified true belief" definition of knowledge and the mountains of philosophers trying to patch up its leaks. Or the FDA's stance on whether covid vaccines work in children. It's not that these people would deny the proof of Bayes' theorem - it's just that they wouldn't think to apply it here, because they aren't thinking of the status of some claim as being a probability.

[-]TAG 4y

What were the major problems with JTB before Gettier? There were problems with equating knowledge with certainty... but then pretty much everyone moved to fallibilism, without abandoning JTB. So JTB and probabilism, broadly defined, aren't incompatible. There's nothing about justification, or truth, or belief that can't come in degrees. And regarding all three of them as non-binary is a richer model than just regarding belief as non-binary.
[-]Charlie Steiner 4y
I'm not really sure about the history. A quick search turns up Russell making similar arguments at the turn of the century, but I doubt there was the sort of boom there was after Gettier - maybe because probability wasn't developed enough to serve as an alternative ontology.
[-]TAG 4y
It remains the case that JTB isn't that bad, and Bayes isn't that good a substitute.
[-]Charlie Steiner 4y
"Classic flavor" JTB is indeed that bad. JTB shifted to a probabilistic ontology is either Bayesian, wrong, or answering a different question altogether.
[-]TAG 4y

I'll go for answering different questions. Bayes, although well known to mainstream academia, isn't regarded as the one epistemology to rule them all, precisely because there are so many issues it doesn't address.

Zac Hatfield-Dodds

Apr 03, 2021


Considered as an epistemology, I don't think you're missing anything.

To reconstruct Popperian falsification from Bayes, see that if you observe something that some hypothesis gave probability ~0 ("impossible"), that hypothesis is almost certainly false - it's been "falsified" by the evidence. With a large enough hypothesis space you can recover Bayes from Popper - that's Solomonoff Induction - but you'd never want to in practice.
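A minimal sketch of that limiting case (the numbers are made up): evidence to which a hypothesis assigned probability ~0 drives its posterior to ~0, no matter how favorable its prior.

```python
def posterior(prior, likelihood, likelihood_alt):
    """Posterior of hypothesis H after evidence E, against one alternative."""
    joint = prior * likelihood
    return joint / (joint + (1 - prior) * likelihood_alt)

# H called the observation "impossible" (likelihood ~0); the alternative
# found it unremarkable. A 99% prior does not save H.
print(posterior(prior=0.99, likelihood=1e-9, likelihood_alt=0.5))  # ~2e-7
```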

For more about science - as institution, culture, discipline, human activity, etc. - and ideal Bayesian rationality, see the Science and Rationality sequence. I was going to single out particular essays, but honestly the whole sequence is probably relevant!

[-]Sandro P. 4y

Thanks for the recommendation. To the sequence I go!


Ksaverus

Sep 20, 2024


New poster. I love this topic. My own view of the shortcomings of Bayesianism is as follows (speaking as a former die-hard Bayesian):

  1. The world (multiverse) is deterministic.
  2. Probability therefore does not describe an actual feature of the world. Probabilities only make sense as statistical statements.
  3. Making a statistical statement requires identifying a group of events or phenomena that are sufficiently similar that grouping makes sense. (Grouping disparate unique events makes the statistical statement meaningless, since we would have no reason to think subsequent events behave in the same way.)
  4. Events and phenomena like balls in urns, medical tests for diseases with large sample sizes, even some human events like sports games, have sufficient regularity that grouping makes sense and statistical statements are meaningful.
  5. Propositions about explanatory theories (are there infinitely many primes, is Newtonian physics "correct") do not have sufficient regularity - a statistical statement based on any group of known propositions logically yields no predictive value about unknown propositions. (Other than where you have a good explanatory theory linking them.)
  6. If probability statements about the correctness of an unknown explanatory proposition are therefore meaningless, priors and Bayesian updates are similarly meaningless. Example: Newton. Before Einstein, one’s prior for the correctness of Newton would have been high. Just as one’s prior on Einstein being correct right now is presumably high. But both are meaningless, since it is not the case that there is a proportion of universes in which they are true and a proportion in which they are false.
  7. Counterpoint: why do prediction markets seem to work? No idea! Still wrestling with this. Would love to hear your thoughts.
[-]JugglingJay 16d

I don't even think you need the universe/multiverse to be deterministic. Even if there are fundamentally random aspects of the universe, we would still need to consolidate those features that are similar and whose unpredictability can be modeled by statistics, like you said. As for prediction markets, I think their efficacy may be overblown, but I would need to look into it more.


JugglingJay

Sep 01, 2025*


This is a bit of an old post, but I felt I might be able to add to the discussion.  Keep in mind this is my own informal take on a rigorous philosophical topic, and I am by no means a professional.  My bias leans towards critical rationalism (Popperianism), but I'll try to be fair.  

I think you are correct in identifying induction as the fundamental tension between the two epistemologies.  Bayesian epistemology (as distinct from Bayes' Theorem) utilizes Solomonoff induction, whereas Popper is highly critical of inductive and probabilistic reasoning.  For Popper, induction isn't reasoning at all, if we view reasoning as a method by which we generate knowledge.  I'm sometimes tempted to call it "pseudo-inductive reasoning," which is just the fancy way of saying "guessing."  

To understand Popper's views, it's important to understand that knowledge for Popper has nothing to do with belief, credence, or confidence; those are all subjective states of one's mind.  Perhaps we can psychologically manipulate these mental states (perhaps upon realizing the Bayesian calculus), but those mental states don't constitute knowledge.  

If we view the scientific endeavor in three pieces, it might become clearer: 1) we propose ideas, hypotheses, and conjectures; 2) we scrutinize these hypotheses and eliminate the falsified ones; and 3) we adopt those hypotheses which have been highly scrutinized as our new scientific theories, acting as if they are true until they are usurped. For Popper, only (2) is the knowledge-generation process, while Bayesians greatly concern themselves with (1) and (3) (they don't neglect (2), but they don't see it as identically the essence of knowledge-generation).

Popper gave a simple example of how knowledge can be contained inside books, and so is not intrinsically tied to our beliefs or actions.  Rather, (in my understanding) knowledge is the record of the progress of falsification, and the hypotheses at the top of the current leaderboard.  How we generated the hypotheses and what we do with our theories once we have them are problems for psychology and rationality, not epistemology.  

[-]TAG 16d

Bayesian epistemology (as distinct from Bayes’ Theorem) utilizes Solomonoff induction

Not literally, it's uncomputable.

For Popper, induction isn’t reasoning at all, if we view reasoning as a method by which we generate knowledge

He has multiple objections to induction. One is that it is not deductively valid, which in fact is addressed by reframing it in probabilistic terms.

Solomonoff induction, if you could make it work, would address that issue of knowledge creation as well, since Solomonoff Inductors generate hypotheses mechanically. If a rationali... (read more)

[-]JugglingJay 15d

I'm not so sure it is addressed by reframing in probabilistic terms. At least for now, I'm convinced of a version of Popper's argument against probabilistic reasoning (admittedly, probably an over-simplified version), and I plan to familiarize myself with the more formal argument in the near future.

That would be fine for hypothesis-generation, and perhaps that is how our minds do it when we do so rationally, but I believe it is Popper's view that this isn't the same as knowledge-generation. Perhaps it generates good hypotheses, but the scrutinizing step is still necessary for knowledge. Popper is effectively saying that a Monte Carlo random search through hypotheses could still generate good hypotheses, and we would still be able to extract knowledge from them if the scrutiny step is applied. Knowledge need not require an effective algorithm (though I'm sure any strategy is better than none).

I think that is one of the fundamental differences in perspective. However, I'll be a little vague here because Popper didn't care as much about the one true definition of a thing, but rather its function and its explanation. So the question shouldn't be "Is it known?" but rather "What is going on when we say it is known?"
[-]TAG 14d

That argument ends with the conclusion that evidence can support any number of hypotheses. But everybody knows that. Any Bayesian or rationalist would say that's what you need simplicity criteria for.
[-]JugglingJay 13d

Sure, but that adds the additional assumption about simplicity, and it concedes that evidence doesn't weigh more in one generalization's favor over another. Bayesianism requires this extra axiom, which ironically makes it less simple (unless you want to reason to it from Bayes' Theorem, but that ignores the fact that P(E|G) should be 1). In contrast, simplicity is desired on a Popperian account because it makes hypotheses easier to test; clear simple predictions are usually easier to falsify.

I think Popper's point is that induction was never needed in the first place. Knowledge grows through the process I mentioned, and we don't need to assume any inductive trick that necessarily (even probabilistically) gets us to general laws. We can accept that knowledge is fundamentally trial-and-error or guess-and-check, and the phenomenon of knowledge-production loses nothing.
[-]TAG 13d

Why is that a problem? There is still a form of probabilistic inductive reasoning that works.

Why is that a problem? Everybody uses simplicity criteria, so no one has the problem.

Less simple than... something that doesn't work? The point of simplicity is to get the simplest working theory.

Ok. So they both use simplicity, and for different reasons. That isn't telling me induction doesn't work.

Induction is useful. There is value in knowing, even without certainty, what will happen next, even without having the explanatory knowledge of why it will happen. There is more value in having the explanatory knowledge as well, but that doesn't mean there is zero value in non-explanatory prediction. It's significant that even simple organisms use induction... they expect the food that made them sick before to make them sick again. That tells you that induction is useful... and that it is not difficult to implement. In any case, "induction isn't needed" is a different claim to "induction doesn't work".

Bayes, correctly understood, isn't something different to trial and error. It doesn't give you a mechanism to generate hypotheses, so you have to conjecture them. And it does give you a mechanism to falsify them (although it does so incrementally, not all at once like naive Popperism). It's a mistake to suppose that just because you have two different "schools", they can't agree on a single point. Popperism, correctly understood, isn't just trial and error, either.
[-]JugglingJay 11d

I'm just articulating Popper's views to the best of my ability, and he did not believe in probabilistic induction. He explains what we call 'inductive reasoning' as mere conjecture (perhaps calibrating for some psychological biases), but that's not actually reasoning (at least to Popper). The reason assuming simplicity as a fundamental criterion is a 'problem', as you said, is because you don't need to. As I mentioned, Popper can account for the desirability of simplicity without assuming it as an axiom.

That wasn't the argument against induction. It was an accounting for simplicity without assuming it.

Again, just through a Popperian lens: prediction is useful, even without explanation. However, prediction on its own is not knowledge. For Popper, induction was just never a thing to begin with. I think my point in saying "induction was never needed in the first place" was to emphasize that we can still account for knowledge production without induction. I agree it's different from saying "induction doesn't work," but if the logic of induction is not warranted (perhaps by showing it doesn't mathematically work), then induction isn't a thing.

Totally agree, but without induction, guess-and-check is perhaps the most primitive way of describing Popperianism. To be a little triggering, induction would change it to semi-prescient guess-and-check. Regardless, I agree this is a bit of an oversimplification.
[comment deleted] 11d
[-]TAG 11d

I'm interested in whether his views are supported by sound arguments. But induction can be performed by organisms and software too simple to form conjectures. Maybe that's a True Scotsman argument.

You don't need to assume it, because you can argue for it methodologically. There are more complex conjectures than simple ones. So if you conjecture something complex, it is less likely to be the right conjecture. Also, you have only a finite amount of time to consider conjectures, so you can't start at the end of an infinite list... but you can start with the simplest conjecture. That, of course, is roughly how Solomonoff induction works. The argument is in favour of relative simplicity: it doesn't assume that the universe has any absolute level of simplicity. I am not sure what "assuming it as an axiom" means. I can argue for simplicity on methodological grounds. Popper's argument for simplicity is also methodological, so what am I doing wrong that he is doing right?

That also sounds like a True Scotsman. Unless it's conjecture.

We can't account for the production of all kinds of knowledge without induction, because it produces one of the kinds. But I believe it is a thing, because it can be demonstrated directly, and because there are not any sound and valid arguments against it that I have seen. (What is the best argument against it, and why not quote it directly?)
[-]JugglingJay 11d

I'm actually not as well informed on how reasoning operates in other organisms, but if you are allowing for primitive structures that enable some kind of proto-inductive reasoning, then I have no idea why you wouldn't also allow for primitive structures that enable proto-conjecture. If there's a distinction between the two, then surely conjecture would operate on even more primitive mechanics than inductive reasoning. Automated systems are mostly doing optimization, which is sort of in a totally different camp, but I'd allow for the possibility that something like Attention in LLMs is simulating some kind of intuition. Still, that's a total guess on my part.

Neither was a No-True-Scotsman, since I defined what Popper meant by "reasoning" and "knowledge" in my very first post, so these exclusions were not merely arbitrary.

I'm certainly interested in diving into this more, since I find the math fascinating. Nonetheless, in my 3rd post, when I said […], I was giving you the option to accept it axiomatically vs. deriving it, and perhaps it's on me that I interpreted your following responses as taking the former position. However, if you derive it, you must derive it utilizing probabilistic reasoning (as it sounds like you do), which runs into the issue I mentioned; namely P(E|H)=1, because that's what evidence means. Evidence is a logical consequence of the hypothesis, so H⟹E, or if you want to view them as sets of possible worlds, H⊆E. This immediately implies P(E|H)=1. This result ruins the desired Bayesian updating for induction, which was the whole reason why you mentioned simplicity, which was the reason I assumed it was your axiom.

However, a better critique would be the very notion of probability as uncertainty. (This argument is my own.) Probability is merely a normalized measure defined on some σ-algebra (a definition of measurable sets). It can be used to model measurement errors, frequencies of outcomes, perhaps the number of indiscernible sym
[-]TAG 11d

Remember, I'm talking about algorithms as well, and simple ones. Much simpler than current LLMs, the kind that could be constructed decades ago. If you write an algorithm, it's a white box to you -- there's no mystery how it works. And you can write an algorithm that is hard-coded to expect patterns to repeat, that has the ultimate inductive bias. That is simpler than writing something that conjectures repeating patterns. For instance, with 1960s technology, you can write an inductor that figures out that q's are likely to be followed by u's in English text. Note that because it's a white box, you can show there is no time T, in its execution, where the conjecture that the patterns will repeat is formed, as opposed to a previous time where it hasn't... It expects repeating patterns from boot-up. Modern AIs are capable of inferring patterns that aren't hard-coded, and they are much more complex than 1960s GOFAI. This is not a coincidence.

You need more of an argument than the word "surely".

Did you mean "induction"?

Not under probabilistic reasoning! The hypothesis that text is in English implies a high probability that q's will be followed by u's, but not a certainty; there are some exceptions.

So what is non-probabilistic uncertainty? Something that doesn't follow the laws of probability? But that's an argument for something more general than probability... but the claim that probability theory can found inductivism doesn't depend on probability being completely general. (And I said probabilistic reasoning, not Bayes. There are people for whom Bayes is the one true probability theory and/or a completely general epistemology... but I'm not one of them.)

...on your part. The problem is his True-Scotsmanning. He has these claims that "knowledge is...", but they are based on defining knowledge, not on making discoveries about the world.

Non-explanatory predictive knowledge.

If you mean inductions in an agent occurring without that agent itself making a con
[-]JugglingJay 11d

Is it simpler? Based on your description, it doesn't sound simpler. In fact, if you asked me to write a program to conjecture repeating patterns, I would probably end up writing exactly what you would describe as prediction. From what I can tell, this is a distinction without a difference.

I actually don't need an argument, because you are the one claiming a distinction. My claim was an expression of my intuition given your hypothetical, which is meant to query you for said distinction.

Nope.

Here you are making a slight equivocation between different versions of E and different interpretations of probability:

H = the text is English, E = q's are always followed by u's in the text.

That's different from:

H = the text is English, E′ = q's are followed by u's in the text with high probability.

In the former case, E is not a logical consequence of H, and so P(E|H) could be less than 1 (though in this sense, probability would not be interpreted as a credence, but as a frequency of letter usage). In the latter case, E′ is a logical consequence of H, so interpreted as credence, P(E′|H)=1. In this case, E′ itself is a probabilistic statement (letter frequency) which can be true or false, and it is guaranteed to be true if H is true.

Uncertainty is a subjective feeling, and it still needs to be demonstrated that this feeling can be modeled by probability. I would be careful not to shift the burden of proof. It is the job of the Bayesian to prove we can model all uncertainty this way, not the job of the skeptic to disprove it. As the skeptic, I'll be careful not to strawman, but if probability really does ground epistemology, then I don't think it's a stretch to characterize this as assuming all boolean statements can be taken as inputs to a probability measure, representing our credence. If that's not the case, I'd be open to an alternative interpretation.

As I mentioned before, bickering over definitions was never Popper's intention. He was fa
[-]TAG 10d

Yes, obviously. You need to write a) code to generate conjectures, b) code to test the conjectures, c) code to reject bad conjectures, and go back to a). Whereas I only need to write b).

Here's the argument supporting the claim, again: "Note that because it's a white box, you can show there is no time T, in its execution, where the conjecture that the patterns will repeat is formed, as opposed to a previous time where it hasn't... It expects repeating patterns from boot-up."

Why does that matter?

It sometimes can, since probability sometimes works. Maybe it sometimes doesn't, but I don't see how that results in a sweeping dismissal of induction.

I'm not defending Bayesianism in that sense, as I said.

Maybe smuggling in definitions without inconvenient bickering was the intention... you are not automatically on the epistemological high ground when you refuse to engage in "semantics".

The ability of agents too simple to form conjectures to nonetheless perform inductive reasoning.

By its authors. But a number of criticisms and counterarguments have been published, e.g.:

By the way, an argument can be valid mathematically, but still fail to represent the real world. Conveniently, Vasrani's argument has that property. If Popper and Miller have both competencies, others could as well.
[-]JugglingJay 9d

Because you gave an example that didn't work?

It doesn't, but it was at least convincing to me that probabilistic reasoning is much more vague than it makes itself out to be.

Sounds good.

Agreed, but choosing to focus on the referent rather than the sense, while acknowledging the different senses, is the 'high ground' as you said, and it is explicitly engaging in semantics. I'm happy to discard or adopt terms if they are shown to be obfuscating or useful, respectively.

Perfect, I'm lumping these together because I'm realizing this is the crux, and perhaps you can consolidate further. I apologize if I didn't adequately respond to your other instantiations of these.

For your a) b) c) program, I was only talking about conjectures in that thread, so I would only need to write (a). Is (a) necessarily more complicated than whatever mechanism you have for induction? Also, for me (b) only consists of deductive falsifications, so what you call "induction" would still be part of (a).

For your white box example, it's not clear to me how initialized expectations are not the same as conjectural dispositions.

For simple agential models which cannot conjecture but still perform inductive reasoning, I'm curious what mechanisms you think are sufficient for conjecture and what mechanisms are necessary for induction? Obviously, for very simple agents, "conjecturing" and "reasoning" aren't exactly writing down logical statements in English. We're probably talking about encoding information somehow? Inductive bias, like how ML systems work?

Yeah, and I'll definitely be looking into those as well. I look forward to it!

Totally agree with the first part.

Definitely, and I hope to be one, but the discourse around it does not inspire confidence.
[-]TAG 9d

Why didn't it work?

Why is that interesting to me? AFAIC, the debate is about whether induction works. So I'm not interested in general point-scoring against Bayes or probability.

Forming conjectures without any attempt to refute or support them is not knowledge generation.

I'm stipulating that b) is a simple inductor.

No, it's just doing something in a hard-coded way. Not generating an English-level description of what to do, interpreting it, and executing it.
[-]JugglingJay 8d

Because either you are not updating credence (which I have no objection to), or you can't distinguish between hypotheses without assuming simplicity as an axiom (which, feel free to do so, but I already argued it doesn't need to be assumed). But I think this train of thought seems less important than the necessity of induction discussion in the other threads.

It doesn't need to be. I just found it more compelling.

Totally agree. So I think we may have talked past each other a bit because I was only comparing induction to conjecture, not the full knowledge-generation process. Sure, (b) alone is simpler than (a), (b), and (c) collectively, but that's not what I was arguing against.

Okay, well that's a bit of a bedrock of disagreement then.

Sure, so what is your sufficient condition for conjecture to be present, and what is your necessary condition for induction to be present?
[-]TAG 8d

So have I:

(Also, it is completely unclear why "having to assume simplicity" amounts to "not working". You could argue, as Vasrani does, that Bayes without simplicity doesn't work; I have argued that no real Bayesian ignores simplicity.)

Why not? An aircraft without wings or engine is simple, but it can't fly.

Because you think I was stipulating something else? Because you think there are no simple inductors?

You can tell that an algorithm is making predictions on a black-box basis, and you can tell it's an inductor if it does so immediately on boot-up. A conjecture-and-refutation machine has to be complex enough to form high-level representations, and make inferences from them.
[-]JugglingJay 8d
I think in each of these threads, we've started to go in circles, so if it's any consolation I'm interested in following your future posts, and if I post anything in the future I would be interested to see your critiques.  
