The phlogiston theory gets a bad rap. I 100% agree that theories need to constrain our anticipations, but I think you're taking for granted all the constraints phlogiston does impose.
The phlogiston theory is basically a baby step toward empiricism and materialism. Is it possible that our modern perspective causes us to take these things so much for granted that the steps phlogiston adds go unnoticed? In another essay you talk about walking through the history of science, trying to imagine the perspective of someone taken in by a new theory, and I found that practice particularly instructive here. I came up with a number of ways in which this theory DOES constrain anticipation. Seeing these predictions may make it easier to raise new predictions for existing theories, and it suggests that theories don't need to be rigorous and mathematical in order to constrain the space of anticipations.
The phlogiston theory says "there is no magic here, fire is caused by some physical property of the substances involved in it". By modern standards this does nothing to constrain anticipation further, but from a space of total ignorance about what fire is and how it works, the phlogiston theory rules out such things as:
The last example is particularly instructive, because the phrase "saturated with phlogiston" is correct as long as we interpret it to mean "no longer containing sufficient oxygen." That is a correct prediction, based on the same mechanism as our current (extremely predictive) understanding of what makes fires go out. The phlogiston model just got the language upside down and backwards, mistaking the absence of fuel for the presence of something that inhibits the reaction. They did call oxygen "dephlogisticated air", and so again, the theory says "this stuff is flammable wherever it goes, whatever the time of day, whatever incantation or prayer you say over it", which is correct, but so obviously true that we perhaps don't see it as constraining anticipation.
From my understanding of the history of science, it's possible that the phlogiston theory constrained the hypothesis space enough to get people to search for strictly material-based explanations of phenomena like fire. In this sense, a belief that "there is a truth, and our models can come closer to it over time" also constrains anticipation, because it says what you won't experience: a search for truth that gathers evidence and refines models over time, yet never gets better at predicting experience.
Is a model still useful if it only constrains the space of hypotheses that are likely to pan out with predictive models, rather than constraining the space of empirical observations?
Wow! I had written my own piece in a very similar vein, looking at this from a predictive processing perspective. It was sitting in draft form until I saw this and figured I should share, too. Some of our paragraphs are basically identical.
Yours: "In computer terms, sensory data comes in, and then some subsystem parses that sensory data and indicates where one’s “I” is located, passing this tag for other subsystems to use."
Mine: "It was as if every piece of sensory data that came into my awareness was being 'tagged' with an additional piece of information: a distance, which was being computed. ... The 'this is me, this is not me' sensation is then just another tag, one that's computed heavily based upon the distance tags."
I came here with this exact question, and still don't have a good answer. I feel confident that Eliezer is well aware that lucky guesses exist, and that Eliezer is attempting to communicate something in this chapter, but I remain baffled as to what.
Is the idea that, given our current knowledge that the theory was, in fact, correct, the most plausible explanation is that Einstein already had lots of evidence that this theory was true?
I understand that theory-space is massive, but I can locate all kinds of theories just by rolling dice or flipping coins to generate random bits. I can see how this "random thesis generation method" still requires X number of bits to reach arbitrary theories, but the information required to reach a theory seems orthogonal to its truth. It feels like a stretch to call coin flips "evidence." I'm guessing that's what Robin_Hanson2 means by "lucky draw from the same process"; perhaps there were a few bits selected from observation, and a few others that came from lucky coin flips.
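The distinction I'm gesturing at can be sketched numerically. The toy model below (my own illustration, not anything from the chapter; the space size of 2^30 is arbitrary) shows that coin flips do supply enough bits to *locate* a theory, but they single out the *true* theory no more often than chance:

```python
import math
import random

# Toy hypothesis space: 2**30 candidate theories, exactly one of them true.
N_BITS = 30
true_theory = random.getrandbits(N_BITS)

# "Locating" any one theory in this space costs N_BITS bits of description,
# whether those bits come from observation or from coin flips.
bits_to_locate = math.log2(2 ** N_BITS)
assert bits_to_locate == N_BITS

# But random bits land on the true theory only with probability 2**-N_BITS,
# about 9.3e-10 here. The description length is identical either way; what
# differs is the correlation with truth, which is what "evidence" buys you.
trials = 10_000
hits = sum(random.getrandbits(N_BITS) == true_theory for _ in range(trials))
print(f"random guesses that hit the true theory: {hits} / {trials}")
```

So the bits are necessary to reach a theory, but only bits *entangled with observation* raise the probability that the theory reached is the correct one.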
Perhaps a better question would be: given a large array of similar scenarios (someone traveling to look at evidence that could refute a theory), how can I use the insight presented in this chapter to constrain anticipation and do better than random at guessing which travelers are likely to see the theory violated, and which are not? Or am I thinking about this the wrong way? I remain genuinely confused here, which I hope is a good sign as far as the search for truth goes :)