This could mean you would also have to reject thirding in the famous Sleeping Beauty problem, which contradicts a straightforward frequentist interpretation of the setup: if the SB experiment were repeated many times, one third of the awakenings would be Monday-Heads awakenings, so if SB guessed "the coin came up heads" after each awakening, she would be right with frequentist probability 1/3.
Of course there are possible responses to this. My point is just that rejecting Katja's doomsday argument by rejecting SIA-style anthropic reasoning may come with implausible consequences in other areas.
Interesting. Are g and intelligence the same thing? If not, how do they relate to each other?
No, N is a prior. You can't draw conclusions about what a prior is like that. N could be tiny and there could be a bunch of civilizations anyway, that's just unlikely.
I just quoted the paper. It stated that N is the expected number of civilizations in the Milky Way. If that is the case, we have to account for the fact that at least one civilization exists, which the authors didn't do. Otherwise N is just the expected number of civilizations in the Milky Way under the assumption that we didn't know we existed.
Sure, prior in the sense of an estimate before you learn any of your experiences. Which clearly you're not actually computing prior to having those experiences, but we're talking in theory.
"before you learn any experience"? I.e. before you know you exist? Before you exist? Before the "my" refers to anything? You seem to require exactly what I suspected: a non-indexical version of your statement.
SIA is just a prior over what observer one expects to end up with.
There are infinitely many possible priors. One would need a justification that the SIA prior is more rational than the alternatives. FNC made much progress in this direction by only using Bayesian updating and no special prior like SIA. Unfortunately there are problems with this approach. But I think those can be fixed without needing to "assume" some prior.
This is basically just downweighting things infinitely far away infinitely low.
All things in the universe get weighted and all get weighted equally. Things just get weighted in a particular order, nearer things get weighted "earlier" so to speak (not in a temporal sense), but not with more weight.
It's accepting unboundedness but not infinity. Unboundedness has its own problems, but it's more plausible than infinity.
"Unboundednes" is means usually something else. A universe with a sphere or torus topology is unbounded but finite in size. I'm talking about a plane topology universe here which is both unbounded and infinitely large.
But you seem to have something like hyperreal numbers in mind when you talk about infinity. Hyperreal numbers include "infinite numbers" (the first is called omega) which are larger than any real number. But if cosmologists talk about a universe which is spatially infinite, they only say that for any positive real number n, there is a place in the universe which is at least n+1 light-years away. They do not say "there is something which is omega light-years away". They do not treat infinite as a (kind of) number. That's more of a game played by some mathematicians who sometimes like to invent new numbers.
I'm not sure what distinction you're drawing here. Can you give a toy problem where your description differs from mine?
You might be certain that 100 observers exist in the universe. You are not sure who might be you, but one of the observers you regard as twice as likely to be you as each of the other ones, so you weigh it twice as strongly.
But you may also be uncertain of how many observers exist. Say you are equally uncertain about the existence of each of 99 and twice as certain about the existence of a hundredth one. Then you weigh it twice as strongly. (I'm not quite sure whether this is right.)
That's one objection among several, but the periodicity isn't the real issue - even without that it still must repeat at some point, even if not regularly.
Even in a finite universe there might be repetition. Possibly our universe is finite and contains not only Earth but also a planet we might call Twin-Earth very far away from Earth. Twin-Earth is a perfect duplicate of Earth. It's even called "Earth" by twin-earthlings. If a person X on Earth moves only his left arm, Twin-X on Twin-Earth also moves only his left arm. But this is merely (perfect) correlation; there is no stronger form of dependence, like counterfactual dependence. If X had moved his right arm instead, Twin-X would still have moved only his left arm. This could not be the case if X and Twin-X were identical. Also, if X hurts his foot, Twin-X will also hurt his foot, but X will only feel the pain caused by X's foot and not the pain caused by the foot of Twin-X. They don't share a single mind.
All you really have is an irrational set of ratios between various "states of the world", calling that "infinity" seems like a stretch.
I would rather say that it's a stretch to regard infinity as an ordinary number, as you are apparently doing. The limit view of infinity doesn't do this. "Infinity" then just means that for any real number there is another real number which is larger (or smaller).
those hypotheses are more likely to be true
What do you mean by true here?
What we usually mean. But you can remove "to be true" here and the meaning of the sentence stays the same.
Probability is just a means to predict the future.
We can perfectly well (and do all the time) make probabilistic statements about the present or the past. I suggest regarding probability not so much as a "means" but as a measure of uncertainty, where P(A)=1/2 means I am (or perhaps: I should be) perfectly uncertain whether A or not A. This has nothing to do with predictions. (But as I said, the hypothesis of an infinite universe makes predictions anyway.)
Probabilities attached to statements that aren't predictive in nature are incoherent.
Where is the supposed "incoherence" here?
The best characterization of incoherence I know treats it as a generalization of logical contradiction: A and B are (to some degree) incoherent if P(A and B) < P(A)*P(B). Negative statistical dependence. I.e. each one is evidence against the other. But you seem to mean something else.
The same thing is true of the "hypothesis" that solipsism is false. It has no information content.
It is verified by just a single non-mental object. It has information content, just a very low one. Not as low as "something exists" (because this is also verified by mental objects) but still quite low. Only tautologies have no (i.e. zero) information content.
The problem with this line of reasoning is that we commonly use models we know are false to "explain" the world. "All models are wrong, some models are useful".
The common answer to that is that Newton's theory of gravity isn't so much wrong as it is somewhat inaccurate: a special case of Einstein's more accurate theory. A measure of (in)accuracy is generalization error in statistics. Low generalization error seems to be for many theories what truth is for ordinary statements. And where we would say of an ordinary statement A that it is "more likely" than another statement B, we would say of a theory X that it has a lower expected generalization error than a theory Y.
Also re causality, Hume already pointed out we can't know any causality claims.
Well, not only that! Hume also said that no sort of inductive inference is justified, probabilistic or not, so all predictions would be out of the window, not just ones about causal relationships. Because the evidence is almost always consistent with lots of possible but incompatible predictions. I would say that an objective a priori probability distribution over hypotheses (i.e. all possible statements) based on information content solves the problem. For indexical hypotheses I'm not quite certain yet, maybe there is something similar objective for an improved version of SIA. If there is no objective first prior then Hume is right and verificationism is wrong. What you predict would rely on an arbitrary choice of prior probabilities.
I think explanations are just fine without assuming a particular metaphysics. When we say "E because H", we just mean that our model H predicts E, which is a reason to apply H to other predictions in the future. We don't need to assert any metaphysical statements to do that.
That doesn't work, for several reasons. A barometer reading predicts a storm, but it doesn't explain it. Rather, there is a common explanation for both the barometer reading and the storm: air pressure.
Also, explanation (because statements) are asymmetric. If B because A then not A because B. But prediction is symmetric: If A is evidence for B, then B is evidence for A. Because one is evidence for the other if both are positively probabilistically dependent ("correlated"). P(A|B) > P(A) implies P(B|A) > P(B). The rain predicts the wet street, so the wet street predicts the rain. The rain explains the wet street, so the wet street doesn't explain the rain.
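The symmetry claim here is just the standard Bayesian evidence relation, spelled out (assuming P(A) > 0 and P(B) > 0):

```latex
P(A \mid B) > P(A)
\iff P(A \wedge B) > P(A)\,P(B)
\iff P(B \mid A) > P(B)
```

Each step just multiplies or divides by a positive probability, which is why evidential support, unlike explanation, runs in both directions.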
There are even some cases where H explains E but H and E don't predict each other, i.e. they are not positively statistically dependent. These cases are known as Simpson's paradox.
N is the average number of civilizations per galaxy.
I was going to agree with this, but I realize I need to retract my earlier agreement with this statement to account for the difference between galaxies and the observable universe. We don't, in fact, have evidence for the "fact that N is at least 1." We have evidence that the number of civilizations in the universe is at least one. But this is likely to be true even if the probability of a civilization arising in any given galaxy is very low.
SDO treat N as the expected number of civilizations in the Milky Way, i.e. in our galaxy (page 2):
The Drake equation was intended as a rough way to estimate of the number of detectable/contactable civilizations in the Milky Way
If they interpret N in this way, then N is at least 1. They didn't account for this fact in a systematic way, even if some parameter estimates should already include some such considerations. (From your quote it isn't clear to me whether this is really the case. Also, SIA is a fairly new theory and as such unlikely to have played a significant role in the historical estimates they looked at.)
But what is it from a gods-eye perspective?
It doesn't seem meaningful to ask this.
It just occurred to me that you still need some prior probability for your sentence which is smaller than 1. If you condition on "My observations so far are ike-ish" and this statement for you has unconditional probability 1, then conditioning on it has no effect. Conditioning on a probability 1 statement is like not conditioning at all. But what is this prior probability, and how could it be smaller than 1 for you? It seems to be necessarily true for you. I guess we are forced to consider some non-indexical (gods-eye) version of that statement, e.g. like the one I suggested in my last comment. Also, your characterization of (your version of) SIA was quite informal, so there is room for improvement. My personal goal would be to make SIA (or a similar principle) nothing more than a corollary of Bayesian updating, possibly together with a general theory of indexical beliefs.
If some observer only has some probability of having had the same set of observations, then they get a corresponding weight in the distribution.
Good idea. Maybe it is not just the probability that the hypothetical observer had the same observations; it's the probability that the hypothetical observer exists and had the same observations. Not just what observations observers made is often a guess, but also how many of them exist. Also, I don't think "had the same observations" is quite right to characterize the "total evidence". Because there could be observers like a Swamp Man (or Boltzmann brain etc.) which have the same state of mind as you, and thus arguably the same total evidence, but whose memories formed just by accident and not because they actually had the experiences/observations they think they remember. So I think "has the same state of mind" is better, to not exclude those freak observers to begin with, because we might be such a freak observer.
This breaks all Bayesian updates as probabilities become impossible to calculate.
I think you are referring to what is known as the measure problem in cosmology: What is the probability that a cow is two-headed if there are infinitely many one-headed and two-headed cows in the universe? Surely it is still much more probable that a cow is one-headed. There are apparently several solutions proposed in cosmology. For a universe which is spatially infinite, I would estimate the probability of a cow being one-headed by the ratio of the expected number of one-headed cows to the expected number of cows -- in a growing imaginary sphere around us. The sphere is of finite size, and we take the probability of a cow being one-headed as the limit of the ratio as the size of the sphere goes towards infinity. Then surely the sphere at any finite size contains many more one-headed cows than two-headed cows (the former are expected in much larger numbers because two-headedness is not evolutionarily advantageous for cows). There are other proposed solutions. I think one can be optimistic here that probabilities are not impossible to calculate.
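A minimal numeric sketch of the growing-sphere proposal (the densities are made-up illustration values, not estimates): with constant expected densities, the volume factor cancels, so the ratio is the same at every finite radius and the limit trivially exists.

```python
import math

# Assumed illustration densities (expected cows per cubic light-year);
# one-headed cows are taken to be vastly more common:
DENSITY_ONE_HEADED = 1e-9
DENSITY_TWO_HEADED = 1e-15

def p_one_headed(radius_ly):
    """Ratio of expected one-headed cows to all expected cows
    inside an imaginary sphere of the given radius."""
    volume = 4 / 3 * math.pi * radius_ly ** 3
    one = DENSITY_ONE_HEADED * volume
    two = DENSITY_TWO_HEADED * volume
    return one / (one + two)

# The volume factor cancels, so the ratio is the same at every finite
# radius, and the limit as the sphere grows is well defined:
for r in (1e3, 1e6, 1e9):
    print(r, p_one_headed(r))
```

The interesting open question is which cases (e.g. non-uniform densities) make the limit depend on how the region grows; this sketch only covers the uniform case.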
Which is a great reason to exclude infinite universes a priori.
I think the measure problem is merely a practical problem for us. Which would be an instrumental reason not to consider infinite universes if we don't like to work on the measure problem (if only considering universes with finite size has higher utility for us). But we would need an epistemic reason, in contrast to an instrumental reason, to a priori exclude a possibility by assigning it probability 0. I think there are three types of epistemic reasons to do this:
if we think that the idea of an infinite universe is logically contradictory (that seems not to be the case)
if we think that an infinite universe is infinitely unlikely (That seems only the case for infinite universes with infinite information content. But infinite universes can plausibly have finite and even low finite information content.)
if something to which we have direct epistemic access is not the case. I currently do not have a headache. Since we are perfectly competent in judging the contents of our mind, and a headache is in the mind, my probability of "I have a headache" is 0. (Unlike headaches and other observational evidence, infinite universes are not mental objects, so this option is also not viable here.)
To highlight the difference between practical/instrumental reasons/rationality and epistemic reasons/rationality: Consider Pascal's Wager. Pascal argued that believing in God has higher expected utility than not believing or being agnostic. Whether that argument goes through is debatable, but in any case it doesn't show that God exists (that his existence is likely). If subjectively assigning high probability to a hypothesis has high utility, that doesn't mean that this hypothesis actually has high probability. And the other way round.
Such an infinite universe with pseudo-randomness might be nearly indistinguishable from one with infinite information content.
I don't know how this is relevant.
You seemed to specifically object to universes with finite information content on grounds that they are just (presumably periodic) "loops". But they need not be any more loopy than universes with infinite information content.
I wrote two posts on this: https://www.lesswrong.com/posts/PSichw8wqmbood6fj/this-territory-does-not-exist and https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism. I don't think ontological claims are meaningful except insofar as they mean a set of predictions, and infinite ontological claims are meaningless under this framework.
But you seem to be fine with anything on which you could possibly update. E.g. there could be evidence for or against the plane topology of the universe. The plane topology means the universe is infinitely large. And as I said, SIA seems to make the significant prediction that evidence which implies a finite universe has probability 0.
I know this opens a huge can of worms, but I also wanted to comment on this one:
By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events. By talking about atoms, I can compress the description of the chemical reactions I've observed. Sure, but a simpler map implies nothing about the territory.
If hypotheses (e.g. about the existence of hands and chairs and rocks and electrons and forces and laws) which assume the existence of things external to our mind greatly reduce the information content of our mental evidence, then those hypotheses are more likely to be true than a pure phenomenological description of the evidence itself, because lower information content means higher a priori probability. If you entertained the hypothesis that solipsism is true, this would not compress your evidence at all, which means the information content of that hypothesis would be very high, which means it is very improbable. The map/territory analogy is not overly helpful here, I think. If by "map" you mean hypotheses, then simpler hypotheses do in fact (probabilistically) "imply" something about the world, because simpler hypotheses are more likely to be true.
Another point: There are many people who say that the main task of science is not to make useful technology, or to predict the future, but to explain our world. If you have some evidence E and a hypothesis H, and that hypothesis is supposed to explain your evidence, then that explanation is correct if and only if the following is true:
E because H.
But the truth of any statement of the form of "y because x" arguably implies the truth of x and y. So H must be true in order to correctly explain your evidence. If H is true, and H asserts the existence of things external to your mind (hands, chairs, laws etc.) then those things exist. Almost any hypothesis talks about objects external to your mind. In fact, we wouldn't even call beliefs about objects internal to our mind ("I have a headache", "I have the visual impression of a chair in front of me", "I have a memory of eating pizza yesterday") hypotheses at all, we would just call them "evidence". If no external things exist, then all "y because x" statements would be false.
I'm not sure about your argument involving the "level IV multiverse". I think it is equivalent to modal realism (everything which possibly exists, exists). I'm not sure whether the information content of that hypothesis is high or low. (It is infinite if we think of it as a long description of every possible world. If the information content is very high, then the hypothesis is likely to be false, which would justify our belief that it is false. If it is in fact false, we have a justified true belief in the falsity of modal realism. Since this is not a Gettier case, we then would know that modal realism is false.)
SIA is a reason to expect very low values of N to be unlikely, since we would be unlikely to exist if N was that low. But the lowest values of N aren't that likely - probability of N<1 is around 33%, but probability of N<10^-5 is around 15%. It seems there's at least a 10% chance that N is fairly close to 1, such that we wouldn't expect much of a filter. This should carry through to our posterior such that there's a 10% chance that there's no future filter.
I'm not quite sure I understand you here... Let me unpack this a little.
SIA is a reason to expect very low values of N to be unlikely, since we would be unlikely to exist if N was that low.
Yes, but not only that: according to SIA our existence is also a reason to expect high values of N to be likely, since we are more likely to exist if N is higher. But Sandberg, Drexler, and Ord (SDO) do not include this consideration. Instead, they identify the probability P(N<1) with the probability of us being alone in the galaxy (repeatedly, e.g. on page 5). But that's simply a mistake. P(N<1) is just the probability that a galaxy like ours is empty. (Or rather it is close to that probability, which is actually about e^-N, as they say in footnote 3.) But the probability of us being alone in the galaxy, i.e. that no other civilizations besides us exist in the galaxy, is rather the probability that at most one civilization exists in the galaxy, given that at least one civilization (us) exists in the galaxy. To calculate this would amount to applying SIA. Which they didn't do. This mistake arguably breaks the whole claim of the paper.
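To illustrate how far the two quantities come apart under a simple Poisson model (which matches the footnote-3 remark that e^-N is roughly the probability of an empty galaxy; the full SIA update, which additionally reweights the prior over N, is not shown here):

```python
import math

def p_galaxy_empty(N):
    # Poisson model: probability that a galaxy with expectation N
    # contains no civilization at all
    return math.exp(-N)

def p_alone_given_we_exist(N):
    # Probability that exactly one civilization exists,
    # given that at least one (us) exists
    return N * math.exp(-N) / (1 - math.exp(-N))

# The two quantities differ: for N = 0.5 a galaxy is empty with
# probability ~0.61, but conditional on our existence we are alone
# with probability ~0.77 -- different answers to different questions.
for N in (0.1, 0.5, 1.0, 2.0):
    print(N, round(p_galaxy_empty(N), 3), round(p_alone_given_we_exist(N), 3))
```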
It seems there's at least a 10% chance that N is fairly close to 1, such that we wouldn't expect much of a filter. This should carry through to our posterior such that there's a 10% chance that there's no future filter.
What do you mean by "fairly close to one" here? SDO calculate densities, so we would need a range here. Maybe 0.9<N<1.1? 0.99<N<1.01? 0.5<N<1.5? I don't even know how to interpret such fraction intervals, given that we can't have a non-integer number of civilizations per galaxy.
The whole probability distribution for N should have been updated on the fact that N is at least 1. (They actually consider an update later on in the paper, but not on our existence, but on the Fermi observation, i.e. that we don't see signs of ETI.)
I'm not conditioning on "ike exists", and I'm not conditioning on "I exist". I'm conditioning on "My observations so far are ike-ish" or something like that. This rules out existing as anyone other than me, but leaves me agnostic as to who "I" am among the group of observers that also have had the same set of observations. And the SIA prior means that I'm equally likely to be any member of that set, if those members had an equal chance of existing.
This sounds interesting. The "or something like that" is crucial of course... Last time I thought your version of SIA might actually be close to FNC (Full Non-indexical Conditioning) by Radford Neal, which is mostly equivalent in results to SIA. But your "My observations so far are ike-ish" does have an indexical ("my") in it, while FNC ignores all indexical evidence. (This is initially a big advantage, since it is an open question how beliefs with indexicals, so-called self-locating credences, should be modelled systematically in Bayesian reasoning, which leads to the need for additional ad-hoc principles like SSA or SIA.) As far as I understand it, FNC conditions rather on something like "Someone has exactly this state of mind: [list of ike's total evidence, including memories and current experience]". Note that this is not a self-locating probability. But FNC (in contrast to SIA) leads to strange results when there are so many observers in the universe that it becomes virtually certain that there is someone (not necessarily you) with the same mind as you, or even certain that there exists an observer for any possible state of mind.
Maybe you know this already, but if not and if you are interested: in Neal's original paper there is a rather compact introduction to FNC from page 5 to 9, i.e. sections 2.1 to 2.3. The rest of the paper is not overly important. The paper is here: https://arxiv.org/abs/math/0608592 I'm saying this because you seem to have some promising intuitions which Neal also shares, e.g. he also wants to do away with the artificial "canceling out" of reference classes in SIA, and because FNC is, despite its problem with large universes, in some way an objective improvement over SIA, because it basically falls out of standard Bayesian updating if you ignore indexical information, in contrast to principles like SSA or SIA.
But if your approach really needs indexicals it still sounds plausible. Though there are some open questions related to indexicals. How should the unconditional probability of "My observations so far are ike-ish" be interpreted? For you, this probability is one, presumably. For me it is zero, presumably. But what is it from a gods-eye perspective? Is it undefined, because then "my" has no referent, as dadadarren seems to suggest? Or can the "my" be replaced? Maybe with "The observations of a random observer, who, according to ike's evidence, might be ike, are ike-ish"?
This rules out existing as anyone other than me, but leaves me agnostic as to who "I" am among the group of observers that also have had the same set of observations.
Actually this is a detail which doesn't seem quite right to me. It seems you are rather agnostic about who you are among the group of observers that, from your limited knowledge, might have had the same set of observations as you.
You're smuggling in a particular measure over universes here. You absolutely need to do the math along with priors and justification for said priors, you can't just assert things like this.
The priors are almost irrelevant. As long as an infinite universe with infinitely many observers has a prior probability larger than 0, being in such a universe is infinitely more likely than being in a universe with finitely many observers. But given that cosmologists apparently find an infinite universe the most plausible possibility, the probability should arguably be estimated as much higher than 0%; apparently many of them put it above 50%, if they believe in an infinite universe. Let's assume an infinite universe (with infinitely many observers) and a finite universe are equally likely. Then the odds of being in the finite versus the infinite universe are, according to SIA, n : infinity, where n is the number of observers in the finite universe. We could weigh these odds by almost any prior probabilities other than 50%/50% and the result wouldn't change: infinity weighted by any non-zero probability is still infinite, and n stays a finite number regardless. It will always be infinitely more likely to be in the universe with infinitely many observers. So there are only two possibilities: either the prior probability of a universe with infinitely many observers is not 0, in which case SIA says we live in such an infinite universe with probability 1; or the prior probability of an infinite universe is 0, in which case SIA leaves it at 0.
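A toy sketch of the prior-insensitivity point, using a growing finite observer count as a stand-in for the limit (the function name and numbers are mine, purely illustrative):

```python
def sia_posterior_big(prior_big, observers_big, observers_small):
    """Posterior probability of the observer-rich world: SIA weighs each
    world's prior by its number of observers (who could be us)."""
    w_big = prior_big * observers_big
    w_small = (1 - prior_big) * observers_small
    return w_big / (w_big + w_small)

# Even with a tiny 0.1% prior on the observer-rich world, the posterior
# approaches 1 as its observer count grows without bound -- the fixed
# finite count n on the other side can't keep up:
for n_big in (10**3, 10**9, 10**18):
    print(n_big, sia_posterior_big(0.001, n_big, observers_small=100))
```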
It's not clear to me this counts as an infinite universe. It should repeat after a finite amount of time or space or both, which makes it equivalent to a finite universe being run on a loop, which doesn't seem to count as infinite.
Why not? You might then have exact doppelgängers, but you are not them. They are different persons. If you have a headache, your doppelgänger also has a headache, but you feel only your headache and your doppelgänger feels only his. If there are infinitely many of those doppelgängers, we have infinitely many persons. Also, a universe with infinite complexity would also have doppelgängers. Apart from that, simple laws and initial conditions can lead to chaotic outcomes, which are indistinguishable from random ones, i.e. from ones with infinite information content. Consider the decimal expansion of pi. It is not periodic like a rational number; it looks like a random number. Yet it can be generated with a very short algorithm. It is highly compressible, a random number is not, but this is the only qualitative difference. Other examples are cellular automata like Conway's Game of Life, or fractals like the Mandelbrot set. Both show chaotic, random-looking behavior from short rules/definitions. Such an infinite universe with pseudo-randomness might be nearly indistinguishable from one with infinite information content.
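The "short rule, random-looking output" point can be made concrete with the Rule 30 cellular automaton, whose center column looks statistically random even though the update rule fits in one line (a standard example, sketched here in a few lines of Python):

```python
def rule30_center_column(steps):
    """Center-column bits of the Rule 30 cellular automaton, started
    from a single live cell -- deterministic but random-looking."""
    cells = {0}  # positions of live cells
    bits = []
    for _ in range(steps):
        bits.append(1 if 0 in cells else 0)
        lo, hi = min(cells) - 1, max(cells) + 1
        # Rule 30: a cell is live next step iff left XOR (center OR right)
        cells = {
            i for i in range(lo, hi + 1)
            if ((i - 1) in cells) ^ ((i in cells) or ((i + 1) in cells))
        }
    return bits

print("".join(map(str, rule30_center_column(32))))
```

The whole generator is a couple of lines, so its information content is tiny, yet the output sequence passes many statistical randomness tests.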
That's assuming all of this talk is coherent, which it might not be - our bandwidth is finite and we could never verify an infinite statement.
It depends on what you mean by "verify". If you mean "assign probability 1 to it", then almost nothing can be verified, not even that you have a hand. (You might be deceived by a Cartesian demon into thinking there is an external world.) If, as you suggested in your last comment, you mean by "verify" to assign some probability after gaining evidence, then this is just updating.
The standard argument for the great filter depends on a number of assumptions, and as I said, my current understanding is this standard argument doesn't work numerically once you set up ranges for all the variables.
You are talking about the calculations by Sandberg, Drexler, and Ord, right? In a post where these results were discussed there was an interesting comment by avturchin:
It seems that SIA says that the parameters of the drake equation should be expected to be optimized for observers-which-could-be-us to appear, but exactly this consideration was not factored into the calculations of Sandberg, Drexler, and Ord. Which would mean their estimations for the expected number of civilizations per galaxy are way too low.
Yes, this is true in my model - conditioning on a filter in the first case yields 100 future filters vs 1 past filter, and in the second case yields 900 future filters vs 9 past filters. There's a difference between a prior before you know if humans exist and a posterior conditioning on humans existing.
Then what I don't quite understand is why the calculation of your toy model seems so different from the calculation in Katja's post. In her calculation there is a precise point where SIA is applied, while I don't see such a point in your calculation. Also, the original Bostrom SIA ("SSA+SIA") does, as dadadarren pointed out, involve a reference class whose effect then "cancels out", while you are, as you pointed out, trying to avoid reference classes to begin with.
Maybe your version of SIA is closer to something like FNC than to the original SIA. Perhaps you should try to give your version a precise definition. The core idea, as far as my limited understanding goes, is this: If hypothesis H makes my existence M more likely, then my existence M also makes H more likely. Because P(M|H) > P(M) implies P(H|M) > P(H). This of course doesn't work if P(M) is 1 to begin with, as you would expect if M means something like the degree of belief I have in my own existence, or in "I exist". So we seem to be forced to consider a "non-centered" version without indexical, i.e. "cubefox exists", which plausibly has a much lower probability than 1 from the gods eye perspective. If we call my indexical proposition M_i and the non-indexical proposition M_c, it becomes clear that the meaning of 'M' in "P(M|H) > P(M) implies P(H|M) > P(H)" is ambiguous. If it means: "P(M_c|H) > P(M_c) implies P(H|M_i) > P(H)" then this of course is not a theorem of probability theory anymore. So how is it justified? If we take M_i to simply imply M_c, then P(M_c) would also be 1 and the first inequality (P(M_c|H) > P(M_c)) would again be false.
Maybe I'm off the track here, but Dadadarren seems to be at least right in that the relation between indexical and non-indexical propositions is both important and not straightforward.
This depends on your measure over the set of possible worlds, but one can plausibly reject infinities in any possible world or reject the coherency of such.
Now that seems to me a surprising statement. As far as I'm aware, the most popular guess among cosmologists about the size of the universe is that it is infinitely large; that it has the topology of a plane rather than of a sphere or a torus. Why and how would we plausibly reject this widely held possibility? On the contrary, it seems that SIA presumptuously requires us to categorically reject the sphere and the torus possibility on pure a priori grounds, because they imply a universe finite in size and thus with way too few observers.
The only a priori reason against a "big" universe I can think of is one with infinite complexity. By Ockham's razor, it would be infinitely unlikely. If simplicity is low complexity, and complexity is information content, then the complexity C is related to probability P with C(x) = -log_{2}P(x), or P(x) = 2^-C(x). If C(x) is infinite, P(x) is 0.
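The stated relation in executable form (a toy illustration of the complexity-probability correspondence, not a full Solomonoff prior; the function name is mine):

```python
import math

def prior_from_complexity(bits):
    """Ockham prior: P(x) = 2**(-C(x)), where C(x) is the information
    content of x in bits; infinite complexity gives probability 0."""
    if math.isinf(bits):
        return 0.0
    return 2.0 ** -bits

# A hypothesis needing 10 bits is 2^10 = 1024 times more probable
# a priori than one needing 20 bits; infinite complexity gives 0:
print(prior_from_complexity(10) / prior_from_complexity(20))  # 1024.0
print(prior_from_complexity(math.inf))                        # 0.0
```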
But an infinitely large universe doesn't mean infinite complexity, at least not in the information content sense of "incompressibility". An infinite universe may arise from quite simple laws and initial conditions, which would make its information content low, and its probability relatively high.
As I've written elsewhere, I'm a verificationist and don't think statements about what is per se are verifiable or meaningful - my anthropic statements are meaningful insofar as they predict future experiences with various probabilities.
Well, SIA seems to predict that we will encounter future evidence which would imply a finite size of the universe with probability 0. Which is just what you required. While the silly cosmologists have not ruled out a finite universe, we philosophers just did so on pure a priori grounds. :)
I think you are right that when we are not very certain about the existence/strength of a great filter, SIA Doomsday loses much of its force. But I think the standard argument for the "great filter hypothesis" was always that a strong filter is extremely likely, because even if just a single civilization decides to colonize/sterilize the galaxy (e.g. via von Neumann probes), it could do so comparatively quickly. If it spreads at 1% of the speed of light, it takes on the order of ten million years to colonize the whole Milky Way (roughly 100,000 light years across), which is a very short amount of time compared to the age of the galaxy or even our solar system. Yet the Fermi paradox suggests the Milky Way is not colonized to any significant extent. So the expected number of civilizations in the galaxy must be so low that we are likely among the first, if not the first.
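The back-of-envelope timescale can be spelled out explicitly (the galactic diameter is a rough assumption; the exact figure doesn't matter for the argument):

```python
LIGHTYEARS_ACROSS = 100_000   # rough diameter of the Milky Way (assumption)
SPEED_FRACTION = 0.01         # probes travel at 1% of light speed
GALAXY_AGE_YEARS = 1.3e10     # ~13 billion years

# Travel time in years: a distance of d light years at a fraction f of c
# takes d / f years.
crossing_time = LIGHTYEARS_ACROSS / SPEED_FRACTION
assert crossing_time == 10_000_000  # ~10 million years

# A tiny fraction (<0.1%) of the galaxy's age:
assert crossing_time / GALAXY_AGE_YEARS < 0.001
```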
A different point about your toy model: Why do you assume 50% each for the filter being in the past/future? That seems to ignore SIA. The point of the SIA Doomsday argument is precisely that the filter, assuming it exists, is much, much more likely to be found in the future than in the past. Because SIA strongly favors possible worlds with more observers who could be us, and in a possible world with a past filter (i.e. "early" or "middle" filter in Katja's post) there are of course very few such observers (the filter prevents them from coming into existence), but in a world with a late filter there are many more of them. (Indeed, SIA's preference for more observers who could be us seems to be unbounded, to the point that it makes it certain that there are infinitely many observers in the universe.)
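The SIA reweighting can be illustrated with a deliberately crude two-world model (all numbers hypothetical): start from the 50/50 prior over past-filter vs. late-filter worlds, then weight each world by its number of observers-who-could-be-us, as SIA prescribes:

```python
# Hypothetical observer counts: a past filter leaves few observers,
# a late filter lets many arise (and destroys civilizations only later).
observers = {"past filter": 10, "late filter": 10_000}
prior = {"past filter": 0.5, "late filter": 0.5}

# SIA: posterior weight proportional to prior * number of observers
weights = {w: prior[w] * n for w, n in observers.items()}
total = sum(weights.values())
posterior = {w: weight / total for w, weight in weights.items()}

# The 50/50 prior turns into near-certainty of a late (future) filter:
assert posterior["late filter"] > 0.99
```

With these made-up counts the posterior for a late filter is about 0.999; the more extreme the observer-count ratio, the more extreme the shift, which is why a flat 50/50 assignment effectively discards SIA.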
Here is the link to the argument again: https://meteuphoric.com/2010/03/23/sia-doomsday-the-filter-is-ahead/
Sorry if this is somewhat unrelated to the discussion here, but I don't think the SIA Doomsday can be dismissed so easily.
The great filter itself relies on assumptions about base rates of life arising and thriving which are very uncertain.
If we don't have overwhelming reason to think that the filter is in the past, or to think that there is no filter at all, SIA suggests that the filter is very, very likely in the future. SIA itself would, so to speak, be overwhelming evidence for a future filter; you would need overwhelming counter-evidence to cancel this out. Or you do a Moorean shift and doubt SIA, precisely because there apparently is no independent overwhelming evidence that such a filter is in the future. (Especially when we consider the fact that SIA pretty much rules out an AI as the future filter, since we not only see no aliens, we also see no rogue alien AIs. There is a separate post on this topic on her blog.) Or you doubt other details of the SIA Doomsday argument, but aside from SIA there aren't many, it seems.
Regarding the first two examples: "brush your teeth" and "animal" are also quite different in another respect: the first is a task, an instruction, or a command. "Animal", on the other hand, is the name of a concept, and it can be used to form the monadic predicate "is an animal", which names a property that the members of a set (the set of animals) satisfy. Maybe the difference between composition and generalization becomes clearer (or disappears) if we compare only composite/generalized predicates, or only composite/generalized instructions.
An additional point regarding the aspect of touchability: grammatically, the main difference between the terms "tennis" and "tennis ball" seems to be that the former is not countable, while the latter is. Because of this, you can easily make a predicate out of the latter by applying the copula "is a": "is a tennis ball". This is not possible for "tennis": "is a tennis" doesn't make sense. So you can't associate tennis with a particular set of things you could touch. A similar point holds for "fear". This seems to hold for most "abstract" properties: they are expressed by uncountable nouns.
(An interesting exception is the concept of a number. "Number" is a countable noun, but the concept may be called abstract nonetheless. This could hint at yet a different sense of abstractness.)
Similar to uncountable nouns, adjectives like "large" are generally also not countable. They also seem intuitively to be judged abstract, especially when you convert them into properties by naming them via an (uncountable) noun: "largeness" is not a thing you could touch.
However, the case is not so clear for verbs: "walks" can be converted to a noun by speaking of a "walk", which seems to be a countable property of a process. As a consequence, it seems not particularly abstract. In conclusion, abstractness in the "philosophical" sense seems to be closely related to uncountable nouns.
Verificationism in the sense of the logical positivists is a theory of meaning. According to this theory, knowing the meaning of a statement p amounts to knowing the conditions under which it would be true and under which it would be false. (To give it a Bayesian slant, I like to widen this to "knowing what would be evidence for/against p".) Is this what you have in mind?
Verificationism in this sense was used against postulating transcendent entities or states of affairs. Something is transcendent if it is beyond every possible experience; therefore there is nothing which could verify or falsify facts about it. The logical positivists argued on the basis of verificationism that statements about transcendent things (certain conceptions of God, for example) are meaningless. Not false, but meaningless.
(Verificationism lost a lot of popularity in the 1950s and 60s because there was very little progress in making the notion precise. Also, some apparently unverifiable theories (e.g. in astronomy) seemed to be perfectly meaningful. Whether those problems can be met, I don't know. Another point is that verificationism was meant only as a condition of meaningfulness for so-called synthetic statements. Statements are synthetic iff their truth depends not only on their meaning. In contrast, the truth of "analytic" statements depends only on their meaning. The logical positivists assumed that logical and mathematical statements are analytic. Since verificationism doesn't apply to the meaning of those latter statements, it arguably isn't a theory of meaning in the general sense.)
But, provided you speak about this notion, why would verificationism lead to external-world anti-realism? Because statements like "there is a tree in my garden" cannot be truly "verified" -- because there might be no garden and no tree, and I might instead be deceived by a Cartesian demon?
For the "wider" conception mentioned above this wouldn't be a problem I think. Having the visual impression of a tree is at least some evidence for there being a tree, even though there might be no tree. Then the statement is meaningful.
On the narrower conception, with a strict sense of "verification", the statement "there is a tree in my garden" would indeed be meaningless, because there is apparently no experience which would verify or falsify it definitively. The same would be true of all other synthetic statements about the world. This wouldn't mean that those statements are false and that external reality doesn't exist: it would "only" mean that those statements are meaningless. But here is the problem for this theory: it is obviously not meaningless to say that there is a tree in my garden.
One could argue that synthetic statements aren't really about external reality: What we really mean is "If I were to check, my experiences would be as if there were a tree in what would seem to be my garden". Then our ordinary language wouldn't be meaningless. But this would be a highly revisionary proposal. We arguably don't mean to say something like the above. We plausibly simply mean to assert the existence of a real tree in a real garden.
So I would argue that "evidence verificationism" is much more plausible than "definite verification/falsification verificationism". The former does not lead to the conclusion that synthetic statements about the world are meaningless, nor is it in need of radical revisionism about the meaning of ordinary language.