Fair enough. (Though really, you could in principle still handle filtered evidence in a formal-ish way; it would just require a bunch of additional complication regarding your priors and evidence about how the filter operates.)
there's a sort of anthropic issue where if we already had compelling evidence (or no evidence) we wouldn't be having this discussion.
Yes, our discussion is based on the evidence we actually see. But discounting that evidence because we wouldn't be having the same discussion if we had different evidence amounts to ruling out updating on any evidence that would influence our discussion.
Is there a prior for the likely resolution of fuzzy evidence in general?
In my view, there is a general tendency to underestimate the likelihood of encountering weird-seeming evidence, and especially of encountering it indirectly via a filtering process where the weirdest and most alien-congruent evidence (or game-of-telephone-enhanced stories) gets publicly disseminated. For this reason, a bunch of fuzzy evidence is not particularly strong evidence for aliens.
Agreed that paying attention to how evidence is filtered is super important. But, in principle, you can still derive conclusions from filtered evidence. It's just really hard, especially if the filter is strong and hard to characterize (as is the case with UAPs).
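To illustrate the in-principle point, here's a minimal Bayesian sketch in which the filter is folded into the likelihoods. All numbers are invented for illustration; nothing here is a model of the actual UAP reporting pipeline.

```python
# Toy Bayesian update on filtered evidence.
# Hypotheses: H = "something exotic", ~H = "mundane".
# We only see reports that pass a filter favoring weird-looking cases, so the
# likelihoods must be of the *reported* evidence with the filter folded in.

prior_H = 0.01  # illustrative prior on the exotic hypothesis

# P(a weird-looking report reaches us | hypothesis):
# even in a mundane world, the filter preferentially surfaces glitches,
# hoaxes, and telephone-game stories, so this probability is not tiny.
p_report_given_H = 0.9
p_report_given_not_H = 0.6

posterior_H = (p_report_given_H * prior_H) / (
    p_report_given_H * prior_H + p_report_given_not_H * (1 - prior_H)
)
print(round(posterior_H, 4))  # 0.0149: barely moved from the 0.01 prior
```

With a strong filter, the likelihood ratio between hypotheses shrinks toward 1, which is exactly why filtered fuzzy evidence moves the posterior so little.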
Glitches happen. Misunderstandings happen. Miscommunications happen. Coincidences happen. Weird-but-mundane things happen. Hoaxes happen. To use machine learning terminology, the real world occurs at temperature 1. We shouldn't expect P[observations] to be high - that would require temperature less than 1. The question is: is P[observations] surprisingly low under the current paradigm, or surprisingly high under some alternative paradigm, to such an extent that it would provide strong evidence for something outside current paradigms? My assessment is no. (See my discussion of Nimitz, for example.)
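The temperature-1 point can be made concrete: under any hypothesis, the absolute probability of a detailed record of events is tiny, so only the ratio of likelihoods between paradigms carries evidential weight. The probabilities below are arbitrary placeholders, not estimates.

```python
import math

# P[observations | paradigm]: tiny under both hypotheses, as expected at
# "temperature 1" - the absolute values are uninformative on their own.
p_obs_mundane = 1e-12  # placeholder, not an estimate
p_obs_exotic = 2e-12   # placeholder, not an estimate

# Only the likelihood ratio moves the posterior (Bayes' theorem in log form).
log_lr = math.log(p_obs_exotic / p_obs_mundane)
print(round(log_lr, 3))  # 0.693 nats, i.e. about one bit of evidence
```

A low P[observations] on its own is no surprise at all; a paradigm only wins if it makes the observations relatively much more probable than its rivals do.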
Some additional minor remarks specifically on P[aliens]:
(Edit: switched things around to put the important stuff in the first paragraph)
This is to publicly confirm that I have received approximately $2000 USD equivalent.
Unless you dispute the appropriate timing for the knowledge cutoff, I will consider the knowledge cutoff for the paradigm-shattering UAP-related revelations that would require me to send you $100k USD to be 11:59pm, June 14, 2028, UTC.
Regarding the case where, after the five years, there is evidence convincing to you but not to me:
If the LW community overwhelmingly agrees (say >85%) that my refusal to accept the evidence available as of 5 years from the time of the bet as overcoming the prior against ontologically surprising things being responsible for some "UAPs" was unreasonable, then I would agree to pay. I wouldn't accept 50% of LessWrong having that view as enough, and don't trust the judgement of particular individuals even if I trust them to be intelligent and honest.
Evidence that arises or becomes publicly available after the 5 years doesn't count, even if the bet was still under dispute at the time of the new evidence.
I will also operate in good faith, but don't promise not to be a stickler about the terms (see, for example, Bryan Caplan on his successful bet that no member nation of the EU with a population over 10 million would leave before 2020, which he won despite the UK voting to leave in 2016 (Bet 10 at https://docs.google.com/document/d/1qShKedFJptpxfTHl9MBtHARAiurX-WK6ChrMgQRQz-0)).
If you agree to these, in addition to what was discussed above, then I would be willing to offer $100k USD max bet for $2k USD now.
I made the same argument myself (lol) in response to lsusr regarding Eliezer's bet with Bryan Caplan:
(hit "see in context" to see the rest of my debate with lsusr)
Somehow it feels different at 0.5% though, as compared to the relatively even odds in the Yudkowsky-Caplan bet. (It's not like I could earn, say, $200k USD in a few weeks before a deadline, like Eliezer could earn $100.) 2% gets closer to compensating for this issue for me, though.
True, but you presumably have to have the ability to pay it some way or another, and that's still resources that could have been available for something else (e.g. you could have gone into debt anyway, if something happened to warrant doing so).
I did interpret it as a 0.5% thing though, and now that the OP has stated they would be ok with 2% that makes it significantly less unattractive - Charlie Steiner's offer, which OP provisionally accepted, seems not too far off from something I might want to copy.
However, the fact that OP is making this offer means, IMO, that they are likely to be convinced by evidence significantly less convincing than what would convince me. So there's a not-unlikely possibility that, 5 years from now, if I accept, we'll get into an annoying debate over whether I'm trying to shirk on payment, when I'm just not convinced by whatever latest UFO news has convinced him. It's also possible that other LessWrongers might be convinced by evidence that I wouldn't be - consider how there seems to be a fair amount of belief here regarding the Nimitz incident that, if Fravor wasn't lying or exaggerating, it must be something unusual: if not aliens, then at least some kind of advanced technology (whereas I've pointed out that even if Fravor is honest and reasonably reliable (for a human), the evidence still looks compatible with conventional technology and normal errors/glitches).
That might be a hard-to-resolve sticking point since I don't really consider it that unlikely that a large fraction of LessWrongers might (given Nimitz) be convinced by what I would consider to be weak evidence, and even if it was left to my discretion whether to pay, the reputational hit probably wouldn't be worth the initial money.
BTW, I don't consider it super unlikely that there are discoveries out there to be made that would be pretty ontologically surprising; it's just that I mostly don't expect them either to be behind UAPs or to be uncovered in the next 5 years (though I suppose AI developments could speed up revelations...)
I also note that some incidents do seem to me like they could possibly be deliberate hoaxes perpetrated within the government against other government employees who then, themselves sincere, spread them to the public (e.g. the current thing and maybe Bob Lazar). If I were to bet, I would specifically disclaim paying out merely because such hoaxes were found to be carried out by some larger conspiracy which was also doing a lot of other stuff as well, even if sufficiently extensive to cause ontological shock - I am not comfortable betting against that at 2%. I would be OK, if I were otherwise satisfied with the bet, with paying out conditional on such a conspiracy being proven to have access to an ontologically shocking level of technology relative to the expected level of secret government tech.
So I could get 0.5% of the committed payout right away, but would have to avoid spending the committed value for 5 years, even though the world could change significantly in a lot of non-UAP-related ways in that time frame. That's not actually that attractive.
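Spelling out the arithmetic on the two upfront fractions discussed above (using the thread's $100k committed payout and a 0.5% subjective chance of paying out; "before lock-up" deliberately ignores the cost of keeping $100k reserved for 5 years):

```python
# Comparing the 0.5% and 2% upfront offers on a $100k committed payout.
# p_lose is the bet-taker's subjective chance of paying out (0.5%).

committed = 100_000
p_lose = 0.005

ev = {}  # expected value before counting 5 years of locked-up capital
for frac in (0.005, 0.02):
    ev[frac] = round(frac * committed - p_lose * committed, 2)

print(ev[0.005])  # 0.0: break-even even before the lock-up cost
print(ev[0.02])   # 1500.0: positive, but set against keeping $100k reserved
```

At 0.5% upfront the bet is exactly break-even against the subjective payout risk alone, which is why the lock-up cost makes it unattractive; 2% leaves a margin for that cost.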
What she's mainly arguing there is that decoherence does not solve the measurement problem because it does not result in the Born rule without further assumptions. She also links another post where she argues that attempts to derive the Born rule via rational choice theory are non-reductionist.
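For reference, the standard way this point is usually put (a textbook summary, not a claim about her exact formulation): decoherence suppresses interference terms, leaving an effectively diagonal density matrix, while the Born rule is the further postulate that the diagonal weights are outcome probabilities.

```latex
% Decoherence in the pointer basis: off-diagonal (interference) terms of the
% density matrix are suppressed, leaving an effectively classical mixture.
\[
  \lvert\psi\rangle = \sum_i c_i \lvert i\rangle
  \qquad\Longrightarrow\qquad
  \rho \;\approx\; \sum_i \lvert c_i\rvert^2 \,\lvert i\rangle\langle i\rvert
\]
% The Born rule is the additional assumption that outcome i is observed with
% probability equal to the corresponding diagonal weight:
\[
  P(i) = \lvert c_i \rvert^2
\]
```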
It might be that she thinks this implies some separate collapse in addition to the separation into a mixture via decoherence, with the collapse selecting a particular outcome from the mixture. But even if that were true, such a collapse would, I think, have to occur after or simultaneously with decoherence, or it would be observable.
None of this leads, as far as I can tell, to the strange expectations that you seem to have.