But that's not how I'm thinking of it in the first place -- I'm not positing any random selection process. I just don't see an immediately obvious flaw here:
And I still don't quite understand your response to this formulation of the argument. I think you're saying 'people who have ever lived and will ever live' is obviously the wrong reference class, but your arguments mostly target beliefs that I don't hold (and that I don't think I am implicitly assuming).
Sorry about the double reply, and it's been a while since I thought seriously about these topics, so I may well be making a silly mistake here, but --
There's a shop that uses a sequential ticketing system for queueing: each customer takes a numbered ticket when they enter, starting with ticket #1 for the first customer of the day. When I enter the shop, I know that it has been open for a couple of hours, but I have absolutely no idea when it closes (or even whether its 'day' is a mere 24 hours). I take my ticket and see that it's #20. I have also noticed that the customer flow seems to be increasing more than linearly, such that if the shop is open for another hour there will probably be another 20 customers, and if it's open for a few more hours there will be hundreds. Should I update towards the shop closing soon, on the grounds that otherwise my ticket number is atypically low? If so, wtf, and if not, what are the key differences between this and the doomsday argument?
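For concreteness, here is the doomsday-style calculation that would answer "yes, update". Note that it bakes in exactly the assumption under dispute: it treats my ticket number as a uniform draw from 1..N, where N is the total number of customers today. The specific prior and the two candidate values of N are illustrative, not part of the scenario.

```python
# Sketch of the doomsday-style update being probed. Assumed model:
# a prior over N (total customers today), and 'my ticket is #20'
# treated as a uniform draw from 1..N, i.e. likelihood 1/N.
def posterior_over_totals(prior, ticket=20):
    """prior: dict {N: P(N)}. Returns normalized posterior P(N | ticket)."""
    unnorm = {N: p / N for N, p in prior.items() if N >= ticket}
    z = sum(unnorm.values())
    return {N: w / z for N, w in unnorm.items()}

# Equal prior on 'closes soon' (40 customers total) vs 'open for hours' (400).
post = posterior_over_totals({40: 0.5, 400: 0.5})
print(post[40])  # ≈ 0.909: the small-N hypothesis dominates after the update
```

The 10:1 swing toward "closes soon" comes entirely from the 1/N likelihood term, which is precisely the random-selection premise the question is asking whether to accept.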
"viewed from an achronous perspective, a low probability event has occurred — just like they always do right at the start of anything"
What's the 'low probability event'? I think this is the kind of framing I was disagreeing with in my original reply; there seems to be an implicit dualism here. So your reply isn't, from my perspective, addressing my reasons for finding anthropic reasoning difficult to completely dismiss.
If I roll a million-sided die, then no individual number rolled on it is more surprising than any other, not even a roll of 1 or 1,000,000 — UNLESS I'm playing an adversarial game where me rolling a 1 is uniquely good for my opponent. Then if I roll a 1 I should wonder if the die was fixed.
Yes, but if you haven't looked at the die yet, and the question of whether it's showing a number lower than 100 is relevant for some reason, you're going to strongly favour 'no'.
(That's not quite how I think about anthropic problems, though, because I don't think there's anything analogous to the dice roll -- hence my original complaint about smuggled dualism.)
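Putting numbers on the unseen-die point (a sketch; the figures follow directly from assuming a fair million-sided die):

```python
# Before looking at a fair million-sided die, the probability that it
# shows a number lower than 100 (i.e. one of the outcomes 1..99):
sides = 1_000_000
p_below_100 = 99 / sides
print(p_below_100)  # 9.9e-05, so 'no' is favoured at roughly 10100:1 odds
```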
I don't think you need completely specious reasoning to get to a kind of puzzling position, though. For us to be in the first <relatively small n>% of people, we don't need humanity to spread to the stars -- just to survive for a while longer without a population crash. And I think we do need some principled reason to be able to say "yes, 'I am in the first <relatively small n>% of people' is going to be false for the majority of people, but that's irrelevant to whether it's true or false for me".
Oh yeah, I should have made this clear in my reply to you (I'd written it in a different comment just a moment before):
I do find anthropic problems puzzling. What I find nonsensical are framings of those problems that treat indexical information as evidence -- e.g. in a scenario where person X (i.e. me) exists on both hypothesis A and hypothesis B, but hypothesis A implies that many more other people exist, I'm supposed to favour hypothesis B because I happen to be person X and that would be very unlikely given hypothesis A.
Yep (assuming I don't have a prior that heavily favours the red door case for some reason), but in this case I think I'm just applying ordinary Bayesian reasoning to ordinary, non-identity-related evidence. The information I'm learning is not "I am this person", but "this person is still alive". That evidence is 99 times more likely in the green door case than the red door case, so I update strongly in favour of the green door case.
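The ordinary Bayes update described here can be sketched as follows; the 50/50 prior is the assumption flagged in the parenthetical, and 99 is the stated likelihood ratio.

```python
# Ordinary Bayesian update on 'this person is still alive', which is
# 99 times more likely under the green door hypothesis than the red.
def posterior_green(prior_green, likelihood_ratio):
    """Posterior P(green) given evidence that is `likelihood_ratio`
    times more likely under green than under red."""
    odds = (prior_green / (1 - prior_green)) * likelihood_ratio
    return odds / (1 + odds)

print(posterior_green(0.5, 99))  # 0.99: a strong update toward green
```

Nothing indexical is doing any work here: the same update goes through for any observer who learns the survival fact.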
Fair point, and I do find anthropic problems puzzling. What I find nonsensical are framings of those problems that treat indexical information as evidence -- e.g. in a scenario where person X (i.e. me) exists on both hypothesis A and hypothesis B, but hypothesis A implies that many more other people exist, I'm supposed to favour hypothesis B because I happen to be person X and that would be very unlikely given hypothesis A.
I feel like this is coming from quite a male perspective. Obviously looks play a huge role in attractiveness for both genders, but for the average straight man you wouldn't be missing very much if you modelled their shallow/early-impressions attraction to women as 100% looks-based, whereas for the average straight woman that wouldn't work as well. So your event might be quite informative for the straight women who participated, but potentially confusing for the straight men.
I didn't mean to imply certainty, just uncertain expectation based on observation. Maybe I asked Fred, or the other customers, but I didn't receive any information about 'the end of the day' -- only confirmation of the trend so far.
(I'm not trying to be difficult for the sake of it, by the way! I just want to think these things through carefully and genuinely understand what you're saying, which requires pedantry sometimes.)
edit in response to your edit:
I think I'm not quite understanding the distinction here. Why is there an important difference between "this trend is based on mechanisms of which I'm ignorant, such as the other customers' work hours or their expectations about chili quality over time" and "this trend is based on different mechanisms of which I'm also ignorant, i.e. birth rates and chili inventory"?