Small world, I guess :) I knew I'd heard this type of argument before, but I couldn't remember the name of it.
So it seems like the grabby aliens model contradicts the doomsday argument unless one of these is true:
Thanks for the great writeup (and the video). I think I finally understand the gist of the argument now.
The argument seems to raise another interesting question about the grabby aliens part.
He's using the grabby aliens hypothesis to explain away the model's low probability of us appearing this early (and I presume we're one of these grabby aliens). But this leads to a similar problem: Robin Hanson (or anyone reading this) has a very low probability of appearing this early amongst all the humans who will ever exist.
This low probability would also require a similar hypothesis to explain it away. The only way to explain it seems to be some hypothesis under which he's not actually that early amongst the total humans who will ever exist, which would mean we turn out not to be "grabby"?
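To make that concrete: under self-sampling, with roughly $N \approx 10^{11}$ humans born so far (a standard estimate) and a grabby future implying some much larger total $T$ (the $10^{18}$ below is just an illustrative assumption, not a figure from the model), the probability of a birth rank this early would be

$$P(\text{rank} \le N) = \frac{N}{T} \approx \frac{10^{11}}{10^{18}} = 10^{-7}$$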
This seems like one of the problems with anthropic reasoning arguments, and I'm unsure how seriously to take them.
It doesn't seem crazy to me that a GPT-type architecture, with the "Stack More Layers" approach, could eventually model the world well enough to simulate consequentialist plans. E.g., given a prompt like:
"If you are a blender with legs in environment X, what would you do to blend apples?" and provide a continuation with a detailed plan like the above (and GPT4/5 etc with more compute giving slightly better plans - maybe eventually at a superhuman level)
It also seems like it could do this kind of consequentialist thinking without itself having any "goals" to pursue. I'm expecting the response to be one of the following, but I'm not sure which: