...GPT-2 probably (very probably, though of course nobody on Earth knows what's actually going on in there) does not in itself do anything that amounts to checking possible pathways through time/events/causality/environment in order to end up in a preferred destination class despite variation in where it starts out.
A blender may be very good at blending apples, but that doesn't mean it has a goal of blending apples.
A blender that spit out oranges as unsatisfactory, pushed itself off the kitchen counter, stuck wires into electrical sockets in order to burn open y
Small world, I guess :) I knew I heard this type of argument before, but I couldn't remember the name of it.
So it seems like the grabby aliens model contradicts the Doomsday argument unless one of these is true:
Thanks for the great writeup (and the video). I think I finally understand the gist of the argument now.
The argument seems to raise another interesting question about the grabby aliens part.
He's using the grabby aliens hypothesis to explain away the model's low probability of us appearing early (and I presume we're one of those grabby aliens). But this leads to a similar problem: Robin Hanson (or anyone reading this) has a very low probability of appearing this early among all the humans who will ever exist.
This low probability would also require a si...
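To give a rough sense of how sharp that "appearing early" penalty is, here is a minimal sketch under the usual self-sampling assumption. The numbers are illustrative assumptions, not figures from the comment: ~117 billion humans born so far (a common demographic estimate) and a hypothetical grabby-expansionist future with 10^18 humans in total.

```python
# Illustrative doomsday-style calculation (assumed numbers, not from the thread).
# Under the self-sampling assumption, the chance of being among the first
# `born_so_far` humans, given `total_ever` humans ever exist, is just the ratio.

def prob_this_early(born_so_far: float, total_ever: float) -> float:
    """P(your birth rank <= born_so_far | total_ever humans ever exist)."""
    return born_so_far / total_ever

# ~117 billion born so far; suppose a grabby future yields 10^18 humans total.
p = prob_this_early(117e9, 1e18)
print(f"{p:.2e}")  # prints 1.17e-07
```

The bigger you make the grabby future, the more improbable your own early birth rank becomes, which is exactly the tension the comment above is pointing at.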
Which is funny, because there is at least one situation where Robin reasons from first principles instead of taking the outside view (cryonics comes to mind). I'm not sure why he doesn't want to work through the arguments from first principles for AGI.