@Wei Dai believes that metaethics, decision theory and ethics are important problems related to AI alignment, due to possibilities like the universe being optimized for random values, AIs corrupting human values themselves, and other failure modes which might lead to astronomical waste or even to wasting infinite[1] resources.
What I don't understand is how affecting infinite resources is possible in the first place, since even affecting an amount of resources close to the size of the reachable universe is unlikely to be ethical.[2]
If we understand the Universe's nature[3] correctly, different civilisations will find it easy to reach technological maturity within a hundred years after an ASI is developed. Any planet on which a sapient lifeform could have originated can eventually either fail to produce a civilisation, produce one, or be occupied. SOTA estimates imply that life could be sustained by at least 10% of stellar systems; the crux is whether life there actually appears and evolves towards sapience. If that is likely, then either the zoo hypothesis is true or mankind is the first civilisation to appear in the accessible area, potentially letting humans destroy more primitive alien lifeforms, which likely contradicts SOTA human values.
Other universes could be made to increase human value in ways like us being run as a simulation, our influencing the results of oracles, or superintelligences changing their behavior in a manner that depends on our decisions. What I fail to understand is why anyone would have the oracle or superintelligence depend on mankind's decisions. An ASI that changed some of its decisions based on humanity's deeds would have to learn about those deeds in the first place; otherwise it might simulate mankind's decisions on a scale far smaller than the current one.
If mankind itself is run as a simulation and would like to escape it, then the simulation is either natural or artificial. The latter option means that either the simulation's creators decided to let us out, or that mankind, or a human-created AI, behaved adversarially towards those creators. If the creators let us out, then either we are aligned with the goals they set for us (but why did they design us so inefficiently that our brains are wildly undertrained neural networks?) or they arrived at altruistic or acausal reasons for doing so.
To conclude, arguments like the above seem to rule out the possibility of ethically having access to anything beyond the Solar System, a few adjacent systems, and the modest resources necessary to claim other systems, defend them and help their inhabitants.
[1] Strictly speaking, Wei Dai also mentions numbers like 3^^^3, a power tower of more than 7 trillion threes. But the data gathered by mankind so far doesn't let us shift the odds of any hypothesis by a factor of more than a googol, meaning that a rational agent should either bound its utility function, face Pascal's Mugging whenever it is promised the ability to affect at least a googolplex of lives, or obtain a proof that affecting that many resources is impossible, held with the same confidence a mathematician has in proven theorems, short of the whole of Peano Arithmetic turning out to be inconsistent.
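For concreteness, here is how those numbers unpack; the first two lines are standard arithmetic in Knuth's up-arrow notation, and the last is my own rough expected-value sketch under the assumption above that no evidence shifts odds by more than a factor of a googol, not a figure from Wei Dai's posts:

$$3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987$$

$$3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}$$

$$P(\text{the offer is real}) \geq 10^{-100} \;\Rightarrow\; \mathbb{E}[\text{lives affected}] \geq 10^{-100} \cdot 10^{10^{100}} = 10^{10^{100}-100},$$

which still dwarfs anything achievable within the reachable universe, so an agent with an unbounded utility function gets dominated by the offer.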
[2] It may also be prevented by a more powerful and benevolent alien race which observes the Solar System and keeps track of mankind's progress. But that case means that we, or the AIs who took over, are powerless, not that we or they wasted anything.
[3] And the nature of the AIs. However, if cheap-to-run AGIs were never possible or alignable in the first place and mankind realises it, then the futures we would want to avert are the easy-to-prevent slopworld and the medium-like scenario where progress mostly halts. But this is highly unlikely, since a human brain is an AGI equivalent by definition, and the same is likely true for uploads or human brain simulations.