andrew sauer

Comments

The Neglected Virtue of Scholarship

Well, if it's eternal and sufficiently powerful, a kind of omnibenevolence might follow, insofar as it exerts a selection pressure favoring the things it feels benevolent towards, which over time will cause them to predominate.

Unless it decides that it wants to keep things it hates around to torture them.

Less Realistic Tales of Doom

This is often overlooked here (perhaps with good reason, as many examples will be controversial). Scenarios of this kind can be very, very bad, much worse than those involving a typical unaligned AI like Clippy.

For example, I would take Clippy any day over an AI whose goal was to spread biological life throughout the universe. I expect this may be controversial even here, but see https://longtermrisk.org/the-importance-of-wild-animal-suffering/#Inadvertently_Multiplying_Suffering for why I think this way.

Statistical Prediction Rules Out-Perform Expert Human Judgments

You might not even need to go to a different Tegmark universe lol, given that multiple people have independently come up with this idea.

Acausal romance

I wonder if anyone has tried to argue for the existence of God in a similar way to this article?

Acausal romance

Oh man, I think I came up with something very similar to this whilst being extremely horny and extremely lonely.

Username checks out

The Solomonoff Prior is Malign

In your section "complexity of conditioning", if I am understanding correctly, you compare the amount of information required to produce consequentialists with the amount of information in the observations we are conditioning on. This, however, is comparing apples to oranges: the consequentialists are competing against the "true" explanation of the data, the one that specifies the universe and where to find the data within it; they are not competing against the raw data itself. In an ordered universe, the "true" explanation would be shorter than the raw observation data; that's the whole point of using Solomonoff induction, after all.

So, there are two advantages the consequentialists can exploit to "win" and be the shorter explanation, and together they must be enough to overcome those 10-1000 bits. The first is that, since the decision being made is very important, they can find the data within the universe without adding any further complexity. This, to me, seems quite malign, as the "true" explanation is being penalized simply because we cannot read data directly from the program that produces the universe, not because this universe is complicated.
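As a rough formalization of that comparison (the notation here is mine, not from the original post): writing $K(\cdot)$ for description length under the universal prior, the consequentialists' explanation wins roughly when

$$K(U_{\text{cons}}) + K(\text{locate decision} \mid U_{\text{cons}}) < K(U_{\text{true}}) + K(\text{locate data} \mid U_{\text{true}}),$$

and the savings on the left-hand locating term must exceed those 10-1000 bits for this to happen.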

The second possible advantage is that these consequentialists may value our universe for some intrinsic reason, such as the life in it, so that they prioritize it over other universes and it therefore takes fewer bits to specify their simulation of it. However, if you could argue that the consequentialists actually had an advantage here which outweighed their own complexity, that would just sound to me like an argument that we are living in a simulation, because it would essentially be saying that our universe is so improbably well tuned to be valuable to consequentialists that the existence of those consequentialists is less of a coincidence than the universe just happening to be that valuable.

Chapter 1: A Day of Very Low Probability

Gung unf gb or na rqvg... gur svany rknz fbyhgvba jnf sbhaq ol gur pbzzhavgl.