Let's play around with the quantum measure some more. Specifically, let's posit a theory T that claims that the quantum measure of our universe is increasing - say by 50% each day. Why could this be happening? Well, here's a quasi-justification for it: imagine there are lots and lots of universes, most of them in chaotic random states, jumping around to other chaotic random states, in accordance with the usual laws of quantum mechanics. Occasionally, one of them will partially tunnel, by chance, into the same state our universe is in - and then will evolve forwards in time exactly as our universe does. Over time, we'll accumulate an ever-growing measure.

That theory sounds pretty unlikely, no matter what feeble attempts are made to justify it. But T is observationally indistinguishable from our own universe, and has a non-zero probability of being true. It's the reverse of the (more likely) theory presented here, in which the quantum measure was being constantly diminished. And it's very bad news for theories that treat the quantum measure (squared) as akin to a probability, without ever renormalising. It implies that one must continually sacrifice for the long-term: any pleasure today is wasted, as that pleasure will be weighted so much more tomorrow, next week, next year, next century... A slight fleeting smile on the face of the last human is worth more than all the ecstasy of the previous trillions.
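To get a sense of the numbers, here is a minimal sketch, assuming the 50%-per-day growth above and utility weighted linearly by measure (the specific figures are illustrative only, not from any serious physics):

```python
# A minimal sketch of theory T's weighting, assuming measure grows by
# 50% per day and utility is weighted linearly by measure.

GROWTH_PER_DAY = 1.5

def measure_weight(days_from_now: float) -> float:
    """Relative weight of one utilon experienced days_from_now days ahead."""
    return GROWTH_PER_DAY ** days_from_now

# One utilon enjoyed a year from now vs a trillion utilons enjoyed today:
print(measure_weight(365))          # ~1.5^365, on the order of 10^64
print(measure_weight(365) > 1e12)   # True: the single deferred utilon wins
```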

One solution to the "quantum measure is continually diminishing" problem was to note that as the measure of the universe diminished, it would eventually get so low that any alternative, non-measure-diminishing theory, no matter how initially unlikely, would predominate. But that solution is not available here - indeed, that argument runs in reverse, and makes the situation worse. No matter how initially unlikely the "quantum measure is continually increasing" theory is, eventually the measure will become so high that it completely dominates all other theories.


Is it fair to think of this as related to Pascal's mugging? That problem derived disproportionate EV from "utilities grow faster than complexities" (so we had a utility growing faster than its probability was shrinking), and this one derives it from "if this hypothesis is true, all utilities grow over time" (so we have a utility growing while its probability remains fixed).
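For concreteness, a toy sketch of the two EV structures (all numbers here are my own illustrations, not anything from the post):

```python
# Toy contrast between the two structures (all numbers illustrative).

def mugging_ev(n: int) -> float:
    """Pascal's mugging: utility grows in complexity n faster than
    the probability shrinks, so EV grows without bound in n."""
    prob = 2.0 ** -n       # probability shrinking exponentially with complexity
    utility = 3.0 ** n     # utility growing faster still
    return prob * utility  # = (3/2)^n, diverging as n grows

def measure_ev(t: int, prior: float = 1e-9) -> float:
    """This post: the hypothesis's probability stays fixed, but if it is
    true all utilities grow over time t, so EV again grows without bound."""
    utility = 1.5 ** t     # utility weight growing 50% per day under T
    return prior * utility

print(mugging_ev(50))      # ~6.4e8
print(measure_ev(365))     # huge despite the tiny fixed prior
```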

Is it fair to think of this as related to Pascal's mugging?

Yes, very fair indeed. And even correct! :-)

the measure will become so high that it completely dominates all other theories.

Because the total amount of measure is a free parameter in quantum mechanics that we normally just fix at 1, an increase in the total amount of measure is not merely observationally indistinguishable from things staying the same, but is actually the same. One can shift measure around, but the total still can't go above 1.

We can posit that the total amount of measure in the universe(s) is arbitrarily high, given an arbitrarily long amount of time for the measure to grow (the usual "unbelievably longer than the total age of the universe raised to the power of the number of particles in the universe, etc...")

And since quantum mechanics cares about relative measure of different observations, we could also posit that total measure is infinite.

We could not have an infinite pool of measure to draw on, because if total measure was infinite, then any finite pieces could not interact without breaking linearity.

And, again, just because you can double the total amount of measure in your representation, doesn't mean that this number is physically meaningful. If the number was arbitrary to begin with, there's no reason to assume that changing it is meaningful.

We could not have an infinite pool of measure to draw on, because if total measure was infinite, then any finite pieces could not interact without breaking linearity.

Can you explain?

And, again, just because you can double the total amount of measure in your representation, doesn't mean that this number is physically meaningful. If the number was arbitrary to begin with, there's no reason to assume that changing it is meaningful.

But you're doubling the total amount of measure relative to the total measure of the rest of the universe, a change that is non-arbitrary for many decision theories.

Suppose I start with a big blob of measure in a boring universe that is slowly turning into universes like ours. Linearity says that the rate at which universes like ours appear is proportional to how big the big blob of measure is.

In fact, this is crucial to calling it "measure" rather than just "that number in quantum mechanics."

So if the rate of universes like ours appearing is proportional to the size of the original blob, then as we make the size of the original blob infinite, we also make the rate of universes like ours appearing infinite. We cannot have a finite number of universes like ours but an infinite blob of measure turning into them - we can only have a proportionally smaller infinite amount of universes like ours. This requirement gives us back our old limitations about eventually running into a maximum.

Occasionally, one of them will partially tunnel, by chance, into the same state our universe is in - and then will evolve forwards in time exactly as our universe does.

So, pretending that this sort of thing has any significance, you would also expect some worlds to tunnel, by chance, into neighboring states, as might result from making different decisions. So the argument for always sacrificing in favor of future gains falls down: most of the measure for the world in which you get the future benefits of the sacrifice comes from quantum fluctuations, not the sacrifice itself, as both available worlds - the one where you make the sacrifice and the one where you don't - accumulate measure from random tunneling, regardless of your choice. You should make your decision based on the amount of measure you actually affect, not the amount that happens to merge into the same state you might cause. (And the ever-growing number of branches this theory says would be accumulating measure just shows further how ridiculous it is.)

Er... this isn't a serious theory of physics I've put forward!

My critique of the physics was more of an aside. The main point was the critique of the decision theory: that under the assumptions of this non-serious theory of physics, most of the measure of the various outcomes is independent of your decisions, and you should only base your decisions on the small amount of measure you actually affect.

you should only base your decisions on the small amount of measure you actually affect.

But whether that small amount is increasing in time or not is very relevant to your decision (depending on how your theory treats measure in the first place).

My point was that under your assumptions, the amount you affect does not increase in time at all; only the amount you do not affect increases.

?

Er no, you can still make choices that increase or decrease utility. It's simply that the measure of the consequences of these choices keeps on increasing.

Suppose you are in a world with measure M and are choosing between A and B, where A results in world WA, which includes an immediate effect worth 4 utilons per measure, and B results in world WB, which includes a later effect at time T worth 3 utilons per measure. Suppose further that under your non-serious theory, by time T, random quantum fluctuations have added measure 10M to each of the worlds WA and WB. Your choice between A and B is then a choice to add measure M to either WA or WB. Choice A results in WA immediately having measure M, worth 4M utilons, and later, at time T, WA having measure 11M (worth 0 further utilons) while WB has measure 10M (worth 30M utilons), for a total of 34M utilons. Choice B results in WB immediately having measure M (worth 0 utilons), and at time T, WA having measure 10M (worth 0 utilons) and WB having measure 11M (worth 33M utilons), for a total of 33M utilons. So you choose A for 34M instead of B for 33M utilons, for the same reasons that, without the non-serious theory, you would choose A for 4M utilons instead of B for 3M utilons. Your non-serious theory should not impact your decisions, because your decisions do not control which worlds it adds measure to.
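Here is the same accounting in code - a direct transcription of the numbers above, with M normalised to 1 (the variable names are mine):

```python
# The accounting above, with M normalised to 1 (variable names are mine).
M = 1.0
RATE_A = 4.0           # utilons per measure, A's immediate effect
RATE_B = 3.0           # utilons per measure, B's effect at time T
FLUCTUATION = 10 * M   # measure added to each of WA and WB by time T

def total_utilons(choice: str) -> float:
    # Your decision directs your measure M into WA or WB...
    wa = M if choice == "A" else 0.0
    wb = M if choice == "B" else 0.0
    # ...but the fluctuation measure arrives in both worlds regardless,
    # and only measure present when an effect happens yields utilons.
    immediate = RATE_A * wa                  # consumed at decision time
    deferred = RATE_B * (wb + FLUCTUATION)   # consumed at time T
    return immediate + deferred

print(total_utilons("A"))   # 34.0 (= 4M + 30M)
print(total_utilons("B"))   # 33.0 (= 33M): choose A, as without the theory
```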

I was envisaging utilons being "consumed" at the time they were added (say people eating chocolate bars). So choosing A would add 4M utilons, and choosing B would add 33M utilons.

I was envisaging utilons being "consumed" at the time they were added (say people eating chocolate bars).

My example is entirely compatible with this.

So choosing A would add 4M utilons, and choosing B would add 33M utilons.

So the problem here is that you are not accounting for the fact that choosing A in the measure-M world does not prevent the accumulation of measure 10M to world WB from quantum fluctuation. You get those 30M utilons whether you choose A or B; choosing A gets you an immediate 4M additional utilons, while choosing B gets you a deferred 3M utilons.

A and B could be logically incompatible worlds, not simply different branches of the multiverse.

I am not sure what you mean by "logically incompatible worlds", but if worlds WA and WB are the results of different available decisions of an agent embedded in a common precursor world, then they both follow the same laws of physics and just have their particles or whatever in different places, and in a quantum universe they just have different quantum states.

I may decide to go left or right at a crossroad. If I decide to go left (for good reasons, after thinking about it), then almost all of my measure will go left, apart from a tiny bit of measure that tunnels right for various reasons.

So if I decide on A, WB will exist, but only with the tiniest of measures.

Yes, that is how your decision gives your measure M to world WA or to world WB, but that shouldn't affect accumulation of measure into later states of these worlds by quantum fluctuation, so both worlds still get measure 10M from that.

Unless you mean that quantum fluctuations into later states of the world are directed by the normal evolution of the earlier states, including your decision. In that case, this process would be adding measure (perhaps not quantum measure, but counting as decision-theoretic measure in the same way) to the initial state of the world in which you make the decision, because it is another instance of the same causal chain - that is, it produces the same result for the same reasons. So you get all 10M of the quantum-fluctuation measure right away, and choice A gives 44M utilons while B still gives 33M utilons.
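Under that reading, the earlier sketch changes: the 10M rides on whichever world the decision selects, so (again with M normalised to 1):

```python
# Variant of the earlier sketch: if the fluctuation process duplicates
# the causal chain of your decision, its 10M effectively stands behind
# whichever world you choose, from decision time onwards.
M = 1.0
EFFECTIVE = M + 10 * M    # 11M behind the chosen world

print(4.0 * EFFECTIVE)    # choice A: 44M utilons
print(3.0 * EFFECTIVE)    # choice B: 33M utilons
```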

My model was of gradual proportional increase in utility, not absolute addition to every branch.

My model was of gradual proportional increase in utility

Yes, my example shows a proportional increase in measure between two times, and is indifferent to the gradual increase between these times. If you think the gradual increase is important, please provide an example that illustrates this.

not absolute addition to every branch.

I have already explained why adding the measure to a single branch is incoherent in both the cases where the decision causes or does not cause selection of the branch that receives the measure.

I have already explained why adding the measure to a single branch is incoherent in both the cases where the decision causes or does not cause selection of the branch that receives the measure.

I don't quite understand the point. I'm claiming that, for instance, if a branch has measure M at time 0, it will have measure 2M at time 1 - i.e. its measure at time 1 is twice that at time 0. If measure splits into N+N'=M, then the branch with N will go to 2N and the branch with N' will go to 2N'.
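A small sketch of that model as I read it (doubling per time step; the split proportions below are arbitrary):

```python
# The claimed model: every branch's measure doubles per time step, so a
# split N + N' = M evolves to 2N + 2N' = 2M with all proportions preserved.
def evolve(branches, steps=1):
    return [m * 2.0 ** steps for m in branches]

M = 1.0
N, N_prime = 0.3 * M, 0.7 * M    # arbitrary split with N + N' = M
print(evolve([N, N_prime]))      # [0.6, 1.4]: total 2M, same ratio 3:7
```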

Are you claiming that a) this model is incoherent, or b) that this model does not entail what I'm claiming (that you should save for the future)?

Are you claiming that a) this model is incoherent, or b) that this model does not entail what I'm claiming (that you should save for the future)?

The basic model you described, even as alternative physics, is underspecified, and depending on how I try to steelman it so it is coherent, it doesn't entail what you claim, and if I try to steelman it so it entails what you say, it isn't coherent.

The big question is what worlds get to accumulate measure, and why those particular worlds. If the answer is that all worlds accumulate measure, then the accumulation happens independently of your decision, so the effect should not impact your decision. If the answer is that the measure-accumulation process somehow depends on what world your decision leads to, then the measure-accumulation process, in locating that world, duplicates its causal structure, and by the generalized anti-zombie principle, contains all the same conscious people as that world - so it adds to the world's decision-theoretic measure even before your model says it officially adds to its quantum measure (this is basically parallel to the argument for Many Worlds). What I think is incoherent is the idea that you can add measure to a world state without adding measure to the process that selected that world state, which you try to do by supposing that your decision (and its intermediate effects) don't cause the later accumulation of measure, yet the measure magically accumulates in the world that results from your decision. (To account for this, you would have to follow the probability to outside the thought experiment.)

It feels like this should all be obvious if you understand why p-zombies are incoherent, why Many Worlds is obviously correct, and how these are related.

Consider this setup: you decide whether to buy ice cream now or chocolate later (chocolate ice cream unfortunately not being an option). Your mind will go through various considerations and analyses, and will arrive at a definite conclusion.

However, your decision is actually determined in advance - any Laplacian demon could deduce it by looking at your brain. It's all pretty clear, and quantum events are not enough to derail it (barring very, very low-measure stochastic events). So from the universe's perspective, you're not choosing anything, not shifting measure from anything to anything.

But you can't know your own decision before making it. So you have the impression of free will, and are using an appropriate decision theory. Most of these work "as if" your own decision determines which (logical) world will exist, and hence which world will get the increased measure. Or, if you prefer: you know that the world you decide on will get increased measure in the future; you are simply in ignorance of which one it will be. So you have to balance "ice cream before the increased measure" with "chocolate after the increased measure", even though you know one of these is impossible.

However, your decision is actually determined in advance - any Laplacian demon could deduce it by looking at your brain. It's all pretty clear, and quantum events are not enough to derail it (barring very, very low-measure stochastic events). So from the universe's perspective, you're not choosing anything, not shifting measure from anything to anything.

The logical structure of my decision still controls what world gets the measure. From Timeless Control:

Surely, if you can determine the Future just by looking at the Past, there's no need to look at the Present?

The problem with the right-side graph is twofold: First, it violates the beautiful locality of reality; we're supposing causal relations that go outside the immediate neighborhoods of space/time/configuration. And second, you can't compute the Future from the Past, except by also computing something that looks exactly like the Present; which computation just creates another copy of the Block Universe (if that statement even makes any sense), it does not affect any of the causal relations within it.

This is basically the same point as the one I keep making and you keep missing: the universe/Laplacian demon/whatever is adding quantum measure, and in order to select the same world to add measure to that was selected by your decision, it has to duplicate the causal structure of your decision and the resulting world. (And since within this computation the same things happen for the same reasons as in the selected world, by the generalized anti-zombie principle, the computation is adding measure to that world even at times before your model says it adds quantum measure.)

The universe/Laplacian demon/whatever is adding quantum measure, and in order to select the same world to add measure to that was selected by your decision,

The demon is not adding quantum measure, or selecting anything. Every Everett branch is getting its measure multiplied - nobody's choice determines where the measure goes.

At least, from the outside perspective, for someone who knows what everyone else's choices are/will be (and whose own choices are not relevant), nobody's choice is determining where the measure goes. From the insider perspective, for someone who doesn't know their own decision - well, that depends on their decision theory, and how they treat measure.

Do you also disagree with http://lesswrong.com/lw/g9n/false_vacuum_the_universe_playing_quantum_suicide/, btw? Because that's simply the same problem in reverse.

One solution to the "quantum measure is continually diminishing" problem was to note that as the measure of the universe diminished, it would eventually get so low that any alternative, non-measure-diminishing theory, no matter how initially unlikely, would predominate.

This is related to the doomsday argument. Once you accept the idea that you can use your existence as evidence, you can show that the total measure of consciousness in the universe must be finite; otherwise you would almost certainly not find yourself within a googolplex years of the big bang. This disproves the increasing quantum measure theory.

Due to standard entropy arguments, I would say that the aforementioned theory being correct, and the universe thus increasing in measure, is orders of magnitude less likely than the reverse - the universe shrinking in measure. Thus, when summing over all possible worlds, the theory you suggest has a much lower weight than its reverse, and so is vastly outweighed.

This still leaves the opposite problem you mention, but seeing that these problems are opposites, it makes sense that only one can be the real problem.

is orders of magnitude less likely than the reverse

But exponential growth will make short work of orders of magnitude...

Clarification: the probability is orders of magnitude less, and this is a difference more than maintained under exponential growth. Example: if p=0.1 and q=0.01, then p^n = 1/10^n, while q^n = 1/10^(2n). Thus for all n>0, p^n is at least 10 times q^n - in fact it is 10^n times q^n, a ratio that rapidly grows as n grows. As you can see, far from making short work of it, exponential growth only broadens the gap.
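The same numbers in code:

```python
# p^n / q^n = (p/q)^n = 10^n: the gap widens, it doesn't close.
p, q = 0.1, 0.01
for n in (1, 2, 5, 10):
    print(n, p ** n / q ** n)   # 10.0, 100.0, 1e5, 1e10 (up to float error)
```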

What are analogs of p, q and n here?

It feels to me like you're assuming that P(the universe is increasing in measure) is a function of the universe's current measure, which seems odd. But if it's not, then (I believe Stuart's claim is) no matter how small the probability, an increasing universe eventually has enough value to make it a dominant hypothesis in terms of EV.

I am working on the assumption that we have a theory (of low probability) that posits that the universe is continually increasing its measure, rather than having an independent low probability of measure increase at every moment.