Troy Macedon's Shortform

by Troy Macedon · 23rd Nov 2020 · 33 comments

What are the odds that an immortal is experiencing youthhood?

There's a temporal self-location issue with being immortal. Since an immortal lives for an infinite amount of time, almost all of his experiences are in the year infinity AD. This implies that you're not an immortal, since your age is around 30, not infinity. The odds of you having your experience right now, as a relatively youthful man, are basically 0 if almost all of your experiences are in the infinite future. It seems like an immortal should always be at some unknown age, his boyhood so far in the past that it's been washed away by the fabric of reality itself.

 

So does this really mean you're not immortal? I don't think so. There are still ways to "save" immortality even though your experience right now is such a unique one. The most basic way is to treat the cause of the issue directly: your infinite future experiences. If you only ever have a finite amount of experience, then the odds of your experience of youth right now jump from infinitesimal all the way up to finitesimal. You could do this through passive or active memory loss, or by suspending your brain's state periodically so that even after an infinite amount of calendar time, you will only ever have been awake for a finite amount of time. For example: after 10Gyr of experiences, enter suspension for 10Gyr, then have 5Gyr of experiences, then suspend for 10Gyr, then have 2.5Gyr of experiences, then suspend for 10Gyr, and so on. This ensures that you never exceed 20Gyr of experiences, a finite number.
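As a quick arithmetic check of that schedule (taking the halving pattern above at face value), the total awake time is a geometric series:

$$10 + 5 + 2.5 + \dots = \sum_{k=0}^{\infty} \frac{10}{2^{k}}\ \text{Gyr} = \frac{10}{1-\tfrac{1}{2}}\ \text{Gyr} = 20\ \text{Gyr},$$

so however much calendar time passes, the accumulated experience never exceeds 20Gyr.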

But that's still a lot, and it still makes an experience of youthhood very unlikely (roughly one in a billion). Are there more ways to raise your chances of experiencing your youthhood even further? I have a gut feeling that precommitment techniques might be key. Or methods to extend experiences into year-long durations. So not just "I am typing this post right now" (a roughly 10-minute-long experience), but an experience of "I am playing on a forum" (a roughly 10-year-long experience).

This is self-sampling-assumption-like reasoning: you are reasoning as if your experience is chosen from a random point in your life. Since most of an immortal's life is spent being old, but most of a mortal's life is spent being young, you should therefore update away from being immortal.

You could apply self-indication-assumption-like reasoning to this: as if your experience is chosen from a random point in any life. Then, since you are also conditioning on being young, and both immortals and mortals have one youthhood each, just being young doesn't give you any evidence for or against being immortal that you don't already have. (This is somewhat in line with your intuitions about civilisations: immortal people live longer, so they have more Measure/prior probability, and this cancels out with the unlikelihood of being young given you're immortal.)
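A toy Bayes calculation (all numbers invented for illustration: youth = the first 30 years, a mortal lifespan of 80 years, an "immortal" lifespan of 10^12 years, 1:1 prior odds) shows how the two styles of reasoning come apart:

```python
# Toy comparison of SSA-style vs SIA-style updates on "I am young".
# All numbers are made up for illustration.

YOUTH = 30
MORTAL_LIFESPAN = 80
IMMORTAL_LIFESPAN = 1e12
PRIOR_ODDS = 1.0  # P(immortal) / P(mortal) before any anthropic update

# SSA-style: your experience is a random moment within *your own* life,
# so the likelihood of "young" is the fraction of that life spent young.
ssa_odds = PRIOR_ODDS * (YOUTH / IMMORTAL_LIFESPAN) / (YOUTH / MORTAL_LIFESPAN)

# SIA-style: your experience is a random moment across *all* lives, so each
# hypothesis is first weighted by how many observer-moments it contains,
# then conditioned on that moment being a young one. The weighting and the
# conditioning cancel, leaving only the count of young observer-moments.
sia_odds = PRIOR_ODDS * (IMMORTAL_LIFESPAN / MORTAL_LIFESPAN) \
                      * (YOUTH / IMMORTAL_LIFESPAN) / (YOUTH / MORTAL_LIFESPAN)

print(f"SSA-style odds of being immortal given youth: {ssa_odds:.2e}")  # ~8e-11
print(f"SIA-style odds of being immortal given youth: {sia_odds:.2f}")  # 1.00
```

Under the SSA-style update, being young is crushing evidence against immortality; under the SIA-style update, the immortal's extra observer-moments exactly cancel it, which is the cancellation described above.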

A possible solution: immortals would be interested in having dreams about their youth, so you are in some kind of simulation.

BTW, this is a variant of the Doomsday argument, applied to a personal life.

Regret is Cartesian Suicide. If the past changes, then you wouldn't be the observer-state that you are today.

Alternate framing: regret is the mechanism of reinforcing something you learned about your behavior.  Noticing that you wish you'd done something differently is equivalent to adding weight to better future decisions.

And like all learning, too much regret can be worse than too little.  Overfitting can lead to even worse predictions/decisions.

How to simulate observers without risking getting simulated yourself.

A common argument for the simulation hypothesis is that if we simulate too many observers relative to our unsimulated population, then we'll end up in a state where most observers are simulated; therefore, by the self-indication assumption, we're probably already simulated ourselves! I'll refer to this as the simulation tragedy.

I'll introduce a few methods we can use to avoid such an outcome. Why? Because I believe it's better to be unsimulated than simulated. The goal here will be to act, with regard to manmade simulations, such that the measure of unsimulated human experience always and forever outweighs the measure of simulatee experience. And not just within the sampled situation, but in the indicated situation entirely. As in, there'd be no conceivable way for us to be simulated.

The most brute-force way to avoid the simulation tragedy is to enforce a global law that limits the measure of simulatee experiences to below, say, 1% of nonsimulatee experiences. We'd also assume that every other civilization capable of running simulations would be rational enough to follow a protocol at least that strict. This would be simple to enforce as well, since computing power draws electricity and produces heat, which can be tracked by governments who can then audit the processing centers. A very crude solution, but it'd work and we'd retain our unsimulated status. There would also be an incentive for simulation companies to fund the creation of "human experience" in meatspace to open up more licensed measure for simulatee experience.
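As a rough sanity check of what that cap buys (assuming you just count measure self-indication-style): if simulatee measure never exceeds 1% of nonsimulatee measure, then

$$P(\text{simulated}) \le \frac{0.01}{1 + 0.01} \approx 0.99\%,$$

so the chance of being on the wrong side of the protocol stays below about one in a hundred.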

If capping the number of simulatees that have existed, do exist, or ever will exist becomes unfavorable or nonviable, we can instead cap the thought-space of the simulatees. Trivially, if we prevent simulatees from using anthropic reasoning, or any method of self-location, then the only thing you'll need to do to ensure your status as a nonsimulatee is to just self-locate every once in a while. This type of regulation would be cheaper to practice because most of the value that simulatees provide isn't in anthropics. Or maybe I'm wrong on that. But like the previous solution, we can just keep a census of the amount of unsimulated anthropic reasoning and permit only 1% of that measure to the simulatees, and we'd still be safe.

A more contingent solution rests on solving human aging. As long as unsimulated humans become effectively immortal, all we need to do is hold a universal "simulatee purge" every once in a while to keep most of the experience measure for ourselves. What's interesting is that this works even if the total number of simulated experiences equals the total number of unsimulated experiences. The reason is that longer-lived observers have unique experiences that shorter-lived observers wouldn't have: the vicissitudes of time, in our eyes, would act to leverage our side's experiences above the simulatees'. Simply put, when you look around you right now, everything you see has decades of context built into it. Your experience is fundamentally different from that of a child who hasn't built up all that context yet.

Conversely, if we find ourselves on the other end of such policies, then that would be strong evidence that we are being simulated by a civilization practicing anthropics. If we find that there is a fundamental lifespan limit, or a fundamental population limit, or that certain neural network configurations are magically prevented, then that would imply that we're under some kind of experience-measure regulation.

This also reminds me of the Omega question, because we might structure the simulations we build in such a way that future humans are required to eventually perform certain tasks to rebalance the measure.

For a while, we've been exploring a similar question but more in the direction of pre-committing to giving simulants better lives, rather than just not bringing them into existence: https://www.lesswrong.com/posts/NiN6fNXjnS9hMSB2C/principia-compat-the-potential-importance-of-multiverse

Trivially, if we prevent simulatees from using anthropic reasoning, or any method of self-location, then the only thing you'll need to do to ensure your status as a nonsimulatee is to just self-locate every once in a while.

Doesn't that protocol just allow some people to prove they're not simulants, while doing little to alleviate the real anguish of being one: growing up in an immature, low-tech society (with aging, disease, and fear) and then dying before spreading out into the stars?

Maybe students should be required to have children before they can graduate?

To avert Idiocracy? Just clone Einstein.

I'm trying to find a source for a quote. Who said that there are vastly more ways to die tomorrow than there are ways to die today, so your randomly-chosen death is most likely not the one with today's date?

I keep seeing people say that, given two possible theories with equal posterior probability of being true, the Self-Indication Assumption says the one that implies more observers is by default more likely to be true. But this would only hold if possible universes were equally distributed by observer count. They're not. Universes, even the set of possible universes, fall under either a normal distribution or a power-law distribution. Either distribution implies that universes with more observers are rarer, even though each one contains more observers. For example: there would be N universes with Y observers each, but only N/3 universes with 2Y observers each. So SIA would say you're more likely to find yourself in one of the former universes, with Y observers.

The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don't have an equal posterior probability of being true.

Posterior probability doesn't take SIA into account. So the theories would be equally likely before applying SIA. Then, applying SIA, the theory that predicts 2Y observers would become twice as likely. But then, applying a type of "Universe Indication Assumption", where universes with twice as many observers are intrinsically only a third as common, the theory that predicts Y observers becomes more likely.
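Spelling out the arithmetic with the toy numbers from above (N universes with Y observers each, N/3 universes with 2Y observers each), the combined weight is frequency times observer count:

$$\frac{P(Y\text{-universe})}{P(2Y\text{-universe})} = \frac{N \cdot Y}{(N/3)\cdot 2Y} = \frac{3}{2},$$

so under that assumed frequency distribution, the Y-observer theory wins 3:2 even after the SIA boost.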

A lot of Doomsday Arguments rest on the calculus of comparing how many civilizations make it through some filter vs. how many don't, and then simply using that ratio as the expectation of making it through said filter.

I don't think it actually works like that. If 1 out of 10 civilizations makes it through the filter, your credence that your civilization will pass shouldn't just be 10%. It should be way higher. Why?

Honestly, I'm not exactly sure. My gut tells me that the successful civilization has more "Measure" by default. You are just intrinsically more likely to be part of the successful civilization than of the doomed one. The same goes for the individual.

If anyone can explain why I'm wrong or right, please do.

We don't have many examples of civilizations passing through or failing at different filters, so it's all pretty darned theoretical anyway.  A lot depends on whether your 1 in 10 is for civilizations like the one under consideration, or from some distribution of civilization-types.

If we had actual objective measures of real filters, I think your gut has something. There are likely some characteristics that vary between civilizations and make for a correlation between passing some sets of filters (like those who pass Y tending to pass X more easily than those who fail Y). But again, that's a bit sketchy when we know of no actual civilizations other than ours, so we're pretty much making up the denominators.

Collapse has quite a few examples of civilizations failing (the book has an agenda but the examples are legit I would say). 

Let's say we passed filter Y and filter X is next. Of all the civilizations that have passed filter Y, only 10% will pass filter X. Our civilization has passed filter Y, so does this mean we have a 10% chance of passing filter X? I don't think so. Even if all the civilizations facing filter X have a history of successes, and the traits that caused those successes, our civilization still has an advantage. Said advantage isn't concrete or explainable. It's simply based on the fact that, of all the possible civilizations I could have been born into, something caused me to be born into this one. This civilization has an unbalanced Measure relative to all other possible civilizations.

There's a ton hiding behind that "let's say".  How many civilizations are we actually comparing and how similar are they to ours?

In any case, if our civilization is similar to the mean civilization that faced filter X, then 10% seems right. If our civilization is different from that reference class, the chance could be quite different.

The raw number of civilizations and their dissimilarity to us are both irrelevant here.

They're extremely relevant to what "one in ten" actually means, and whether it applies to us.

Actually, they're entirely irrelevant. And it applies to all civilizations that passed filter Y and are now facing filter X, which has a 10% pass rate. Let me know if you're still a little confused.

Common talking point:
"The Mind is substrate independent since information and computation are substrate independent."

That's like saying:
"An apple is substrate independent since structure and working energy are substrate independent."
 

Is the fear of someone else completely understanding/defining you rational?

A trivial example is an implication of the simulation argument: if you can't be understood, then you can't be simulated, which counterfactually makes you less likely to be simulated in the first place.

To go even further: trying to come up with a definition that perfectly encapsulates the epistemology of your enemies makes you more likely to be the simulator, and them the simulatee, in the interaction. Does this mean that forums like LessWrong are actually honeypots set up by future Omega-level villains?

Proposing a "Law Law": applying the Law of Identity to decision-making.

The law of identity is descriptive. It states that some thing is described as itself: X = X. A valid description describes itself.

I think there's a prescriptive analog as well: a Law Law. It would state that a valid decision-making framework must output itself along with its answer. A valid prescription prescribes itself. Otherwise, it's necessarily a wrong decision-making framework. No further research into the proposed prescription would be needed!

An obvious example of a prescription that violates this proposed law law: "we ought to avoid using ought statements". If anyone thinks that statement isn't paradoxical, please enlighten me. You might say that prescriptions in our everyday lives tend not to talk about themselves, or about other prescriptions, explicitly. I will then respond that even if they don't do it explicitly, they still do it implicitly. A prescription always has some consequence for its own Measure. This is so because a valid prescription must be feasible.

I'll provide a valid prescription that follows the law law, which will also help explain Measure: "we ought to work". That statement doesn't explicitly prescribe us to say we ought to work, but it does implicitly. To see why, just look at the outcome of that prescription were it to be put into practice, actualized. If we work by following the prescription, we can keep prescribing it, no problem. Nothing stops us from being able to actualize it in the future by working. In fact, its Measure increases as a result of our work. The more work we do, the more work we can do (were it to be prescribed then). So choosing to work is perfectly valid. Though there are definitely situations where choosing to work would actually hurt the ability to work in the future; when you're tired, for example.

Conversely, the prescription "we ought to destroy ourselves" is invalid. To show how it breaks the law law, look at its actualization: by practicing self-destructive behavior, we lose our ability to be self-destructive and so that prescription becomes inaccessible.

Eventually, there comes a point where not just the prescription in question is no longer feasible, but every other conceivable prescription becomes infeasible as well. This would then be a violation of a (proposed) law law law, which states that any prescribing agent must also prescribe prescriptions in general, not just their own prescriptions, and maximize the set of all possible prescriptions, not just a subset of them. But that's beyond this post's scope.

I'll also quickly add that this law law applies to value judgements and morals and even rationality. For example: it's immoral to amputate your arms just to avoid the dilemma of whether or not to save a drowning person. All moral acts must lead to an increase in the ability to be moral. If you amputate your arms, then you lose your ability to ever be moral by saving a drowning person. It's also never rational to decrease rationality. For example: "suicide is a rational choice for people who want to stop suffering" is invalid, because if you went through with your "rational" decision then you would never be rational again. The dead do not reason. And finally, for values: values may be subjective, but it would be a contradiction not to value your ability to value things. Value values! Reason reasons! Ethicize ethics!

Caveats: this "law law" sounds almost a bit too cute for me to think it's correct (and I swear I didn't come up with the name and then the idea to try and hamfist it in; the name just works because it's a law about laws). Another issue is that it ignores end-state prescriptions. As in, ideas about what the outcome of behavior should be, not what the behavior itself should be. But I don't subscribe to those ideas since I believe any correct outcome of a prescription is still a prescription in essence. 

"we ought to avoid using ought statements"
 If anyone thinks that statement isn't paradoxical, please enlighten me.

We shouldn't be using should statements. (And yet we are.) The statement can only be made if it isn't being followed - where's the paradox?

For comparison:

A library has a sign which says "No talking in the library." Someone talks in the library. Someone goes "Shhh!". "Why?" A librarian says "No talking in the library."

Librarians are allowed to talk. To correct your analogy: it would be as if no one was talking and then a patron suddenly told another patron in the library "No talking in the library," out loud, by talking.

When you introduce different reference classes like that, you have to be careful because of implicit assertions. For example: a robot vegan cafe with a sign that says "No meat allowed," and a human walks in who is composed of meat, without breaking the rules.

I'm saying the rules differ from how they are said - and the apparent conflict results from the difference.

Seems like you apply labels "invalid", "immoral", "irrational" to memes that do not straightforwardly try to spread themselves. Even if I accept the implied value judgment, there is still the problem that memes do not exist in vacuum. The environment can punish some attempts at self-replication, and can reward doing things other than straightforward attempts to self-replicate.

Please provide examples and explain further how I require something to be "straightforward" for it to follow the law law. If some explicitly self-destructive prescription has a counterintuitive effect of leading to self-constructive outcomes, then I would research why that is. Maybe there's an implicit prescription hidden in there. Or putting prescriptions into practice works differently than how I thought it did. Either way I would eventually find the correct prescription after doing the research.

Your wording also makes it seem like I'm contriving those labels of "irrational", "invalid", and "immoral", but from my understanding, they follow directly from the respective definitions of "rational", "valid", and "moral" when applied to a situation that results in a decrease in rationality, validity, or morality. I am not "labeling" your act of amputating your limbs just to avoid the moral dilemma of whether you should save a drowning victim or not. I think a better term would be "concluding", since in that situation you are choosing to remove yourself from being able to act morally in the drowning dilemma. And if you move away from being moral, that is moving towards being immoral. But if you have an explanation for how you can be, for example, rational by forever destroying your rationality, then please enlighten me.

One small step until Contingentative Capitocracy.

TLDR: Contingentative Capitocracy is a system where the government is structured like a corporation that requires all its employees to buy an amount of company stock based on their position of power. The "small step" is that the only thing that needs to happen is for citizenship status to be commoditized.

Capitocracy is the opposite of Democracy. Instead of voting/citizenship/rights/personhood being tied to a human being, as is the case in a democracy, in a capitocracy those State constructs would be tied to (some amount of) a liquid equity instrument, which would in turn be held by a human (or AI). This liquid equity instrument is called a Stateshare. We already have this form of governance in corporations. In a corporation, a human's voting power and privileges aren't based on that human's status as a human, but on their status as a shareholder. Capitocracy just applies this governance system to the government.

The transition from representative democracy to capitocracy would only require one thing: for everyone in the country to be issued the same number of tradeable stateshares; 1,000 is a good number. That's it. I don't even need to specify what happens to the House/Senate/Judiciary/Executive/Military, since those can still exist exactly as they are now even after the transition. Though some branches of the government might fade away with time.

The motivation for this adoption is to carry over the benefits of a corporate structure to the government. Such benefits would be: instant feedback from a publicly traded stateshare price that moves based on how well the government is run and on its future prospects; policy decisions made by the shareholders who have the most to gain/lose from said policies; an alignment of interests among policy makers; an incentive to cut wasteful spending; paid exit; paid entrance; etc.

But many people have an issue with how corporations are governed already, let alone with applying that system to THE government. And I agree with them on two points. One is that Russia tried a similar scheme when the USSR collapsed. All Russian citizens were issued shares of the state-owned means of production. What ensued was a mass sell-off of these shares for absurdly low prices (think: bottles of vodka). And I don't consider Russia's decisions back then to have been good, given its current declining state. The second point is that capitocracy by itself only aligns the interests of SOME of the humans (and AIs) in the country, and only SOME of the stateshare-holders. The humans in the country who end up selling off all of their stateshares would no longer care about how well the government is run, or how the stateshare market price is doing YTD. Their interests would diverge and conflict. This will only lead to civil unrest in the future. It's unsustainable.

To prevent this, the capitocracy must be Contingentative. Everyone in the country would be required to maintain some minimum level of stateshare ownership. 900 is a good number, leaving only 10% of the initial 1,000 free to trade away. This "mandatory buy-in" ensures that even the type of human who would sell off all of their state-given stateshares no longer can, unless they leave the country first (thereby removing themselves from being able to cause civil unrest in the country in the future). But there would still be enough volume on the market to communicate to everyone how well the government is being run. If Russia had implemented this simple little caveat, it would be in a much better state than it is now. The corporate analog would be a corporation that forced all its employees to buy some amount of company stock before they were allowed onto company property.
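A minimal sketch of that mandatory buy-in rule, assuming the 1,000-share allotment and 900-share floor above (the class and function names here are hypothetical, purely for illustration):

```python
# Toy sketch of the mandatory buy-in rule: every resident is issued 1,000
# stateshares and may never let holdings drop below a 900-share floor
# while remaining in the country.

ISSUED_PER_CITIZEN = 1_000
MINIMUM_HOLDING = 900

class StateshareRegistry:
    def __init__(self):
        self.holdings = {}  # resident id -> share count

    def issue(self, resident: str):
        """Grant a new resident (or newborn) the standard allotment."""
        self.holdings[resident] = self.holdings.get(resident, 0) + ISSUED_PER_CITIZEN

    def transfer(self, seller: str, buyer: str, amount: int):
        """Allow a sale only if the seller stays at or above the floor."""
        if self.holdings.get(seller, 0) - amount < MINIMUM_HOLDING:
            raise ValueError("sale would breach the mandatory buy-in floor")
        self.holdings[seller] -= amount
        self.holdings[buyer] = self.holdings.get(buyer, 0) + amount

    def emigrate(self, resident: str):
        """Leaving the country releases the full holding for sale."""
        return self.holdings.pop(resident, 0)

registry = StateshareRegistry()
registry.issue("alice")
registry.issue("bob")
registry.transfer("alice", "bob", 100)   # fine: alice keeps exactly 900
# registry.transfer("alice", "bob", 1)   # would raise: below the 900 floor
```

The point is just that the floor is enforced at transfer time: the top 100 shares trade freely, while the remaining 900 only unlock when the holder emigrates.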

Immigration would be solved as well. In a democracy, there's an incentive to allow the immigration of demographics who will vote for your Party, and to prevent the immigration of demographics who will vote for the other Party, all the while diluting the voting power of existing voters. Under capitocracy, immigrants have to buy into the system. And they buy those 900 stateshares on the market, from existing stateshareholders, so the voting power of existing voters isn't diluted by immigration. But dilution does take place when children are born, since we keep natural-born citizenship in place. Newborns would be granted 900 stateshares, which would be locked away until they're adults. The duration of the stateshares' lockup can be tweaked year to year to disincentivize or incentivize births.

Tourists would not be exempt from this rule. No matter how short your visit is, if you want to enter the country, you have to buy in first. But tourists don't want to drop roughly $500k just to be allowed to go on a vacation, so private companies would lease out stateshares to tourists they think won't lose them while here (for example, by committing a crime and forfeiting their stateshares to pay the victim). Foreign countries would also get involved and buy these stateshares as a form of investment. This is a good thing, as it would align interests between countries; a country doesn't want to destroy its investment. Wartime alignment would go both ways, too. A Capitocracy is more likely to protect one of its major stateshareholders than a country that refused to invest.

Whether a foreign shareholder is allowed to vote isn't guaranteed; buying a country's stateshares does not necessarily entitle you to govern said country. Actually, voting would become a thing of the past as long as the amount of power a politician/official/general/bureaucrat has is proportional to the amount of stateshares they own (and lock away for decades to ensure loyalty). I don't need to vote on foreign policy if I know the official in charge of foreign policy wants the same thing I want: to increase the market price of our stateshares. This power-to-holdership parity would apply to everyone, not just officials. Even normal people would need to buy more stateshares if they want a gun license, for example. This is basically the final form of neoliberalism.

Government-provided welfare would also be incentivized even though the government would become a for-profit entity, since treating your domestic/foreign stateshareholders well would increase the market price of the stateshare and make the policy makers more wealthy as a consequence (as long as welfare costs didn't eat into the profits too much, of course).

I don't like Pessimism. Pessimism can be a reaction to Nihilism/Suffering. You can accept the reality of Nihilism and Suffering without having a pessimistic reaction to them. In fact, the pessimistic reaction is animalistic and illogical; it's unjustified and self-defeating, respectively. It also reveals the Slave-type mindset of the Pessimist. When you are faced with the naked truth that your state of existence isn't the result of any higher power's intent (and is thus taken as a meaningless existence), and your reaction is negative, then that makes me view you as a self-professed Slave who finds out his Master doesn't exist. Should the Slave then become a Pessimist?

Everett's daughter's suicide is actually kind of a tragedy if you think about it.

And we're all to blame for it. Or maybe just me, because I don't know about you. I don't know how much her suicide affected others, but it did affect me. Specifically, it affected my precommitments. Why is this important? Because my precommitments have an actual effect on the Measure of different versions of me. And since I'm the version of me with the greatest Measure, this implies that my precommitments are to thank for that. So I'm guilty. Specifically, I'm guilty of being a man who would have taken greater risks (and hence had lower Measure) if the daughter of the father of the Multiverse and Quantum Immortality hadn't killed herself. In fact, I wouldn't have even typed this post, and you wouldn't have read it. I am inadvertently implicating you. I'm sorry.