
I was reading Outlawing Anthropics, and this subconversation in particular caught my attention. I have some ideas; but that thread is nearly four years old, so I am commenting here instead of there.

My version of the simplified situation: there is an intelligent rational agent (her name is Abby, and she is well versed in Bayesian statistics), and there are two urns, each containing two marbles. Three of the marbles are green. Being macroscopic, they are distinguishable in principle, but not by Abby's senses. Abby can still number them marbles 1, 2 and 3; she is just unable to "read" the number even on close examination. One marble is red, she can distinguish it, and it gets the number 0. One urn holds marbles 0 and 2; this is the "even" urn. The other holds marbles 1 and 3 and is called "odd". Again, Abby cannot distinguish the urns without examining the marbles.

Now, an assistant takes both urns to another room, computes the 256th binary digit of exp(-1), and comes back with just the one urn of the corresponding parity. Abby is allowed to draw one marble (it turns out to be green), then the urn is taken away and Abby is basically asked to state her subjective probability that the urn is odd (by accepting or refusing some bets). Only then is she told that in another room there is another person (Bart) who is being presented with the same choices after drawing the other marble from the very same urn. And finally, Abby is asked (informally) what her averaged expectation of Bart's subjective probability of the urn being odd is (now that she sees her marble is green); and, if this average is different from her own subjective probability, why she is not taking that value as indirect evidence in her calculations (which clearly means that the assistant is just messing with her).
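As an aside, the assistant's randomizer is nothing exotic. Here is a minimal Python sketch (my own illustration, not anything from the original thread) that computes that digit from the alternating series for exp(-1):

```python
from fractions import Fraction

# exp(-1) = sum over n >= 0 of (-1)^n / n!.  Truncating at n = 120
# leaves an error below 1/120!, far smaller than 2**-256, so the
# 256th binary digit is safe (barring a carry straddling that digit).
x = Fraction(0)
term = Fraction(1)            # (-1)^0 / 0!
for n in range(120):
    x += term
    term = -term / (n + 1)    # next term of the alternating series

digit = int(x * 2**256) % 2   # 256th binary digit after the point
print(digit)
```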

The assumptions are that neither Abby nor Bart has a clue about the binary digits of exp(-1); they are not able to compute that far, and so they assign a prior probability of 50% to the urn being odd. Another assumption is that Abby and Bart have both chosen their marbles randomly; in fact, they do not even know which of them drew first. So there are 4 "possible" worlds, numbered by the marble Abby "would" have drawn, all of them appearing equally probable before the drawing.
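For concreteness, here is a minimal Python sketch (again my own illustration, not anything from the thread) that enumerates these four worlds and mechanically grinds out the straightforward Bayesian numbers the puzzle interrogates:

```python
from fractions import Fraction

# Four equally likely "possible" worlds, numbered by the marble Abby
# would have drawn.  Marble 0 is red; marbles 1-3 are green.
# The even urn holds {0, 2}; the odd urn holds {1, 3}.
worlds = [
    {"urn": "even", "abby": 0, "bart": 2},
    {"urn": "even", "abby": 2, "bart": 0},
    {"urn": "odd",  "abby": 1, "bart": 3},
    {"urn": "odd",  "abby": 3, "bart": 1},
]

def is_green(marble):
    return marble != 0          # only marble 0 is red

def posterior_odd(drawer, sees_green):
    """P(urn is odd | the drawer's marble has the observed color)."""
    match = [w for w in worlds if is_green(w[drawer]) == sees_green]
    return Fraction(sum(w["urn"] == "odd" for w in match), len(match))

# Abby draws green.
consistent = [w for w in worlds if is_green(w["abby"])]
print(posterior_odd("abby", True))                       # 2/3

# Abby's expectation of Bart's posterior, averaged over the worlds
# still consistent with her green draw.
expected = sum(posterior_odd("bart", is_green(w["bart"]))
               for w in consistent) / len(consistent)
print(expected)                                          # 4/9
```

The naive calculation thus gives Abby 2/3, while her expectation of Bart's answer is 4/9; that gap is exactly what the assistant's needling question is about.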

The question is (of course) what subjective probability Abby should use when accepting/refusing bets. And also to give a witty retort to the assistant's "why" question, where applicable; or else, to explain why Boltzmann brains are not that big an obstacle to rationality.

And here I am, way over my time budget, having finished around one third of my planned comment. So I guess I shall leave you with questions for now, and I will resume commenting later.

Edit: Note to self: Do not forget to include http:// in links. RTFM.

Edit: "possible" worlds, numbered by marble Abby has drawn -> "possible" worlds, numbered by marble Abby "would" have drawn

Why, then, don't more people realize that many worlds is correct?

I am going to try to provide a short answer, as I see it. (Fighting the urge to write about different levels of "physical reality".)

Many Worlds is an Interpretation. An interpretation should translate from the mathematical formalism towards practical algorithms, but MWI does not go all the way. Namely, it does not specify the quantum state an Agent should use for computation. One possible state agrees with "Schroedinger's experiment was definitely set up and started", another state implies "the cat definitely turned out to be alive", but those certainties cannot occur simultaneously.
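Schematically (my own notation, just for illustration): after the experiment runs, unitary evolution hands the Agent an entangled state like

$$ |\Psi\rangle = \tfrac{1}{\sqrt{2}}\left( |\text{alive}\rangle \otimes |\text{sees alive}\rangle + |\text{dead}\rangle \otimes |\text{sees dead}\rangle \right), $$

which makes "the experiment was definitely set up and started" certain while leaving neither outcome certain; a collapsed state such as $|\text{alive}\rangle \otimes |\text{sees alive}\rangle$ asserts the second certainty only by discarding the other branch.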

Bayesian inference in non-quantum physics also changes a (probabilistic) state, but we can interpret that as a mere change of our beliefs, not a change in the physical system. In quantum mechanics, however, upon observation the "objective" state fitting our knowledge changes. MWI says that "fitting our knowledge" is not a good criterion for choosing the quantum state to compute with (because no state can be fitting enough, as the example with Schroedinger's cat shows) and that we should compute with a superposition of Agents. MWI may be more "objectively correct", but it does not seem to be more "practical" than the Copenhagen interpretation. So physicists like to cautiously agree with MWI, then wave their hands, proclaim "Decoherence!", and in the end use the Copenhagen interpretation as before.

Introductory books emphasize experiments, and experimental results do not come in the form of superposed bits. So before a student gets familiar enough with the mathematical formalism to think about detectors in superposition, Copenhagen is already occupying the slot for Interpretation.

political systems (such as democracy) are about power.

Precisely. Democracy allows the competition of governing ideas, granting legitimacy to the winner (to become the government) and making the system stable.

I see the idea of democracy as detecting power shifts without open conflict. How many fighters would this party have if civil war erupted? An election will show. The number of votes may be very far from actual power (e.g. military strength), but it can still make the weaker side stop seeking conflict.

Without being explicit about what power ems will have, specifically in the meatworld, the question seems too ill-defined to me

Well, I am not even sure about the powers of individual humans today. But I am sure that counting adult = 1 vote, adolescent = 0 votes is not precise. On the other hand, it does not need to be precise. Every form of power can be roughly transformed into the "ability to campaign for more votes". Making votes more sophisticated would add a subgoal of "increasing voting power" that could become as taxing as actual conflict. Or not; I really have no idea. Sociology is difficult.

Back on topic. I see problems when ems are more varied in personal power than the children-versus-adults variance of today. Would "voting weight" have to be more fine-grained? Would this weight be measured in a friendly competition, akin to today's sports? Or would there be a privileged caste, with everyone else having no voting rights? Would voting rights be granted not to persons but to military platforms instead? (Those platforms would never actually be used; they would exist just for signalling purposes.) Or would any simpleton barely managing a digital signature be a voter, subject to brain-washing by those with actual power?

I hope that these low-quality questions can help someone else to give high-quality answers.

But I want to stress that I do not see any problems specific to the copyability of ems. Democracy only measures the power of a political party; it does not reflect on which methods have led to that power.

Thanks for the tip and for the welcome. Now I see that what I really needed was just to read the manual first. By the way, where is the appropriate place to write comments about how misleading the sandbox (in contrast with the manual) actually is?

Yes, CEV is a slippery slope. We should make sure to be as aware of the possible consequences as is practical before taking the first step. But CEV is the kind of slippery slope intended to go "upwards", in the direction of greater good and less biased morals. In the hands of a superintelligence, I expect CEV to extrapolate values beyond "weird" to "outright alien" or "utterly incomprehensible" very fast. (Abandoning Friendliness on the way, for something less incompatible with The Basic AI Drives. But that is a completely different topic.)

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

Thank you for mentioning "childish foolishness". I was not sure whether such suggestive emotional analogies would be welcome. This is my first comment on LessWrong, you know.

Let me just state that I was surprised by my strong emotional reaction while reading the original post. As long as the higher versions are extrapolated to be more competent, moral, responsible, and so on, they should be allowed to be extrapolated further.

If anyone considers the original post to be a formulation of a problem (and ponders possible solutions), and if the said anyone is interested in counter-arguments based on shallow, emotional and biased analogies, here is one such analogy: imagine children pondering their future development. They envision growing up, but they also see themselves starting to care more about work and less about play. The children consider those extrapolated values unwanted, so they formulate the scenario as the "problem of growing up" and try to come up with a safe solution. Of course, you may substitute for "play versus work" any "children versus adults" trope of your choice. Or "adolescents versus adults", and so on.

Readers may wish to counter-balance any emotional "aftertaste" by focusing on The Legend of Murder-Gandhi again.

P.S.: Does this web interface have anything like a "preview" button?

Edit: typo and grammar.