The Cartoon Guide to Löb's Theorem

Wow. I've never run into a text using "we have" as assuming something's provability, rather than assuming its truth.

So the application of the deduction theorem is just plain wrong, then? If what you actually get via Löb's theorem is ◻((◻C)->C)->◻C, then the deduction theorem does *not* give the claimed ((◻C)->C)->C, but instead gives ◻((◻C)->C)->C, from which the next inference does not follow.

14yI don't think I've ever used a text that didn't. "We have" is "we have as a
theorem/premise". In most cases this is an unimportant distinction to make, so
you could be forgiven for not noticing, if no one ever mentioned why they were
using a weird syntactic construction like that rather than plain English.
And yes, rereading the argument that does seem to be where it falls down. Though
tbh, you should probably have checked your own assumptions before assuming that
the question was wrong as stated.

Probability is Subjectively Objective

The issue is not want of an explanation for the phenomenon, away or otherwise. We have an explanation of the phenomenon, in fact we have several. That's not the issue. What I'm talking about here is the inherent, not-a-result-of-my-limited-knowledge probabilities that are a part of *all* explanations of the phenomenon.

Past me apparently insisted on trying to explain this in terminology that works well in collapse or pilot-wave models, but not in many-worlds models. Sorry about that. To try and clear this up, let me go through a "guess the beam-spli... (read more)

Newcomb's Problem and Regret of Rationality

I two-box.

Three days later, "Omega" appears in the sky and makes an announcement. "Greetings, earthlings. I am sorry to say that I have lied to you. I am actually Alpha, a galactic superintelligence who hates that Omega asshole. I came to predict your species' reaction to my arch-nemesis Omega and I must say that I am disappointed. So many of you chose the obviously-irrational single-box strategy that I must decree your species unworthy of this universe. Goodbye."

Giant laser beam then obliterates earth. I die wishing I'd done more ... (read more)

Forcing Anthropics: Boltzmann Brains

"Why did the universe seem to start from a condition of low entropy?"

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day... (read more)

An Intuitive Explanation of Solomonoff Induction

"Specifically, going between two universal machines cannot increase the hypothesis length any more than the length of the compiler from one machine to the other. This length is fixed, independent of the hypothesis, so the more data you use, the less this difference matters."

This doesn't completely resolve my concern here, as there are infinitely many possible Turing machines. If you pick one and I'm free to pick any other, is there a bound on the length of the compiler? If not, then I don't see how the compiler length placing a bound on any spe... (read more)

04yCan anyone answer these concerns?

Probability is Subjectively Objective

You've only moved the problem down one step.

Five years ago I sat in a lab with a beam-splitter and a single-photon multiplier tube. I watched as the SPMT clicked half the time and didn't click half the time, with no way to predict which I would observe. You're claiming that the tube clicked every time, and the part of me that noticed one half is very disconnected from the part of me that noticed the other half. The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his mem... (read more)

25yMoving the problem down one step puts it at the bottom.
One half of you should have one, and the other half should have the other. You
should be aware intellectually that it is only the disconnect between your two
halves' brains not superimposing which prevents you from having both experiences
in a singular person, and know that it is your physical entanglement with the
fired particle which went both ways that is the cause. There's nothing to
post-dict. The phenomenon is not merely explained, but explained away. The
particle split, on one side there is a you that saw it split right, on one side
there is a you that saw it split left, and both of you are aware of this fact,
and aware that the other you exists on the other side seeing the other result,
because the particle always goes both ways and always makes each of you. There
is no more to explain. You are in all branches, and it is not mysterious that
each of you in each branch sees its branch and not the others. And unless some
particularly striking consequence happened, all of them are writing messages
similar to this, and getting replies similar to this.

2014 Less Wrong Census/Survey

Did the survey, except digit ratio due to lack of precision measuring devices.

As for feedback, I had some trouble interpreting a few of the questions. There were some times when you defined terms like human biodiversity, and I agreed with some of the claims in the definition but not others, but since I had no real way to weight the claims by importance it was difficult for me to turn my conclusions into a single confidence measurement. I also had no idea whether the best-selling computer game question was supposed to account for inflation or general grow... (read more)

Probability is Subjectively Objective

The Many Physicists description never talked about the electron only going one way. It talked about detecting the electron. There's no metaphysics there, only experiment. Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time. You may say that the electron goes both ways every time, but we still only have the detector firing half the time. We also cannot predict which half of the trials will have the detector firing and which won't. And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability are NOT coming from ignorance of hidden properties or variables, but from the fundamental way the universe works.

36yNo, I see it firing both ways every time. In one world, I see it going left, and
in another I see it going right. But because these very different states of my
brain involve a great many particles in different places, the interactions
between them are vanishingly small and my two otherworld brains don't
share the same thought. I am not aware of my other self who has seen the
particle go the other way.
We have both detectors firing every time in the world which corresponds to the
particle's path. And since that creates a macroscopic divergence, the one
detector doesn't send an interference signal to the other world.
We can predict it will go both ways each time, and divide the world in twain
along its amplitude thickness, and that in each world we will observe the way it
went in that world. If we are clever about it, we can arrange to have all
particles end in the same place when we are done, and merge those worlds back
together, creating an interference pattern which we can detect to demonstrate
that the particle went both ways. This is problematic because entanglement is
contagious, and as soon as something macroscopic becomes affected putting Humpty
Dumpty back together again becomes prohibitive. Then the interference pattern
vanishes and we're left with divergent worlds, each seeing only the way it went
on their side, and an other side which always saw it go the other way, with
neither of them communicating to each other.
Correct. There are no hidden variables. It goes both ways every time. The dice
are not invisible as they roll. There are instead no dice.

Occam's Razor

I don't think this is what's actually going on in the brains of most humans.

Suppose there were ten random people who each told you that gravity would be suddenly reversing soon, but each one predicted a different month. For simplicity, person 1 predicts the gravity reversal will come in 1 month, person 2 predicts it will come in 2 months, etc.

Now you wait a month, and there's no gravity reversal, so clearly person 1 is wrong. You wait another month, and clearly person 2 is wrong. Then person 3 is proved wrong, as is person 4 and then 5 and then 6 and 7 ... (read more)

Newcomb's Problem and Regret of Rationality

Suppose my decision algorithm for the "both boxes are transparent" case is to take only box B if and only if it is empty, and to take both boxes if and only if box B has a million dollars in it. How does Omega respond? No matter how it handles box B, its implied prediction will be wrong.

Perhaps just as slippery, what if my algorithm is to take only box B if and only if it contains a million dollars, and to take both boxes if and only if box B is empty? In this case, anything Omega predicts will be accurate, so what prediction does it make?

Com... (read more)
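The self-referential structure of these two cases can be checked mechanically. Below is a small sketch (the policy names and the consistency-checking helper are mine, purely illustrative): Omega's filling of box B encodes its prediction, and we ask which fillings are self-consistent under each policy.

```python
# Two transparent-box policies, and whether Omega can predict consistently.
# A policy maps the observed contents of box B (True = full) to a choice.
# Omega fills box B iff it predicts one-boxing, so a filling is consistent
# iff the agent's actual choice under that filling matches the prediction
# the filling encodes.

def one_box_iff_empty(b_full):
    """Take only box B iff it is empty (the 'contrarian' policy)."""
    return "two-box" if b_full else "one-box"

def one_box_iff_full(b_full):
    """Take only box B iff it contains the million (the 'agreeable' policy)."""
    return "one-box" if b_full else "two-box"

def consistent_fillings(policy):
    consistent = []
    for b_full in (True, False):
        prediction = "one-box" if b_full else "two-box"
        if policy(b_full) == prediction:
            consistent.append(b_full)
    return consistent

print(consistent_fillings(one_box_iff_empty))  # []: no consistent prediction exists
print(consistent_fillings(one_box_iff_full))   # [True, False]: every prediction is consistent
```

The first policy leaves Omega with no consistent option; the second leaves it with two, exactly the under- and over-determination described above.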

07y* Box B appears full of money; however, after you take both boxes, you find
that the money in Box B is Monopoly money. The money in Box A remains
genuine, however.
* Box B appears empty, however, on opening it you find, written on the bottom
of the box, the full details of a bank account opened by Omega, containing
one million dollars, together with written permission for you to access said
account.
In short, even with transparent boxes, there's a number of ways for Omega to lie
to you about the contents of Box B, and in this manner control your choice. If
Omega is constrained to not lie about the contents of Box B, then it gets a bit
trickier; Omega can still maintain an over 90% success rate by presenting the
same choice to plenty of other people with an empty box B (since most people
will likely take both boxes if they know B is empty).
Or, alternatively, Omega can decide to offer you the choice at a time when Omega
predicts you won't live long enough to make it.
That depends; instead of making a prediction here, Omega is controlling your
choice. Whether you get the million dollars or not in this case depends on
whether Omega wants you to have the million dollars or not, in furtherance of
whatever other plans Omega is planning.
Omega doesn't need to predict your choice; in the transparent-box case, Omega
needs to predict your decision algorithm.

17yDeath by lightning.
I typically include such disclaimers such as the above in a footnote or more
precisely targeted problem specification so as to avoid any avoid-the-question
technicalities. The premise is not that Omega is an idiot or a sloppy
game-designer.
You took box B. Putting it down again doesn't help you. Finding ways to be
cleverer than Omega is not a winning solution to Newcomblike problems.

37yThe naive presentation of the transparent problem is circular, and for that
reason ill-defined (what you do depends on what's in the boxes, which depends on
Omega's prediction, which depends on what you do...). A plausible version of the
transparent Newcomb's problem involves Omega:
1. Predicting what you'd do if you saw box B full (and never mind the case
where box B is empty).
2. Predicting what you'd do if you saw box B empty (and never mind the case
where box B is full).
3. Predicting what you'd do in both cases, and filling box B if and only if
you'd one-box in both of them.
Or variations of those. There's no circularity when he only makes such
"conditional" predictions.
He could use the same algorithms in the non-transparent case, and they would
reduce to the normal Newcomb's problem usually, but prevent you from doing any
tricky business if you happen to bring an X-ray imager (or kitchen scales) and
try to observe the state of box B.

07yIn the first case, Omega does not offer you the deal, and you receive $0,
proving that it is possible to do worse than a two-boxer.
In the second case, you are placed into a superposition of taking one box and
both boxes, receiving the appropriate reward in each.
In the third case, you are counted as 'selecting' both boxes, since it's hard to
convince Omega that grabbing a box doesn't count as selecting it.

A Rationalist's Account of Objectification?

The problem isn't objectification of women, it's a lack of non-objectified female characters.

Men are objectified a *lot* in media. As a simple example, the overwhelming majority of mooks are male, and these characters exist solely to be mowed down so the audience can see how awesome the hero(ine) is (or sometimes how dangerous the villain is). They are hapless, often unthinking and with basically no backstory to speak of. Most of the time they aren't even given names. So why doesn't this common male objectification bring outrage?

I think the reason is tha... (read more)

6[anonymous]6yThe types of objectification are different, as you touch on. Men are not
sexually objectified as often. When they are, they are shown in a position of
power or self-direction, with women in contrasting positions of passiveness and
submissiveness. This is most visible in advertising because it's the place where
men are portrayed as specifically male rather than as people (with the
assumption that all people worth knowing about or portraying must be men).
Your example of random mooks? They're there to shoot and die and follow orders.
You can replace them with robots or ambulatory plants or aliens with no
discernible gender. Calvin Klein ads? The men are there to be masculine.
Men are allowed to be short or tall, fat or thin, strong or weak. They can have
long noses and bulbous noses and button noses and earlobes that hang down. Women
have several molds they can fit -- they can be crones or grandmothers, or they
can be minor variants of generic white sexy woman at different ages, between
fifteen and thirty.
Even when women are portrayed as skilled, intelligent people with their own
backstories and interests, you'd be hard pressed to find one that isn't
portrayed in a way to make sexual objectification easy, even if it makes no
sense with their story. Amita from Far Cry 4, for instance, is one of two
leaders of a terrorist group fighting against an oppressive dictatorship. You'd
expect that she'd have scars. You'd expect she'd be too busy to maintain long
hair. You'd expect muscles. You'd expect powerful body language. You wouldn't
exactly expect her to have turquoise earrings, wear eyeliner, have immaculately
plucked eyebrows, have skin as smooth as marble, and wear a pouty / concerned
expression half the time.
The huge problem is that women's perceived value can never exceed the ease with
which they can be objectified.

2013 Less Wrong Census/Survey

Took the survey. I definitely did have an IQ test when I was a kid, but I don't think anyone ever told me the results and if they did I sure don't remember it.

Also, as a scientist I counted my various research techniques as new methods that help make my beliefs more accurate, which means I put something like 2/day for trying them and 1/week for them working. In hindsight I'm guessing this interpretation is not what you meant, and that science in general might count as ONE method altogether.

Can You Prove Two Particles Are Identical?

But there's also the observed matter-antimatter asymmetry. Observations strongly indicate that right now we have a lot more electrons than positrons. If it was just one electron going back and forth in time (and occasionally being a photon), we'd expect at most one extra electron.

Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.

Can You Prove Two Particles Are Identical?

There's also the observed matter-antimatter asymmetry. Even if you want to argue that virtual electrons aren't real and thus don't count, it still seems to be the case that there are a lot more electrons than positrons. If it was just one electron going back and forth in time, we'd expect at most one extra electron.

Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.

0[anonymous]7y"Well, maybe they are hidden in the protons or something"
[http://en.wikipedia.org/wiki/One-electron_universe]
;-)

Timeless Identity

Eliezer, why no mention of the no-cloning theorem?

Also, some thoughts this has triggered:

Distinguishability can be shown to exist for some types of objects in just the same way that it can be shown to not exist for electrons. Flip two coins. If the coins are indistinguishable, then the HT state is the same as the TH state, and you only have three possible states. But if the coins are distinguishable, then HT is not TH, and there are four possible states. You can experimentally verify that the probability obeys the latter situation, and not the former. ... (read more)
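The state-counting in the coin example can be written out explicitly. A minimal Python sketch:

```python
from itertools import product

# Distinguishable coins: HT and TH are distinct outcomes, giving 4 equally
# likely states, so P(one head, one tail) = 2/4 = 1/2.
ordered = list(product("HT", repeat=2))
print(len(ordered))  # 4

# If the coins were indistinguishable (HT merged with TH), there would be
# only 3 states, predicting 1/3 for the mixed outcome -- which is not what
# real coins do.
unordered = {tuple(sorted(s)) for s in ordered}
print(len(unordered))  # 3
```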

05yIndeed. It is disappointing to see this buried at the bottom of the page. I
don't think the no-cloning and no-teleportation theorems have any serious
implications for Eliezer's arguments for life extension (although, it might have
some implications for how he anticipates being recovered later). But, it does
have some implications for the ideas about identity presented here. Here is the
relevant text:
In fact, having read the entire QM sequence, I am not under the impression that
I am made out of atoms at all! I am an ever-decohering configuration of
amplitude distributions. Furthermore since I know my configuration can never be
decomposed and transmitted via classical means
[https://en.wikipedia.org/wiki/No-teleportation_theorem], I also know that the
scanner/teleporter so-defined can't possibly exist.
Now, if you want to talk about entangling my body at point A, with some matter
at point B, and via some additional information transmitted via normal channels,
move me from point A to point B that way - now we have something to talk about.
But the original proposition, of a teleporter which can move me from point A to
point B, but can also, with some minor tweaking, be turned into a scanner which
would "merely" create a copy of me at point A, is an absurdity. It is impossible
to copy the configuration that makes up "me". The original classical teleporter
kills the people who use it, because the configuration of amplitude constructed
at point B can't possibly match, even in principle, the one destroyed at point A.

Timeless Identity

Okay, we need to be really careful about this.

If you sign up for cryonics at time T1, then the not-signed-up branch has lower amplitude after T1 than it had before T1. But this is very different from saying that the not-signed up branch has lower amplitude after T1 than it would have had after T1 if you had not signed up for cryonics at T1. In fact, the latter statement is necessarily false if physics really is timeless.

I think this latter point is what the other posters are driving at. It is true that if there is a branch at T1 where some yous go down ... (read more)

Pascal's Mugging: Tiny Probabilities of Vast Utilities

Edit: Looks like I was assuming probability distributions for which Lim (Y -> infinity) of Y*P(Y) is well defined. This turns out to be monotonic series or some similar class (thanks shinoteki).

I think it's still the case that a probability distribution that would lead to TraderJoe's claim of P(Y)*Y tending to infinity as Y grows would be un-normalizable. You can of course have a distribution for which this limit is undefined, but that's a different story.

Counterexample: P(3^^...^3) (with n "^"s) = 1/2^n; P(anything else) = 0. This is normalized because the sum of a geometric series with decreasing terms is finite. You might have been thinking of the fact that if a probability distribution on the integers is monotone decreasing (i.e. if P(n) > P(m) then n < m), then P(n) must decrease faster than 1/n. However, a complexity-based distribution will not be monotone, because some big numbers are simple while most of them are complex.
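The normalization in this counterexample is just a geometric series, easy to check numerically (a quick Python sketch):

```python
# P(N_n) = 1/2^n, where N_n is 3^^...^3 with n "^"s; zero elsewhere.
# sum_{n>=1} 1/2^n = 1, so the distribution is normalized even though
# N_n * P(N_n) blows up: the up-arrow tower grows far faster than 2^n.
partial = sum(0.5 ** n for n in range(1, 60))
print(partial)  # ~1.0 (the tail beyond n = 59 is below 2e-18)
```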

Beauty quips, "I'd shut up and multiply!"

You can have a credence of 1/2 for heads in the absence of which-day knowledge, but for consistency you will also need P(Heads | Monday) = 2/3 and P(Monday) = 3/4. Neither of these match frequentist notions unless you count each awakening after a Tails result as half a result (in which case they both match frequentist notions).
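The halfer bookkeeping above can be checked with a quick simulation (Python; counting each awakening after Tails with weight 1/2, as described):

```python
import random

# Sleeping Beauty, halfer bookkeeping: each Tails awakening counts as half
# an observation, so every experiment contributes total weight 1.
random.seed(0)
trials = 100_000
w_monday = w_heads_and_monday = w_total = 0.0
for _ in range(trials):
    heads = random.random() < 0.5
    if heads:
        w_monday += 1.0            # one Monday awakening, full weight
        w_heads_and_monday += 1.0
        w_total += 1.0
    else:
        w_monday += 0.5            # Monday and Tuesday awakenings, weight 1/2 each
        w_total += 1.0

print(w_monday / w_total)             # ~0.75  -> P(Monday) = 3/4
print(w_heads_and_monday / w_monday)  # ~0.667 -> P(Heads | Monday) = 2/3
```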

Why Are Individual IQ Differences OK?

With individual differences, people are being judged as individuals, and on the basis of their individual capabilities.

With racial differences, people are being judged as members of a race, and not on the basis of their individual capabilities.

At least, that's the fear.

Pascal's Muggle: Infinitesimal Priors and Strong Evidence

But what numbers are you allowed to start with on the computation? Why can't I say that, for example, 12,345,346,437,682,315,436 is one of the numbers I can do computation from (as a starting point), and thus it has extremely small complexity?

38yYou could say this -- doing so would be like describing your own language in
which things involving 12,345,346,437,682,315,436 can be expressed concisely.
So Kolmogorov complexity is somewhat language-dependent. However, given two
languages in which you can describe numbers, you can compute a constant such
that the complexity of any number is off by at most that constant between the
two languages. (The constant is more or less the complexity of describing one
language in the other). So things aren't actually too bad.
But if we're just talking about Turing machines, we presumably express numbers
in binary, in which case writing "3" can be done very easily, and all you need
to do to specify 3^^^3 is to make a Turing machine computing ^^^.
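As a concrete illustration of that last point: Knuth's up-arrow operation is only a few lines of code, so the description of 3^^^3 stays short even though the value is astronomically large. A Python sketch (evaluating only two arrows, since three is far beyond computation):

```python
def up(a, n, b):
    """Knuth up-arrow: a ^^...^ b with n arrows."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

# Two arrows is already huge but computable:
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987

# The description of 3^^^3 -- up(3, 3, 3) -- is exactly this short,
# even though the value itself could never be written out in binary.
```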

Pascal's Muggle: Infinitesimal Priors and Strong Evidence

I'm not familiar with Kolmogorov complexity, but isn't the apparent simplicity of 3^^^3 just an artifact of what notation we happen to have invented? I mean, "^^^" is not really a basic operation in arithmetic. We have a nice compact way of describing what steps are needed to get from a number we intuitively grok, 3, to 3^^^3, but I'm not sure it's safe to say that makes it simple in any significant way. For one thing, what would make 3 a simple number in the first place?

88yIn the nicest possible way, shouldn't you have stopped right there? Shouldn't
the appearance of this unfamiliar and formidable-looking word have told you that
I wasn't appealing to some intuitive notion of complexity, but to a particular
formalisation that you would need to be familiar with to challenge? If instead
of commenting you'd Googled that term, you would have found the Wikipedia
article [http://en.wikipedia.org/wiki/Kolmogorov_complexity] that answered this
and your next question.

08yAs a rough estimate of the complexity of a number, you can take the number of
lines of the shortest program that would compute it from basic operations. More
formally, substitute lines of a program with states of a Turing machine.

Pascal's Muggle: Infinitesimal Priors and Strong Evidence

Just thought of something:

How sure are we that P(there are N people) is not at least as small as 1/N for sufficiently large N, even without a leverage penalty? The OP seems to be arguing that the complexity penalty on the prior is insufficient to generate this low probability, since it doesn't take much additional complexity to generate scenarios with arbitrarily more people. Yet it seems to me that after some sufficiently large number, P(there are N people) *must* drop faster than 1/N. This is because our prior must be normalized. That is:

Sum(all non-ne... (read more)
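The normalization constraint can be illustrated numerically: a tail proportional to 1/N diverges, while a slightly faster-falling tail converges (a quick Python sketch):

```python
# A normalized prior must have its weights sum to (at most) 1. Partial sums
# of 1/N grow without bound (like ln N), so P(N) cannot stay proportional
# to 1/N for all large N; a tail like 1/N^2 is fine.
harmonic = sum(1.0 / n for n in range(1, 1_000_001))
inverse_square = sum(1.0 / n ** 2 for n in range(1, 1_000_001))
print(harmonic)        # ~14.39, still growing like ln(N)
print(inverse_square)  # ~1.6449, converging to pi^2/6
```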

-18yThe problem is that the Solomonoff prior picks out 3^^^3 as much more likely
than most of the numbers of the same magnitude because it has much lower
Kolmogorov complexity.

48yHm. Technically, for EU differentials to converge we only need the expected
number of people we affect to be finite, but having a finite expected number of
people existing in the multiverse would certainly accomplish that.

Pascal's Muggle: Infinitesimal Priors and Strong Evidence

Just gonna jot down some thoughts here. First a layout of the problem.

- Expected utility is a product of two numbers, probability of the event times utility generated by the event.
- Traditionally speaking, when the event is claimed to affect 3^^^3 people, the utility generated is on the order of 3^^^3
- Traditionally speaking, there's nothing about the 3^^^3 people that requires a super-exponentially large extension to the complexity of the system (the universe/multiverse/etc). So the probability of the event does *not* scale like 1/(3^^^3)
- Thus Expected Payoff

07yYou are wrong, and I will explain why.
If you "have ((◻C)->C)", that is an assertion/assumption that ◻((◻C)->C). By
Löb's theorem, it implies that ◻C. This is different from what you wrote, which
claims that ((◻C)->C) implies ◻C.

Boredom vs. Scope Insensitivity

Uh... what?

Sqrt(a few billion + n) is approximately Sqrt(a few billion). Increasing functions with diminishing returns don't approach *linearity* at large values; their growth becomes really *small* (way sub-linear, or nearly constant) at high values.

This may be an accurate description of what's going on (if, say, our value for re-watching movies falls off slower than our value for saving multiple lives), but it does not at all strike me as an argument for treating lives as linear. In fact, it strikes me as an argument for treating life-saving as *more* sub-linear than movie-watching.

48yIt's not the overall growth rate of the function that becomes linear at high
values; it's the local behavior. We can approximate: sqrt(1000000),
sqrt(1001000), sqrt(1002000), sqrt(1003000) by: 1000, 1000.5, 1001, 1001.5. This
is linear behavior.
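A quick numerical check of this local linearity (Python):

```python
import math

# Near x = 1,000,000, sqrt is locally linear: slope = 1/(2*sqrt(x)) = 0.0005,
# so each +1000 step adds ~0.5 to the square root.
xs = [1_000_000, 1_001_000, 1_002_000, 1_003_000]
ys = [math.sqrt(x) for x in xs]
print([round(y, 2) for y in ys])     # [1000.0, 1000.5, 1001.0, 1001.5]

diffs = [ys[i + 1] - ys[i] for i in range(3)]
print([round(d, 4) for d in diffs])  # each ~0.5: nearly constant increments
```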

Nonperson Predicates

Food for thought:

This whole post seems to assign moral values to actions, rather than states. If it is morally negative to end a simulated person's existence, does this mean something different from saying that the universe without that simulated person has a lower moral value than the universe with that person's existence? If not, doesn't that give us a moral obligation to create and maintain all the simulations we can, rather than *avoiding* their creation? The more I think about this post, the more it seems that the optimum response is to simulate as

29yYou're touching on some unresolved issues, and some issues that are resolved but
complicated to solve without maths beyond my grasp.
From what I understand, there's a lot of our current and past values involved,
and how we would think now and want now vs what we would think and want
post-modification.
To pick a particularly emotional subject for most people, let's suppose there's
some person "K" who's just so friggen good at sex and psychological domination
that even if they rape someone that person will, after the initial shock and
trauma, quickly recover within a day and immediately without further
intervention become permanently addicted to sex, with their mind rewiring itself
to fully enjoy a life full of sex with anyone they can have sex with for the
rest of their life, and from their own point of view finding that life as
fulfilling as possible.
Is K then morally obligated to rape as many people as possible?
In these kinds of questions, people usually have strong emotional moral
convictions.

Probability is Subjectively Objective

This is silly. To say that there is some probability in the universe is not to say that everything has randomness to it. People arguing that there is intrinsic probability in physics don't argue that this intrinsic probability finds its way into the trillionth digit of pi.

Many Physicists: If I fire a single electron at two slits, with a detector placed immediately after one of the slits, then I detect the electron half the time. Furthermore, leading physics indicates that no amount of information will ever allow me to accurately predict which trials wi... (read more)

19yDr. Many the Physicist would be wrong about the electron too. The electron goes
both ways, every time. There's no chance involved there either.
But you're right, it is not the ten trillionth digit of pi that proves it.

SotW: Be Specific

Replace "the next two seconds" with "the two seconds subsequent to my finishing this wish description"

SotW: Be Specific

Constraint: Within the next two seconds, you must perform only the tasks listed, which you must perform in the specified order.

Task 1. Exchange your definition of decrease with your definition of increase
Task 2. --insert wish here--
Task 3. Self-terminate

This is of course assuming that I don't particularly care for the genie's life.

-19yCan you recite that whole list in under two seconds?

Timeless Physics

Uh... what?

c is the speed of light. It's an observable. If I change c, I've made an observable change in the universe --> universe no longer looks the same?

Or are you saying that we'll change t and c both, but the measured speed of light will become some function of c and t that works out to remain the same? As in, c is no longer the measured speed of light (in a vacuum)? Then can't I just identify the difference between this universe and the t -> 2t universe by seeing whether or not c is the speed of light?

I also think you're stuck on restrictin... (read more)

Timeless Physics

A couple of things:

- You begin by describing time translation invariance, even relating it to space translation invariance. This is all well and good, except that you then ask:

"Does it make sense to say that the global rate of motion could slow down, or speed up, over the whole universe at once—so that all the particles arrive at the same final configuration, in twice as much time, or half as much time? You couldn't measure it with any clock, because the ticking of the clock would slow down too."

This one doesn't make as much sense to me. T... (read more)

09yIf you change the value of c as you scale time, then physics will stay the same.

The Futility of Emergence

The even/odd attribute of a collection of marbles is not an emergent phenomenon. This is because as I gradually (one by one) remove marbles from the collection, the collection has a meaningful even/odd attribute all the way down, no matter how few marbles remain. If an attribute remains meaningful at all scales, then that attribute is not emergent.

If the accuracy of fluid mechanics was nearly 100% for 500+ water molecules and then suddenly dropped to something like 10% at 499 water molecules, then I would not count fluid mechanics as an emergent phenomenon. I guess I would word this as "no jump discontinuities in the accuracy vs scale graph."

I see three distinct issues with the argument you present.

First is line 1 of your reasoning. A finite universe does not entail a finite configuration space. I think the cleanest way to see this is through superposition. If |A> and |B> are two orthogonal states in the configuration space, then so are all states of the form a|A> + b|B>, where a and b are complex numbers with |a|^2 + |b|^2 = 1. There are infinitely many such numbers we can use, so even from just two orthogonal states we can build an infinite configuration space. That said, there's... (read more)
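The superposition point can be illustrated numerically (a Python sketch; the sampling scheme is mine, purely illustrative):

```python
import cmath
import math
import random

# From two orthogonal states |A> and |B>, every pair (a, b) with
# |a|^2 + |b|^2 = 1 gives a distinct normalized superposition a|A> + b|B>.
# A continuum of such pairs exists, so even two basis states already
# generate an infinite configuration space.
random.seed(1)
norms = []
for _ in range(5):
    theta = random.uniform(0, math.pi / 2)  # magnitude split between A and B
    phi = random.uniform(0, 2 * math.pi)    # relative phase
    a = math.cos(theta)
    b = math.sin(theta) * cmath.exp(1j * phi)
    norms.append(abs(a) ** 2 + abs(b) ** 2)

print(norms)  # each entry is 1.0 up to floating-point error
```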