# 38

Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.

So I observed that:

1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)
2. If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, "Isn't that a form of Pascal's Wager?"

I'm going to call this the Pascal's Wager Fallacy Fallacy.

You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large; that is not where the flaw in the reasoning lies. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).

However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.

And then, once the reasoning is perceptually recognized as an instance of "the Pascal's Wager fallacy", the other characteristics of the fallacy are automatically inferred: the probability is assumed to be tiny, and the scenario is assumed to have no specific support apart from the payoff.

But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.

Yet instead we have reasoning that runs like this:

1. Cryonics has a large payoff;
2. Therefore, the argument carries even if the probability is tiny;
3. Therefore, the probability is tiny;
4. Therefore, why bother thinking about it?

(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)

Further details:

Pascal's Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
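
As a toy illustration of that blow-up, here is a sketch assuming nothing beyond Knuth's up-arrow notation and a crude 2^(-bits) simplicity prior; the "8n bits per description" cost is a made-up constant for illustration, not a real complexity measure:

```python
from fractions import Fraction

def up_arrow(a, n, b):
    """Knuth's up-arrow a ^^...^ b (n arrows), kept tiny enough to terminate."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

# A hypothesis of ~8n bits gets prior probability 2^(-8n) under a crude
# simplicity prior, but its short description can name a payoff like
# 3 ^^ n (tetration), which grows far faster than the prior shrinks.
for n in (1, 2, 3):
    payoff = up_arrow(3, 2, n)        # 3 ^^ n: 3, 27, 3^27
    prior = Fraction(1, 2 ** (8 * n))
    print(n, float(prior * payoff))   # the product explodes by n = 3
```

Already at n = 3 the expected value is astronomically larger than 1, and it only gets worse from there: the payoff term dominates any exponentially decaying prior.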

See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.

In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".
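
As a concrete sketch of just how simple such laws can be, here is a minimal implementation of Life's update rule; the glider below is the standard five-cell pattern, whose diagonal displacement every four generations is a well-known property:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life on an unbounded grid (a set of live cells)."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After four generations the glider reappears one cell down-right:
# trivially simple local rules, unbounded propagation.
assert g == {(r + 1, c + 1) for (r, c) in glider}
```

Two short rules (birth on 3 neighbors, survival on 2 or 3) suffice for patterns that compute indefinitely; the laws' complexity tells you nothing about a bound on the computations they support.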

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.


You reference a popular idea, something like "The integers are countable, but the real number line is uncountable." I apologize for nitpicking, but I want to argue against philosophers (that's you, Eliezer) blindly repeating this claim, as if it was obvious or uncontroversial.

Yes, it is strictly correct according to current definitions. However, there was a time when people were striving to find the "correct" definition of the real number line. What people ended up with was not the only possibility, and Dedekind cuts (or various other t…

5Patrick11yThat's not the Löwenheim-Skolem Theorem. You've confused finite with countable (i.e. finite or countably infinite). Here's a simple example of a theory that can't be satisfied by a finite model: (1) ∃x ∀y (x ≠ s(y)); (2) ∀x,y (s(x) = s(y) → x = y). Any model that satisfies this must have at least 1 element by axiom 1. Call it 0. s(0) ≠ 0, so the model must have at least 2 elements. s(s(0)) ≠ s(0) by axiom 2, so the model has at least 3 elements. Suppose we have n distinct elements in our model, all obtained by applying s to 0 the appropriate number of times. Then we need one more, since s applied n times to 0 differs from s applied n−1 times to 0 (this follows from axiom 2). So any model that satisfies these axioms must be infinite. (Incidentally, you can get theories that specify the natural numbers with more precision: see http://en.wikipedia.org/wiki/Robinson_Arithmetic.)
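
The commenter's induction can be sketched in the standard model, where s is just n ↦ n + 1 (this is an illustration of the argument, not a proof about arbitrary models):

```python
def s(x):
    """Successor in the standard model of the two axioms: s(n) = n + 1."""
    return x + 1

# Axiom 1 supplies an element 0 that is nobody's successor; axiom 2
# (injectivity of s) then forces 0, s(0), s(s(0)), ... to be pairwise
# distinct, so no finite model can satisfy both axioms.
terms = [0]
for _ in range(9):
    terms.append(s(terms[-1]))
assert len(set(terms)) == len(terms)  # ten distinct elements so far
```

The loop can be extended to any n, which is exactly why every model of the two axioms must be infinite.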

Mathematicians routinely use "infinite" to mean "infinite in magnitude". For example, the concept "The natural numbers" is infinite in magnitude, but I have picked it out using only 19 ascii characters. From a computer science perspective, it is a finite concept - finite in information content, the number of bits necessary to point it out.

Each of the objects in the set of the Peano integers is finite. The set of Peano integers, considered as a whole, is infinite in magnitude, but finite in information content.

Mathematicians' routine speech sometimes sounds as if a generic real number is a small thing, something that you could pick up and move around. In fact, a generic real number (since it's an element of an uncountable set) is infinite in information content - it's huge, and impossible to encounter, much less pick up.

Löwenheim-Skolem allows you to transform proofs that, on a straightforward reading, claim to be manipulating generic elements of uncountable sets (picking up and moving around real numbers, for example), into proofs that claim to be manipulating elements of countable sets - that is, objects that are finite in information content.

1[anonymous]11yLöwenheim-Skolem only applies to first-order theories. While there are models of the theory of real closed fields that are countable, referring to those models as "the real numbers" is somewhat misleading, because there isn't only one of them (up to model-theoretic isomorphism). Also, if you're going to measure information content, you really need to fix a formal language first, or else "the number of bits needed to express X" is ill-defined. Basically, learn model theory before trying to wield it.
0Nebu6yI don't know model theory, but isn't the crucial detail here whether or not the number of bits needed to express X is finite or infinite? If so, then it seems we can handwave the specific formal language we're using to describe X, in the same way that we can handwave the specific encoding of Turing Machines when talking about Kolmogorov complexity, even though to actually get a concrete integer K(S) representing the Kolmogorov complexity of a string S requires us to use a fixed encoding of Turing Machines. In practice, we never actually care what the number K(S) is.
0[anonymous]6yLet's say I have a specific model of the real numbers in mind, and lets pretend "number of bits needed to describe X" means "log2 the length of the shortest theory that proves the existence of X." Fix a language L1 whose constants are the rational numbers and which otherwise is the language of linear orders. Then it takes a countable number of propositions to prove the existence of any given irrational number (i.e., exists x[1] such that x[1] < u[1], ..., exists y[1] such that y[1] > v[1], ..., x[1] = y[1], ... x[1] = x[2], ..., where the sequences u[n] and v[n] are strict upper and lower bounding sequences on the real number in question). Now fix a language L2 whose constants are the real numbers. It now requires one proposition to prove the existence of any given irrational number (i.e., exists x such that x = c). The difference between this ill-defined measure of information and Kolomogrov complexity is that Turing Machines are inherently countable, and the languages and models of model theory need not be. (Disclaimer: paper-machine2011 knew far more about mathematical logic than paper-machine2016 does.)
0Jiro6yWhether a theory proves the existence of X may be an undecidable question.
0gjm6yHow many bits it takes to describe X is an undecidable question when defined in other ways, too.
0Jiro6yThe definition "length of the shortest program which minimizes (program length + runtime)" isn't undecidable, although you could argue that that's not what we normally mean by number of bits.
1gjm6yAdding program length and runtime feels to me like a type error.
7komponisto11ySo, given this, what exactly is your complaint? You started off criticizing Eliezer (and whomever else) for saying "The integers are countable, but the real number line is uncountable" - I suppose on the grounds that everything in the physical universe is countable, or something. (You weren't exactly clear.) But now you point out (correctly) that there is a perfectly good interpretation of this statement which in no way depends on there being an uncountable number of physical things anywhere, or otherwise violates your (not-exactly-well-defined) philosophy. So haven't you just defeated yourself?
3Johnicholas11yI have a knee-jerk response, railing against uncountable sets in general and the real numbers in particular; it's not pretty and I know how to control it better now.
5[anonymous]11yI'm fairly confident that for your purposes you could live with the computable numbers (that is: those numbers whose decimal expansion can be computed by a Turing machine), and as long as you didn't need anything stronger than integration amenable to quadrature, you'd be just fine. There are people who take this route, but I can't think of any off the top of my head. Knuth once stated that he'd like to write a calculus book roughly following this path, but, well, he's got other things on his mind. EDIT: I should point out also that the computable numbers are countable (by the usual Gödel encoding of whatever machine is rattling off the digits for you), and that for all practical intents and purposes they're probably equivalent to whatever calculus-related mischief is in play at the moment.
2Johnicholas11yThere's some weirdnesses down that route - for example, it turns out that you can't distinguish zero from nonzero, so the step function is actually uncomputable. My contrarian claim is that everyone could live with the nameable numbers - that is, the numbers that can be pointed out using a finite number of books to describe them. People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they'd miss.
6[anonymous]11yI don't understand. Those also seem to fall prey to Also, Lebesgue measure theory, Gal(C/R) = Z/2Z, and some pathological examples in the history of differential geometry without which the current definition of a manifold would have been much more difficult to ascertain. Off the top of my head. There are certainly other things I would miss.
0Johnicholas11yThose are theories, which are not generally lost if you switch the underlying definitions aptly - and they are sometimes improved (if the new definitions are better, or if the switch demonstrates an abstraction that was not previously known). People can't pick out specific examples of numbers that are lost by switching to using nameable numbers, they can only gesture at broad classes of numbers, like "0.10147829..., choosing subsequent digits according to no specific rule". If you can describe a specific example (using Lebesgue measure theory if you like), then that description is a name for that number.
2[anonymous]11yI really wish I had the time to explicitly write out the reasons why I believe these examples are compelling reasons to use the usual model of the real numbers. I tried, but I've already spent too long and I doubt they would convince you anyway. So? Omega could obliterate 99% of the particles in the known universe, and I wouldn't be able to name a particular one. If it turns out in the future that these nameable numbers have nice theoretic properties, sure. The effort to rebuild the usual theory doesn't seem to be worth the benefit of getting rid of uncountability. (Or more precisely, one source of uncountability.) I think I've spent enough time procrastinating on this topic. I don't see it going anywhere productive.
2Johnicholas11ySuppose someone played idly manipulating small objects, like bottlecaps, in the real world. Suppose they formed (inductively) some hypotheses, picked some axioms and a formal system, derived the observed truths as a consequence of the axioms, and went on to derive some predictions regarding particular long or unusual manipulations of bottlecaps. If the proofs are correct, the conclusions are true regardless of the outcome of experiments. If you believe that mathematics is self-hosting; interesting and relevant and valuable in itself, that may be sufficient for you. However, you might alternatively take the position that contradictions with experiment would render the previous axioms, theorems and proofs less interesting because they are less relevant. Generic real numbers, because of their infinite information content, are not good models of physical things (positions, distances, velocities, energies) that a casual consumer of mathematics might think they're natural models of. If you built the real numbers from first-order ZFC axioms, then they do have (via Lowenheim-Skolem) some finite-information-content correspondences - however, those objects look like abstract syntax trees, ramified with details that act as obstacles to finding an analogous structure in the real world.

I'm not sure what a "nameable number" is. Whatever countable naming scheme you invent, I can "name" a number that's outside it by the usual diagonal trick: it differs from your first nameable number in the first digit, and so on. (Note this doesn't require choice, the procedure may be deterministic.) Switching from reals to nameable numbers seems to require adding more complexity than I'm comfortable with. Also, I enjoy having a notion of Martin-Löf random sequences and random reals, which doesn't play nice with nameability.
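
The diagonal trick described above can be sketched directly, representing each number by a digit function; the toy three-element "enumeration" below stands in for the full countable list:

```python
def diagonal(enumeration):
    """Given digit functions f_n(k) = k-th decimal digit of the n-th number,
    build a digit function for a number that differs from every listed one."""
    def digit(n):
        d = enumeration[n](n)
        return 5 if d != 5 else 6  # disagree with the n-th number at digit n
    return digit

# A toy "enumeration": 0.000..., 0.555..., and 0.0123456789...
names = [lambda k: 0, lambda k: 5, lambda k: k % 10]
d = diagonal(names)
assert all(d(n) != names[n](n) for n in range(3))
```

The construction is fully deterministic, which is the commenter's point: no appeal to choice is needed, only the enumeration itself.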

2Johnicholas11yYou're correct to point out that I'm being too vague, and I'm making mistakes speaking as if nameable numbers constitute a set or a single alternative to the reals or the rationals. However, I've been a consumer of theorems and proofs that casually use real numbers as if they're lightweight objects. There is considerable effort involved to parse the underlying concepts out of the theorems and proofs, and re-formalize them using something reasonable (completions of the natural numbers under various operations like additive inverse, multiplicative inverse, square root, roots of polynomials in general, roots of differential equations in general). Those are all different sets of nameable numbers, and they're all countable. I would prefer that mathematicians routinely perceived "the reals" as a peculiar construction, and instead of throwing it in routinely when working on concepts in geometry or symmetry as the standard tool to modeling positions and distances, thought about what properties they actually need to get the job they're doing done.
4wedrifid11yWhy is it that mathematicians so love the idea of doing their work blindfolded and with their hands tied behind their backs? Someone invented the reals. They're awesome things. And people invented all sorts of techniques you can use the reals for. Make the most of it! Leave proving stuff about when reals are useful to and how such a peculiar construction can be derived and angsting about how deep and convoluted the basis must be to specialists in angsting about how deep and convoluted the basis for using reals is.
1Johnicholas11yIt's the same as programmers insisting on introducing abstractions decoupling their code from the framework and libraries that they're using; modularity to prevent dependency hell.
1wedrifid11yI think it is the same as programmers choosing to use languages with built-in support for floating point calculations and importing standard math and stats libraries as appropriate. This is an alternative to rolling your own math functions to model your calculations based off integers or bytes. Your shot: http://lesswrong.com/lw/1gw/contrarianism_and_reference_class_forecasting/1a65
1Johnicholas11yAre you suggesting that modeling real numbers with floating point is a good practice? Yes, it is a standard practice, and it may be the best compromise available for a programmer or a team on a limited time budget, but the enormous differences between real numbers and floating point numbers mean that everything that was true upstream in math-land regarding real numbers, becomes suspect and have to be re-checked, or transformed into corresponding not-quite-the-same theorems and proofs. If we (downstream consumers of mathematics) could get mathematicians to use realistic primitives, then we could put a lot of numerical analysts out of work (free up their labor to do something more valuable).
7cousin_it11yDo you think some constructivist representation of numbers can do better than IEEE floats at removing the need for numerical analysis in most engineering tasks, while still staying fast enough? I'm veeeeery skeptical. It would be a huge breakthrough if it were true.
1Johnicholas11yYes, that's my position. In fact, if you had hardware support for an intuitionistic / constructivist representation (intervals, perhaps), my bet would be that the circuits would be simpler than the floating-point hardware implementing the IEEE standard now.
7cousin_it11yI'm not an expert in the field, but it seems to me that intervals require strictly more complexity than IEEE floats (because you still need to do floating-point arithmetic on the endpoints) and will be unusable in many practical problems because they will get too wide. At least that's the impression I got from reading a lot of Kahan. Or do you have some more clever scheme in mind?
1Johnicholas11yYes, if you have to embrace the same messy compromises, then I am mistaken. My belief (which is founded on studying logic and mathematics in college, and then software development after college) is that better foundations, with effort, show through to better implementations.
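
A minimal sketch of why naive intervals "get too wide" - the classic dependency problem - using exact rational endpoints so floating-point rounding stays out of the picture (a toy class, not a real interval-arithmetic library):

```python
from fractions import Fraction

class Interval:
    """Toy closed-interval arithmetic with exact rational endpoints."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def width(self):
        return self.hi - self.lo

x = Interval(Fraction(9, 10), Fraction(11, 10))
# The dependency problem: x - x is exactly 0, but interval subtraction
# forgets that both operands are the same quantity and doubles the width.
print((x - x).lo, (x - x).hi)
```

Each operation that ignores correlations between operands widens the result, which is why long computations over naive intervals tend toward uselessly wide answers unless the dependency structure is tracked.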
1wedrifid11yGood, certainly. Not a universally optimal practice though. There are times when unlimited precision is preferable, despite the higher computational overhead. There are libraries for that too.
5Nisan11yI know of one mathematician who thinks the real numbers are a peculiar construction in the context of topology because of the pathological things you can do with them — continuous nowhere-differentiable curves, space-filling curves, and so on. That's why she studies motivic/A1 homotopy theory instead of classical homotopy theory; only polynomial functions are allowed.
2komponisto11ySo you would prefer that, instead of having one all-purpose number system that we can embed just about any kind of number we like into (not to mention do all kinds of other things with), we had a collection of distinct number systems, each used for some different ad-hoc purpose? How would this be an improvement? You might consider the fact that, once upon a time, people actually started with the natural numbers -- and then, over the ages, gradually felt the need to expand the system of numbers further and further until they ended up with the standard objects of modern mathematics, such as the real number system. This was not a historical accident. Each new kind of number corresponds to a new kind of operation people wanted to do, that they couldn't do with existing numbers. If you want to do subtraction, you need negative numbers; if you want to do division, you need rationals; and if you want to take limits of Cauchy sequences, then you need real numbers. I don't understand why this should cause computer-programming types any anxiety. A real number is not some kind of mysterious magical entity; it is just the result of applying an operation, call it "lim", to an object called a "sequence" (a_n). Real numbers are used because people want to be able to take limits (the usefulness of doing which was established decisively starting in the 17th century). So long as you allow the taking of limits, you are going to be working with the real numbers, or something equivalent. Yes, you could try to examine every particular limit that anyone has ever taken, and put them all into a special class (or several special classes), but that would be ad-hoc, ugly, and completely unnecessary.
3Sniffnoy11yI think you're being a bit uncharitable here. You've just moved the infinitude/"mysterious magicalness" from talking about real numbers to talking about sequences of rational numbers, and it is in fact possible to classify sequences as definable vs. undefinable, as well as computable vs. uncomputable. (Though definability personally seems a bit ad-hoc to me, seeing as it depends on the ambient theory.) I don't think it's really extraordinary to claim that an undefinable or uncomputable sequence is a bit mysterious and possibly somehow unreal. EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... that makes it rather less ad-hoc than I suggested. EDIT: I should probably add, though, that this whole argument seems mostly pointless. Aside from where cousin_it and Johnicholas got to talking about how to represent numbers in computers - and that seems to have hardly anything to do with actual real numbers seeing as those can't be represented in computers - it seems to be basically just Johnicholas saying "I don't like the reals for common constructivist reasons" and other people saying "Regardless, they're valid objects of study". Is there more to it than that?
1Johnicholas11yNo, there's nothing substantive beyond that. My understanding is this thread was started, and to some extent kept rolling, by an unrelated thread, where I was behaving extremely hostile to EY, and several people went through all my back posts, looking for things to downvote. Patrick found something.
5Patrick11yTo put myself in the clear, I came across this old comment because I was looking through Doug S's old posts (because I was idly curious). I replied to your comment because I'm ridiculously pedantic, a virtue in mathematics. I haven't downvoted any of your comments and I harbor no feelings of antipathy towards you. Eliezer's a big boy and he can take care of himself. Now, back to the math debate! I don't think it's legitimate to conflate countable sets with sets with finite information content. Here are two counterexamples. 1. The set of busy beaver numbers (a subset of the naturals). 2. The digits of Chaitin's Omega [make an ordered pair of (digit, position) to represent these] (see http://en.wikipedia.org/wiki/Chaitin%27s_constant). It's been proved that these sets can't be constructed with any algorithm.
0nshepperd11yI think the finite information content comes from being an element of a countable set. Like every other real number, the digits of Chaitin's constant themselves form a countable set (a sequence), while that set is a member of the uncountable R. Similarly, the busy beaver set is a subset of N, and drawn from the uncountable set 2^N. Countable sets are useful (or rather, uncountable ones are inconvenient) because you can set up a normalized probability distribution over their contents. But... the set {Chaitin's Constant} is countable (it has one element) but I still can't get Omega's digits. So there still seems to be a bit of mystery here.
0Johnicholas11y"The set of busy beaver numbers" communicates, points to, refers to, picks out, a concept in natural language that we can both talk about. It has finite information content (a finite sequence of ascii characters suffices to communicate it). An analogous sentence in a sufficiently formal language would still have finite information content. Note that a description, even if it is finite, is not necessarily in the form that you might desire. Transforming the description "the sequence of the first ten natural numbers" into the format "{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}" is easy, but the analogous transformation of "the first 10 busy beaver numbers" is very difficult if not impossible. As nshepperd points out, an element of a countable set necessarily has finite information content (you can pick it out by providing an integer - "the fifth one" or whatever), while generic elements of uncountable sets cannot be picked out with finite information.
7komponisto11yThat was deliberate. (How was it uncharitable?) It may not be extraordinary, but it's still a confusion. A confusion that was resolved a century ago, when set theory was axiomatized, and the formalist view emerged. The Cantor/Kronecker debate is over: Cantor was right, Kronecker was wrong. The source of this confusion seems to be a belief that correspondences between mathematical structures and the physical world are properties of the mathematical structures in question, rather than properties of the physical world. This is a kind of map/territory confusion.
4Sniffnoy11yA good point.
0Sniffnoy11ySorry, uncharitable was the wrong word there. I meant you didn't address the actual apparent problem. Your new comment does.
6Sniffnoy11yBy "nameable number" he seems to just mean a definable number - in general an object is called "definable" if there is some first-order property that it and only it satisfies. (Obviously, this depends on just what the surrounding theory is. Sounds like he means "definable in ZFC".) The set of all definable objects is countable, for obvious reasons. With this definition, your diagonal trick actually doesn't work (which is good, because otherwise we'd have a paradox): Definability isn't a notion expressible in the theory itself, only in the metatheory. Hence if you attempt to "define" something in terms of the set of all definable numbers, the result is only, uh, "metadefinable". (I gave myself a real headache once over the idea of the "first undefinable ordinal"; thanks to JoshuaZ for pointing out to me why this isn't a paradox.) EDIT: I should point out, using definable numbers seems kind of awful, because they're defined (sorry, metadefined :P ) in terms of logic-stuff that depends on the surrounding theory. Computable numbers, though more restrictive, might behave a little better, I expect... EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... that makes it rather less awful than I suggested.
2abramdemski9yWe could similarly argue that the definable objects should be thought of as "meta-countable" rather than countable, right? The reals-implied-by-a-theory would always be uncountable-in-the-theory. (I'm tempted to imagine a world in which this ended the argument between constructivists and classicists, but realistically, one side or the other would end up feeling uneasy about such a compromise... more likely, both.)
0Sniffnoy9yI think you're confusing levels here. When I spoke of "the surrounding theory" above, I didn't mean the, uh, actual ambient theory. (Sorry about that -- I may have gotten a little mixed up myself) And indeed, like I said, definability only depends on the language, not the theory. Well -- of course it still depends on the actual ambient theory. But working internal to that (which I was doing), it only depends on the language. And then one can talk about the metalanguage, staying internal to the same ambient theory, etc... (mind you, all this is assuming that the ambient theory is powerful enough to talk about this sort of thing). So at no point was I intending to vary the actual ambient theory, like you seem to be talking about. Warning: I don't quite understand just how logicians think of these things and so may be confused myself.
2Sniffnoy11yI'm confused; this is true for any real closed field. What are you getting at with this?
0[anonymous]11yA mistake. I was thinking of C as the so-called "generic complex numbers." You're right that if you replace C with the algebraic closure of whatever countable model's been dreamed up, then C = R[i] and that's it. Admittedly I'm only conjecturing that Gal(C/K) will be different for some K countable, but I think there's good evidence in favor of it. After all, if K is the algebraic closure of Q, then Gal(C/K) is gigantic. It doesn't seem likely that one could "fix" the other "degrees of freedom" with only countably many irrationals.
2Sniffnoy11yOf course, whether a number is definable or not depends on the surrounding theory. Stick to first-order theory of the reals and only algebraic numbers will be definable! Definable in ZF? Or what? EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... no difference here between ZF and ZFC...
0bogdanb11ySorry for what might be a silly question, but what do you mean by “generic real number”? In the sense of “one number picked at random from the set”, a “generic natural number” would also be huge and impossible to encounter—almost all natural numbers would need more bits to represent than there are Planck volumes in the universe—and it doesn’t seem that you’re trying to say that.
3Johnicholas11yIf you start selecting things at random, then you need a probability distribution. Many routinely used probability distributions over the natural numbers give you a nontrivial chance of being able to fit the number on your computer. There are, of course, corresponding probability distributions over the reals (take a probability distribution over the natural numbers and give zero probability to anything else). However, the routinely used probability distributions on the reals give zero probability to the output being a natural number, a rational number, describable with a finite algebraic equation, or in fact, being able to fit the number on your computer. One of the problems with real numbers is that if someone trying to do Bayesian analysis of a sensor that reads 2.000..., or 3.14159... using one of these real number distributions as their prior, cannot conclude that the quantity measured probably is 2 or pi, no matter how many digits of precision the sensor goes out to.
1bogdanb11yI get that the sensor thing was only an example, but still: it doesn’t seem like a real objection. I mean, you’re not going to have (or need) a sensor with infinitely many decimal places of precision. (Or perhaps I’m not understanding you?) In terms of “selecting things at random”, for any practical use I can think of you’ll be selecting things like intervals, not real numbers. I don’t quite see how that’s relevant to the formalism you use to reason about how and what you’re calculating. I think there’s some big understanding gap here. Could you explain (or just point to some standard text) how one reasons about trivial things like areas of circles and volumes of spheres without using reals?
4Johnicholas11yPerhaps you've confused "pi has a decimal expansion that goes on forever without seeming pattern" with "a generic real number has a decimal expansion that goes on forever without pattern"? Pi does have a finite representation, "Pi". We use it all the time, and it's just as precise as "2" or "0.5". Specifically, you could start with the rationals, and complete them by including all solutions to differential equations. Pi would be included, and many other numbers, but you'd still only have a countable set - because every number would have one or more shortest definitions - finite information content. If you had a probability distribution over such a set, it would naturally favor hypotheses with short definitions. If it started out including pi as a possibility, and you gathered sufficient sensor data consistent with pi (a finite amount), the probability distribution would give pi as the best hypothesis. This is reasonable behavior. You have to do non-obvious mucking around with your prior to get that sort of reasonableness with standard real-number probabilities. As others have pointed out, any specific countable system of numbers (such as the "solutions to differential equations" that I mentioned) is susceptible to diagonalization - but I see no reason to "complete" the process, as if being susceptible to diagonalization were a terrible flaw above all others. All the entities that you're actually manipulating (finite-precision data and finite, stateable hypotheses like "exactly 2" and "exactly pi") have finite information content, and "completing" the reals against diagonalization makes essentially all the reals infinite-information-content - a cure in my mind far worse than the disease.
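The "prior favoring short definitions" idea can be sketched concretely (my construction, not Johnicholas's; the hypothesis set, the 2^-length prior, and the Gaussian noise model are all assumptions made for illustration):

```python
from math import pi, exp

# A tiny countable hypothesis space of finitely-describable quantities.
hypotheses = {"2": 2.0, "3": 3.0, "1/2": 0.5, "pi": pi}

def prior(name):
    # Assumed description-length prior: weight 2^-len(name), so shorter
    # definitions get more prior mass (unnormalized is fine here).
    return 2.0 ** (-len(name))

def likelihood(value, reading, sigma=0.001):
    # Assumed Gaussian sensor-noise model (unnormalized).
    return exp(-((reading - value) ** 2) / (2 * sigma ** 2))

reading = 3.14159  # a finite amount of sensor data, consistent with pi

posterior = {name: prior(name) * likelihood(value, reading)
             for name, value in hypotheses.items()}
total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}

best = max(posterior, key=posterior.get)  # "pi" wins despite a longer name
```

With finitely many digits of agreement, the finite-description hypothesis "exactly pi" already dominates the posterior, which is the "reasonable behavior" described above.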
1bogdanb11y(Note: I’m not arguing in this particular post, just asking clarifying questions, as you seem to have the issues much clearer in your mind than I do.) 1) It seems one can start with naturals, extend them to integers, then to rationals, then to whatever set results from including solutions to differential equations (does that have a standard name?). I imagine there are countably infinite many constructions like that, am I right? They seem to “divide” the numbers “finer” (I’d welcome a hint to more formal description of this), though they aren’t necessarily totally ordered in terms of how “fine” they are, and that the limit of this process after an infinity of extensions seems to be the reals. (Am I missing something important until here? In particular, we can reach the reals much faster, is there some important property in particular the countable extensions have in general, other than their result set being countable and their individual structure?) 2) Do you have other objections to real numbers that do not involve probabilities, probability distributions, and similar information theory concepts? 3) I don’t quite grok your π example. It seems to me that a finite amount of sensor data will always only be able to tell you it’s consistent with all values in the interval π±ε; if you’re using a sufficiently “dense” set, even just the rationals, you’ll have an infinity of values in that interval, while using the reals you’ll have an uncountable one. In the countable case you’ll have to have probabilities for the countable infinity of consistent values, which could result in π being the most probable one, and in the uncountable one you’ll need a probability distribution function, which could as well have π as the most probable. 
(In particular, I can’t see a reason why you couldn’t find a the probability distribution function that has exactly the same value as your probability function when applied to the values in your π-containing countably-infinite set and is “well-b
5Johnicholas10y1) Yes, there are countably many constructions of various kinds of numbers. The construction can presumably be written down, and strings are finite-information-content entities. Yes, they're normally understood to form a set-theoretic lattice - the integers are a subset of the Gaussian integers, and the integers are a subset of the rationals, and both the Gaussian integers and the rationals are subsets of the complex plane. However, the reals are not in any well-defined sense "the" limit of that lattice - you could create a contrived argument that they are, but you could also create an argument that the natural limit is something else, either stopping sooner, or continuing further to include infinities and infinitesimals or (salient to mathematicians) the complex plane. Defenders of the reals as a natural concept will use the phrase "the complete ordered field", but when you examine the definition of "complete" they are referencing, it uses a significant amount of set theory (and an ugly Dedekind-cuts construction) to include everything that it wants to include, and exclude many other things that might seem to be included. 2) Yes. I think the reals are a history-laden concept; they were built in fear of set-theoretic and calculus paradoxes, and without awareness of the computational approach - information theory and Gödel's incompleteness. They are useful primarily in the way that C++ is useful to C++ programmers - as a thicket or swamp of complexity that defends the residents from attack. Any mathematician doing useful work in a statistical, calculus-related, or topological field who casually uses the reals will need someone else, a numerical analyst or computer scientist, to laboriously go through their work and take out the reals, replacing them with a computable (and countable) alternative notion - different notions for different results.
Often, this effort is neglected, and people use IEEE floats where the mathematician said "real", and get ridiculous results - or wors
2bogdanb10yHi John! Thank you very much for taking the time to answer at such length. The links you included were also very interesting, thanks. I think I got a bit of insight into the original issue (way up in the comments, when I interjected in your chat with Patrick). With respect to the points closer in this thread, it’s become more like teaching than an actual discussion. I’m much too little educated in the subject, so I could contribute mostly with questions (many inevitably naïve) rather than insights. I’ll stop here then; though I am interested, I’m not interested enough right now to educate myself, so I won’t impose on your time any longer. (That is, not unless you want to. I can continue if for some reason you’d take pleasure in educating me further.) Thank you again for sharing your thoughts :-)
5Eliezer Yudkowsky11yOnly in first-order logic. In second-order logic, you can actually talk about the natural numbers as distinguished from any other collection, and the uncountable reals. Amusingly, if you insist that we are only allowed to talk in first-order logic, it is impossible for you to talk about the property "finite", since there is no first-order formula which expresses this property. (Follows from the Compactness Theorem for first-order logic - any set of first-order formulae which are true of unboundedly large finite collections also have models of arbitrarily large infinite cardinality.) Without second-order logic there is no way to talk about this property of "finiteness", or for that matter "countability", which you seem to think is so important.
3Johnicholas11yYes, that's my understanding as well. Proof theory for second-order logic seems to be problematic, and I have a formalist stance towards mathematics in general, which leads me to suspect that the standard definitions of second-order logic are somehow smuggling in uncountable infinities, rather than justifying them. But I admit second-order logic is not something I've studied in depth.
9cousin_it11yYeah, second-order logic is basically set theory in disguise. I'm not sure why Eliezer likes it. Example from the Wikipedia page [http://en.wikipedia.org/wiki/Second-order_logic]:
4cousin_it11yYou can capture the property "finite" with a first-order sentence over the "standard integers", I think. This leaves open the mystery of what exactly the "standard integers" are, which looks slightly less mysterious than the mystery of "sets" required for second-order logic.
0[anonymous]11yAn equivalent (and in my opinion less misleading) way of putting this is to say that there's no first-order formula which expresses the property of being infinite.

There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

Expected utility is the product of two things, probability and utility. Saying the probability is smaller is not a complete argument.

"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred ye... (read more)

1Ulysses11yThe threat of dystopia stresses the importance of finding or making a trustworthy, durable institution that will relocate/destroy your body if the political system starts becoming grim. Of course there is no such thing. Boards can become infiltrated. Missions can drift. Hostile (or even well-intentioned) outside agents can act suddenly before your guardian institution can respond. But there may be measures you can take to reduce fell risk to acceptable levels (i.e: levels comparable to current risk of exposure to, as Yudkowsky mentioned, secret singularity-in-a-basement): 1. You could make contracts with (multiple) members of the younger generation of cryonicists, on condition that they contract with their younger generation, etc. to guard your body throughout the ages. 2. You can hide a very small bomb in your body that continues to countdown slowly even while frozen (don't know if we have the technology yet, but it doesn't sound too sophisticated) so as to limit the amount of divergence from now that you are willing to expose yourself to [explosion small enough to destroy your brain, but not the brain next to you]. 3. You can have your body hidden and known only to cryonicist leaders. 4. You can have your body's destruction forged. I don't think any combination of THESE suggestions will suffice. But it is worth very much effort inventing more (and not necessarily sharing them all online), and making them possible if you are considering freezing yourself.
0Gurkenglas9yThere is a minuscule probability that during the next 10 seconds, nanomachines produced by a fresh GAI sweep in through your window and capture you for infinite life and thus, by your argument, infinite hell. Building on your argumentation, the case can be made that you should strive to minimize the probability of that outcome. Therefore, suicide. Edit: My point has already been made by Eliezer. Let's see how this retracting thingy works.

"that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God)." Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.

A more important criticism is that humans just physiologically don't have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility, although utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences, i.e. they have a bounded interest in 'shutting up and multiplying.'

utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences

I know this is not what you were suggesting, but this made me think of goal systems of the form "take the action that I think idealized agent X is most likely to take," e.g. WWAIXID.

A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high entropy. So you'll end up acting as if you believed with near-certainty the single most likely scenario you can think of.

Another problem, of course, is that you'll take actions that only make sense for an agent much more competent than you are. For example, AIXI would be happy to bet $1 million that it can beat Cho Chikun at Go.

0gjm7y In the relevant circumstances, I too might be happy to bet $1M that AIXI can beat Cho Chikun at Go.
0[anonymous]12yThis seems like a non-standard way of thinking that needs some explanation. It's not clear to me that it matters whether my emotions scale linearly, if I'll reflectively endorse the statement "if there are X good things, and you add an additional good thing, the goodness of that doesn't depend on what X is". It's also not clear to me that utilitarians can be seen as having an intrinsic preference for utilitarian behavior as opposed to a belief that their "true" preferences are utilitarian.

Johnicholas:

I agree with your sentiment, however:

There is a perfectly good description of the real numbers that is not ugly. Namely, the real numbers are a complete Archimedean ordered field.

To actually construct them, I think using (Cauchy) convergent sequences of rational numbers would be much less ugly than using Dedekind cuts.

Also, the Löwenheim–Skolem theorem only applies to first-order logic, not second-order logic. Why are you constraining me to use only first-order logic? You have to explain that first.

"first-order logic cannot, in general, distinguish finite models from infinite models."

Specifically, if a first-order theory has arbitrarily large finite models, then it has an infinite one.

There is no first-order sentence which is true in all and only finite models and not in any infinite models.

Sketch of conventional proof: The compactness theorem says that if a collection of first-order sentences is inconsistent, then a finite subset of those first-order sentences is inconsistent.

To a sentence or theory true of all finite sets, adjoin the infinite series of statements "This model has at least one element", "This model has at least two elements" (that is, there exist a and b with a != b), "This model has at least t... (read more)
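The truncated sketch above is a standard compactness argument; written out in full (my rendering, not verbatim from the comment), it goes like this:

```latex
% Let \lambda_n assert "there exist at least n distinct elements":
\lambda_n \;\equiv\; \exists x_1 \cdots \exists x_n
    \bigwedge_{1 \le i < j \le n} x_i \neq x_j .
% Suppose \varphi were a first-order sentence true in exactly the finite
% models, and let  T = \{\varphi\} \cup \{\lambda_n : n \ge 1\}.
% Any finite subset of T mentions only finitely many \lambda_n, so a large
% enough finite model of \varphi satisfies it. By compactness, T itself has
% a model: an infinite model of \varphi, contradicting the choice of \varphi.
```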

Yvain wrote: "The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. "

I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self?

As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time.

The “isn’t that like Pascal’s wager?” response is plausibly an instance of dark side epistemology, and one that affects many aspiring rationalists.

Many of us came up against the Pascal’s wager argument at some point before we gained much rationality skill, disliked the conclusion, and hunted around for some means of disagreeing with its reasoning. The overcomingbias thread discussing Pascal’s wager strikes me as including a fair number of fallacious comments aimed at finding some rationale, any rationale, for dismissing Pascal’s wager.

If these arguments t... (read more)

The fallacious arguments against Pascal's Wager are usually followed by motivated stopping.

Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing.

Utilitarian's reply seems to assume that probability assignments are always precise. We may plausibly suppose, however, that belief states are sometimes vague. Granted this supposition, we cannot infer that one probability is higher than the other from the fact that the probabilities do not wind up exactly balancing.

Pablo,

Vagueness might leave you unable to subjectively distinguish probabilities, but you would still expect that an idealized reasoner using Solomonoff induction with unbounded computing power and your sensory info would not view the probabilities as exactly balancing, which would give infinite information value to further study of the question.

The idea that further study wouldn't unbalance estimates in humans is both empirically false in the cases of a number of smart people who have undertaken it, and looks like another rationalization.

Eliezer, it seems to me that you may be being unfair to those who respond "Isn't that a form of Pascal's wager?". In an exchange of the form

Cryonics Advocate: "The payoff could be a thousand extra years of life or more!"

Cryonics Skeptic: "Isn't that a form of Pascal's wager?"

I observe that CA has made handwavy claims about the size of the payoff, hasn't said anything about how the utility of a long life depends on its length (there could well be diminishing returns), and hasn't offered anything at all like a probability calcul... (read more)

g,

This is based on the diavlog with Tyler Cowen, who did explicitly say that decision theory and other standard methodologies don't apply well to Pascalian cases.

@Yvain: Don't look at the future as containing you, ask what can the future do worse or better, if it's in possession of the information about you. It can reconstruct you-alive using that information, and let the future you enjoy the life in the future, or it could reconstruct you-alive and torture it for eternity. But in which of these cases the future will actually get better or worse, depending on whether you give the future the information about your structure? Is the torture-people future going to get better because you don't give them specifically th... (read more)

these posts are useful to calibrate the commitment and self-incentive biases. based on the probabilities espoused (80%, bad outcomes are 'exotic') i say the impact is 1000x. the world looks pretty utopian from the a/c-cooled academic labs in the US in anno domini 2009.

My question is very specific, can you elaborate on what you mean by "holographic limits on quantum entanglement"? I did a search but all I got was woo-woo websites.

Thank you.

Vladimir, hell is only one bit away from heaven (minus sign in the utility function). I would hope though that any prospective heaven-instigators can find ways to somehow be intrinsically safe wrt this problem.

Steven, even the minus-utility hell won't get worse because it has information useful for the positive-utility eutopia. Only and specifically the positive-utility eutopia could have a use for such information. You win from providing this information in case of a good outcome, and you don't lose in case of a bad outcome.

Carl, it clearly isn't based only on that since Eliezer says "You see it all the time in discussion of cryonics".

Eliezer, thanks, I've found material on the holographic principle and done some reading myself. It's an intriguing idea, but one that so far has no experimental basis. Aside from an unconfirmed source of noise in a gravitational-wave experiment, it's not known whether the holographic principle/cosmological information bound actually plays a role. Why did you include it in your post? Were you just giving another possible example of how the universe seems to conspire against our ambitions?

Pascal's Wager != the Pascal's Wager Fallacy. If the original Pascal's wager didn't depend on a highly improbable proposition (the existence of a particular version of god), it would be logically sound (or at least more sound than it is). So, I don't see a problem comparing cryonics-advocacy logic with Pascal's wager.

On the other hand, I find some of the probability estimates cryonics advocates make to be unsound, so for me, this way of cryonics advocacy does look like a Pascal Wager Fallacy. In particular, I don't see why cryonics advocates put high probability values on b... (read more)

What if we phrase a Pascal's Wager-like problem like this:

If every winner of a certain lottery receives $300 million, a ticket costs $1, the chances of winning are 1 in 250 million, and you can only buy one ticket, would you buy that ticket?

There's a positive expected value in dollars, but 1 in 250 million is basically not gonna happen (to you, at least).
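The arithmetic behind "positive expected value" is worth making explicit (a straightforward check of the numbers above):

```python
payoff = 300_000_000        # dollars paid to every winner
cost = 1                    # dollars per ticket
p_win = 1 / 250_000_000     # chance of winning with one ticket

# Expected value of buying one ticket, in dollars:
# (1/250M) * 300M - 1 = 1.20 - 1.00 = 0.20
expected_value = p_win * payoff - cost
```

A positive 20 cents per ticket in expectation, even though the modal outcome by an overwhelming margin is simply losing the dollar.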

@ doug S

I defeat your version of the PW by asserting there is no rational lottery operator who goes forth with the business plan to straight up lose $50 million. Thus the probability of your scenario, as with the Christian god, is zero.

vroman, see the post on Less Wrong about least-convenient possible worlds. And the analogue in Doug's scenario of the existence of (Pascal's) God isn't the reality of the lottery he proposes -- he's just asking you to accept that for the sake of argument -- but your winning the lottery.

I think a heuristic something like this is often involved: "If someone claims a high benefit (at any probability) for some costly implausible course of action, there's a good chance they're (a) consciously trying to exploit me, (b) infected by a parasitic meme, or (c) getting off on the delusion that they have a valuable Cause. In any of those cases, they'll probably have plenty of persuasive invalid arguments; if I try to analyze these, I may be convinced in spite of myself, so I'd better find whatever justification I can to stop thinking." vroma... (read more)

vroman: Two words - rollover jackpots.

I read and understood the Least convenient possible world post. Given that, let me rephrase your scenario slightly: If every winner of a certain lottery receives $X * 300 million, a ticket costs $X, the chances of winning are 1 in 250 million, you can only buy one ticket, and $X represents an amount of money you would be uncomfortable to lose, would you buy that ticket?

Answer: no. If the ticket price crosses a certain threshold, then I become risk averse. If it were $1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.

0SecondWind9y If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy a ticket. And then rationally compelled to buy a ticket. Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that $1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.
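SecondWind's regress can be made concrete with a toy rule (entirely my construction: the starting wealth and the "under 1% of wealth counts as inconsequential" threshold are assumptions):

```python
wealth = 1000.0   # starting bankroll in dollars
tickets = 0

# Assumed rule: a $1 ticket is "relatively inconsequential" while it is
# at most 1% of current wealth -- so the argument to buy keeps reapplying.
while wealth >= 1 and 0.01 * wealth >= 1:
    wealth -= 1
    tickets += 1

# The iteration only halts once $1 stops being inconsequential (wealth
# drops below $100); in almost every run of the actual lottery the buyer
# has simply lost `tickets` dollars getting there.
```

Whatever threshold you pick, "buy one cheap ticket" iterates until the threshold itself bites, which is the injunction-against-iteration problem.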

Nick,

"Islam and Christianity may not balance, but what about Christianity and anti-Christianity?" Why would you think that Christianity and anti-Christianity plausibly balance exactly? Spend some time thinking about the distribution of evolved minds and what they might simulate, and you'll get divergence.

Why would you think that Christianity and anti-Christianity plausibly balance exactly?

Because I've been thinking about algorithmic complexity, not the actions of agents. Good point.

Specifically, thinking of the algorithmic complexity of the religion - if I were to use priors here, I should be thinking about utility(belief)*prior probability of algorithms computing functions from beliefs to reward or punishment.

Ask yourself if you would want to revive someone frozen 100 years ago. Most Americans of the time were unabashedly racist, had little concept of electricity and none of computing, had vaguely heard of automobiles, etc. They'd be awakened into a world that they don't understand, a world that judges them by mysterious criteria. It would be worse than being foreign, because the new culture's values were formed at least partially in reaction to the perceived problems of the past.

Ask yourself if you would want to revive someone frozen 100 years ago.

Yes. They don't deserve to die. Kthx next.

Ask yourself if you would want to revive someone frozen 100 years ago. Yes. They don't deserve to die. Kthx next.

I wish that this were on Less Wrong, so that I could vote this up.

4jefftk10yIt is now.
2Benya9yVery well. Upvoted now!

Does nobody want to address the "how do we know U(utopia) - U(oblivion) is of the same order of magnitude as U(oblivion) - U(dystopia)" argument? (I hesitate to bring this up in the context of cryonics, because it applies to a lot of other things and because people might be more than averagely emotionally motivated to argue for the conclusion that supports their cryonics opinion, but you guys are better than that, right? right?)

Carl, I believe the point is that until I know of a specific argument why one is more likely than the other, I have no c... (read more)

"Most Americans of the time were unabashedly racist, had little concept of electricity and none of computing, had vaguely heard of automobiles, etc."

So if you woke up in a strange world with technologies you don't understand (at first) and mainstream values you disagree with (at first), you would rather commit suicide than try to learn about this new world and see if you can have a pleasant life in it?

Steven,

Information value.

irrationality-punishers and immorality-punishers seem far less unlikely than nonchristianity-punishers

If you mean "in rough proportion to the algorithmic complexity of Christianity", nonmajoritarianism-punishers, and presumably plenty of other simple entities, would effectively be nonchristianity-punishers. Probably still true, though.

Steven, to account for the especially egoist morality, all you need to do is especially value future-you. I don't see how it changes my points.

Nick, Christians are not a majority (and if they were, an alternative course would be to try to shift majority opinions to something easier to believe, preferably before you died but it has to get done...)

I'm not claiming that U(utopia) - U(oblivion) ~ U(oblivion) - U(dystopia + revival + no suicide), but the question is whether the factor describing the relative interval, is greater than the factor of diminished probability for U(dystopia + revival + no suicide), which seems large. Also, steven points out for the benefit of altruists that if it's not you... (read more)

"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."

Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day in a row for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I shoul... (read more)

One more thing: Eliezer, I'm surprised to be on the opposite side as you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion (also, would the high probability be solely due to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)

Nick, I'm now sitting here being inappropriately amused at the idea of Hal Finney as Dark Lord of the Matrix.

Eliezer, thanks for responding to that. I'm never sure how much to bring up this sort of morbid stuff. I agree as to what the question is.

Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.

It was Vladimir who pointed that out, I just said it doesn't apply to egoists. I actually don't agree that it applies to altru... (read more)

I don't have to tell you that it's easier to get a Singularity that goes horribly wrong than one that goes just right

Don't the acceleration-of-history arguments suggest that there will be another singularity, a century or so after the next one? And another one shortly after that, etc?

What are the chances that they will all go exactly right for us?

If the problem is a programmer who tried to give it a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once - imagine if we get an AI that understands everything perfectly except freedom) . And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.

For whatever relief it's worth, someone who thought... (read more)

Yvain, while it's hard to get a feel on what exactly happens when one of the meddling dabblers tries to give their AI a goal system, I would mostly expect those AIs to end up as paperclip maximizers, or at most, tiling the universe with tiny molecular smiley-faces. Nothing sentient.

Most AIs gone wrong are just going to disassemble you, not hurt you. I think I've emphasized this a number of times, which is why it's surprising that I've seen both you and Robin Hanson, respectable rationalists both, go on attributing the opposite opinion to me.

Eliezer, "more AIs are in the hurting class than in the disassembling class" is a distinct claim from "more AIs are in the hurting class than in the successful class", which is the one I interpreted Yvain as attributing to you.

Isn't there already a good deal of experience regarding the attitudes/actions of the most intelligent entity known (in current times, humans) towards cryonically suspended potential sentient beings (frozen embryos)?

Yvain, people seem to have a hedonic set point. If you currently prefer life to non-life, I highly doubt you would stop preferring it if you lived in Saudi Arabia or Burma.

"If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence." Doesn't this arbitrarily favor future events? But future-self isn't current-self; it's literally a different person. Distinguishing between desirable outcomes is tautological: your values precede evaluation.

It's odd that the article author shows as [deleted] (Eliezer is the author).

RobinZ: I assume it appears that way because the article's been deleted - it doesn't appear under its tags, for example.

If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

The problem with Pascal's Wager is that it allows absurdly large utilities into the equation. If I'm looking at a nice fresh apple, and it's 11:45am just before lunch, and breakfast was at 7am, then suppose the utility increment from eating that apple is X. I'd subj... (read more)

No one pointed this out, but Muslims consider Christians to be people of the Book and allow them to go to heaven, assuming they are good Christians.

Further, Hindus and Buddhists believe in reincarnation, and believe that if one is a good Christian one will be reincarnated, possibly as a Hindu or Buddhist, next time around - so it is safe to ignore them in calculating Pascal's wager. Also, the Hindus claim that Christians, Muslims, and Jews all worship the Hindu Brahman.

Catholics also since Vatican II believe that it is possible for everyone that is n... (read more)

At a more practical level, Pascal's Wager's main failure is that it counsels strategic belief rather than rational belief. There is also the dubious notion that God would put up with a belief of that sort.

This particular failure mode applies to very few other arguments.

wedrifid: That isn't a failure mode. Strategic belief is a perfectly valid desideratum-maximization strategy. The only time strategic belief is an actual failure mode is when you intrinsically value correct belief - in which case you don't believe strategically, and so do not fail.

How did this post get attributed to [deleted] instead of to Eliezer? I'm 99% sure this post was by him, and the comments seem to bear it out.

Elo: I see Eliezer_Yudkowsky as the account it was posted from. Unsure what you are seeing.
Gram_Stone: Additional data point: I see [deleted].
Morendil: Me, as well. (Edit: looking at Internet Archive's cached snapshots [http://web.archive.org/web/20100515000000*/http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/], all of them that I checked look that way to me too.) (Edit2: it has looked that way to others as well [http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/20sh] for quite some time. I wouldn't worry about it.)
gjm: Certainly not worth worrying about. It seems just to be a consequence of the article being deleted. But I wonder why it was deleted.