All of omnizoid's Comments + Replies

Well, UDASSA is false.  As I argue elsewhere, any view other than SIA implies the doomsday argument.  The number of possible beings isn't equal to the number of "physically limited beings in our universe," and there are different arrangements for the continuum points.

Did you notice that I linked the very same article that you replied with? :P I'm aware of the issues with UDASSA, I just think it provides a clear example of an imaginable atheistic multiverse containing a great many possible people.

The argument for Beth 2 possible people is that it's the powerset of continuum points.  SIA gives reason to think you should assign a uniform prior across possible people.  There could be a God-less universe with Beth 2 people, but I don't know how that would work, and even if there's some coherent model one can make work without sacrificing simplicity, P(Beth 2 people | Theism) >> P(Beth 2 people | Atheism).  You need to fill in the details more beyond just saying "there are Beth 2 people," which will cost simplicity.
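For readers without the Beth numbers to hand, the cardinal arithmetic behind "the powerset of continuum points" is standard (my gloss, not anything the comment spells out):

```latex
\beth_0 = \aleph_0 \quad \text{(countably infinite)} \\
\beth_1 = 2^{\beth_0} \quad \text{(the continuum, e.g. the points of space)} \\
\beth_2 = 2^{\beth_1} \quad \text{(the powerset of the continuum)}
```

So "Beth 2 possible people" amounts to roughly one possible person per arbitrary set of continuum points.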

Remember, this is just part of a lengthy cumulative case.

I think the cardinality should be Beth(0) or Beth(1), since finite beings should have finite descriptions. Additionally, finite beings can have at most Beth(1) (if we allow immortality) distinct sequences of thoughts, actions, and observations, given that they can only think, observe, and act in a finite number of ways in finite time; so if you quotient by identical experiences and behaviors you get Beth(0) or Beth(1). (You might think we can, e.g., observe a continuum amount of stuff in our visual field, but this is an illusion; the resolution is bounded.) The Bekenstein bound also implies physically limited beings in our universe have a finite description length.

I don't think it's hard to imagine such a universe: e.g., consider all possible physical theories in some formal language and all possible initial conditions of such theories. This might be less simple to state than "imagine an infinitely perfect being," but it's also much less ambiguous, so it's hard to judge which is actually less simple.

My perspective on these matters is influenced a lot by UDASSA, which recovers a lot of the nice behaviors of SIA at the cost of non-uniform priors. I don't actually think UDASSA is likely a correct description of reality, but it gives a coherent picture of what an atheistic multiverse containing a great many possible people could look like.
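The counting in the comment above can be sketched in standard notation (my formalization of the commenter's reasoning, with $\Sigma$ a finite alphabet and $k$ the finite number of distinguishable thoughts/acts/observations per moment):

```latex
% Finite descriptions over a finite alphabet \Sigma are finite strings, so:
|\Sigma^{*}| = \aleph_0 = \beth_0
% Mortal beings have finitely many moments: at most \beth_0 distinct histories.
% Immortal beings pick one of k options per discrete moment, tracing out an
% infinite sequence; the set of such sequences has cardinality:
|k^{\mathbb{N}}| = 2^{\aleph_0} = \beth_1 \qquad (k \ge 2)
```

Hence quotienting possible people by identical experience/behavior histories yields at most $\beth_0$ or $\beth_1$, as claimed.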

If theism is true then all possible people exist but they're not all here.  SIA gives you a reason to think many exist but says nothing about where they'd be.  Theism predicts a vast multiverse. 

The cases are non-symmetrical because a big universe makes my existence more likely, but it doesn't make me more likely to get HTTTTTTTHTTHHTTTHTTTHTHTHTTHHTTTTTTHHHTHTTHTTTHHTTTTHTHTHHHHHTTTTHTHHHHTHHHHHHHTTTTHHTHHHTHTTTTTHTTTHTTHHHTHHHTHHTHTHTHTHTHHTHTHTTHTHHTTHTHTTHHHHHTTTTTTHHTHTTTTTHHTHHTTHTTHHTTTHTTHTHTTHHHTTHHHTHTTHHTTHTTTHTHHHTHHTHHHHTHHTHHHTHHHHTTHTTHTHHTHTTHTHHTTHHTTHHTH.  The most specific version of the evidence is that I get that sequence of coin flips, which is unaffected by the number of people, rather than that someone does.  My view follows trivially from the widely adopted SIA, which I argued for in the piece--it doesn't rely on some basic math error.
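The claimed asymmetry can be written out (my formalization, not the author's, with $n$ the number of flips, $s$ the specific sequence, and $N$ the number of observers):

```latex
% De se evidence: "I get sequence s" -- independent of population size:
P(\text{I get } s \mid N) = 2^{-n}
% De dicto evidence: "someone gets s" -- this does grow with N:
P(\text{someone gets } s \mid N) = 1 - \left(1 - 2^{-n}\right)^{N}
```

The argument is that conditioning on the first, maximally specific statement leaves the likelihood flat in $N$, whereas only the second, weaker statement favors big universes.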

I didn't attack his character, I said he was wrong about lots of things. 

M. Y. Zuo (4mo)
Did you skim or skip over reading most of the comment?

//If you add to the physical laws code that says "behave like with Casper", you have re-implemented Casper with one additional layer of indirection. It is then not fair to say this other world does not contain Casper in an equivalent way.//

No, you haven't reimplemented Casper, you've just copied his physical effects.  There is no Casper, and Casper's consciousness doesn't exist.  

Your description of the FDT stuff isn't what I argued.  

//I've just skimmed this part, but it seems to me that you provide arguments and evidence about consciousnes...

I think this comment is entirely right until the very end.  I don't think I really attack him as a person--I don't say he's evil or malicious or anything in the vicinity, I just say he's often wrong.  Seems hard to argue that without arguing against his points.  

I never claimed Eliezer says consciousness is nonphysical--I said exactly the opposite.

If you look at philosophers with Ph.D.s who study decision theory for a living, and who have a huge incentive to produce original work, none of them endorse FDT.

I don't think the specific part of decision theory where people argue over Newcomb's problem is large enough as a field to be subject to the EMH. I don't think the incentives are awfully huge either. I'd compare it to ordinal analysis, a field which does have PhDs but very few experts in general and not many strong incentives. One significant recent result (if the proof works, then the ordinal notation in question would be the most powerful proven well-founded) was done entirely by an amateur building off of work by other amateurs (see the section on Bashicu Matrix System):

About three quarters of academic decision theorists two box on Newcomb's problem.  So this standard seems nuts.  Only 20% one box.

That's irrelevant. To see why one-boxing is important, we need to realize the general principle - that we can only impose a boundary condition on all computations-which-are-us (i.e. we can choose how both us and all perfect predictions of us choose, and both us and all the predictions have to choose the same). We can't impose a boundary condition only on our brain (i.e. we can't only choose how our brain decides while keeping everything else the same). This is necessarily true. Without seeing this (and therefore knowing we should one-box), or even while being unaware of this principle altogether, there is no point in trying to have a "debate" about it.

My goal was to get people not to defer to Eliezer.  I explicitly say he's an interesting thinker who is worth reading.

I think that “the author of the post does not think the post he wrote was bad” is quite sufficiently covered by “hardly any”.

I didn't say Eliezer was a liar and a fraud.  I said he was often overconfident and egregiously wrong, and explicitly described him as an interesting thinker who was worth reading.

The examples just show that sometimes you lose by being rational.  

Unrelated, but I really liked your recent post on Eliezer's bizarre claim that character attacks last is an epistemic standard. 

But part of the whole dispute is that people don't agree on what "rational" means, right? In these cases, it's useful to try to avoid the disputed term—on both sides—and describe what's going on at a lower level. Suppose I'm a foreigner from a far-off land. I'm not a native English speaker, and I don't respect your Society's academics any more than you respect my culture's magicians. I've never heard this word rational before. (How do you even pronounce that? Ra-tee-oh-nal?) How would you explain the debate to me?

It seems like both sides agree that FDT age...

What's your explanation for why virtually no published papers defend it and no published decision theorists defend it?  You really think none of them have thought of it or anything in the vicinity?

Yes. Well, almost. Schwarz brings up disposition-based decision theory, which appears similar though might not be identical to FDT, and every paper I've seen on it appears to defend it as an alternative to CDT. There are some looser predecessors to FDT as well, such as Hofstadter's superrationality, but that's too different imo. Given Schwarz' lack of reference to any paper describing any decision theory even resembling FDT, I'd wager that FDT's obviousness is merely only in retrospect.

I mean like, I can give you some names.  My friend Ethan who's getting a Ph.D was one person.  Schwarz knows a lot about decision theory and finds the view crazy--MacAskill doesn't like it either.

Is there anything about those cases that suggests it should generalize to every decision theorist, or that this is as good a proxy for whether FDT works as the beliefs of earth scientists are for whether the Earth is flat or not? For instance, your samples consist of a philosopher not specialized in decision theory, one unaccountable PhD, and one single person who is both accountable and specializes in decision theory. Somehow, I feel as if there is a difference between generalizing from that and generalizing from every credentialed expert that one could possibly contact. In any case, it's dubious to generalize from that to "every decision theorist would reject FDT in the same way every earth scientist would reject flat earth", even if we condition on you being totally honest here and having fairly represented FDT to your friend. I think everyone here would bet $1,000 that if every earth scientist knew about flat earth, they would nearly universally dismiss it (in contrast to debating over it or universally accepting it) without hesitation. However, I would be surprised if you would bet $1,000 that if every decision theorist knew about FDT, they would nearly universally dismiss it.

I wouldn't call a view crazy for just being disbelieved by many people.  But if a view is both rejected by all relevant experts and extremely implausible, then I think it's worth being called crazy!  

I didn't call people crazy, instead I called the view crazy.  I think it's crazy for the reasons I've explained, at length, both in my original article and over the course of the debate.  It's not about my particular decision theory friends--it's that the fact that virtually no relevant experts agree with an idea is relevant to an assessmen...

My claim is that there are not yet people who know what they are talking about; or, more precisely, everyone knows roughly as much about what they are talking about as everyone else. Again, I'd like to know who these decision theorists you talked to were, or at least what their arguments were. The most important thing here is how you are evaluating the field of decision theory as a whole, how you are evaluating who counts as an expert or not, and what arguments they make, in enough detail that one can conclude that FDT doesn't work without having to rely on your word.

Let's say, fundamental differences in worldview. I judge wireheading to be a step short of suicide, simulations to be no more than places that may be worth visiting on occasion, and most talk of "happiness" to be a category error. And the more zeros in an argument, the less seriously I am inclined to take it.

For some reason the words "flagrantly, confidently, and egregiously wrong" come to mind.

You can make it with Parfit's hitchhiker, but in that case there's an action beforehand and so a time when you have the ability to try to be rational.

There is a path from the decision theory to the predictor, because the predictor looks at your brain--with the decision theory it will make--and bases the decision on the outputs of that cognitive algorithm. 

Oskar Mathiasen (5mo)
I don't think the quoted problem has that structure. S causes one-boxing tendencies, and the person putting money in the box looks only at S. So it seems to be changing the problem to say that the predictor observes your brain/your decision procedure, when all they observe is S, which, while causing "one boxing tendencies", is not causally downstream of your decision theory. Further, if S were downstream of your decision procedure, then FDT one-boxes whether or not the path from the decision procedure to the contents of the boxes routes through an agent, undermining the criticism that FDT has implausible discontinuities.

The Demon is omniscient.  

FDTists can't self-modify to be CDTists, by stipulation.  This actually is, I think, pretty plausible--I couldn't choose to start believing FDT. 

So it's crazy to believe things that aren't supported by published academic papers? I think if your standard for "crazy" is believing something that a couple people in a field too underdeveloped to be subject to the EMH disagree with, and that there are merely no papers defending it (not any actively rejecting it), then probably you and roughly every person on this website ever count as "crazy".

Actually, I think an important thing here is that decision theory is too underdeveloped and small to be subject to the EMH, so you can't just go "if this crazy hypothesis is correct then why hasn't the entire field accepted it, or at least started having a debate over it?" It is simply too small to have fringe, in contrast to non-fringe, positions.

Obviously, I don't think the above is necessarily true, but I still think you're making us rely too much on your word and personal judgement. On that note, I think it's pretty silly to call people crazy based on either evidence they have not seen and you have not shown them (for instance, whatever counterarguments the decision theorists you contacted had), or evidence as weak/debatable as the evidence you have put forth in this post, which has come to their attention only now. Were we somehow supposed to know that your decision theorist acquaintances disagreed beforehand?

If you have any papers from academic decision theorists about FDT, I'd like to see them, whether favoring or disfavoring it. IIRC Soares has a Bachelor's in both computer science and economics and MacAskill has a Bachelor's in philosophy.

Yeah, I agree I have lots of views that LessWrongers find dumb.  My claim is just that it's bad when those views are hard to communicate on account of the way LW is set up.  

As shminux describes well, it's possible to write about controversial views in a way that doesn't get downvoted into nirvana. To do that, you actually have to think about how to write well. The rate limit limits the quantity, but that allows you to spend more time getting the quality right. If you write in the style you are writing, you aren't efficiently communicating in the first place. That would require thinking a lot more about what the cruxes actually are.

I think it's not just the views but also (mostly?) the way you write them.

This is hindsight, but next time instead of writing "I think Eliezer is often wrong about X, Y, Z" perhaps you should first write three independent articles "my opinion on X", "my opinion on Y", "my opinion on Z", and then one of two things will happen -- if people agree with you on X, Y, Z, then it makes sense to write the article "I think Eliezer is often wrong" and use these three articles as evidence... or if people disagree with you on X, Y, Z, then it doesn't really make sense t...

The description is exactly as you describe in your article.  I think my original was clear enough, but you describe your interpretation, and your interpretation is right.  You proceed to bite the bullet.  

Your original description doesn't specify subjunctive dependence, which is a critical component of the problem.

How'd you feel about a verbal debate? 

I have written before about why FDT is relevant to solving the alignment problem. I'd be happy to discuss that with you.

Philosophy is pretty much the only subject that I'm very informed about.  So as a consequence, I can confidently say Eliezer is egregiously wrong about most of the controversial views I can fact check him on.  That's . . . worrying.

Jackson Wagner (6mo)
Some other potentially controversial views that a philosopher might be able to fact-check Eliezer on, based on skimming through an index of the sequences:

* Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc.  (IMO he's right about this stuff.  But idk if this counts as controversial enough within academia?)
* Probably a lot of assorted philosophy-of-science stuff about the nature of evidence, the idea that high-caliber rationality ought to operate "faster than science", etc.  (IMO he's right about the big picture here, although this topic covers a lot of ground so if you looked closely you could probably find some quibbles.)
* The claim / implication that talk of "emergence" or the study of "complexity science" is basically bunk.  (Not sure but seems like he's probably right?  Good chance the ultimate resolution would probably be "emergence/complexity is a much less helpful concept than its fans think, but more helpful than zero".)
* A lot of assorted references to cognitive and evolutionary psychology, including probably a number of studies that haven't replicated -- I think Eliezer has expressed regret at some of this and said he would write the sequences differently today.  But there are probably a bunch of somewhat-controversial psychology factoids that Eliezer would still confidently stand by.  (IMO you could probably nail him on some stuff here.)
* Maybe some assorted claims about the nature of evolution?  What it's optimizing for, what it produces ("adaptation-executors, not fitness-maximizers"), where the logic can & can't be extended (can corporations be said to evolve?  EY says no), whether group selection happens in real life (EY says basically never).  Not sure if any of these claims are controversial though.
* Lots of confident claims about the idea of "intelligence" -- that it is a coherent concept, an important trait, etc.  (Vs some philosophers w

I felt like I was following the entire comment, until you asserted that it rules out zombies.

If you only got rid of consciousness behavior would change.  

You might be able to explain Chalmers' behavior, but that doesn't capture the subjective experience. 

Steven Byrnes (6mo)
Oh, I see, the word “only” here or “just” in your previous comment were throwing me off. I was talking about the following thing that you wrote: [single quotes added to fix ambiguous parsing.] Let’s label these two worlds:

* World A (“the world where consciousness causes the things”), and
* World B (the world where “the things would be caused the same physical way as they are with consciousness, but there would be no consciousness”).

Your perspective seems to be: “World A is the truth, and World B is a funny thought experiment. This proposal is type-D dualist.”

I am proposing an alternative perspective: “World B is the true causally-closed physical laws of the universe (and by the way, the laws of physics maybe look different from how we normally expect laws of physics to look, but oh well), and World A is a physically equivalent universe but where consciousness exists as an epiphenomenon. This proposal is type-E epiphenomenalist.” Is there an error in that alternative perspective?

Let’s say I write the sentence: “my wristwatch is black”. And let’s say that sentence is true. And let’s further say it wasn’t just a lucky guess. Under those assumptions, then somewhere in the chain of causation that led to my writing that sentence, you will find an actual watch, and it’s actually black, and photons bounced off of that watch and went into my eye (or someone else’s eye or a camera etc.), thus giving me that information. Agree?

By the same token: Let’s say that Chalmers writes the sentence “I have phenomenal consciousness, and it has thus-and-such properties”. And let’s say that sentence is true. And let’s further say it wasn’t just a lucky guess. Under those assumptions, then somewhere in the chain of causation that led Chalmers to write that sentence, you will find phenomenal consciousness, whatever it is (if anything), with an appropriate place in the story to allow Chalmers to successfully introspect upon it—to allow Chalmers to somehow “query” phenomenal co

It's not epiphenomenalism because the law invokes consciousness.  On the interactionist account, consciousness causes things rather than just the physical stuff causing things.  If you just got rid of consciousness, you'd get a physically different world.

I don't think that induction on the basis of "science has explained a lot of things, therefore it will explain consciousness" is convincing.  For one, up until this point, science has only explained physical behavior, not subjective experience.  This was the whole point (see Goff's book Galileo's Error).  For another, this seems to prove too much--it would seem to suggest that we could discover the correct modal beliefs in a test tube.

Steven Byrnes (6mo)
First of all, I was making the claim “science will eventually be able to explain the observable external behavior wherein David Chalmers moves his fingers around the keyboard to type up books about consciousness”. I didn’t say anything about “explaining consciousness”, just explaining a particular observable human behavior.

Second of all, I don’t believe that above claim because of induction, i.e. “science can probably eventually explain the observable external behavior of Chalmers writing books about consciousness because hey, scientists are smart, I’m sure they’ll figure it out”. I agree that that’s a pretty weak argument. Rather I believe that claim because I think I already know every step of that explanation, at least in broad outline. (Note that I’m stating this opinion without justifying it.)

OK, but then the thing you’re talking about is not related to p-zombies, right? I thought the context was: Eliezer presented an argument against zombies, and then you / Chalmers say it’s actually not an argument against zombies but rather an argument against epiphenomenalism, and then you brought up the Casper thing to illustrate how you can have zombies without epiphenomenalism. And I thought that’s what we were talking about. But now you’re saying that, in the Casper thing, getting rid of consciousness changes the world, so I guess it’s not a zombie world? Maybe I’m confused.

Question: if you got rid of consciousness, in this scenario, does zombie-Chalmers still write books about consciousness, or not? (If not, that’s not zombie-Chalmers, right? Or if so, then isn’t it pretty weird that getting rid of consciousness makes a physically different world but not in that way? Of all things, I would think that would be the most obvious way that the world would be physically different!!)
M. Y. Zuo (4mo)
I would have to agree with the parent: why present your writing in such a way that is almost guaranteed to turn away, or greatly increase the skepticism of, serious readers?  A von-Neumann-like character might have been able to get away with writing in this kind of style and still present some satisfactory piece, but hardly anyone less competent. It is some months later, so I am writing this with the benefit of hindsight, but it seems almost self-negating.  Especially since a large portion of the argument rests on questions regarding Yudkowsky's personal writing style, character, personality, world view, etc., which therefore draws into sharp contrast the same attributes of any writer calling those out. I.e., even if every claim regarding Yudkowsky's personal failings turns out to be 100% true, that would still require someone somewhat better in those respects to actually gain the sympathy of the audience.
They are factual questions about high-level concepts (in physicalism, of course), and high-level concepts depend on values -- without values, even your experiences at one place are not the same things as your experiences in another place.

In our world my laptop doesn't fall because there is a table under it. In another world the Flying Spaghetti Monster holds my laptop. And also the FSM sends light into my eyes (the version of me from the other world), so I think there is a table. And the FSM copies all other causal effects which are caused by the table in our world. This other world is imaginable; therefore, the table is non-physical. What exactly makes this a bad analogy with your line of thought?

That's true for all fields where there are experts on a subject who use one paradigm and other people who propose a different paradigm.  It tells you little about the merit of alternative paradigms.

Says who? If you divide your ontology however you want, you can have a conceivability argument about the non-physicality of melons. Which, by the way, is addressed in Eliezer's reply to Chalmers.
Garrett Baker (6mo)
I don't know if Eliezer is irrational about animal consciousness. There's a bunch of reasons you can still be deeply skeptical of animal consciousness even if animals have nociceptors (RL agents have nociceptors! They aren't conscious!), or integrated information theory & global workspace theory probably say animals are 'conscious'. For example, maybe you think consciousness is a verbal phenomenon, having to do with the ability to construct novel recursive grammars. Or maybe you think it's something to do with the human capacity to self-reflect, maybe defined as making new mental or physical tools via methods other than brute force or local search. I don't think you can show he's irrational here, because he hasn't made any arguments to show the rationality or irrationality of. You can maybe say he should be less confident in his claims, or criticize him for not providing his arguments. The former is well known, the latter less useful to me.

I find Eliezer impressive, because he founded the rationality community, which IMO is the social movement with by far the best impact-to-community-health ratio ever & has been highly influential on other social movements with similar ratios; knew AI would be a big & dangerous deal before virtually anyone; worked on & popularized that idea; and wrote two books (one nonfiction, and the other fanfiction) which changed many peoples' lives & society for the better. This is impressive no matter how you slice it. His effect on the world will clearly be felt for long to come, if we don't all die (possibly because we don't all die, if alignment goes well and turns out to have been a serious worry, which I am prior to believe). And that effect will be positive almost for sure.

Notably, about three quarters of decision theorists two box.  I wasn't arguing for non-physicalism so much as arguing that Eliezer's specific argument against physicalism shows that he doesn't know what he's talking about.  Pain is a subset of suffering--it's the physical version of suffering, but the same argument can be made for suffering.  I didn't comment on Everettianism because I don't know enough (just that I think it's suspicious that Eliezer is so confident) nor on probability theory.  I didn't claim there was a contradiction between Bayesian and frequentist methods.

Notably, about three quarters of decision theorists two box.

I know... and I cannot wrap my head around it. They talk about causality and dominant strategies, and end up assigning non-zero weight to a zero-probability possible world. It's maddening. 

Eliezer's specific argument against physicalism shows that he doesn't know what he's talking about

I see. Not very surprising given the pattern. I guess my personal view is that non-physicalism is uninteresting given what we currently know about the world, but I am not a philosopher.

I didn't comment on Evere

...

Yeah, I can see how that could be annoying.  In my defense, however, I am seriously irritated by this, and I think there's nothing wrong with being a bit snarky sometimes.  Eliezer seemed to think in this Facebook exchange that his view just falls naturally out of understanding consciousness.  But that is a very specific and implausible model.

Your father followed FDT and had the same reasons to procreate as you.  He is relevantly like you. 

Lukas Finnveden (5mo)
That would mean that he believed he had a father with the same reasons, who believed he had a father with the same reasons, who believed he had a father with the same reasons... I.e., this would require an infinite line of forefathers. (Or at least of hypothetical, believed-in forefathers.) If anywhere there's a break in the chain -- that person would not have FDT reasons to reproduce, so neither would their son, etc. Which makes it disanalogous from any cases we encounter in real life. And makes me more sympathetic to the FDT reasoning, since it's a stranger case where I have less strong pre-existing intuitions.
...which makes the Procreation case an unfair problem. It punishes FDT'ers specifically for following FDT. If we're going to punish decision theories for their identity, no decision theory is safe. It's pretty wild to me that @WolfgangSchwarz either didn't notice this or doesn't think it's a problem. A more fair version of Procreation would be what I have called Procreation*, where your father follows the same decision theory as you (be it FDT, CDT or whatever).

Suppose that I beat up all rational people so that they get less utility.  This would not make rationality irrational.  It would just mean that the world is bad for the rational.  The question you've described might be a fine one, but it's not what philosophers are arguing about in Newcomb's problem.  If Eliezer claims to have revolutionized decision theory, and then doesn't even know enough about decision theory to know that he is answering a different question from the decision theorists, that is an utter embarrassment that significa...

Sorry, I said twin case, I meant the procreation case! 

The simulation case seems relevantly like the normal twin case which I'm not as sure about. 

Legible precommitment is not crazy!  Sometimes, it is rational to agree to do the irrational thing in some case.  If you have the ability to make it so that you won't later change your mind, you should do that.  But once you're in that situation, it makes sense to defect. 

As far as I can tell, the procreation case isn't defined well enough in Schwarz for me to engage with it. In particular, in what exact way are the decisions of my father and I entangled? (Just saying the father follows FDT isn't enough.) But I do think there is going to be a case basically like this where I bite the bullet. Notably, so does EDT.

Cool, so you maybe agree that CDT agents would want to self-modify into something like FDT agents (if they could). Then I suppose we might just disagree on the semantics behind the word rational. (Note that CDT agents don't exactly self-modify into FDT agents, just something close.)

I agree!  Eliezer deserves praise for writing publicly about his ideas.  My article never denied that.  It merely claimed that he often confidently says things that are totally wrong. 

I really appreciate that!  Though if you like the things I write, you can find my blog at

Omnizoid's blog is indeed high quality. Good Writing As Hypnosis, for example, is really good. I would love it if Scott Alexander had more competition. Evil: A Reflection is good too.

Your points, I think, are both addressed by the point MacAskill makes: perhaps in some cases it's best to be the type of agent that follows functional decision theory.  Sometimes rationality will be bad for you--if there's a demon who tortures all rational people, for example.  And as Schwarz points out, in the twin case, you'll get less utility by following FDT--you don't always want to be an FDTist.

I find your judgment about the blackmail case crazy!  Yes, agents who give in to blackmail do worse on average.  Yes, you want to...

Sometimes rationality will be bad for you--if there's a demon who tortures all rational people, for example

At some point this gets down to semantics. I think a reasonable question to answer is "what decision rule should be chosen by an engineer who wants to build an agent scoring the most utility across its lifetime?" (quoting from Schwarz). I'm not sure if the answer to this question is well described as rationality, but it seems like a good question to answer to me. (FDT is sort of an attempted answer to this question if you define "decision rule" somewhat narrowly.)
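The engineer's question quoted from Schwarz above can be made concrete with a toy model (a hypothetical sketch of the dispute, not anyone's published argument; the perfect predictor is stipulated, which is exactly the premise whose relevance CDTers contest):

```python
# Toy Newcomb's problem with a perfect predictor: the predictor fills the
# opaque box iff it predicts the agent one-boxes, so an agent's lifetime
# earnings are fixed by its policy alone.

def payoff(policy: str) -> int:
    """Winnings for an agent whose policy is perfectly predicted."""
    opaque = 1_000_000 if policy == "one-box" else 0  # predictor's move
    transparent = 1_000                               # always on the table
    return opaque if policy == "one-box" else opaque + transparent

# An engineer building an agent to maximize lifetime utility picks one-boxing:
assert payoff("one-box") > payoff("two-box")
```

Of course, the standard CDT reply is that this only shows which disposition it is good to have, not which act is rational once the boxes are already filled.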

I can't seem to find this in the linked blog post. (I see discussion of the twin case, but not a case where you get less utility from precommitting to follow FDT at the start of time.) What about the simulation case? Do you think CDT with non-indexical preferences is crazy here also? More generally, do you find the idea of legible precommitment to be crazy?

The fact that someone argues their positions publicly doesn't mean they necessarily know what they're talking about.  Deepak Chopra argues his positions publicly.

and Deepak deserves praise for that even if his positions are wrong.

We are stipulating that we would have the same evidence in both cases, so it would lead to the same beliefs, just with different truth values. 

That just moves the problem back one step. The processes that lead to the evidence in the untruth universe can't be the same as the ones which lead to the similar-looking evidence in the truth universe (unless you get Gettiered, and then the people in the truth universe don't actually have knowledge.) So if you don't ignore history, the worlds still differ in ways other than just the fate of Joan.

We're asking what's good for the person, not what deal they'd accept.  If we ask whether the person who is constantly tortured is well off, the answer is obviously no!  If naive OLT is true, then they would be well off.  It doesn't matter if they can ever use the knowledge. 

Is the naïve OLT so naïve that it always assigns the same fixed amount of value to the same bit of knowledge, no matter what? Anyway, I'm still not convinced that a person in constant pain is automatically not well off. Who is better off: a world-famous scientist billionaire with a terrible illness causing constant pain, or a beggar without terrible illnesses living a miserable life in some third-world slum? I have some trouble constructing a similar scenario of well-off people involving literal torture, but that's because, as I said earlier, the very concept of "torture" involves jailers deliberately inflicting harm on segregated people. You say that OLT fails just because you can't imagine any realistic counterbalance to the torture itself. But since we are already in the realm of hypotheses, consider a fantasy setting where the demon-king routinely tortures his generals, each of whom rules a whole realm anyway.

There are as many even numbers as there are total numbers.  They are the same cardinality.  

Yes, but the natural density of the even numbers is 0.5. And that is the natural extension to infinity of your intuition that there are more happy than unhappy people in the HEAVEN universe. If you were born as a person at random in HEAVEN, you'd most likely be happy!
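The density claim is easy to check numerically. A quick sketch (the function name is mine): the fraction of evens among the first N naturals tends to 1/2 as N grows.

```python
# Natural density of the even numbers: the fraction of evens among
# {1, ..., n} tends to 1/2 as n grows.
def density_of_evens(n: int) -> float:
    """Fraction of even numbers among 1..n."""
    evens = sum(1 for k in range(1, n + 1) if k % 2 == 0)
    return evens / n

for n in (10, 1_000, 1_000_000):
    print(n, density_of_evens(n))  # each prints 0.5
```

The same limiting-fraction idea is what grounds the "one unhappy person per galaxy" intuition below, even though the two infinite sets have the same cardinality.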

If you rearrange heaven to hell, you get a different average.  So you either have to think rearrangement matters or that they're equal.  

No, you don't. This is like saying that if you rearrange the even numbers, they stop being roughly half of all naturals. They're still one in every two; if you pick a large enough ensemble, you notice that. The arrangement with one unhappy person per galaxy is very convenient, but it's the other way around--if the arrangement were inconvenient but the ratio were given, we could group them this way to make the calculation simpler. Relevant concept: natural density.

You can also get the total of a single galaxy--the problem is how you count up things in an infinite world. 

Yes, but the total accumulates while the average does not: $U_{\text{TOT}} = \sum_G U_G = \infty$, whereas $\langle U \rangle = \lim_{N\to\infty} \frac{N\,U_G}{N} = U_G$.
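A numerical sketch of the same point, with a made-up per-galaxy utility `U_G`: the running total over N identical galaxies diverges, while the running average is constant at `U_G` for every N.

```python
# Over N identical galaxies, the running total grows without bound
# while the running average stays at the per-galaxy value U_G.
# U_G is an arbitrary illustrative value.
U_G = 0.5

def running_total(n: int) -> float:
    """Total utility of n identical galaxies."""
    return n * U_G

def running_average(n: int) -> float:
    """Average utility across n identical galaxies."""
    return (n * U_G) / n

for n in (1, 100, 10**6):
    print(n, running_total(n), running_average(n))
```

The total is a linear function of N; the average is the ratio of two linear functions and therefore has a well-defined finite limit.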

You have not understood the problem.  There are not more happy people than unhappy people in any rigorous sense--the infinities are of the same cardinality.  And the Pasadena game scenario gives indeterminate averages.  Also, average utilitarianism is crazy: it implies you should create lots of miserable people in hell as long as they're slightly less miserable than existing people. 

I'm not an average utilitarian either--I don't think it's easy to define a good utility function at all, and I wrote a whole post jokingly discussing this problem. My point was that only totalists run into this specific issue. If the galaxy has 1 trillion people, of which only one is unhappy, you can easily get the average for a single galaxy, which is finite. And since all galaxies have the same average, it can't really change if you just take more of them, no? Even numbers have the same cardinality as the natural numbers, but we can still say that the density of the evens in the naturals is 1/2. This is not a Pasadena scenario; this is just a regular old limit of the ratio of two linear functions. Average utilitarianism has other issues, but on this point it captures our intuition exactly right.
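The single-galaxy average is a finite, well-defined number. A sketch with illustrative utilities (the +1/-1 values are my own choice, not from the thread):

```python
# One galaxy of 10**12 people, exactly one of whom is unhappy.
# Utilities are illustrative: +1 for a happy person, -1 for an
# unhappy one.
POP = 10**12
HAPPY_U, UNHAPPY_U = 1.0, -1.0

galaxy_average = ((POP - 1) * HAPPY_U + 1 * UNHAPPY_U) / POP
print(galaxy_average)  # just under 1
```

Since every galaxy is stipulated to be identical, this single finite number is also the average of any finite collection of galaxies, which is what makes the limiting average well-defined here.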