All of momothefiddler's Comments + Replies

Delayed Gratification vs. a Time-Dependent Utility Function

I can't figure out an answer to any of those questions without having a way to decide which utility function is better. This seems to be a problem, because I don't see how it's even possible.

Dorikka (0 points, 10y): Can you taboo 'better'?
Delayed Gratification vs. a Time-Dependent Utility Function

But why does it matter what they think about it for the short time before it happens, compared to the enjoyment of it long after?

So you positively value "eating ice cream" and negatively value "having eaten ice cream" - I can relate. What if the change, instead of making you dislike ice cream and like veggies, made you dislike fitness and enjoy sugar crashes? The only real difference I can see is that the first increases your expected lifespan and so increases the overall utility. They both resolve the conflict and make you happy, though, so aren't they both better than what you have now?

I guess you're right. It's the difference between "what I expect" and "what I want".

Dorikka (0 points, 10y): I'm suspicious of the implied claim that the 'change in sustained happiness over time' term is so large in the relevant utility calculation that it dominates other terminal values. No -- liking sugar crashes would cause me to have more sugar crashes, and I'm not nearly as productive during sugar crashes as otherwise. So if I evaluated the new situation with my current utility function, I would find increased happiness (which is good), and very decreased productivity (which is more bad than the happiness is good). So, to clarify, liking sugar crashes would be significantly worse than what I have now, because I value other things than pleasure.

I kinda suspect that you would have the same position -- modifying other sentiences' utility functions in order to maximize happiness, but evaluating changes to your own utility function with your current utility function. One of the more obvious problems with this asymmetry is that if we had the power to rewire each other's brains, we would be in conflict -- each would, in essence, be hostile to the other, even though we would consider our intentions benevolent.

However, I'm unsatisfied with 'evaluate your proposed change to someone's utility function with their CEV'd current utility function', because quite a bit is riding on the 'CEV' bit. Let's say that someone was a heroin addict, and I could rewire them to remove their heroin addiction (and, so that it's the least-convenient-possible-world, let's say that I can remove the physical and mental withdrawal as well). I'm pretty sure that their current utility function (which is super-duper time-discounted -- one of the things heroin does) would significantly oppose the change, but I'm not willing to stop here, because it's obviously a good thing for them. So the question becomes 'what should I actually do to their current utility function to CEV it, so I can evaluate the new utility function with it.'

Well, first I'll strip the actual cognitive biases (including the super…
Delayed Gratification vs. a Time-Dependent Utility Function

As far as I can tell, the only things that keep me from reducing myself to a utilon-busybeaver are a) insufficiently detailed information on the likelihoods of each potential future-me function, and b) an internally inconsistent utility function.

What I'm addressing here is b) - my valuation of a universe composed entirely of minds that most-value a universe composed entirely of themselves is path-dependent. My initial reaction is that that universe is very negative on my current function, but I find it hard to believe that it's truly of larger magnitude th…

Delayed Gratification vs. a Time-Dependent Utility Function

Hm. If people have approximately-equivalent utility functions, does that help them all better satisfy those functions? If so, it makes sense to have none of them value stealing (since having all of them value stealing could be a problem). In a large enough society, though, the ripple effect of my theft is negligible. That's beside the point, though.

"Avoid death" seems like a pretty good basis for a utility function. I like that.

Alerus (1 point, 10y): Yeah, I agree that the ripple effect of your personal theft would be negligible. I see it as similar to littering. You do it in a vacuum, no big deal, but when many have that mentality, it causes problems. Sounds like you agree too :-)
Delayed Gratification vs. a Time-Dependent Utility Function

So you, like I, might consider turning the universe into minds that most value a universe filled with themselves?

bryjnar (4 points, 10y): I'd consider it. On reflection, I think that for me personally what I care about isn't just minds of any kind having their preferences satisfied, even if those are harmless ones. I think I probably would like them to have more adventurous preferences! The point is, what I'm looking at here are my preferences for how the world should be; whether I would prefer a world full of wire-headers or one full of people doing awesome actual stuff. I think I'd prefer the latter, even if overall the adventurous people didn't get as many of their preferences satisfied. A typical wire-header would probably disagree, though!
Delayed Gratification vs. a Time-Dependent Utility Function

I'm not saying I can change to liking civil war books. I'm saying if I could choose between A) continuing to like scifi and having fantasy books, or B) liking civil war books and having civil war books, I should choose B, even though I currently value scifi>stats>civil war. By extension, if I could choose A) continuing to value specific complex interactions and having different complex interactions, or B) liking smiley faces and building a smiley-face maximizer, I should choose B even though it's counterintuitive. This one is somewhat more plausible,…

Alerus (0 points, 10y): Right, so if you can choose your utility function, then it's better to choose one that can be better maximized. Interestingly, though, if we ever had this capability, I think we could just reduce the problem by using an unbiased utility function. That is, explicit preferences (such as liking math versus history) would be removed and instead we'd work with a more fundamental utility function. For instance, death is pretty much a universal stop point, since you cannot gain any utility if you're dead, regardless of your function. This would be, in a sense, the basis of your utility function. We also find that death is better avoided when society works together and develops new technology. Your actions then might be dictated by what you are best at doing to facilitate the functioning and growth of society.

This is why I brought up society-damaging actions as being potentially objectively worse. You might be able to come up with specific instances of actions that we associate as society-damaging that seem okay, such as specific instances of stealing, but then they aren't really society-damaging in the grand scheme of things. That said, I think as a rule of thumb stealing is bad in most cases due to the ripple effects of living in a society in which people do that, but that's another discussion. The point is there may be objectively better choices even if you have no explicit preferences for things (or you can choose your preferences).

Of course, that's all conditioned on whether you can choose your utility function. For our purposes for the foreseeable future, that is not the case, and so you should stick with expected utility functions.
Delayed Gratification vs. a Time-Dependent Utility Function

You're saying that present-me's utility function counts and no-one else's does (apart from their position in present-me's function) because present-me is the one making the decision? That my choices must necessarily depend on my present function and only depend on other/future functions in how much I care about their happiness? That seems reasonable. But my current utility function tells me that there is an N large enough that N utilon-seconds for other peoples' functions counts more in my function than any possible thing in the expected lifespan of present-me's utility function.

bryjnar (0 points, 10y): Sure. That might well be so. I'm not saying you have to be selfish! However, you're talking about utilons for other people -- but I doubt that that's the only thing you care about. I would kind of like for Clippy to get his utilons, but in the process, the world will get turned into paperclips, and I care much more about that not happening! So if everyone were to be turned into paperclip maximizers, I wouldn't necessarily roll over and say, "Alright, turn the world into paperclips". Maybe if there were enough of them, I'd be OK with it, as there's only one world to lose, but it would have to be an awful lot!
Delayed Gratification vs. a Time-Dependent Utility Function

Say there's a planet, far away from ours, where gravity is fairly low, atmospheric density fairly high, and the ground uniformly dangerous, and the sentient resident species has wings and two feet barely fitted for walking. Suppose, also, that by some amazingly unlikely (as far as I can see) series of evolutionary steps, these people have a strong tendency to highly value walking and negatively value flying.

If you had the ability to change their hardwired values toward transportation (and, for whatever reason, did not have the ability to change their non-n…

Dorikka (0 points, 10y): I think that an important question would be 'would their current utility function assign positive utility to modifying it in the suggested manner if they knew what they will experience after the change?', or, more briefly, 'what would their CEV say?' It might seem like they would automatically object to having their utility function changed, but here's a counterexample to show that it's at least possible that they would not: I like eating ice cream, but ice cream isn't very healthy -- I would much rather like eating veggies and hate eating ice cream, and would welcome the opportunity to have my preferences changed in such a way.

I'm not very sure what precisely you mean by Aumann's Agreement Theorem applying to utility, but I think the answer's 'no' -- AFAIK, Aumann's Agreement Theorem is a result of the structure of Bayes' Theorem, and I don't see a relation which would allow us to conclude something similar for different utility functions.
Delayed Gratification vs. a Time-Dependent Utility Function

If I considered it high-probability that you could make a change, and you were claiming you'd make a change that wouldn't be of highly negative utility to everyone else, I might well prepare for that change. Because your proposed change is highly negative to everyone else, I might well attempt to resist or counteract that change. Why does that make sense, though? Why do other peoples' current utility functions count if mine don't? How does that extend to a situation where you changed everyone else? How does it extend to a situation where I could change e…

bryjnar (1 point, 10y): The way I'm thinking about it is that other people's utility functions count (for you, now) because you care about them. There isn't some universal magic register of things that "count"; there's just your utility function, which lives in your head (near enough). If you fundamentally don't care about other people's utility, and there's no instrumental reason for you to do so, then there's no way I can persuade you to start caring. So it's not so much that caring about other people's utility "makes sense", just that you do care about it.

Whether the AI is doing a bad thing (from the point of view of the programmer) depends on what the programmer actually cares about. If he wants to climb Mount Everest, then being told that he will be rewired to enjoy just lying on a sofa doesn't lead to him doing so. He might also care about the happiness of his future self, but it could be that his desire to climb Mount Everest overwhelms that.
Delayed Gratification vs. a Time-Dependent Utility Function

I like this idea, but I would also, it seems, need to consider the (probabilistic) length of time each utility function would last.

That doesn't change your basic point, though, which seems reasonable.

The one question I have is this: In cases where I can choose whether or not to change my utility function - cases where I can choose to an extent the probability of a configuration appearing - couldn't I maximize expected utility by arranging for my most-likely utility function at any given time to match the most-likely universe at that time? It seems that would make life utterly pointless, but I don't have a rational basis for that - it's just a reflexive emotional response to the suggestion.

Alerus (0 points, 10y): Yeah, I agree that you would have to consider time. However, my feeling is that for the utility calculation to be performed at all (that is, even in the context of a fixed utility function), you must also consider time through the state of being in all subsequent states, so now you just add an expected utility calculation to each of those subsequent states (and therefore implicitly capture the length of time each function lasts) instead of the fixed utility. It is possible, I suppose, that the probability could be conditional on the previous state's utility function too. That is, if you're really into math one day, it's more likely that you could switch to statistics rather than history following that, but if you have it conditioned on having already switched to literature, maybe history would be more likely then. That makes for a more complex analysis, but again, approximations and all would help :p

Regarding your second question, let me make sure I've understood it correctly. You're basically saying: couldn't you change your utility function, what you value, on the whims of what is most possible? For instance, if you were likely to wind up stuck in a log cabin that for entertainment only had books on the civil war, you'd change your utility to valuing civil war books? Assuming I understood that correctly, if you could do that, I suppose changing your utility to reflect your world would be the best choice. Personally, I don't think humans are quite that malleable, and so you're to an extent kind of stuck with who you are.

Ultimately, you might also find that some things are objectively better or worse than others; that regardless of the utility function some things are worse. Things that are damaging to society, for instance, might be objectively worse than alternatives because the consequential repercussions for you will almost always be bad (jail, a society that doesn't function as well because you just screwed it up, etc.). If true, you still would have some constant guiding princi…
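The expected-utility calculation discussed above -- folding the chance that your utility function changes into the sum over subsequent states -- can be sketched as a toy computation. Everything below (the drift model, the probabilities, the utility values) is invented purely for illustration; nothing in the thread specifies these numbers.

```python
# Toy sketch: expected utility over future selves whose utility function
# may drift. All numbers are invented for illustration.

def expected_utility(plan, steps):
    """Sum probability-weighted utility over each time step's possible selves."""
    total = 0.0
    for t in range(steps):
        for prob, utility in plan(t):
            total += prob * utility
    return total

def p_drift(t):
    # Assumed drift model: the chance your function has changed grows
    # with time, capping at certainty.
    return min(0.1 * t, 1.0)

# Plan A: arrange the world to suit your *current* utility function.
def plan_current(t):
    return [(1 - p_drift(t), 10), (p_drift(t), 2)]

# Plan B: arrange the world to suit the *likely drifted* function.
def plan_drifted(t):
    return [(1 - p_drift(t), 3), (p_drift(t), 9)]

# Over a long enough horizon, catering to the probable future function
# yields more total expected utilons -- the uncomfortable conclusion
# the thread circles around.
print(expected_utility(plan_current, 20))   # smaller total
print(expected_utility(plan_drifted, 20))   # larger total
```

The same structure extends to the conditional case raised above: make `p_drift` depend on which function you held at the previous step (a Markov chain over utility functions) rather than on time alone.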
Not for the Sake of Happiness (Alone)

Without a much more precise way of describing patterns of neuron-fire, I don't think either of us can describe happiness more than we have so far. Having discussed the reactions in-depth, though, I think we can reasonably conclude that, whatever they are, they're not the same, which answers at least part of my initial question.

Thanks!

Not for the Sake of Happiness (Alone)

I believe you to be sincere when you say

I've certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy -- that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same

but I can't imagine experiencing that. If the utility of a function goes down, it seems my happiness from seeing that function must necessarily go down as well. This discrepancy causes me to believe there is a low-level difference between what you consider happ…

TheOtherDave (2 points, 10y): I agree that if two things are indistinguishable in principle, it makes sense to use the same label for both. It is not nearly as clear to me that "what makes me happy" and "what makes the world better" are indistinguishable sets as it seems to be to you, so I am not as comfortable using the same label for both sets as you seem to be.

You may be right that we don't use "happiness" to refer to the same things. I'm not really sure how to explore that further; what I use "happiness" to refer to is an experiential state I don't know how to convey more precisely without in effect simply listing synonyms. (And we're getting perilously close to "what if what I call 'red' is what you call 'green'?" territory, here.)
Delayed Gratification vs. a Time-Dependent Utility Function

Well, I'm not sure making the clones anencephalic would make eating them truly neutral. I'd have to examine that more.

The linked situation proposes that the babies are in no way conscious and that all humans are conditioned, such that killing myself will actually result in a fewer number of people happily eating babies.

Delayed Gratification vs. a Time-Dependent Utility Function

Refuse the option and turn me into paperclips before I could change it.

Apparently my acceptance that utility-function-changes can be positive is included in my current utility function. How can that be, though? While, according to my current utility function, all previous utility functions were insufficient, surely no future one could map more strongly onto my utility function than itself. Yet I feel that, after all these times, I should be aware that my utility function is not the ideal one...

Except that "ideal utility function" is meaningless! …

Not for the Sake of Happiness (Alone)

I understand it to mean, roughly, that when comparing hypothetical states of the world Wa and Wb, I perform some computation F(W) on each state such that if F(Wa) > F(Wb), then I consider Wa more valuable than Wb.

That's precisely what I mean.

Another way of saying this is that if OW is the reality that I would perceive in a world W, then my happiness in Wa is F(OWa). It simply cannot be the case, on this view, that I consider a proposed state-change in the world to be an improvement, without also being such that I would be made happier by becoming a…
TheOtherDave (2 points, 10y): What I mean by "sincerely" is just that I'm not lying when I assert it. And, yes, this presumes that X isn't changing F. I wasn't trying to be sneaky; my intention was simply to confirm that you believe F(Wa+X)>F(Wa) implies F(O(Wa+X))>F(O(Wa)), and that I hadn't misunderstood something. And, further, to confirm that you believe that if F(W) gives the utility of a world-state for some evaluator, then F(O(W)) gives the degree to which that world-state makes that evaluator happy. Or, said more concisely: that H(O(W)) == F(O(W)) for a given observer.

Hm. So, I agree broadly that F(Wa+X)>F(Wa) implies F(O(Wa+X))>F(O(Wa)). (Although a caveat: it's certainly possible to come up with combinations of F() and O() for which it isn't true, so this is more of an evidentiary implication than a logical one. But I think that's beside our purpose here.)

H(O(W)) = F(O(W)), though, seems entirely unjustified to me. I mean, it might be true, sure, just as it might be true that F(O(W)) is necessarily equal to various other things. But I see no reason to believe it; it feels to me like an assertion pulled out of thin air. Of course, I can't really have any counterevidence, the way the claim is structured. I mean, I've certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy -- that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same -- which suggests to me that F() and H() are different functions... but you would presumably just say that I'm mistaken about one or both of those things. Which is certainly possible; I am far from incorrigible about what makes me happy, and I don't entirely understand what I believe makes the world better.

I think I have to leave it there. You are asserting an identity that seems unjustified to me, and I have no compelling reason to believe that it's true, but also no definitive grounds for dec…
Not for the Sake of Happiness (Alone)

It was confusing me, yes. I considered hedons exactly equivalent to utilons.

Then you made your excellent case, and now it no longer confuses me. I revised my definition of happiness from "reality matching the utility function" to "my perception of reality matching the utility function" - which it should have been from the beginning, in retrospect.

I'd still like to know if people see happiness as something other than my new definition, but you have helped me from confusion to non-confusion, at least regarding the presence of a distinction, if not the exact nature thereof.

TheOtherDave (3 points, 10y): (nods) Cool.

As for your proposed definition of happiness... hm. I have to admit, I'm never exactly sure what people are talking about when they talk about their utility functions. Certainly, if I have a utility function, I don't know what it is. But I understand it to mean, roughly, that when comparing hypothetical states of the world Wa and Wb, I perform some computation F(W) on each state such that if F(Wa) > F(Wb), then I consider Wa more valuable than Wb. Is that close enough to what you mean here?

And you are asserting, definitionally, that if that's true I should also expect that, if I'm fully aware of all the details of Wa and Wb, I will be happier in Wa. Another way of saying this is that if OW is the reality that I would perceive in a world W, then my happiness in Wa is F(OWa). It simply cannot be the case, on this view, that I consider a proposed state-change in the world to be an improvement, without also being such that I would be made happier by becoming aware of that state-change actually occurring. Am I understanding you correctly so far?

Further, if I sincerely assert about some state change that I believe it makes the world better, but it makes me less happy, it follows that I'm simply mistaken about my own internal state... either I don't actually believe it makes the world better, or it doesn't actually make me less happy, or both. Did I get that right? Or are you making the stronger claim that I cannot in point of fact ever sincerely assert something like that?
Delayed Gratification vs. a Time-Dependent Utility Function

Well, the situation I was referencing assumed baby-eating without the babies ever actually being sentient, but that's not relevant to the actual situation. You're saying that my expected future utility functions, in the end, are just more values in my current function?

I can accept that.

The problem now is that I can't tell what those values are. It seems there's a number N large enough that if N people were to be reconfigured to heavily value a situation and the situation was then to be implemented, I'd accept the reconfiguration. This was counterintuitive and, due to habit, feels like it should still be, but it makes a surprising amount of sense.

Dorikka (0 points, 10y): Yep, that's what I mean. I'm pretty sure that the amount of utility you lose (or gain?) through value drift is going to depend on the direction that your values drift in. For example, Gandhi would assign significant negative utility to taking a pill that made him want to kill people [http://yudkowsky.net/singularity], but he might not care if he took a pill that made him like vanilla ice cream more than chocolate ice cream.

Aside from the more obvious cases, like the murder pill above, I haven't nailed down exactly which parts of a sentience's motivational structure give me positive utility if fulfilled. My intuition says that I would care about the particular nature of someone's utility function if I knew them, and would only care about maximizing it (pretty much whatever it was) if I didn't, but this doesn't seem to be what I truly want. I consider this to be a Hard Question, at least for myself.
Not for the Sake of Happiness (Alone)

Oh! I didn't catch that at all. I apologize.

You've made an excellent case for them not being the same. I agree.

TheOtherDave (2 points, 10y): Cool. I thought it was confusing you earlier [http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/6ill], but perhaps I misunderstood.
Not for the Sake of Happiness (Alone)

That makes sense. I had only looked at the difference within "things that affect my choices", which is not a full representation of things. Could I reasonably say, then, that hedons are the intersection of "utilons" and "things of which I'm aware", or is there more to it?

Another way of phrasing what I think you're saying: "Utilons are where the utility function intersects with the territory, hedons are where the utility function intersects with the map."

TheOtherDave (3 points, 10y): I'm not sure how "hedons" interact with "utilons". I'm not saying anything at all about how they interact. I'm merely saying that they aren't the same thing.
Delayed Gratification vs. a Time-Dependent Utility Function

And if I'm "best at" creating dissonance, hindering scientific research, or some other negatively-valued thing? If I should do the thing at which I'm most effective, regardless of how it fits my utility function...

I don't know where that's going. I don't feel that's a positive thing, but that's inherent in the proposition that it doesn't fit my utility function.

I guess I'm trying to say that "wasting my life" has a negative value with a lower absolute value than "persuading humanity to destroy itself" - though oratory is definitely not my best skill, so it's not a perfect example.

shminux (-12 points, 10y): [comment hidden: score below threshold]
vi21maobk9vp (0 points, 10y): "Best at" may be considered to mean "creating most value for the given amount of effort".
Delayed Gratification vs. a Time-Dependent Utility Function

If I had some reason (say an impending mental reconfiguration to change my values) to expect my utility function to change soon and stay relatively constant for a comparatively long time after that, what does "maximizing my utility function now" look like? If I were about to be conditioned to highly-value eating babies, should I start a clone farm to make my future selves most happy or should I kill myself in accordance with my current function's negative valuation to that action?

Luke_A_Somers (0 points, 10y): Depends on a few things: Can you make the clones anencephalic, so you become neutral in respect to them? If you kill yourself, will someone else be conditioned in your place?
Dorikka (1 point, 10y): That depends: how much do you (currently) value the happiness of your future self versus the life-experience of the expected number of babies you're going to kill? If possible, it would probably be optimal to take measures that would both make your future self happy and not-kill babies, but if not, the above question should help you make your decision.
Delayed Gratification vs. a Time-Dependent Utility Function

My utility function maximises (and I think this is neither entirely nonsensical nor entirely trivial in the context) utilons. I want my future selves to be "happy", which is ill-defined.

I don't know how to say this precisely, but I want as many utilons as possible from as many future selves as possible. The problem arises when it appears that actively changing my future selves' utility functions to match their worlds is the best way to do that, but my current self recoils from the proposition. If I shut up and multiply, I get the opposite result that Eliezer does and I tend to trust his calculations more than my own.

FeepingCreature (0 points, 10y): But surely you must have some constraints about what you consider future selves -- some weighting function that prevents you from simply reducing yourself to a utilon-busybeaver.
Delayed Gratification vs. a Time-Dependent Utility Function

Thanks for pointing that out! The general questions still exist, but the particular situation produces much less anxiety with the knowledge that the two functions have some similarities.

shminux (-5 points, 10y): [comment hidden: score below threshold]
Delayed Gratification vs. a Time-Dependent Utility Function

I'm not sure what you're asking, but it seems to be related to constancy.

A paperclip maximizer believes maximum utility is gained through maximum paperclips. I don't expect that to change.

I have at various times believed:

  • Belief in (my particular incarnation of) the Christian God had higher value than lack thereof
  • Personal employment as a neurosurgeon would be preferable to personal employment as, say, a mathematics teacher
  • Nothing at all was positively valued, and the negative value of physical exertion significantly outweighed any other single value

Give…

Manfred (0 points, 10y): Okay. If you built a paperclip maximizer, told the paperclip maximizer that you would probably change its utility function in a year or two, and offered it this choice, what would it do?
Not for the Sake of Happiness (Alone)

I would not have considered utilons to have meaning without my ability to compare them in my utility function.

You're saying utilons can be generated without your knowledge, but hedons cannot? Does that mean utilons are a measure of reality's conformance to your utility function, while hedons are your reaction to your perception of reality's conformance to your utility function?

TheOtherDave (3 points, 10y): I'm saying that something can make the world better without affecting me, but nothing can make me happier without affecting me. That suggests to me that the set of things that can make the world better is different from the set of things that can make me happy, even if they overlap significantly.
Not for the Sake of Happiness (Alone)

The hedonic scores are identical and, as far as I can tell, the outcomes are identical. The only difference is if I know about the difference - if, for instance, I'm given a choice between the two. At that point, my consideration of 2 has more hedons than my consideration of 1. Is that different from saying 2 has more utilons than 1?

Is the distinction perhaps that hedons are about now while utilons are overall?

TheOtherDave (4 points, 10y): Talking about "utilons" and "hedons" implies that there exists some X such that, by my standards, the world is better with more X in it, whether I am aware of X or not. Given that assumption, it follows that if you add X to the world in such a way that I don't interact with it at all, it makes the world better by my standards, but it doesn't make me happier. One way of expressing that is that X produces utilons but not hedons.
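The X-added-but-never-observed case can be put in a small toy model (entirely illustrative; the thread defines no formalism beyond F, O, and H): utilons are a valuation applied to the actual world state, hedons the same valuation applied only to the slice of the world the agent perceives.

```python
# Toy model of the utilon/hedon split: F is applied to the world itself,
# H to the agent's observation of it. Names and numbers are illustrative.

def F(world):
    # World-betterness by this agent's standards: one utilon per unit of x.
    return world.get("x", 0)

def O(world, aware_of):
    # Observation: the agent perceives only the parts it interacts with.
    return {k: v for k, v in world.items() if k in aware_of}

def H(observed):
    # Hedons: the same valuation, but over the perceived world only.
    return F(observed)

before = {"x": 1}
after = {"x": 2}   # X added somewhere the agent never looks

assert F(after) > F(before)                       # more utilons
assert H(O(after, set())) == H(O(before, set()))  # no extra hedons
```

Note that this sketch takes H(O(W)) to equal F(O(W)), which is exactly the identity disputed in the exchange above; making H a genuinely different function of the observation would model the opposing position instead.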
Timeless Causality

Hm. This is true. Perhaps it would be better to say "Perceiving states in opposite-to-conventional order would give us reason to assume probabilities entirely consistent with considering a causality in opposite-to-conventional order."

Unless I'm missing something, the only reason to believe causality goes in the order that places our memory-direction before our non-memory direction is that we base our probabilities on our memory.

Timeless Causality

Well, Eliezer seems to be claiming in this article that the low-to-high direction is more valid than the high-to-low, but I don't see how they're anything but both internally consistent.

Delayed Gratification vs. a Time-Dependent Utility Function

I can only assume it wouldn't accept. A paperclip maximizer, though, has much more reason than I do to assume its utility function would remain constant.

Manfred (2 points, 10y): Constant if what?
Timeless Causality

I've read this again (along with the rest of the Sequence up to it) and I think I have a better understanding of what it's claiming. Inverting the axis of causality would require inverting the probabilities, such that an egg reforming is more likely than an egg breaking. It would also imply that our brains contain information on the 'future' and none on the 'past', meaning all our anticipations are about what led to the current state, not where the current state will lead.

All of this is internally consistent, but I see no reason to believe it gives us a "real…

dlthomas (0 points, 10y): I don't think this is a coherent notion. If we "invert the probabilities" in some literal sense, then yes, the egg reforming is more likely than the egg breaking, but still more likely is the egg turning into an elephant.
0fubarobfusco10yWhat do you want out of a "real" direction of causality, other than the above?
Not for the Sake of Happiness (Alone)

The basic point of the article seems to be "Not all utilons are (reducible to) hedons", which confuses me from the start. If happiness is not a generic term for "perception of a utilon-positive outcome", what is it? I don't think all utilons can be reduced to hedons, but that's only because I see no difference between the two. I honestly don't comprehend the difference between "State A makes me happier than state B" and "I value state A more than state B". If hedons aren't exactly equivalent to utilons, what are they... (read more)

4DSimon10yConsider the following two world states: 1. A person important to you dies. 2. They don't die, but you are given a brain modification that makes it seem to you as though they had. The hedonic scores for 1 and 2 are identical, but 2 has more utilons if you value your friend's life.
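DSimon's two world states can be made concrete with a toy sketch. All the numbers and function names here are my own invention, just to illustrate the distinction: the hedonic score depends only on what you experience, while the utility score can also depend on facts about the world you never perceive.

```python
# Toy sketch (made-up magnitudes) of DSimon's two world states:
# 1. the friend dies; 2. the friend lives, but you're modified to believe they died.

def hedons(believes_friend_dead):
    # Hedonic score depends only on subjective experience.
    return -10 if believes_friend_dead else 0

def utilons(friend_alive, believes_friend_dead):
    # Utility can also depend on the world itself: valuing the friend's
    # actual life adds a term the hedonic score never sees.
    life_value = 20 if friend_alive else 0
    return hedons(believes_friend_dead) + life_value

state1 = utilons(friend_alive=False, believes_friend_dead=True)
state2 = utilons(friend_alive=True, believes_friend_dead=True)

# Identical hedons, different utilons:
assert hedons(True) == hedons(True)
assert state2 > state1
```

The point of the sketch is only that the two scores can come apart: if the `life_value` term were zero (i.e., if all you valued were your own experiences), the two states would be indistinguishable.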
The Conscious Sorites Paradox

I don't see why you need to count the proportional number of Eliezers at all. I'm guessing the reason you expect an ordered future isn't because of the relation of {number of Boltzmann Eliezers}/{number of Earth Eliezers} to 1. It seems to me you expect an orderly future because you (all instances of you and thus all instances of anything that is similar enough to you to be considered 'an Eliezer') have memories of an orderly past. These memories could have sprung into being when you did a moment ago, yes, but that doesn't give you any other valid way to c... (read more)

Circular Altruism

The issue with polling 3^^^3 people is that once they are all aware of the situation, it's no longer purely (3^^^3 dust specks) vs (50yrs torture). It becomes (3^^^3 dust specks plus 3^^^3 feelings of altruistically having saved a life) vs (50yrs torture). The reason most of the people polled would accept the dust speck is not because their utility of a speck is more than 1/3^^^3 their utility of torture. It's because their utility of (a speck plus feeling like a lifesaver) is more than their utility of (no speck plus feeling like a murderer).
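The claim in that last sentence is just an inequality between two bundled utilities. Here's a toy sketch with invented magnitudes (the specific numbers are mine, not anything from the original thought experiment), showing how the altruism term, not the speck term, drives each polled person's answer:

```python
# Illustrative (made-up) per-person utility terms for the dust-speck poll.
u_speck = -1e-9        # disutility of one dust speck
u_lifesaver = 5.0      # feeling of having altruistically spared someone torture
u_murderer = -50.0     # feeling of having condemned someone to torture

# Once everyone knows the setup, each person is really choosing between:
accept_speck = u_speck + u_lifesaver   # speck plus feeling like a lifesaver
refuse_speck = 0.0 + u_murderer        # no speck plus feeling like a murderer

assert accept_speck > refuse_speck  # so people accept the speck

# The bare speck term is negligible in the decision; the altruism terms dominate:
assert abs(u_speck) < abs(u_lifesaver - u_murderer)
```

So a poll of the 3^^^3 people measures the bundled comparison, not the bare (speck vs. torture/3^^^3) comparison the original dilemma asks about.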

Religion's Claim to be Non-Disprovable

I may misunderstand your meaning of "warm fuzzies", but I find I obtain significant emotional satisfaction from mathematics, music, and my social interactions with certain people. I see no reason to believe that people receive some important thing from the fundamental aspects of religion that cannot be obtained in less detrimental ways.

Are Your Enemies Innately Evil?

I acknowledge the legitimacy of demanding I google the phrase before requesting another link and will attempt to increase the frequency with which that's part of my response to such an occasion, but maintain the general usefulness of pointing out a broken link in a post, especially one that's part of a Sequence.

6pedanterrific10yI was being rather passive-aggressive, wasn't I? I apologize. I find it's a generally useful policy, yes. On this we agree.
Are Your Enemies Innately Evil?

The Jesus Camp link is broken. Does anyone have an alternative? I don't know what Eliezer is referencing there.

6taelor10yJesus Camp is a documentary about a camp for fundamentalist Christian youths. The first part can be seen here [http://www.youtube.com/watch?v=rBv8tv62yGM] (check the related videos for the subsequent parts). Alternately, if you don't have time to watch the full movie, this [http://www.youtube.com/watch?v=LACyLTsH4ac] should give you a general idea.
2thomblake10yI believe the link initially pointed to a trailer for the movie Jesus Camp [http://en.wikipedia.org/wiki/Jesus_Camp].
Are Your Enemies Innately Evil?

The ideal point of a police system (and, by extension, a police officer) is to choose force in such a way as to "minimize the total sum of death".

It appears that you believe that the current police system is nothing like that, while Eliezer seems to believe it is at least somewhat like that. While I don't have sufficient information to form a realistic opinion, it seems to me highly improbable that 95% of police actions are initiations of force or that every police officer chooses every day to minimize total sum of death.

The largest issue here is... (read more)

Timeless Causality

I'm not sure I understand, but are you saying there's a reason to view a progression of configurations in one direction over another? I'd always (or at least for a long time) essentially considered time a series of states (I believe I once defined passage of time as a measurement of change), basically like a more complicated version of, say, the graph of y=ln(x). Inverting the x-axis (taking the mirror image of the graph) would basically give you the same series of points in reverse, but all the basic rules would be maintained - the height above the x-axis... (read more)

0momothefiddler10yI've read this again (along with the rest of the Sequence up to it) and I think I have a better understanding of what it's claiming. Inverting the axis of causality would require inverting the probabilities, such that an egg reforming is more likely than an egg breaking. It would also imply that our brains contain information on the 'future' and none on the 'past', meaning all our anticipations are about what led to the current state, not where the current state will lead. All of this is internally consistent, but I see no reason to believe it gives us a "real" direction of causality. As far as I can tell, it just tells us that the direction we calculate our probabilities is the direction we don't know. Going from a low-entropy universe to a high-entropy universe seems more natural, but only because we calculate our probabilities in the direction of low-to-high entropy. If we based our probabilities on the same evidence perceived the opposite direction, it would be low-to-high that seemed to need universes discarded and high-to-low that seemed natural. ...right?
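The "invert the probabilities" idea in the comment above can be made concrete with a toy two-state chain ("intact egg" vs. "broken egg"). The numbers below are made up purely for illustration: given the forward conditionals and a prior over the earlier state, Bayes' theorem yields the reverse-direction conditionals, and both directions describe the same joint distribution.

```python
# Toy sketch: computing probabilities "in the other direction" via Bayes.
# Forward conditional: P(broken at t+1 | intact at t), with made-up numbers.
p_break = 0.3
p_intact_t = 0.9  # prior: P(intact at t)

# Marginal for the later state (a broken egg stays broken):
p_broken_t1 = p_intact_t * p_break + (1 - p_intact_t) * 1.0

# Reverse conditional via Bayes: P(intact at t | broken at t+1)
# -- the "egg reforming" direction.
p_reverse = p_intact_t * p_break / p_broken_t1

# Both directions are internally consistent descriptions of the same
# joint distribution; the math alone doesn't privilege either one.
assert 0.0 < p_reverse < 1.0
```

This is only the consistency half of the comment's point: the reversed conditionals exist and are well-defined. Whether one direction is more "real" is exactly the question left open above.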