I agree, intuition is very difficult here. In this specific scenario, I'd lean towards saying yes - it's the same person with a physically different body and brain, so I'd like to think that there is some continuity of the "person" in that situation. My brain isn't made of the "same atoms" it was when I was born, after all. So I'd say yes. In fact, in practice, I would definitely assume said robot and software to have moral value, even if I wasn't 100% sure.

However, if the original brain and body weren't destroyed, and we now had two apparently identical individuals claiming to be people worthy of moral respect, then I'd be more dubious. I'd be extremely dubious of creating twenty robots running identical software (which seems entirely possible with the technology we're supposing) and assigning them the moral status of twenty people. "People", of the sort deserving of rights and dignity and so forth, shouldn't be the sort of thing that can be arbitrarily created through a mechanical process. (And yes, human reproduction and growth is a mechanical process, so there's a problem there too.)

Actually, come to think of it... if you have two copies of software (either electronic or neuron-based) running on two separate machines, but it's the same software, could they be considered the same person? After all, they'll make all the same decisions given similar stimuli, and thus are using the same decision process.

What issues does your best atheist theory have?

My biggest problem right now is all the stuff about zombies, and how it implies that, in the absence of some kind of soul, a computer program or other entity capable of the same reasoning processes as a person is morally equivalent to a person. I agree with every step of the logic (I think; it's been a while since I last read the sequence), but I end up applying it in the other direction. I don't think a computer program can have any moral value; therefore, without the presence of a soul, people also have no moral value. So I must either accept a lack of moral value in humanity (both distasteful and unlikely), or accept the presence of something, let's call it a soul, that makes people worthwhile (also unlikely). I'm leaning towards the latter, both as the less unlikely option and as the one that produces the most harmonious behaviour from me.

It's a work in progress. I've been considering the possibility that there is exactly one soul in the universe (since there's no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that's a low-probability hypothesis for now.

It's about how, if you're attacking somebody's argument, you should attack all of the bad points of it simultaneously, so that it doesn't look like you're attacking one and implicitly accepting the others. With any luck, it'll be up tonight.

Hi, I've been lurking on Less Wrong for a few months now, making a few comments here and there, but never got around to introducing myself. Since I'm planning out an actual post at the moment, I figured I should tell people where I'm coming from.

I'm a male 30-year-old optical engineer in Sydney, Australia. I grew up in a very scientific family and pretty much always assumed I had a scientific career ahead of me; after a couple of false starts, that's now happened, and I couldn't ask for a better job.

Like many people, I came to Less Wrong from TVTropes via Methods of Rationality. Since I started reading, I've found it quite helpful in organising my own thoughts, casting aside unhelpful arguments, and examining aspects of my life and beliefs that don't stand up under scrutiny.

In particular, I've found that reading Less Wrong has allowed, nay forced, me to examine the logical consistency of everything I say, write, hear and read, which lets me be a lot more efficient in discussions, both by policing my own speech and by being more usefully critical of others' points (rather than making arguments that don't go anywhere).

While I was raised in a substantively atheist household, my current beliefs are theist. The precise nature of these beliefs has shifted somewhat since I started reading Less Wrong, as I've discarded the parts that are inconsistent or even less likely than the others. There are still difficulties with my current model, but they're smaller than the issues I have with my best atheist theory.

I've also had a surprising amount of success in introducing the logical and rationalist concepts from Less Wrong to one of my girlfriends, which is all the more impressive considering her dyscalculia. I'm really pleased that this site has given me the tools to do that. It's now really easy to short-circuit what might otherwise become an argument by showing that it's merely a dispute about definitions. It's this sort of success that has kept me reading the site these past months, and I hope I can contribute to that success for other people.

Assuming rational agents with a reasonable level of altruism (by which I mean incorporating the needs of other people and future generations into their own utility functions, to roughly the degree we expect "decent people" to do today)...

If such a person figures that getting rid of the Nazis or the Daleks or whoever the threat of the day is, is worth a tiny risk of bringing about the end of the world, and their reasoning is completely rational and valid and altruistic (I won't say "unselfish" for reasons discussed elsewhere in this thread) and far-sighted (not discounting future generations too much)...

... then they're right, aren't they?

If the guys behind the Trinity test weighed the negative utility of the Axis taking over the world, presumably with the end result of boots stamping on human faces forever, and determined that the 3/1,000,000 chance of ending all human life was worth preventing this future from coming to pass, then couldn't Queen Victoria perform the same calculations, and conclude "Good heavens. Nazis, you say? Spreading their horrible fascism in my empire? Never! I do hope those plucky Americans manage to build their bomb in time. Tiny chance of destroying the world? Better they take that risk than let fascism rule the world, I say!"

If the utility calculations performed regarding the Trinity test were rational, altruistic and reasonably far-sighted, then they would have been equally valid if performed at any other time in history. If we apply a future discounting factor of e^-kt, then that factor applies equally to every element of the utility calculation. If the net utility of the test was positive in 1945, then it should have been positive at all points in history before then. If President Truman (rationally, altruistically, far-sightedly) approved of the test, then so should Queen Victoria, Julius Caesar and Hammurabi have, given sufficient information. Either the utility calculations for the test were right, or they weren't.
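
To spell out the factoring step (just a sketch, using the e^-kt notation above; the u_i are placeholder utilities for events at calendar dates t_i, evaluated from date t_0):

$$U_{\text{net}}(t_0) = \sum_i e^{-k(t_i - t_0)}\, u_i = e^{k t_0} \sum_i e^{-k t_i}\, u_i$$

Since e^(k t_0) is positive for any evaluation date, every evaluator from Hammurabi to Truman gets the same sign for the net utility; only its magnitude changes.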

If they were right, then the problem stops being "Oh no, future generations are going to destroy the world even if they're sensible and altruistic!", and starts being "Oh no, a horrible regime might take over the world! Let's hope someone creates a superweapon to stop them, and damn the risk!"

If they were wrong, then the assumption that the ones performing the calculation were rational, altruistic and far-sighted is wrong. Taking these one by one:

1) The world might be destroyed by someone making an irrational decision. No surprises there. All we can do is strive to raise the general level of rationality in the world, at least among people with the power to destroy the world.

2) The world might be destroyed by someone with only his own interests at heart. So basically we might get stuck with Dr Evil. We can't do a lot about that either.

3) The world might be destroyed by someone acting rationally and altruistically for his own generation, but who discounts future generations too much (i.e. his value of k in the discounting factor is much larger than ours). This seems to be the crux of the problem. What is the "proper" value of k? It should probably depend on how much longer humans are going to be around, for reasons unrelated to the question at hand. If the world really is going to end in 2012, then every dollar spent on preventing global warming should have been spent on alleviating short-term suffering all over the world, and the proper value for k is very large. If we really are going to be here for millions of years, then we should be exceptionally careful with every resource (both material and negentropy-based) we consume, and k should be very small. Without this knowledge, of course, it's very difficult to determine what k should be.
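
As a rough illustration of how much hangs on k, here's a throwaway sketch in Python; the horizons and discount rates are made up, and only the shape of the dependence matters:

```python
import math

def discounted_weight(k, t_start, t_end):
    """Integral of e^(-k*t) from t_start to t_end: the weight a discounter
    with rate k gives to that slice of the future."""
    if k == 0:
        return t_end - t_start
    return (math.exp(-k * t_start) - math.exp(-k * t_end)) / k

# Hypothetical remaining lifespans for humanity, and per-year discount rates.
for horizon in (100, 1_000_000):
    for k in (0.05, 0.001, 0.00001):
        near = discounted_weight(k, 0, 100)        # the next century
        total = discounted_weight(k, 0, horizon)   # everything we have left
        print(f"T = {horizon:>9} yr, k = {k:<7}: next century gets "
              f"{near / total:.1%} of all discounted value")
```

With a large k the next century swallows almost all the discounted value, so spending on immediate suffering wins; with a tiny k and a long horizon, the far future dominates and existential risk becomes the whole ballgame.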

That may be the way to avoid a well-meaning scientist wiping out all human life - find out how much longer we have as a species, and then campaign that everyone should live their lives accordingly. Then, the only existential risks that would be implemented are the ones that are actually, seriously, truly, incontrovertibly, provably worth it.

Thinking about this in commonsense terms is misleading, because we can't imagine the difference between 8x utility and 16x utility

I can't even imagine doubling my utility once, if we're only talking about selfish preferences. If I understand vNM utility correctly, then a doubling of my personal utility is a situation which I'd be willing to accept a 50% chance of death in order to achieve (assuming that my utility is scaled so that U(dead) = 0; without fixing a zero point, we can't talk about doubling utility). Given my life at the moment (apartment with mortgage, two chronically ill girlfriends, decent job with unpleasantly long commute, moderate physical and mental health), and thinking about the best possible life I could have (volcano lair, catgirls), I wouldn't be willing to take that bet. Intuition has already failed me on this one. If Omega can really deliver on his promise, then either he's offering a lifestyle literally beyond my wildest dreams, or he's letting me include my preferences for other people in my utility function, in which case I'll probably have cured cancer by the tenth draw or so, and I'll run into the same breakdown of intuition after about seventy draws, by which time everyone else in the world should have their own volcano lairs and catgirls.
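
For what it's worth, the equivalence I'm leaning on there is just this (with the U(dead) = 0 normalisation, and U as my current lifetime utility): a gamble that delivers the doubled life with probability p and kills me otherwise satisfies

$$p \cdot 2U + (1 - p) \cdot 0 \ge U \iff p \ge \tfrac{1}{2},$$

so it beats the status quo exactly when the chance of death is 50% or less, which is why "would I take that bet?" is the relevant intuition pump.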

With the problem as stated, any finite number of draws is the rational choice, because the proposed utility of N draws outweighs the risk of death, no matter how high N is. The probability of death is always less than 1 for a finite number of draws. I don't think that considering the limit as N approaches infinity is valid, because every time you have to decide whether or not to draw a card, you've only drawn a finite number of cards so far. Certainty of death also occurs in the same limit as infinite utility, and infinite utility has its own problems, as discussed elsewhere in this thread. It might also leave you open to Pascal's Scam - give me $5 and I'll give you infinite utility!
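
To make "any finite number of draws looks good" concrete, here's a quick sketch; the per-draw survival probability is my own assumption, not something from the original post:

```python
# Assume each card shows a star with probability p_star (a skull otherwise),
# death sets utility to 0, and each star doubles lifetime utility.
p_star = 0.9          # hypothetical chance of surviving a single draw
base_utility = 1.0    # current lifetime utility, in arbitrary units

for n_draws in (1, 10, 50, 100):
    p_survive = p_star ** n_draws
    expected = p_survive * base_utility * 2 ** n_draws   # utility is 0 if dead
    print(f"{n_draws:>3} draws: P(survive) = {p_survive:.2e}, "
          f"E[utility] = {expected:.2e}")
```

As long as the doubling outpaces the per-draw risk (2 * p_star > 1), the expected utility keeps climbing even as the chance of walking away alive collapses towards zero, which is exactly the conflict between the maths and the intuition.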

But we have a mathematical theory about rationality. Just apply that, and you find the results seem unsatisfactory.

I agree - to keep drawing until you draw a skull seems wrong. However, to say that something "seems unsatisfactory" is a statement of intuition, not mathematics. Our intuition can't weigh the value of exponentially increasing utility against the cost of an exponentially diminishing chance of survival, so it's no wonder that the mathematically derived answer doesn't sit well with intuition.

"Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

Sorry if this question has already been answered (I've read the comments but probably didn't catch all of it), but...

I have a problem with "double your utility for the rest of your life". Are we talking about utilons per second? Or do you mean "double the utility of your life", or just "double your utility"? How does dying a couple of minutes later affect your utility? Do you get the entire (now doubled) utility for those few minutes? Do you get pro rata utility for those few minutes divided by your expected lifespan?

Related to this is the question of the utility penalty of dying. If your utility function includes benefits for other people, then your best bet is to draw cards until you die, because the benefits to the rest of the universe will massively outweigh the cost of your inevitable death.

If, on the other hand, death sets your utility to zero (presumably because your utility function is strictly only a function of your own experiences), then... yeah. If Omega really can double your utility every time you win, then I guess you keep drawing until you die. It's an absurd (but mathematically plausible) situation, so the absurd (but mathematically plausible) answer is correct. I guess.

Perfect decision-makers, with perfect information, should always be able to take the optimal outcome in any situation. Likewise, perfect decision-makers with limited information should always be able to choose the outcome with the best expected payoff under strict Bayesian reasoning.

However, when the actor's decision-making process becomes part of the situation under consideration, as happens when Katemega scrutinises Joe's potential for leaving her in the future, then the perfect decision-maker is only able to choose the optimal outcome if he is also capable of perfect self-modification. Without that ability, he's vulnerable to his own choices and preferences changing in the future, which he can't control right now.

I'd also like to draw a distinction between a practical pre-commitment (of the form "leaving this marriage will cause me -X utilons due to financial penalty or cognitive dissonance for breaking my vows"), and an actual self-modification to a mind state where "I promised I would never leave Kate, but I'm going to do it anyway now" is not actually an option. I don't think humans are capable of the latter. An AI might be, I don't know.

Also, what about decisions Joe made in the past (for example, deciding when he was eighteen that there was no way he was ever going to get married, because being single was too much fun)? If you want your present state to influence your future state strongly, you have to accept the influence of your past state on your present state just as strongly, and you can't just say "Oh, but I'm older and wiser now" in one instance but not the other.

Without the ability to self-modify into a truly sincere state wherein he'll never leave Kate no matter what, Joe can't be completely sincere, and (by the assumptions of the problem) Kate will sense this and his chances of his proposal being accepted will diminish. And there's nothing he can do about that.

It's an interesting situation, and I can see the parallel to Newcomb's Problem. I'm not certain that it's possible for a person to self-modify to the extent that he will never leave his wife, ever, regardless of the very real (if small) doubts he has about the relationship right now. I don't think I could ever simultaneously sustain the thoughts "There's about a 10% chance that my marriage to my wife will make me very unhappy" and "I will never leave her no matter what". I could make the commitment financially - that, even if the marriage turns awful, I will still provide the same financial support to her - but not emotionally. If Joe can modify his own code so that he can do that, that's very good of him, but I don't think many people could do it, not without pre-commitment in the form of a marital contract with large penalties for divorce, or at least a very strong mentality that once the vows are said, there's no going back.

Perhaps the problem would be both more realistic and more mathematically tractable if "sincerity" were rated between 0 and 1, rather than being a simple on/off state? If 1 is "till death do us part" and 0 is "until I get a better offer", then 0.9 could be "I won't leave you no matter how bad your cooking gets, but if you ever try to stab me, I'm out of here". Then Kate's probability of accepting the proposal could be a function of sincerity, which seems a much more reasonable position for her.
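
Here's a toy version of what I mean; every number is hypothetical except the -125 and the 10% chance of unhappiness from the problem as stated:

```python
# Toy model of continuous sincerity: Kate's chance of accepting rises with
# Joe's sincerity s in [0, 1], but a more sincere Joe is also more locked in
# if the marriage turns bad. All payoffs are illustrative.
def expected_utility(s,
                     u_happy=100.0,      # utility of a good marriage (made up)
                     u_trapped=-125.0,   # utility of staying in a bad one
                     u_single=0.0,       # utility if the proposal is refused
                     p_bad=0.1):         # chance the marriage turns bad
    p_accept = s ** 2                    # assumed: acceptance rises with sincerity
    u_if_bad = (1 - s) * u_single + s * u_trapped
    return (p_accept * ((1 - p_bad) * u_happy + p_bad * u_if_bad)
            + (1 - p_accept) * u_single)

for s in (0.0, 0.5, 0.9, 1.0):
    print(f"sincerity {s:.1f}: expected utility {expected_utility(s):+7.1f}")
```

Under these made-up numbers more sincerity always wins; make the trapped outcome bad enough, or the acceptance curve flat enough, and the optimum shifts to partial sincerity.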

Could this be an example where rationality and self-awareness really do work against an actor? If Joe were less self-aware, he could propose with complete sincerity, having not thought through the 10% chance that he'll be unhappy. If he does become unhappy, he'd then feel justified in this totally unexpected change inducing him to leave. The thing impeding Joe's ability to propose with full sincerity is his awareness of the possibility of future unhappiness.

Also, it's worth pointing out that, by the formulation of the original problem, Kate expects Joe to stay with her even if she is causing him -125 megautilons of unhappiness by forcing him to stay. That seems just a touch selfish. This is something they should talk about.

Talking with people that do not agree with you as though they were people. That is taking what they say seriously and trying to understand why they are saying what they say. Asking questions helps. Also, assume that they have reasons that seem rational to them for what they say or do, even if you disagree.

I think this is a very important point. If we can avoid seeing our political enemies as evil mutants, then hopefully we can avoid seeing our conversational opponents as irrational mutants. Even after discounting the possibility that you, personally, might be mistaken in your beliefs or reasoning, don't assume that your opponent is hopelessly irrational. If you find yourself thinking, "How on earth can this person be so wrong!", then change that exclamation mark into a question mark and actually try to answer that question.

If the most likely failure mode in your opponent's thoughts can be traced back to a simple missing fact or one of the more tame biases, then supply the fact or explain the bias, and you might be able to make some headway.

If you trace the fault back to a fundamental belief - by which I mean one that can't be changed over the course of the conversation - then bring the conversation to that level as quickly as possible, point out the true level of your disagreement, and say something to the effect of, "Okay, I see your point, and I understand your reasoning, but I'm afraid we disagree fundamentally on the existence of God / the likelihood of the Singularity / the many-worlds interpretation of quantum mechanics / your support for the Parramatta Eels[1]. If you want to talk about that, I'm totally up for that, but there's no point discussing religion / cryonics / wavefunction collapse / high tackles until we've settled that high-level point."

There are a lot of very clever and otherwise quite rational people out there who have a few... unusual views on certain topics, and discounting them out of hand is cutting yourself off from their wisdom and experience, and denying them the chance to learn from you.

[1] Football isn't a religion. It's much more important than that.
