All of Usul's Comments + Replies

Stupid Questions, 2nd half of December

"I'm curious whether you're willing to chomp down on bullets."

Since you're happy to go off topic, and your other posts suggest you've definitely got a dog in this fight already, would you agree or disagree with the following statement:

Based on things I've read on the internet (Cochrane) (not to be confused with the Cochrane Library that actually produces meta-analyses, just some guy named Cochrane who can't land a tenure track job teaching physics) regarding brain size and IQ test results, I believe that it is more probable than not that Black ... (read more)

You're much confused in the beginning, but it will take too long to sort you out, so I'll cut to the chase. I believe that blacks (the Sub-Saharan genetic pool) have a lower average IQ than whites (the European genetic pool), by about one standard deviation (15 points). The jury is not out on Asians -- East Asians, specifically Chinese Han, have an average IQ higher than whites, by about 10 points, if I remember correctly. Moreover, Ashkenazi Jews also have a higher average IQ than whites.

You seem to be... limited in your understanding of how people use words.

Woot! I think it's the first time I've been called that. It's so new and exciting! Tell me about myself, I'm all ears.
If you look at genetics, the difference between different parts of Africa is greater than the difference between different non-African groups. If you think that Whites and Asians are different races but all Blacks are the same race, then your concept of race is cultural and not based on biology.
I think there are a number of people on LW who will dispute that; who will say something like this. "Yes, there are fuzzy popular uses of the word and if you take them too seriously you will say silly things. But it is also the case that there are important genetic differences between human subpopulations, and that these correlate to some extent with those fuzzy popular ideas about race." That seems to me to be a position that is not at all refuted by saying that popular use of "race" is fuzzy and that the things commonly called "races" don't correspond to well defined biological groupings. (It might be refuted by other means, but that would be more work.)
Open Thread, January 4-10, 2016

"the idea that false memories got planted is uncontroversial history"

Certainly, but is this a significant concern for the OP at this time, such that it bears mention in a thread in which he is turning to this community seeking help with a mental health problem? "Dangerous territory" is a strong turn of phrase. I don't know the answer, but I would need evidence that p(damage from discouraging needed help) < p(damage from memory implantation in 2015). Would you mention Tuskegee if he was seeking help for syphilis? Facilitated communication if he was sending an aphasic child to a Speech Language Pathologist? Just my opinion.

This community is not "expert help" for a mental health problem in the sense that people here are trained to deal with the issue in a way that doesn't produce false memories. And that's not what he's doing anyway: in this post he doesn't speak about going to an expert to get help. He instead speaks about acting based on a theory about shame he read on the internet. Clarity has spoken in the past about having seen a psychologist, and I don't argue that he shouldn't.
Stupid Questions, 2nd half of December

"So, sure, lets' put the idea of race to bed and start with killing affirmative action. You're good with that?"

This is the point where I say "politics is the mind killer" and discount all of your politically charged conclusions, then?

"Have you actually seen Somalis? They do not look like the stereotypical African blacks at all."

My point exactly. Yet they are universally considered "black" by people in your and my culture because of the arbitrary (which word I do mean quite literally) choice to see skin color as one ... (read more)

Not necessarily, but I'm curious whether you're willing to chomp down on bullets. I am not particularly attached to the strawful "popular" ideas of race that you are so fond of skewering. But are you willing to admit that large groups of humans can be significantly different on the "genetic basis"? The issue is bias, incentives, credibility, trust. "Some recourse" is different from "defer to the experts whatever they say". I am not a fan of high-priesthood treatment of science.
Stupid Questions, 2nd half of December

So there exists a Pure Caucasian, a Pure Mongoloid, and a Pure Negroid out there? Can you identify them? Can you name a rational basis for those morphological qualities by which you know them? Is it a coincidence that the qualities you have chosen coincide perfectly with those that were largely developed by bias-motivated individuals living in Europe, Australia, and North America over the past few centuries? Why not back hair, toe length, presence of the palmaris longus muscle, renal vein anatomy, position of the sciatic nerve relative to the piriformis muscle? Am... (read more)

Open Thread, January 4-10, 2016

"That's dangerous territory. Quite a lot of people got talked by their therapists into having false memories of abuse."

I would want to have a hell of a lot of evidence showing a clear statistically significant problem along these lines before I attempted to discourage a person from seeking expert help with a self-defined mental health problem.

Nothing I said is about discouraging Clarity from seeking out an expert for mental health. A well-trained expert should know what creates false memories and be aware of the dangers. From my perspective, the idea that false memories got planted is uncontroversial history, taught in mainstream psychology classes.
Stupid Questions, 2nd half of December

"Social science academics are very skewed politically." So shall we discount any concept of expertise based solely on our biases towards the suspected biases of others based on their reported political affiliations? I don't have the time to get my own PhD in every subject. I don't claim they have the gospel truth, but, as I said, it's a good place to start, from which a cursory examination of geographic population variations pretty much puts the idea of race to bed with very short work.

Tabooing race, I think your paraphrasing doesn't quite ... (read more)

I don't know about the concept of expertise, but yes, I will certainly discount (which is different from discard) politically charged conclusions by those biased others. Incentives matter, and publishing politically incorrect results is usually a career-damaging move. Especially if you don't have tenure, when it could easily be a career-ending move.

I disagree, but in the sphere of rights I generally favour colour-blind solutions. So, sure, let's put the idea of race to bed and start with killing affirmative action. You're good with that?

They are not "arbitrary", of course, but who are you arguing against? If your point is that popular usage of the word "race" is fuzzy and not rigorous, sure, but no one contests that. I think that the real point of this conversation is about useful classifications of people and, in particular, about the real underlying differences between large genetically similar groups of people.

I am not so sure of that. Have you actually seen Somalis? They do not look like the stereotypical African blacks at all.

One of the big ideas underlying the culture of this site is that truth is not necessarily binary and that you can change your beliefs in whether something is true by degrees instead of oscillating between "this is complete nonsense" and "this is obviously correct". You don't need to "wholeheartedly accept", but you should update, to use a local expression.
Your transhuman copy is of questionable value to your meat self.

I believe there is a functional definition of amnesia: loss of factual memory, with life skills remaining intact. I guess I would call what you are calling a memory wipe a "brain wipe", and what you are calling memory "total brain content". If a brain is wiped of all content in the forest, is Usul's idea of consciousness spared? No idea. Total brain reboot? I'd say yes, and I'd call that as good as dead, I think.

I would say probably yes to the text only question. Again, loss of factual memory. But I don't rate that as a reliable or valid test in this context.

Stupid Questions, 2nd half of December

Should one value the potential happiness of theoretical future simulated beings more than a certain decline in happiness for currently existing meat beings which will result as soon as the theoretical become real? Should one allow for absurdly large populations if the result is absurd morality?

The promise of countless simulated beings of equal moral value to meat beings, and who can be more efficiently cared for than meat, seems to make the needs and wants of simulated beings de facto overrule the needs and wants of meat beings (as well as some absurdly l... (read more)

Stupid Questions, 2nd half of December

When the relevant experts, Anthropologists, say that the concept of race is a social construct with no basis in biological fact they aren't just bowing to some ivory tower overlord of political correctness. We would do well to consider their expertise as a starting point in any such inquiry.

Start anywhere on a map of the Eastern Hemisphere and trace what the people look like in any geographic area relative to the regions beside them and then consider why the term "race" has any meaning. sami, swede, finn, rus, tatar, khazak, turk, kurd, arab, ber... (read more)

Height positively correlates with IQ and foot length is a very good proxy for height.
And how do you know that? Social science academics are very skewed politically.

I don't think AmagicalFishy specified the number of races. In common usage "race" is a fuzzy term, and the number of races has historically varied from two (us and barbarians) to the traditional European four (white, black, yellow, and red) to many.

It might be useful to taboo "race" in this discussion. The question then becomes "Do genetically similar large groups of people have different distributions/frequencies/averages of certain qualities of interest?" and the answer is, of course, "Depends on what you're interested in, but often yes". For example, IQ tests have been administered to a lot of people of different genetic backgrounds and of different cultures. The picture is diverse, but there are clear patterns.
Your transhuman copy is of questionable value to your meat self.

I see factual memory as a highly changeable data set that has very little to do with "self". As I understand it (not an expert in neuroscience or psychiatry, but experience working with neurologically impaired people) the sort of brain injuries which produce amnesia are quite distinct from those that produce changes in personality, as reported by significant others, and vice versa. In other words, you can lose the memories of "where you came from" and still be recognized as very much the same person by those who knew you, while you ca... (read more)

Yes or no: will those who knew them be able to pick them out blind out of a group, going only on text-based communication? If not, what do you mean by recognize? (If yes, I'll be surprised and will need to reevaluate this.)

The officer can't work if they're completely amnesiac. They can't do much of anything, in fact.

As to your main point: it's possible that personality changes remain after memory loss, but those personalities are themselves caused by experiences and memories. I suppose I was assuming that a memory wipe would wash away any recognizable personality. I still do. The kinds of amnesia you're referring to presumably leave traces of the memory somewhere in the brain, which then affect the brain's outputs. Unless we can access the brain directly and wipe it ourselves, we can't guarantee everything was forgotten, and it probably does linger on in the subconscious; so that's not the same as an actual memory wipe.
Your transhuman copy is of questionable value to your meat self.

Anatomical location meaning neurons in the brain. Not necessarily a discrete brain organelle. To deny consciousness an anatomical location in the brain is to say it arises from something other than brain function. Are you supporting some sort of supernatural theory of consciousness?

No, I am saying that consciousness - like a website or computer program - is a computational phenomenon which isn't irrevocably tied to one piece of hardware. It may currently be instantiated in the particular neurons in your brain, but that could change if the computational functions of those neurons were taken over by other physical devices. Your consciousness could, in principle, be run as a distributed computing project like folding@home.
Consciousness and Sleep

If retention of memory is a key component of identity, then what are the implications for identity:

When decades of new memories have been made (if loss of memory = loss of identity, does gain of memory also = change of identity)? When old memories have changed beyond all recognition (unaware to the current rememberer, he doesn't recall the Suzy Smith of 1995 in 2015 the same way he recalled her in 2000)? When senile dementia causes gradual loss of memory? When mild brain injury causes sudden loss of large areas of memory while personality remains unchanged post i... (read more)

Identity is tricky, but only if you want to devise questions to be tricked by it. I'd suggest it can be deconstructed to a point where these sorts of questions disappear.
You care because your brain was created by evolution, which relies on physical continuity of your body. Whether you "should" care depends on your meaning of "should".
Your transhuman copy is of questionable value to your meat self.

"I model that as "when you die, some memories are wiped and you live again". If you concede that wiping a couple days of memory doesn't destroy the person, then I think that's enough for me. Probably not you, though. What's the specific argument in that case?"

I think I must have missed this part before. Where I differ is in the idea that a copy is "me" living again, I don't accept that it is, for the reasons previously written. Whether or not a being with a me-identical starting state lives on after I die might be the tiniest... (read more)

So continuity of consciousness can exist outside of memories? How so? Why is memory-wiped you different than any random memory-wiped person? How can physical continuity do that?
Your transhuman copy is of questionable value to your meat self.

I'm familiar with the concept, not his specific essays, and, indeed, this literally does happen. Our neurons are constantly making and unmaking largely proteinaceous components of themselves and, over a lifetime, there is a no doubt partial, perhaps complete, turnover of the brain's atoms. I find this not problematic at all because the electrochemical processes which create consciousness go on undisturbed. The idea that really queers my argument for me is that of datum-by-datum transfer, each new datum, in the form of neuronal activity (the 1s and 0s of... (read more)

Can you read [], [], and [], with any relevant posts linked therein? (Or just start at the beginning of the quantum sequence.)

Note that you can believe everyone involved is "you", and yet not care about them. The two questions aren't completely orthogonal, but identifying someone with yourself doesn't imply you should care about them.

The same price I would accept to have the last second erased from my memory, but first feel the pain of drowning. That's actually not so easy to set. I'm not sure how much it would cost to get me to accept X amount of pain plus removal of the memory, but it's probably less than the cost for X amount of pain alone.

That's like removing the last second of memory, plus pain, plus jumping forward in time. I'd probably only do it if I had a guarantee that I'd survive and be able to get used to whatever goes on in the future and be happy.
Your transhuman copy is of questionable value to your meat self.

Another thought, separate but related issue: "fork" and "copy" could be synonyms for "AI", unless an artificial genesis is in your definition of AI. Is it a stretch to say that "accomplish some task" and "(accept) termination" could be at least metaphorically synonymous with "stay in the box"?

"If I make 100 AIs they will stay in the box."

(Again, I fully respect the rationality that brings you to a different conclusion than mine, and I don't mean to hound your comment, only that yours was the best comment on which for me to hang this thought.)

Your transhuman copy is of questionable value to your meat self.

I'm going to go ahead and continue to disagree with the the pattern theorists on this one. Has the inverse of the popular "Omega is a dick with a lame sense of irony" simulation mass-murder scenario been discussed? Omega (truthful) gives you a gun. "Blow your brains out and I'll give the other trillion copies a dollar." It seems the pattern theorist takes the bullet or Dr Bowie-Tesla's drowning pool with very little incentive.

The pattern theorists as you describe them would seem to take us also to the endgame of Buddhist ethics (not a... (read more)

No [], you're not [] even that.
There is some Buddhist connection, yes. The moments-of-experience thing is a thing in some meditation styles, and advanced meditators actually describe something like subjective experience starting to feel like an on/off sequence instead of a continuous flux. I haven't gone really deep into what either the Buddhist metaphysics or the meditation phenomenology says. Neuroscience also has some discrete-consciousness-steps stuff, but I likewise haven't gone very deep into that.

Anyway, this is still up for grabs. Given the whole thing about memories being what makes you you, consciousness itself is nice but it's not all that. It can still be your tribe against the world, your family against your tribe, your siblings against your family, and you and your army of upload copies against your siblings and their armies of upload copies. So I'm basically thinking about this from a kin-altruism angle, and a general thing of having people more like you closer in your circle of concern than people less like you. Upload copies are basically way, way closer kin than any actual kin.

So am I a pattern theorist? Not quite sure. It seems to resolve lots of paradoxes with the upload thought experiments, and I have no idea about a way to prove it wrong. (I would like to find one, though; it seems sorta simplistic, and we definitely still don't understand consciousness to my satisfaction.) But like I said, if I sit down on an upload couch, I fully expect to get up from an upload couch, not suddenly be staring at a HUD saying "IN SIMULATION", even though pattern theory seems to say that I should expect each outcome with 50% probability. There will be someone who does wake up in the simulation with my memories in the thought experiment, no matter which interpretation, so I imagine those versions will start expecting to shift viewpoints while they do further upload scans, while the version of me who always wakes up on the upload couch (by the coin-toss tournament logic, there will be a me who never
Your transhuman copy is of questionable value to your meat self.

I'm not sure I follow your first point. Please clarify for me if my answer doesn't cover it. If you are asking whether multiple completely non-interacting, completely functional minds running on a single processing medium constitute separate awarenesses (consciousnesses), or whether two separate awarenesses could operate with input from a single set of mind-operations, then I would say yes to both. Awareness is a result of data-processing, 1s and 0s, neurons interacting as either firing or not. Multiple mind operations can be performed in a single processing substr... (read more)

There is the difficult edge case where the systems are connected but don't need the connection to function. Separated or fused, the outcome of the data processing is going to be the same in the subsection thought experiment. If the gate is open and individual electrons are free to go on either edge of the wire, it could be seen as similar to having software separation within one piece of hardware. It's not that their influence on each other would be impossible; they just in fact don't influence each other. If you merely change what is possible, but what they in fact end up doing remains the same, it would be pretty strange if that changed the number of awarenesses.

I seem to be getting the vibe that you believe awareness is singular, in the sense that you either have it or you don't, and it can't fragment into pieces. I am wondering what kind of information processing would count as awareness in your opinion. Sometimes organizations get a task that is in fact carried out by small teams. When those teams undercommunicate, misunderstandings can happen. In a good team there is sufficient communication that what is going to happen is common knowledge, at least to the point that no contradictory plans exist within different team members. In a shattered corporation there is no "corporation official line", while in a well-coordinated corporation there might be one, even if it is narrower than any one member's full opinion. While the awareness of individual team members is pretty plain, can the corporation become separately aware from its members?

With the brain the puzzle is kinda similar, but instead the pieces are pretty plainly not aware. It does seem to me that you chase the awareness into an unknown black box. In the corporation metaphor, the CEO's awareness counts as the corporation's awareness to the extent there is any point in discussing it. However, in the "unaware pieces" picture this would lead into some version of panpsychism (or some kind of more or less symmetrical version where there is a distinq
Your transhuman copy is of questionable value to your meat self.

I definitely agree that incremental change (which gets stickier with incremental non-destructive duplication) is a sticky point. What I find most problematic for my thesis is a process where every new datum is saved on a new medium, rather than the traditionally-cited cell-by-cell scenario. It's problematic, but nothing in it convinces me to step up to Mr Bowie-Tesla's machine under any circumstances. Would you? How about if instead of a drowning pool there was a team of South America's most skilled private and public sector torture experts, who c... (read more)

Your transhuman copy is of questionable value to your meat self.

Honestly, I'm not sure what other than intuition and subjective experience we have to go with in discussing consciousness. Even the heavy hitters in the philosophy of consciousness don't 100% agree that it exists. I will be the first to admit I don't have the background in pattern theory or the inclination to get into a head to head with someone who does. If pressed, right now I'm leaning towards the matter-based argument, that if consciousness is not magical then it is tied to specific sets of matter. And that a set of matter can not exist in multiple l... (read more)

So, there are two things we need to track here, and you're not really making a distinction between them. There are individual moments of consciousness, which, yes, probably need to be on a physical substrate that exists in a single location. This is me saying that I'm this moment of conscious experience right now, which manifests in my physical brain. Everybody can be in agreement about this one.

Then there is the continuity of consciousness from moment to moment, which is where the problems show up. This is me saying that I'm the moment of conscious experience in my brain right now, and I'm also going to be the next moment of conscious experience in my brain. The problems start when you want to say that the moment of consciousness in your brain now and the moment of consciousness in your brain a second in the future are both "your consciousness", while the moment of consciousness in your brain now and the moment of consciousness in your perfect upload a second in the future are not.

For the patternist, there is no actual "consciousness" that refers to anything other than the single moments. There is momentary consciousness now, with your memories, then there is momentary consciousness later, with your slightly evolved memories. And on and on. Once you've gone past the single eyeblink of consciousness, you're already gone, and a new you might show up once, never, or many times in the future. There's nothing but the memories that stay in your brain during the gap laying the claim for the you-ness of the next moment of consciousness about to show up in a hundred or so milliseconds.
Your transhuman copy is of questionable value to your meat self.

Sorry if my attempt at coloring the conversation with humor upset you. That was not my intent. However, you will find it did nothing to alter the content of our discourse. You have changed your question. The question you ask now is not the question you asked previously.

Previous question: No, I do not choose to murder trillions of sentient me-copies for personal gain. I added an addendum, to provide you with further information, perhaps presuming a future question: Neither would I murder trillions of sentient not-me copies.

New question: Yes, an amoral dick who shares my views on consciousness would say yes.

Your transhuman copy is of questionable value to your meat self.

I completely respect the differences of opinion on this issue, but this thought made me laugh over lunch: Omega appears and says he will arrange for your taxes to be done and for a delightful selection of funny kitten videos to be sent to you, but only if you allow 100 perfect simulations of yourself to be created and then destroyed. Do you accept?

Sounds more sinister my way.

I would want to know what the copies would be used for. If you told me that you would give me $1000 if you could do whatever you wanted with me tomorrow and then administer an amnesiac drug so I didn't remember what happened the next day, I don't think I would agree, because I don't want to endure torture even if I don't remember it.
Your transhuman copy is of questionable value to your meat self.

The genesis of my brain is of no concern as to whether or not I am the consciousness within it. I am, ipso facto. When I say it doesn't matter if I am an original or a copy or a copy of a copy, I mean to say just exactly that. To whom are you speaking when you ask the question "Who are you?" If it is to me, the answer is "Me." I'm sorry that I don't know whether or not I am a copy, but I was UNconscious at the time.

If the copy is B and the original is A, the question of whether I am A or B is irrelevant to the question of whether I am ME, which I am. Ask HIM the s... (read more)

Your transhuman copy is of questionable value to your meat self.

No. It's a dick move. Same question and they're not copies of me? Same answer.

As I'm sure you're aware, the purpose of these thought experiments is to investigate what exactly your view of consciousness entails from a decision-making perspective. The fact that you would have given the same answer even if the virtual instances weren't copies of you shows that your reason for saying "no" has nothing to do with the purpose of the question. In particular, telling me that "it's a dick move" does not help elucidate your view of consciousness and self, and thus does not advance the conversation. But since you insist, I will rephrase my question: Would someone who shares your views on consciousness but doesn't give a crap about other people say "yes" or "no" to my deal?
Your transhuman copy is of questionable value to your meat self.

It's a sticky topic, consciousness. I edited my post to clarify further:

I define consciousness as a passively aware thing, totally independent of memory, thoughts, feelings, and unconscious hardwired or conditioned responses. It is the hard-to-get-at thing inside the mind which is aware of the activity of the mind without itself thinking, feeling, remembering, or responding.

Which I recognize might sound a bit mystical to some, but I believe it is a real thing which is a function of brain activity.

As a function of brain (or whatever processing medium) consc... (read more)

If I gradually replace every atom in your brain with a different one until you have no atoms left, but you still function the same, are you still "you"? If not, at what point did you stop? Have you seen Yudkowsky's series of posts on this?
Your transhuman copy is of questionable value to your meat self.

I was just having a laugh at the follow-up justification where technical difficulties were cited, not criticizing the argument of your hypothetical, sorry if it came off that way.

As you'd probably assume they would based on my OP, my copies, if I'd been heartless enough to make them and able to control them, would scream in existential horror as each came to know that that-which-he-is was to be ended. My copies would envy the serenity of your copies, but think them deluded.

So, I don't think I felt the way I do now prior to reading the Quantum Thief novels, in which characters are copied and modified with reckless abandon and don't seem to get too bent out of shape about it. It has a remarkable effect on your psyche to observe other people (even if those people are fictional characters) dealing with a situation without having existential meltdowns. Those novels allowed me to think through my own policy on copying and modification, as an entertaining diversion.
Your transhuman copy is of questionable value to your meat self.

Sorry, I missed that you were the copier. Sure, I'm the copy. I do not care one bit. My life goes on totally unaffected (assuming the original and I live in unconnected universes). Do I get transhuman immortality? Because that would be awesome for me. If so, I got the long end of the stick. It would have no value to poor old original, nor does anything which happens to him have intrinsic value for me. If you had asked his permission, he would have said no.

In other words, I could make you believe that you were either the original or the copy simply by telling you you were the original/the copy. This means that before I told you which one you were, you would have been equally comfortable with the prospect of being either one (here I'm using "comfortable" in an epistemic sense--you don't feel as though one possibility is "privileged" over the other). I could have even made you waffle back and forth by repeatedly telling you that I lied. What a strange situation to find yourself in--every possible piece of information about your internal experience is available to you, yet you seem unable to make up your mind about a very simple fact! The pattern theorists answer this by denying this so-called "simple" fact's existence: the one says, "There is no fact of the matter as to which one I am, because until our experiences diverge, I am both." You, on the other hand, have no such recourse, because you claim there is a fact of the matter. Why, then, is the information necessary to determine this fact seemingly unavailable to you and available to me, even though it's a fact about your consciousness, not mine?
Your transhuman copy is of questionable value to your meat self.

You shouldn't murder sentient beings or cause them to be murdered by the trillions. Both are generally considered dick moves. Shame on you both.

My argument: a benefit to an exact copy is of no intrinsic benefit to a different copy or to the original, unless some Omega starts playing evil UFAI games with them. One trillion other copies are unaffected by this murder. Original or copy is irrelevant; it is the being we are currently discussing that is relevant. If I am the original, I care about myself. If I am a copy, I care about myself. Whether or not I even care if I'm a copy depends on various aspects of my personality.

If I offered you the same deal I offered to Mark Friedenbach, would you agree? (Please answer with "yes" or "no". You're free to expand on your answer, but first please make sure you give an answer.)
Your transhuman copy is of questionable value to your meat self.

Based on your status as some-guy-on-the-internet and my estimate of the probability that this exact situation could come to be, no I do not believe you.

To clarify: I do not privilege the original self. I privilege the current self.

Your transhuman copy is of questionable value to your meat self.

Meat or sim or both meat aren't issues as I see it. I am self, fuck non-self, or not to be a dick, value non-self less than self, certainly on existential issues. "I" am the awareness within this mind. "I" am not memory, thought, feeling, personality, etc. I know I am me ipso facto as the observer of the knower. I don't care if I was cloned yesterday or one second ago and there are many theoretical circumstances where this could be the case. I value the "I" that I currently am just exactly now. I don't believe that this... (read more)

I don't know what you mean by that. Why can't a perfect copy be you? Doesn't that involve epiphenomenalism? Even if I give the entire state of the world X time in the future, I'd also need to specify which identical beings are "you".
Your transhuman copy is of questionable value to your meat self.

Great question. Usul and his copy do not care one bit which is which. But perhaps you could put together a convincing evidence chain. At which time copy Usul will still not care.

Follow-up question: Assuming everything I said in my previous comment is true and that I have no incentive to lie to you (but no incentive to tell you the truth, either), would you believe me if I then said you were the copy?
Your transhuman copy is of questionable value to your meat self.

Thanks for the reply. Yeah, I think I just threw a bunch of thoughts at the wall to see what would stick. I'm not really thinking too much about the practical so-I've-got-a-copy-now-what? sort of issues. I'm thinking more of the philosophical, perhaps even best categorized as Zen, implications the concept of mind-cloning has for "Who am I" in the context of changing thoughts, feelings, memories, unconscious conditioned responses, and the hard to get at thing inside which (I first typed "observes" - bad term: too active) which is awa... (read more)

I think having a firm policy for oneself in place ahead of time might circumvent a lot of these issues. Unfortunately at this point I must reference the film Multiplicity. In this film, a character wakes up from being duplicated and discovers to his surprise that he is the clone, not the original. He is somewhat resentful of the fact that he now has to capitulate to the desires of the original. Obviously the original didn't have a firm understanding in mind that he would have a good chance of waking up as the duplicate, nor did he have a firm policy of how he would behave if he woke up as the duplicate. For myself, my policy might be that I would be perfectly obedient (within reason) if I woke up as a copy, but that I would insist on being terminated within a week, because I wouldn't want to go on living a life where I'm cut off from my friends and family due to the original moridinamael taking the role of the "real me".
Your transhuman copy is of questionable value to your meat self.

Thanks for the reply. To your last point, I am not speaking of zombies. Every copy I discussed above is assumed to have its own consciousness. To your first points, at no time is there any ambiguity or import to the question of "which one I am". I know which one I am, because here I am, in the meat (or in my own sim, it makes no difference). When a copy is made its existence is irrelevant, even if it should live in a perfect sim and deviate not one bit, I do not experience what it experiences. It is perhaps identical or congruent to me. It ... (read more)

Can you explain how you know that you're the meat space one? If every observation you make is made by the other one, and both are conscious, how do you know you're not a sim? This is a purely epistemic question. I'm perfectly happy saying " if I am meat, then fuck sim-me, and if I am sim, fuck meat me" (assuming selfishness). But if you don't know which one you are, you need to act to benefit both, because you might be both. On the other hand, if you see the other one, there's no problem fighting it, because the one you're fighting is surely not you. (But even so, if you expect them to do the same as you, then you're in a perfect prisoner dilemma and should cooperate.) On the other hand, I think that if I clone myself, then do stuff my clone doesn't do, I'd still be less worried about dying than if I had no clone. I model that as "when you die, some memories are wiped and you live again". If you concede that wiping a couple days of memory doesn't destroy the person, then I think that's enough for me. Probably not you, though. What's the specific argument in that case?
Your transhuman copy is of questionable value to your meat self.

Thanks for the reply.

"You conclude that consciousness in your scenario cannot have 1 location(s)." I'm not sure if this is a typo or a misunderstanding. I am very much saying that a single consciousness has a single location, no more no less. It is located in those brain structures which produce it. One consciousness in one specific set of matter. A starting-state-identical consciousness may exist in a separate set of matter. This is a separate consciousness. If they are the same, then the set of matter itself is the same set of matter. The ... (read more)

Our instincts have evolved in situations where copies did not exist, so taking a bullet in one's head was always a loss. Regardless of what thought experiments you propose, my instincts will still reject the premises and assume that copies don't exist and that the information provided to me is false. If copying had been a feature of our ancient environment, organisms who took the bullet if it saved e.g. two of their copies would have an evolutionary advantage. So their descendants would still hesitate about it (because the information that it will save their two copies could be false; and even if it is right, it would still be better to spend some time looking for a solution that might save all three copies), but ultimately many of them would accept the deal. I'm not sure what the conclusion is here. On one hand, the fact that some hypothetical other species would have other values doesn't say much about our values. On the other hand, the fact that my instincts refuse to accept the premise of your thought experiment doesn't mean that the answer of my instincts is relevant for your thought experiment.
Your transhuman copy is of questionable value to your meat self.

"would require extremely detailed neurological understanding of memory storage and retrieval." Sorry, this on a blog where superintelligences have been known to simulate googolplexes of perfectly accurate universes to optimize the number of non-blue paperclips therein?

The original post stipulated that I was "forced" to terminate all the copies but one; that was the nature of the hypothetical I was choosing to examine. A hypothetical where the copies aren't deleted would be a totally different situation.
Your transhuman copy is of questionable value to your meat self.

Hail Xenu! I would need some time to see how the existential horror of going to bed tonight sits with me. Very likely it would overwhelm and around 4:00am tomorrow I'd take your deal. "(There is no afterlife for humans.) " I knew it! SUCK IT, PASCAL!

Your transhuman copy is of questionable value to your meat self.

Thanks for the reply. I don't really follow how the two parts of your statement fit together, but regardless, my first instinct is to agree with part one. I did as a younger (LSD-using) man subscribe to a secular magical belief that reincarnation without memory was probable, and later came to your same conclusion that it was irrelevant, and shortly thereafter that it was highly improbable. But just now I wonder (not to the probability of magical afterlives) but what if I gave you the choice: 1. Bullet to the head. 2. Complete wipe of memory, including su... (read more)

This is a good reply. I feel the same way that you do about your #1 and #2, but I suspect that the reason is because of an emotional reaction to physical death. Your #2 is relatively attractive because it doesn't involve physical death, while my version had physical death in both. This might be one reason that I and most people don't find cryonics attractive: because it does not prevent physical death, even if it offers the possibility of something happening afterwards. I find the intuitions behind my point stronger than that emotional reaction. In other words, it seems to me that I should either adjust my feelings about the bullet to correspond with the memory wipe situation, or I should adjust the feelings about the memory wipe to correspond with the bullet situation. The first adjustment is more attractive: it suggests that death is not as bad as I thought. Of course, that does not prove that this is the correct adjustment. Regarding the duplication machine, I would probably take a deal like that, given a high enough reward to the surviving copy.
Your transhuman copy is of questionable value to your meat self.

Thanks for the reply. I am not convinced by the pattern identity theorist because, I suppose, I do not see the importance of memory in the matter, nor the thoughts one might have about those thoughts. If I lose every memory slowly and die senile in a hospital bed I believe that it will be the same consciousness experiencing those events as is experiencing me writing these words. I identify that being which holds no resemblance to my current intellect and personality will be my Self in a way that an uploaded copy with my current memory and personality ca... (read more)

Yep. And "Self". These are tricky terms that guarantee confusion.
You're still mostly just arguing for your personal intuition for the continuity theory though. People have been doing that pretty much as long as we've had fiction about uploads or destructive teleportation, with not much progress to the arguments. How would you convince someone sympathetic to the pattern theory that the pattern theory isn't viable? FWIW, after some earlier discussions about this, I've been meaning to look into Husserl's phenomenology to see if there are some more interesting arguments to be found there. That stuff gets pretty weird and tricky fast though, and might be a dead end anyway.
Your transhuman copy is of questionable value to your meat self.

Great thought experiment, thanks. I do define consciousness as a passively aware thing, totally independent of memory. The demented, the delirious, the brain damaged all have (unless those structures performing the function of consciousness are damaged, which is not a given) the same consciousness, the same Self, as I define it, as they did when their brains were intact. Dream Self is the same Self as Waking Self to my thinking. I assume consciousness arises at some point in infancy. From that moment on it is Self, to my thinking.

In your 2 meat scenar... (read more)

What if the running of two programs is causally separated but they run on common hardware? And when the circuits are separated their function isn't changed. Is not the entity and awareness of A+B still intact? Can the awareness be compromised without altering function? Also, are lobotomized persons two awarenesses? What is the relevant difference to the subsection circuitry?
Your transhuman copy is of questionable value to your meat self.

Thanks for the reply. Sleep is definitely a monkey wrench in the works of my thoughts on this, not a fatal one for me, though. I wouldn't count distraction or dissociation, though. I am speaking of the (woo-light alert) awareness at the center of being, a thing that passively receives sensory input (including sense of mind-activity) (and I wonder if that includes non-input?). I do believe that this thing exists and is the best definition of "Self".

Your transhuman copy is of questionable value to your meat self.

Thanks for the reply. Perhaps I should mention I have no children and at no point in my life or in my wife's life have either of us wanted children.

Open Thread, January 4-10, 2016

I don't play; craps is the only sucker bet I enjoy engaging in. But if coerced to play, I press with non-sims. Don't press with sims. But not out of love, out of an intimate knowledge of my opponent's expected actions. Out of my status as a reliable predictor in this unique circumstance.

The Number Choosing Game: Against the existence of perfect theoretical rationality

I was born a non-Archimedean and I'll die a non-Archimedean.

"0.99 repeating = 1" I only accept that kind of talk from people with the gumption to admit that the quotient of any number divided by zero is infinity. And I've got college calculus and 25 years of not doing much mathematical thinking since then to back me up.

I'll show myself out.

Open Thread, January 4-10, 2016

Thanks for the reply. I'm not sure if your reasoning (sound) is behind the tendency I think I've identified for LW'ers to overvalue simulated selves in the examples I've cited, though. I suppose by population ethics you should value the more altruistic simulation, whoever that should be. But then, in a simulated universe devoted to nothing but endless torture, I'm not sure how much individual altruism counts.

"Totally tangential point" I believe footnotes do the job best. The fiction of David Foster Wallace is a masterwork of portraying OCD through this technique. I am an idiot at formatting on all media, though, and could offer no specifics as to how to do so.

I think if people don't make the distinction I proposed it is easy to choose an ethics that overvalues other selves compared to a mixed model. Thanks for the idea to use footnotes, though; yes, it is difficult with some media.
Open Thread, January 4-10, 2016

I appreciate the reply. I recognize both of those arguments but I am asking something different. If Omega tells me to give him a dollar or he tortures a simulation, a separate being to me, no threat that I might be that simulation (also thinking of the Basilisk here), why should I care if that simulation is one of me as opposed to any other sentient being?

I see them as equally valuable. Both are not-me. Identical-to-me is still not-me. If I am a simulation and I meet another simulation of me in Thunderdome (Omega is an evil bastard) I'm going to kill t... (read more)

If you love your simulation as you love yourself, they will love you as they love themselves (and if you don't, they won't). You can choose to have enemies or allies with your own actions. You and a thousand simulations of you play a game where pressing a button gives the presser $500 but takes $1 from each of the other players. Do you press the button?
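A quick payoff sketch of the button game may help (player count and dollar amounts as stated above; the function name is mine). If all thousand-and-one players reason identically, their choices are correlated, so the relevant comparison is everyone-presses versus nobody-presses:

```python
def payoff(you_press: bool, n_other_pressers: int) -> int:
    """Your net dollars in the button game: pressing earns you $500,
    and each other player who presses costs you $1."""
    gain = 500 if you_press else 0
    return gain - n_other_pressers

n_players = 1001  # you plus a thousand simulations of you

everyone_presses = payoff(True, n_players - 1)  # 500 - 1000 = -500
nobody_presses = payoff(False, 0)               # 0

print(everyone_presses, nobody_presses)  # -500 0
```

Pressing only comes out ahead if your copies' choices can diverge from yours, which is exactly the point at issue.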
The Number Choosing Game: Against the existence of perfect theoretical rationality

I was bringing the example into the presumed finite universe in which we live, where Maximum Utility = The Entire Universe. If we are discussing a finite-quantity problem then infinite quantity is ipso facto ruled out.

I guess I'm asking "Why would a finite universe necessarily dictate a finite utility score?" In other words, why can't my utility function be:

* 0 if you give me the entire universe minus all the ice cream.
* 1 if you give me the entire universe minus all the chocolate ice cream.
* infinity if I get chocolate ice cream, regardless of how much chocolate ice cream I receive, and regardless of whether the rest of the universe is included with it.
I think Nebu was making the point that while we normally use utility to talk about a kind of abstract gain, computers can be programmed with an arbitrary utility function. We would generally put certain restraints on it so that the computer/robot would behave consistently, but those are the only limitations. So even if there does not exist such a thing as infinite utility, a rational agent may still be required to solve for these scenarios.
The Number Choosing Game: Against the existence of perfect theoretical rationality

"What you've done is take my argument and transform it into an equivalent obvious statement. That isn't a counter-argument. In fact, in mathematics, it is a method of proving a theorem. If you read the other comments, then you'll see that other people disagree with what I've said" You're welcome? Feel free to make use of my proof in your conversations with those guys. It looks pretty solid to me.

If a Perfect Rational Agent is one who can choose Maximum Finite Utility. And Utility is numerically quantifiable and exists in infinite quantities. And ... (read more)

The Number Choosing Game: Against the existence of perfect theoretical rationality

Sorry, I missed that you postulated an infinite universe in your game.

I don't believe I am misrepresenting your position. "Maximizing utility" is achieved by, and therefore can be defined as, "choosing the highest number". The wants of the agent need not be considered. "Choosing the highest number" is an example of "doing something impossible". I think your argument breaks down to "An agent who can do the impossible can not exist" or "It is impossible to do the impossible". I agree with this statement, but I don't think it tells us anything useful. I think, but I haven't thought it out fully, that it is the concept of infinity that is tripping you up.

What you've done is take my argument and transform it into an equivalent obvious statement. That isn't a counter-argument. In fact, in mathematics, it is a method of proving a theorem. If you read the other comments, then you'll see that other people disagree with what I've said (and in a different manner than you), so I'm not just stating something obvious that everyone already knows and agrees with.
The Number Choosing Game: Against the existence of perfect theoretical rationality

Let's taboo "perfect", and "utility" as well. As I see it, you are looking for an agent who is capable of choosing The Highest Number. This number does not exist. Therefore it can not be chosen. Therefore this agent can not exist. Because numbers are infinite. Infinity paradox is all I see.

Alternately, letting "utility" back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility which is the sum total of the time, matter, and energy in the universe. There will be a number which corresponds to this. Your opponent can choose a number higher than this but he will find the utility he seeks does not exist.

Why can't my utility function be:

* 0 if I don't get ice cream
* 1 if I get vanilla ice cream
* infinity if I get chocolate ice cream

? I.e. why should we forbid a utility function that returns infinity for certain scenarios, except insofar as it may lead to the types of problems that the OP is worrying about?
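Nothing stops us from writing such a function down; here is a minimal sketch (the outcome strings are illustrative). What it exposes is that an infinite payoff swamps every finite difference, so the agent becomes indifferent between outcomes we would intuitively rank differently:

```python
import math

def utility(outcome: str) -> float:
    """Toy utility function with an unbounded payoff for chocolate."""
    if "chocolate" in outcome:
        return math.inf
    if "vanilla" in outcome:
        return 1.0
    return 0.0

# One scoop of chocolate ties with the entire universe plus chocolate:
print(utility("a scoop of chocolate") == utility("the universe + chocolate"))  # True
```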
"You are looking for an agent who is capable of choosing The Highest Number" - the agent wants to maximise utility, not to pick the highest number for its own sake, so that is misrepresenting my position. If you want to taboo utility, let's use the word "lives saved" instead. Anyway, you say "Therefore this agent (the perfect life maximising agent) can not exist", which is exactly what I was concluding. Concluding the exact same thing as I concluded supports my argument; it doesn't contradict it like you seem to think it does. "Alternately, letting "utility" back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility" - my argument is that there does not exist perfect rationality within the imagined infinite universe. I said nothing about the actual, existing universe.
The Number Choosing Game: Against the existence of perfect theoretical rationality

I'm just not convinced that you're saying anything more than "Numbers are infinite" and finding a logical paradox within. You can't state the highest number because it doesn't exist. If you postulate a highest utility which is equal in value to the highest number times utility 1 then you have postulated a utility which doesn't exist. I can not choose that which doesn't exist. That's not a failure of rationality on my part any more than Achilles' inability to catch the turtle is a failure of his ability to divide distances.

I see I made Bob unnecessarily complicated. Bob = 99.9 Repeating (sorry don't know how to get a vinculum over the .9) This is a number. It exists.

It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.99 repeating = 1 so 99.99 repeating = 100). In any case, I think casebash successfully specified a problem that doesn't have any optimal solutions (which is definitely interesting) but I don't think that is a problem for perfect rationality any more than problems that have more than one optimal solution are a problem for perfect rationality.
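For anyone who balks at the identity being used here, the standard one-line derivation (written with the vinculum notation):

```latex
Let $x = 0.\overline{9}$. Then
\[
  10x = 9.\overline{9} = 9 + x \;\Longrightarrow\; 9x = 9 \;\Longrightarrow\; x = 1,
\]
so likewise $99.\overline{9} = 99 + 0.\overline{9} = 100$.
```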
Regardless of what number you choose, there will be another agent who chooses a higher number than you and hence who does better at the task of utility optimising than you do. If "perfectly rational" means perfect at optimising utility (which is how it is very commonly used), then such a perfect agent does not exist. I can see the argument for lowering the standards of "perfect" to something achievable, but lowering it to a finite number would result in agents being able to outperform a "perfect" agent, which would be equally confusing. Perhaps the solution is to taboo the word "rational". It seems like you agree that there does not exist an agent that scores maximally. People often talk about utility-maximising agents, which assumes it is possible to have an agent which maximises utility, which isn't true for some situations. That is the assumption I am trying to challenge, regardless of whether we label it perfect rationality or something else.
The Number Choosing Game: Against the existence of perfect theoretical rationality

There exists an irrational number which is 100 minus delta where delta is infinitesimally small. In my celestial language we call it "Bob". I choose Bob. Also I name the person who recognizes that the increase in utility between a 9 in the googolplex decimal place and a 9 in the googolplex+1 decimal place is not worth the time it takes to consider its value, and who therefore goes out to spend his utility on blackjack and hookers displays greater rationality than the person who does not.

Seriously, though, isn't this more of an infinity paradox... (read more)

Just as an aside, no there isn't. Infinitesimal non-zero numbers can be defined, but they're "hyperreals", not irrationals.
I didn't specify in the original problem how the number has to be specified, which was a mistake. There is no reason why the gamemaker can't choose to only award utility for numbers provided in decimal notation, just as any other competition has rules. "Also I name the person who recognizes that the increase in utility between a 9 in the googolplex decimal place and a 9 in the googolplex+1 decimal place is not worth the time it takes to consider its value" - we are assuming either a) an abstract situation where there is zero cost of any kind of naming extra digits or b) the gamemaker compensates the individual for the extra time and effort required to say longer numbers. If there is a problem here, it certainly isn't that we can't calculate precisely. For each number, we know exactly how much utility it gives us. EDIT: Further, 100 minus delta is not normally considered a number. I imagine that some people might include x as a number, but they aren't defining the game, so number means what mathematicians in our society typically mean by (real) number.