Answer by Myron Hedderson, Feb 16, 2024

I've skimmed the answers so far and I don't think I'm repeating things that have already been said, but please feel free to let me know if I am and skip re-reading.

> What I know about science and philosophy suggests that determinism is probably true

What I know about science and philosophy suggests we shouldn't be too sure that the understanding we believe to be accurate now won't be overturned later. There are problems with our physics sufficient to potentially require a full paradigm shift to something else, as yet unknown. So if "determinism is true" is demotivating for you, consider adding an "according to my current understanding" and a "but that may be incorrect" to that statement.

I also read in some of the discussion below that determinism isn't always demotivating for you - only in some cases, like when the task is hard and the reward, if any, is temporally distant. So I wonder how much determinism is a cause of your demotivation, rather than a rationalization of demotivation whose main cause is something else. If someone convinced you that determinism was false, how much more motivated do you expect you would be, to do hard things with long delays before reward? If the answer comes back "determinism is a minor factor" then focusing on the major factors will get you most of the way to where you want to be.

But, suppose determinism is definitely true, and is, on further reflection, confirmed as a major cause of your demotivation. What then?

This has actually been said in a few different ways below, but I'm going to try to rephrase. It's a matter of perspective. Let me give you a different example of something with a similar structure, which I have at times found demotivating. As far as I understand, slightly changing the timing of when people have sex with each other will mean a different sperm fertilizes a given egg. So our actions - by, for example, accidentally causing someone to pause while walking - ripple out and change which people are born a generation hence, in very unpredictable ways whose effects probably dominate the fact that I might have been trying to be nice by opening a door for someone. It was nice of me to open the door, but whether changing the set of billions of people who will be born is a net good or a net bad is not something I can know.

One response to this is something like "focus on your circle of control - the consequences you can't control or predict aren't your responsibility, but slamming the door in someone's face would be bad even if the net effect, including all the consequences unknowable to you, could be either very good or very bad".

This is similar in structure to the determinism problem - the universe might be deterministic, but even if so, you don't and can't know what the determined state at each point in time is. All that is within your circle of control, as an incredibly cognitively limited tiny part of the universe, is making what feels like a choice to you about whether to hold a door open for someone or slam it in their face. From your perspective as a cognitively-bounded agent with the appearance of choice, making a choice makes sense. Don't try to take on the perspective of a cognitively-unbounded non-agent looking at the full state of the universe at all points in time from the outside and going "yep, no choices exist here" - you don't have the cognitive capacity to model such a being correctly, and letting how such a being might feel if it had feelings influence how you feel is a mistake. In my opinion, anyway.

I'm unclear why you consider low-trust societies to be natural and require no explanation. To me it makes intuitive sense that small high-trust groups would form naturally at times, and sometimes those groups would, by virtue of cooperation being an advantage, grow over time to be big and successful enough to be classed as "societies".

I picture a high-trust situation like a functional family unit or small village where everyone knows everyone, to start. A village a few kilometers away is low trust. Over time, both groups grow, but there's less murder and thievery and less expense on various forms of protection against adversarial behaviour in the high-trust group, so it grows faster. Eventually the two villages interact, and some members of the low-trust group defect against their neighbours and help the outsiders to gain some advantage for themselves, while the high-trust group operates with a unified goal, such that even if they were similarly sized, the high-trust group would be more effective. Net result: the high-trust group wins and expands, and the low-trust group shrinks or is exterminated. More generally, I think in a lot of different forms of competition the high-trust group is going to win because it can coordinate better. So all that is needed is for a high-trust seed to exist in a small functional group, and it may grow to arbitrary size (provided there are mechanisms for detecting and punishing defectors and free-riders, of course).

I don't claim that this is a well-grounded explanation with the backing of any anthropological research, which is why I'm putting it as a comment rather than an answer. But I do know that children often grow up assuming that whatever environment they grew up in is typical for everyone everywhere. So if a child grows up in a functional family whose members cooperate with and support each other, they're going to generalize that and expect others outside of their family to cooperate and support each other as well, unless and until they learn this isn't always the case. This becomes the basis for forming high-trust cooperative relationships with non-kin, where the opportunity exists. It seems to me a high-trust society is just one where those small seeds of cooperation have grown to a group of societal size.

Taking it back a step, it seems like we have a lot of instincts that aid us in cooperating with each other. Probably because those with those instincts did better than those without, because a human by itself is puny and weak and can only look in one direction at once and sometimes needs to sleep, but ten humans working together are not subject to those same constraints. And it is those cooperative instincts, like reciprocity, valuing fairness, punishment of defectors, and rewarding generosity with status, which help us easily form trusting cooperative relationships ("easily" relative to how hard it would be if we were fully selfish agents aiming only to maximize some utility function in each interaction, and we further knew that this was true of everyone we interacted with as well), which in turn are the basis for trust within larger-scale groups.

I mean, you're asking this question with the well-founded hope that someone is going to take their own time to give you a good answer, without being paid to do so or any credible promise of another form of reward. You would be surprised, I think, if the response to this request was an attempt to harm you in order to gain some advantage at your expense. If 10 people with a similar disposition were trapped on an island for a few generations, you could start a high-trust society, could you not?

My brain froze up on that question. In order for there to be mysterious old wizards, there have to be wizards, and in order for the words "mysterious" and "old" to be doing useful work in that question the wizards would have to vary in their age and mysteriousness, and I'm very unsure how the set of potential worlds that implies compares to this one.

I'm probably taking the question too literally... :D

And, um, done.

Generically, having more money in the bank gives you more options; being cash-constrained means you have fewer. And, also generically, when the future is very uncertain, it is important to have options for how to deal with it.

If how the world currently works changes drastically in the next few decades, I'd like to have the option to just stop what I'm doing and do something else that pays no money or costs some money, if that seems like the situationally-appropriate response. Maybe that's taking some time to think and plan my next move after losing a job to automation, rather than having to crash-train myself in something new that will disappear next year. Maybe it's changing my location and not caring how much my house sells for. Maybe it's doing different work. Maybe it's paying people to do things for me. Maybe it's being invested in the right companies when the economy goes through a massive upswing before the current system collapses, so that for a brief time I have a lot of wealth and can direct it towards goals aligned with my values rather than someone else's - hence index funds, which buy me into a lot of companies.

Even if we eventually get to a utopia, the path to that destination could be rocky, and having some slack is likely to be helpful in riding that time out.

Another form of slack is learning to live on much less than you make - so the discipline required to accumulate savings could also pay off in terms of not being psychologically attached to a lifestyle that stops you from making appropriate changes as the world changes around you.

Of course "accumulate money so you have options when the world changes" is a different mindset than "save money so you can go live on a beach in 40 years". But money is sort of like fungible power, an instrumentally useful thing to have for many different possible goals in many different scenarios, and a useless thing to have in only a few.

Side note: "the amount a dollar can do goes up, the value of a dollar collapses" strikes me as implausible. Your story for how that could happen is that people hit a point of diminishing returns in terms of their own happiness... but there are plenty of things dollars can be used for aside from buying more personal happiness. If things go well, we're just at the start of earth-originating intelligence's story, and there are plenty of ways for an investment made at the right time to ripple out across the universe. If I were a trillionaire (or a 2023-hundred-thousandaire where the utility of a dollar has gone up by a factor of 10 million, whatever), I could set up a utopia suited to my tastes and understanding of the good, for others, and that seems worth doing even if my subjective day-to-day experience doesn't improve as a result. As just one example. In any case, being at the beginning of a large expansion in the power of earth-originating intelligence seems like just the sort of time when you'd like to have the ability to make a careful investment.

To be clear, I do not endorse the argument that mental models embedded in another person are necessarily that person. It makes sense that a sufficiently intelligent person with the right neural hardware would be able to simulate another person in sufficient detail that that simulated person should count, morally.

I appreciate your addendum, as well, and acknowledge that yes, given a situation like that it would be possible for a conscious entity which we should treat as a person to exist in the mind of another conscious entity we should treat as a person, without the former's conscious experience being accessible to the latter.

What I'm trying to express (mostly in other comments) is that, given the particular neural architecture I think I have, I'm pretty sure that the process of simulating a character requires the use of scarce resources, such that I can only do it by being that character (feeling what it feels, seeing in my mind's eye what it sees, etc.), not by running the character in some separate thread.

Some testable predictions: if I could run two separate consciousnesses simultaneously in my brain (me plus one other, call this person B) and then have a conversation with B, I would expect the experience of interacting with B to be more like the experience of interacting with other people, in specific ways that you haven't mentioned in your posts. Examples: I would expect B to misunderstand me occasionally, to mis-hear what I was saying and need me to repeat myself, to become distracted by its own thoughts, to occasionally actively resist interacting with me.

Whereas the experience I have is consistent with the idea that in order to simulate a character, I have to be that character temporarily - I feel what they feel, think what they think, see what they see, their conscious experience is my conscious experience, etc. - and when I'm not being them, they aren't being. In that sense, "the character I imagine" and "me" are one. There is only one stream of consciousness, anyway. If I stop imagining a character, and then later pick back up where I left off, it doesn't seem like they've been living their life outside of my awareness and have grown and developed, in the way a non-imagined person would grow and change and have new thoughts if I stopped talking to them and came back and resumed the conversation in a week. Rather, we just pick up right where we left off, perhaps with some increased insight (in the same sort of way that I can have some increased insight after a night's rest, because my subconscious is doing some things in the background), but not to the level of change I would expect from a separate person having its own conscious experiences.

I was thinking about this overnight, and an analogy occurs to me. Suppose in the future we know how to run minds on silicon and store them in digital form. Further suppose we build a robot with processing power sufficient to run one human-level mind. In its backpack, it has 10 solid state drives, each with a different personality and set of memories (some of which are backups), plus one solid state drive plugged into its processor, which it is running as "itself" at this time. In that case, would you say the robot + the drives in its backpack = 11 people, or 1?

I'm not firm on this, but I'm leaning toward 1, particularly if the question is something like "how many people are having a good/bad life?" - what matters is how many conscious experiencers there are, not how many stored models there are. And my internal experience is kind of like being that robot, only able to load one personality at a time. But sometimes able to switch out, when I get really invested in simulating someone different from my normal self.

EDIT to add: I'd like to clarify why I think the distinction between "able to create many models of people, but only able to run one at a time" and "able to run many models of people simultaneously" is important in your particular situation. You're worried that by imagining other people vividly enough, you could create a person with moral value who you are then obligated to protect and not cause to suffer. But: If you can only run one person at a time in your brain (regardless of what someone else's brain/CPU might be able to do) then you know exactly what that person is experiencing, because you're experiencing it too. There is no risk that it will wander off and suffer outside of your awareness, and if it's suffering too much, you can just... stop imagining it suffering.

I elaborated on this a little elsewhere, but the feature I would point to would be "ability to have independent subjective experiences". A chicken has its own brain and can likely have a separate experience of life which I don't share, and so although I wouldn't call it a person, I'd call it a being which I ought to care about and do what I can to see that it doesn't suffer. By contrast, if I imagine a character, and what that character feels or thinks or sees or hears, I am the one experiencing that character's (imagined) sensorium and thoughts - and for a time, my consciousness of some of my own sense-inputs and ability to think about other things is taken up by the simulation and unavailable for being consciously aware of what's going on around me. Because my brain lacks duplicates of certain features, in order to do this imagining, I have to pause/repurpose certain mental processes that were ongoing when I began imagining. The subjective experience of "being a character" is my subjective experience, not a separate set of experiences/separate consciousness that runs alongside mine the way a chicken's consciousness would run alongside mine if one was nearby. Metaphorically, I enter into the character's mindstate, rather than having two mindstates running in parallel.

Two sets of simultaneous subjective experiences: Two people/beings of potential moral importance. One set of subjective experiences: One person/being of potential moral importance. In the latter case, the experience of entering into the imagined mindstate of a character is just another experience that a person is having, not the creation of a second person.

Having written the above, I went away and came back with a clearer way to express it: For suffering-related (or positive experience related) calculations, one person = one stream of conscious experience, two people = two streams of conscious experience. My brain can only do one stream of conscious experience at a time, so I'm not worried that by imagining characters, I've created a bunch of people. But I would worry that something with different hardware than me could.

I have a question related to the "Not the same person" part, the answer to which is a crux for me.

Let's suppose you are imagining a character who is experiencing some feeling. Can that character be feeling what it feels, while you feel something different? Can you be sad while your character is happy, or vice versa?

I find that I can't - if I imagine someone happy, I feel what I imagine they are feeling - this is the appeal of daydreams. If I imagine someone angry during an argument, I myself feel that feeling. There is no other person in my mind having a separate feeling. I don't think I have the hardware to feel two people's worth of feelings at once, I think what's happening is that my neural hardware is being hijacked to run a simulation of a character, and while this is happening I enter into the mental state of that character, and in important respects my other thoughts and feelings on my own behalf stop.

So for me, I think my mental powers are not sufficient to create a moral patient separate from myself. I can set my mind to simulating what someone different from real-me would be like, and have the thoughts and feelings of that character follow different paths than my thoughts would, but I understand "having a conversation between myself and an imagined character", which you treat as evidence there are two people involved, as a kind of task-switching, processor-sharing arrangement - there are bottlenecks in my brain that prevent me from running two people at once, and the closest I can come is thinking as one conversation partner and then the next and then back to the first. I can't, for example, have one conversation partner saying something while the other is not paying attention because they're thinking of what to say next and only catches half of what was said and so responds inappropriately, which is a thing that I hear is not uncommon in real conversations between two people. And if the imagined conversation involves a pause which in a conversation between two people would involve two internal mental monologues, I can't have those two mental monologues at once. I fully inhabit each simulation/imagined character as it is speaking, and only one at a time as it is thinking.

If this is true for you as well, then in a morally relevant respect I would say that you and whatever characters you create are only one person. If you create a character who is suffering, and inhabit that character mentally such that you are suffering, that's bad because you are suffering, but it's not 2x bad because you and your character are both suffering - in that moment of suffering, you and your character are one person, not two.

I can imagine a future AI with the ability to create and run multiple independent human-level simulations of minds, watch them interact and learn from that interaction, and perhaps go off and do something in the world while those simulations persist without it being aware of their experiences any more. For such an AI, I would say it ought not to create entities that have bad lives. And if you can honestly say that your brain is different from mine in such a way that you can imagine a character and have the mental bandwidth to run it fully independently from yourself - with its own feelings that you know about somehow other than having it hijack the feeling-bits of your brain and use them to generate feelings which you feel while what you were feeling before is temporarily on pause (which is how I experience the feelings of characters I imagine) - and because of this separation you could wander off and do other things with your life and have that character suffer horribly, with no ill effects to you except the feeling that you'd done something wrong... then yeah, don't do that. If you could do it for more than one imagined character at a time, that's worse; definitely don't.

But if you're like me, I think "you imagined a character and that character suffered" is functionally/morally equivalent to "you imagined a character and one person (call it you or your character, doesn't matter) suffered" - which is bad in principle, unless there's some greater good to be had from it, but it's not worse than you suffering for some other reason.

 I think there are at least two levels where you want change to happen - on an individual level, you want people to stop doing a thing they're doing that hurts you, and on a social level, you want society to be structured so that you and others don't keep having that same/similar experience. 

The second thing is going to be hard, and likely impossible to do completely. But the first thing... responding to this: 

> It wouldn't be so bad, if I only heard it fifty times a month. It wouldn't be so bad, if I didn't hear it from friends, family, teachers, colleagues. It wouldn't be so bad, if there were breaks sometimes.

I think it would be healthy and good and enable you to be more effective at creating the change you want in society, if you could arrange for there to be some breaks sometimes. I see in the comments that you don't want to solve things on your individual level completely yet because there's a societal problem to solve and you don't want to lose your motivation, and I get that. (EDIT: I realize that I'm projecting/guessing here a bit, which is dangerous if I guess wrong and you feel erased as a result... so I'm going to flag this as a guess and not something I know. But my guess is the something precious you would lose by caring less about these papercuts has to do with a motivation to fix the underlying problem for a broader group of people). But if you are suffering emotional hurt to the extent that it's beyond your ability to cope with and you're responding to people in ways you don't like or retrospectively endorse, then taking some action to dial the papercut/poke-the-wound frequency back a bit among the people you interact with the most is probably called for.

With that said, it seems to me that while it may be hard to fix society, the few trusted and (I assume) mostly fairly smart people who you interact with most frequently can be guided to avoid this error, by learning the things about you that don't fit into their models of "everyone" and by being told that it would really help if they said "almost all" rather than "all". People in general may have to rely on models and heuristics into which you don't fit, but your close friends and family can learn who you are and how to stop poking your sore spots. This gives you a core group of people who you can go be with when you want a break from society in general, and some time to recharge so you can better reengage with changing that society.

As for fixing society, I said above that it may be impossible to do completely, but if I were trying for the most good for the greatest number, my angle of attack would be to make a list of the instances where people are typical-minding you, and order that list by how uncommon the attribute is that they're assuming doesn't exist. Some aspects of your cognition or personality may be genuinely and literally unique, while others that get elided may be shared by 30% of the population - the person you're speaking to at the moment just doesn't have them in their social bubble. The things that are least uncommon are going to be both the easiest to build a constituency around and get society to adjust to, and the ones where the most people benefit from the change when it happens.