Generically, having more money in the bank gives you more options; being cash-constrained means you have fewer. And, also generically, when the future is very uncertain, it is important to have options for how to deal with it.
If how the world currently works changes drastically in the next few decades, I'd like to have the option to just stop what I'm doing and do something else that pays no money or costs some money, if that seems like the situationally-appropriate response. Maybe that's taking some time to think and plan my next move after losing a job to automation, rather than having to crash-train myself in something new that will disappear next year. Maybe it's changing my location and not caring how much my house sells for. Maybe it's doing different work. Maybe it's paying people to do things for me. Maybe it's also useful to be invested in the right companies when the economy goes through a massive upswing before the current system collapses, so that for a brief time I have a lot of wealth and can direct it towards goals that are aligned with my values rather than someone else's; hence, index funds that buy me into a lot of companies.
Even if we eventually get to a utopia, the path to that destination could be rocky, and having some slack is likely to be helpful in riding that time out.
Another form of slack is learning to live on much less than you make; the discipline required to accumulate savings could also pay off in terms of not being psychologically attached to a lifestyle that stops you from making appropriate changes as the world changes around you.
Of course "accumulate money so you have options when the world changes" is a different mindset than "save money so you can go live on a beach in 40 years". But money is sort of like fungible power: an instrumentally useful thing to have for many different possible goals in many different scenarios, and a useless thing to have in only a few.

Side note: "the amount a dollar can do goes up, the value of a dollar collapses" strikes me as implausible. Your story for how that could happen is that people hit a point of diminishing returns in terms of their own happiness... but there are plenty of things dollars can be used for aside from buying more personal happiness. If things go well, we're just at the start of earth-originating intelligence's story, and there are plenty of ways for an investment made at the right time to ripple out across the universe. If I were a trillionaire (or a 2023-hundred-thousandaire where the utility of a dollar has gone up by a factor of 10 million, whatever), I could set up a utopia suited to my tastes and understanding of the good, for others, and that seems worth doing even if my subjective day-to-day experience doesn't improve as a result. As just one example. In any case, being at the beginning of a large expansion in the power of earth-originating intelligence seems like just the sort of time when you'd like to have the ability to make a careful investment.
To be clear, I do not endorse the argument that mental models embedded in another person are necessarily that person. It makes sense that a sufficiently intelligent person with the right neural hardware would be able to simulate another person in sufficient detail that that simulated person should count, morally.

I appreciate your addendum, as well, and acknowledge that yes, given a situation like that, it would be possible for a conscious entity which we should treat as a person to exist in the mind of another conscious entity we should treat as a person, without the former's conscious experience being accessible to the latter.

What I'm trying to express (mostly in other comments) is that, given the particular neural architecture I think I have, I'm pretty sure that the process of simulating a character requires use of scarce resources such that I can only do it by being that character (feeling what it feels, seeing in my mind's eye what it sees, etc.), not by running the character in some separate thread.

Some testable predictions: if I could run two separate consciousnesses simultaneously in my brain (me plus one other, call this person B) and then have a conversation with B, I would expect the experience of interacting with B to be more like the experience of interacting with other people, in specific ways that you haven't mentioned in your posts. Examples: I would expect B to misunderstand me occasionally, to mis-hear what I was saying and need me to repeat myself, to become distracted by its own thoughts, to occasionally actively resist interacting with me. Whereas the experience I have is consistent with the idea that in order to simulate a character, I have to be that character temporarily - I feel what they feel, think what they think, see what they see, their conscious experience is my conscious experience, etc. - and when I'm not being them, they aren't being. In that sense, "the character I imagine" and "me" are one. There is only one stream of consciousness, anyway.
If I stop imagining a character, and then later pick back up where I left off, it doesn't seem like they've been living their lives outside of my awareness and have grown and developed, in the way a non-imagined person would grow and change and have new thoughts if I stopped talking to them and came back and resumed the conversation in a week. Rather, we just pick up right where we left off, perhaps with some increased insight (in the same sort of way that I can have some increased insight after a night's rest, because my subconscious is doing some things in the background), but not to the level of change I would expect from a separate person having its own conscious experiences.

I was thinking about this overnight, and an analogy occurs to me. Suppose in the future we know how to run minds on silicon, and store them in digital form. Further suppose we build a robot with processing power sufficient to run one human-level mind. In its backpack, it has 10 solid state drives, each with a different personality and set of memories, some of which are backups; one more solid state drive is plugged in to its processor, which it is running as "itself" at this time. In that case, would you say the robot + the drives in its backpack = 11 people, or 1?

I'm not firm on this, but I'm leaning toward 1, particularly if the question is something like "how many people are having a good/bad life?" - what matters is how many conscious experiencers there are, not how many stored models there are. And my internal experience is kind of like being that robot, only able to load one personality at a time, but sometimes able to switch out, when I get really invested in simulating someone different from my normal self.

EDIT to add: I'd like to clarify why I think the distinction between "able to create many models of people, but only able to run one at a time" and "able to run many models of people simultaneously" is important in your particular situation.
You're worried that by imagining other people vividly enough, you could create a person with moral value who you are then obligated to protect and not cause to suffer. But: If you can only run one person at a time in your brain (regardless of what someone else's brain/CPU might be able to do) then you know exactly what that person is experiencing, because you're experiencing it too. There is no risk that it will wander off and suffer outside of your awareness, and if it's suffering too much, you can just... stop imagining it suffering.
I elaborated on this a little elsewhere, but the feature I would point to would be "ability to have independent subjective experiences". A chicken has its own brain and can likely have a separate experience of life which I don't share, and so although I wouldn't call it a person, I'd call it a being which I ought to care about and do what I can to see that it doesn't suffer. By contrast, if I imagine a character, and what that character feels or thinks or sees or hears, I am the one experiencing that character's (imagined) sensorium and thoughts - and for a time, my consciousness of some of my own sense-inputs and ability to think about other things is taken up by the simulation and unavailable for being consciously aware of what's going on around me. Because my brain lacks duplicates of certain features, in order to do this imagining, I have to pause/repurpose certain mental processes that were ongoing when I began imagining. The subjective experience of "being a character" is my subjective experience, not a separate set of experiences/separate consciousness that runs alongside mine the way a chicken's consciousness would run alongside mine if one was nearby. Metaphorically, I enter into the character's mindstate, rather than having two mindstates running in parallel.

Two sets of simultaneous subjective experiences: two people/beings of potential moral importance. One set of subjective experiences: one person/being of potential moral importance. In the latter case, the experience of entering into the imagined mindstate of a character is just another experience that a person is having, not the creation of a second person.
Having written the above, I went away and came back with a clearer way to express it: For suffering-related (or positive experience related) calculations, one person = one stream of conscious experience, two people = two streams of conscious experience. My brain can only do one stream of conscious experience at a time, so I'm not worried that by imagining characters, I've created a bunch of people. But I would worry that something with different hardware than me could.
I have a question related to the "Not the same person" part, the answer to which is a crux for me.

Let's suppose you are imagining a character who is experiencing some feeling. Can that character be feeling what it feels, while you feel something different? Can you be sad while your character is happy, or vice versa?

I find that I can't - if I imagine someone happy, I feel what I imagine they are feeling; this is the appeal of daydreams. If I imagine someone angry during an argument, I myself feel that feeling. There is no other person in my mind having a separate feeling. I don't think I have the hardware to feel two people's worth of feelings at once. I think what's happening is that my neural hardware is being hijacked to run a simulation of a character, and while this is happening I enter into the mental state of that character, and in important respects my other thoughts and feelings on my own behalf stop.

So for me, I think my mental powers are not sufficient to create a moral patient separate from myself. I can set my mind to simulating what someone different from real-me would be like, and have the thoughts and feelings of that character follow different paths than my thoughts would, but I understand "having a conversation between myself and an imagined character", which you treat as evidence that there are two people involved, as a kind of task-switching, processor-sharing arrangement - there are bottlenecks in my brain that prevent me from running two people at once, and the closest I can come is thinking as one conversation partner, then the next, then back to the first. I can't, for example, have one conversation partner saying something while the other is not paying attention because they're thinking of what to say next, and so only catches half of what was said and responds inappropriately - a thing that I hear is not uncommon in real conversations between two people.
And if the imagined conversation involves a pause which, in a conversation between two people, would involve two internal mental monologues, I can't have those two mental monologues at once. I fully inhabit each simulation/imagined character as it is speaking, and only one at a time as it is thinking.

If this is true for you as well, then in a morally relevant respect I would say that you and whatever characters you create are only one person. If you create a character who is suffering, and inhabit that character mentally such that you are suffering, that's bad because you are suffering, but it's not 2x bad because you and your character are both suffering - in that moment of suffering, you and your character are one person, not two.

I can imagine a future AI with the ability to create and run multiple independent human-level simulations of minds, watch them interact and learn from that interaction, and perhaps go off and do something in the world while those simulations persist without it being aware of their experiences any more. For such an AI, I would say it ought not to create entities that have bad lives. And if you can honestly say that your brain is different than mine in such a way that you can imagine a character and have the mental bandwidth to run it fully independently from yourself, with its own feelings that you know about somehow other than by having it hijack the feeling-bits of your brain and use them to generate feelings which you feel while what you were feeling before is temporarily on pause (which is how I experience the feelings of characters I imagine), and because of this separation you could wander off and do other things with your life while that character suffered horribly, with no ill effects to you except the feeling that you'd done something wrong... then yeah, don't do that. If you could do it for more than one imagined character at a time, that's worse; definitely don't.
But if you're like me, I think "you imagined a character and that character suffered" is functionally/morally equivalent to "you imagined a character and one person (call it you or your character, it doesn't matter) suffered" - which is bad in principle, unless there's some greater good to be had from it, but it's not worse than you suffering for some other reason.
I think there are at least two levels where you want change to happen - on an individual level, you want people to stop doing a thing they're doing that hurts you, and on a social level, you want society to be structured so that you and others don't keep having that same/similar experience. The second thing is going to be hard, and likely impossible to do completely. But the first thing... responding to this:
It wouldn't be so bad, if I only heard it fifty times a month. It wouldn't be so bad, if I didn't hear it from friends, family, teachers, colleagues. It wouldn't be so bad, if there were breaks sometimes.
I think it would be healthy and good, and would enable you to be more effective at creating the change you want in society, if you could arrange for there to be some breaks sometimes. I see in the comments that you don't want to solve things on your individual level completely yet, because there's a societal problem to solve and you don't want to lose your motivation, and I get that. (EDIT: I realize that I'm projecting/guessing here a bit, which is dangerous if I guess wrong and you feel erased as a result... so I'm going to flag this as a guess and not something I know. But my guess is that the something precious you would lose by caring less about these papercuts has to do with a motivation to fix the underlying problem for a broader group of people.) But if you are suffering emotional hurt to the extent that it's beyond your ability to cope with, and you're responding to people in ways you don't like or retrospectively endorse, then taking some action to dial the papercut/poke-the-wound frequency back a bit among the people you interact with the most is probably called for.

With that said, it seems to me that while it may be hard to fix society, the few trusted (and, I assume, mostly fairly smart) people who you interact with most frequently can be guided to avoid this error, by learning the things about you that don't fit into their models of "everyone", and that it would really help if they said "almost all" rather than "all". People in general may have to rely on models and heuristics into which you don't fit, but your close friends and family can learn who you are and how to stop poking your sore spots.
This gives you a core group of people who you can go be with when you want a break from society in general, and some time to recharge so you can better reengage with changing that society.

As for fixing society: I said above that it may be impossible to do completely, but if I were trying for the most good for the greatest number, my angle of attack would be to make a list of the instances where people are typical-minding you, and order that list by how uncommon the attribute they're assuming doesn't exist is. Some aspects of your cognition or personality may be genuinely and literally unique, while others that get elided may be shared by 30% of the population - the person you're speaking to at the moment just doesn't have them in their social bubble. The things that are least uncommon are both going to be easiest to build a constituency around and get society to adjust to, and will have the most people benefit from the change when it happens.