I should clarify that resource-compatibility is a claim about the mundane and exotic values humans actually hold. It's a contingent claim, not a necessary one. Yes, some people think the natural world is a hell-pit of suffering (negative utilitarians like Brian Tomasik), but they're typically scope-sensitive and longtermist, so they'd care far more about the distal resources.
You could construct a value profile like "utility = -1 if suffering exists on Earth, else 0", which is an exotic value that seeks proximal resources. I don't have a good answer for handling such cases. But empirically, this value profile seems rare.
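Spelled out (my formalization of the profile above, nothing from the original essay), this is something like

$$U(\text{world}) = \begin{cases} -1 & \text{if suffering exists on Earth,} \\ 0 & \text{otherwise,} \end{cases}$$

which is bounded and completely indifferent to anything beyond Earth, so unlike a scope-sensitive suffering-reducer it can't be bought off with distal resources.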
I agree that this is an empirical claim; in fact, this is pretty much the major interesting question here! Please say more about why you think this is the case. Empirically, Brian Tomasik does exist, as do nature-has-intrinsic-value hippies, so somebody is definitely getting shafted in any given future.
My intuition is that in the limit of infinite power, human moral intuitions come apart quite a lot more than you've addressed here (for example, I think the utopia described here throws away >90% of the value in the universe for no reason, maybe more depending on how much of the universe gets converted to -oniums).
I also think that any attempt to consider some kind of moral value compromise should probably think about what kinds of processes we would actually expect to come up with a nice compromise like this. For example, it seems very unlikely to me that a moral value war would lead to a good ending like this.
I think most of the arguments in this essay fail to bind to reality. The essay seems to have been backchained from a cool idea for the future into arguments for it. The points about how "The cosmos will be divided between different value systems" are all quite vague, and don't provide much insight into what the future actually looks like or how any of these processes lead to a future like the one described, yet the descriptions of each individual layer are very specific.
(I can imagine that maybe after some kind of long reflection, we all agree on something like this, but I expect that any actual war scenarios end up with a winner-takes-all lock-in)
I do think the intuitions that a stratified utopia is desirable are somewhat interesting. I think that dividing up the universe into various chunks probably is a way to create a future that most people would be happy with. The "Nothing to Mourn" principle is really nice.
Then again, I think that a simple forward-chaining application of the "Nothing to Mourn" principle immediately runs into the real, difficult problem of allocating resources to different values: some people's utopias are mutually net negative. For example, if one person thinks the natural world is a horrid hell-pit of suffering and another thinks that living in a fully AI-managed environment is a kind of torture for everyone involved, they just can't compromise. It's not possible. This is the real challenge of allocating value by splitting up the universe, and the fact that you didn't really address it gives the whole essay a kind of "Communist art students planning their lives on the commune after the revolution" vibe.
It would be cool to do a dive into this concept which focuses more on what kind of a thing a value actually is, and what moral uncertainty actually means (some people, especially EAs, do this thing where they talk about moral uncertainty as if they're moral realists, but firstly I think moral realism is incoherent, and secondly they don't actually endorse moral realism), and also to address the problem of mutually net negative ideal worlds.
C. diff is the classic case; fecal transplants are really well established there as the treatment for antibiotic-induced infections at this point. I learned about it reading about the microbiome a few years ago.
FWIW I actually also ran the idea past my partner, who works in microbial community modelling (though not in anything medical-related; they run simulations), and the exchange went roughly:
Me: I'm looking up nasal microbiome transplants
Them: I'm not sure that makes sense, the nasal microbiome is normally fairly low-diversity, it's not the same as the gut
Me: this guy has a nose full of staph though
Them: oh, in that case maybe it will work
Yeah, basically. I think "OK-ness" in the human psyche is a bit of a binary, which is uncorrelated with one's actions a lot of the time.
So you can imagine four quadrants of "Ok with dying" vs "Not Ok with dying" and, separately, "Tries to avoid dying" vs "Doesn't try to avoid dying". Most normies are in the "Ok with dying"+"Doesn't try to avoid dying" quadrant (and quite a few are in the "Not Ok with dying"+"Doesn't try to avoid dying" quadrant), while lots of rats are in the "Not Ok with dying"+"Tries to avoid dying" quadrant.
I think that, right now, most of the sane work being done is in the "Ok with dying"+"Tries to avoid dying" quadrant. I think Yudkowsky's early efforts wanted to move people from "Doesn't try..." to "Tries..." but did this by pulling on the "Ok..." to "Not Ok..." axis, and I think this had some pretty negative consequences.
This is very close to some ideas I've been trying and failing to write up. In "On Green", Joe Carlsmith writes "Green is what told the rationalists to be more OK with death, and the EAs to be more OK with wild animal suffering." But wait, hang on: actually being OK with death is the only way to stay sane, and, while it's not quite the same, the immediate must-reduce-suffering-footprint drive that EAs have might have ended up giving some college students some serious dietary deficiencies.
Caveats: this is all vibes-based. The following LEGALLY NOT ADVICE is pretty low-risk and easy, but I have no idea if it will actually work. Some people have already tried this (look up "nasal microbiome transplant").
Best guess: your nasal microbiome is in a low diversity disease state similar to C. difficile infections in the gut. Hitting it with antibiotics doesn't work because the staph just resists and rebounds faster than anything else.
What I would do in your situation: use the garlic nasal spray, leave it an hour or so, then (the unpleasant part) put someone else's snot up your nose to recolonize your nasal cavity with a healthy, diverse microbiome.
The other important question is whether you should come off the antibiotics for a bit. This is higher risk than any other part of the advice (since the antibiotics are for maintenance) but might be necessary: if you stay on the antibiotics, you might just kill off all of your brand-new microbiome.
Thanks for the response! I feel like I understand your position quite a lot better now, and can see much more clearly where "pessimization" fits into a mental model. My version of your synthesis is something like the following:
"Activists often work in very adversarial domains, in the obvious and non-obvious ways. If they screw up even a little bit, this can make them much, much less effective, causing more harm to their cause than a similarly-sized screwup would in most non-adversarial domains. This process is important enough to need a special name, even though individual cases of it might be quite different. Once we've named it, we can see if there are any robust solutions to all or most of those cases."
Based on this, I currently think of the concept of pessimization as making a prediction about the world: virtue ethics (or something like it) is a good solution to most or all of these problems, which means the problems themselves share something in common that is worthy of a label.
It's also worth noting that a major research goal of mine is to pin down mechanisms of pessimization more formally and precisely, and if I fail then that should count as a significant strike against the concept.
This is absolutely intriguing. Do you have anything more written about this publicly?
At the risk of being too much of an "Everything Is Connected" guy, I think there's a connection between the following (italicized items are things I've thought about or worked on recently)
It doesn't quite fit, which is a little annoying. But the level one vs level two security mindset thing comes to mind when I think about deontology vs virtue ethics.
Deontology seeks to find specific rules to constrain humans away from particular failure modes: "Don't overthrow your democratically elected leader in a bloody revolution even if the glorious leader really would be a good god-emperor" and the like.
Perhaps a good version of virtue ethics would work like the true security mindset, although I don't know whether the resulting version of virtue ethics would look much like what the Athenians were talking about.
Just don't do this. This isn't the kind of plan which works in real life.
Appealing to outgroup fear just gets you a bunch of paranoid groups who never talk to one another.
Truth-telling is relatively robust because you automatically end up on the same side as other truth-tellers, and you can all (roughly) agree on messaging without needing to coordinate.
The only exception is leveraging fear of death, which is reasonable in smaller doses IMO when talking about AI, since dying is actually on the table.
I'm optimistic that the same forces that remind the collective to focus on accomplishing their instrumental goals instead of degenerating into unproductive navel-gazing will also be strong enough to remind them of their deontological commitments.
OK, I actually think this might be the real disagreement, as opposed to my other comment. I think that capabilities are much more likely to generalize than alignment, or at least that the first thing which generalizes to strong capabilities will not generalize alignment "correctly".
This is a super high-level argument, but I think there are multiple ways of generalizing human values and no correct/canonical one (as in my other comment), nor are there any natural ways for an AI to be corrected without direct intervention from us, whereas if an AI makes a factually wrong inference, it can correct itself.
I actually think that A is the most intuitive option. I don't see why something which knows the physical state of my brain should be able to efficiently compute its contents.
Then again, given functionalism, perhaps extracting information about the contents of the brain from the encrypted computation is not as hard as one might think. The encryption is just a reversible map from one state space to another. If an omniscient observer can extract the contents of a brain by assembling a causal model of it in un-encrypted phase space, why would it struggle to build the same causal model in encrypted phase space? If some high-level abstractions of the computation are what matter, then the difficult part is mostly in finding the right abstractions.
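To spell out the reversible-map point (my notation, not anything from the original thread): write the un-encrypted brain dynamics as a map $f$ on a state space $S$, and the encryption as a bijection $E : S \to S'$. The encrypted computation then just iterates the conjugate map

$$f_E \;=\; E \circ f \circ E^{-1} : S' \to S',$$

which is isomorphic as a dynamical system to $f$. So every causal model or high-level abstraction of $f$ has an exact image under $E$; the open question is whether an observer who doesn't know $E$ can find that image (or find good abstractions of $f_E$ directly) without doing work comparable to breaking the encryption.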