(I say this all the time, but I think that [the thing you call “values”] is a closer match to the everyday usage of the word “desires” than the word “values”.)
I think we should distinguish three things: (A) societal norms that you have internalized, (B) societal norms that you have not internalized, (C) desires that you hold independent of [or even despite] societal norms.
For example:
Anyway, the OP says: “our shared concept of Goodness is comprised of whatever messages people spread about what other people should value. … which sure is a different thing from what people do value, when they intros...
I think that [the thing you call “values”] is a closer match to the everyday usage of the word “desires” than the word “values”.
Seconded. The word 'value' is heavily overloaded, and I think you're conflating two meanings. 'What do you value?' and 'What are your values?' are very different questions for that reason. The first means roughly desirability, whereas the second means something like 'ethical principles'. I read you as pointing mostly to the former, whereas 'value' in philosophy nearly always refers to the latter. Trying to redefine 'value' locally to have the other meaning seems likely to result in more confusion than clarity.
Good points as usual! On a meta note, I thought when writing this "Steve will probably say something like he usually says, and I still haven't fully incorporated it into my models, hopefully I'll absorb some more this time".
Anyway, I don't think I want to deny the existence of (A). I want to say that "style X is cool" is a true part of the girl's values insofar as style X summons up yummy/yearning/completeness/etc feelings on its own, and is not a true part of her values insofar as the feelings involved are mostly social anxiety or a yearning to be liked. (The desire to be liked would then be a part of her values, insofar as the prospect of being liked is what actually triggers the yearning.)
I do want to say that stuff is a true part of one's values once it triggers those feelings, regardless of whether memes were involved in installing the values along the way. I want to distinguish that from the case where people "tie themselves in knots", trying to act like they value something or telling themselves that they value something when the feelings are not in fact there, because they've been told (or logically convinced themselves) they "should" value the thing.
I've now read your linked posts, but can't derive from them how you would answer my questions. Do you want to take a direct shot at answering them? And also the following question/counter-argument?
Think about the consequences, what will actually happen down the line and how well your Values will actually be satisfied long-term, not just about what feels yummy in the moment.
Suppose I'm a sadist who derives a lot of pleasure/reward from torturing animals, but also my parents and everyone else in society taught me that torturing animals is wrong. According to your posts, this implies that my Values = "torturing animals has high value", and Goodness = "don't torture animals", and I shouldn't follow Goodness unless it actually lets me satisfy my values better long-term, in other words allows me to torture more animals in the long run. Am I understanding your ideas correctly?
(Edit: It looks like @Johannes C. Mayer made a similar point under one of your previous posts.)
Assuming I am understanding you correctly, this would be a controversial position to say the least, and counter to many people's intuitions or metaethical beliefs. I think metaethics is a hard problem, and I probab...
I think the confusion here is that "Goodness" means different things depending on whether you're a moral realist or anti-realist.
If you're a moral realist, Goodness is an objective quality that doesn't depend on your feelings/mental state. What is Good may or may not overlap with what you like/prefer/find yummy, but it doesn't have to.
If you're a moral anti-realist, either:
I think "Human Values" is a very poor phrase because:
Instead, people referring to "Human Values" obscure whether they are moral realists or anti-realists, which causes a lot of confusion when determining the implications and logical consistency of their views.
I agree that the distinction is important. However, my view is that a lot of what you call "goodness" is part of society's mechanism to ensure cooperate/cooperate. It helps other people get yummy stuff, not just you.
You can of course free yourself from that mechanism, and explicitly strategize how to get the most "yumminess" for yourself without ending up broke/addicted/imprisoned/etc. If the rest of society still follows "goodness", that leads to defect/cooperate, and indeed you end up better off. But there's a flaw in this plan.
Part of the point I intended to convey with the post is that society pushing for cooperate/cooperate is one way that Goodness-claims can go memetic, but there are multiple other ways memeticity can be achieved which are not so well aligned with the Values of Humans (either one's own values or others'). Thus this part:
Albert has relatively low innate empathy, and throws out all the Goodness stuff about following the rules and spirit of high-trust communities. Albert just generally hits the “defect” button whenever it’s convenient. Then Albert goes all pikachu surprise face when he’s excluded from high trust communities.
The message is definitely not to go hammering the defect button all the time, that's stupid. Yet somehow every time someone suggests that Goodness is maybe not all it's cracked up to be, lots of onlookers immediately round this to "you should go around hammering the defect button all the time!" (some with positive affect, some with negative) and man I really wish people could stop rounding that off and absorb the actual point.
Hmm. In all your examples, Albert goes against "goodness" and ends up with less "yumminess" as a result. But my point was about a different kind of situation: some hypothetical Albert goes against "goodness" and actually ends up with more "yumminess", but someone else ends up with less. What do you think about such situations?
Most people do not actually like screwing over other people
I think this is very culturally dependent. For example, wars of conquest were considered glorious in most places and times, and that's pretty much the ultimate form of screwing over other people. Or for another example, the first orphanages were built by early Christians, before that the orphans were usually disposed of. Or recall how common slavery and serfdom have been throughout history.
Basically my view is that human nature without indoctrination into "goodness" is quite nasty by default. Empathy is indeed a feeling we have, and we can feel it deeply (...sometimes). But we ended up with this feeling mainly due to indoctrination into "goodness" over generations. We wouldn't have nearly as much empathy if that indoctrination hadn't happened, and it probably wouldn't stay long term if that indoctrination went away.
But why do you think that people's feelings of "yumminess" track the reality of whether an action is cooperate/cooperate? I've explained that it hasn't been true throughout most of history: people have been able to feel "yummy" about very defecting actions. Maybe today the two coincide unusually well, but then that demands an explanation.
I think it's just not true. There are too many ways to defect and end up better off, and people are too good at rationalizing why it's ok for them specifically to take one of those ways. That's why we need an evolving mechanism of social indoctrination, "goodness", to make people choose the cooperative action even when it doesn't feel "yummy" to them in the moment.
But, like, the memetic egregore “Goodness” clearly does not track that in a robust generalizable way, any more than people’s feelings of yumminess do.
I feel you're overstating the "any more" part, or at least it doesn't match my experience. My feelings of "goodness" often track what would be good for other people, while my feelings of "yumminess" mostly track what would be good for me. Though of course there are exceptions to both.
So why are you attached to the whole egregore, rather than wanting to jettison the bulk of the egregore and focus directly on getting people to not defect?
This can be understood two ways. 1) A moral argument: "We shouldn't have so much extra stuff in the morality we're blasting in everyone's ears, it should focus more on the golden rule / unselfishness". That's fine, everyone can propose changes to morality, go for it. 2) "Everyone should stop listening to morality radio and follow their feels instead". Ok, but if nobody listens to the radio, by what mechanism do you get other people to not defect? Plenty of people are happy to defect by feels, I feel I've proved that sufficiently. Do you use police? Money? The radio was pretty useful for that actually, so I'm not with you on this.
Directionally correct advice for confused rationalists, but many of the specific claims are so imprecise or confused as to leave many people more confused than enlightened.
Goodness is not an egregore. A more sensible pointer would be something like "memetic values". Actually, different egregores push for different values, often contradictory ones.
What happens on a more mechanistic level:
- when memes want people to do stuff, they can do two somewhat different things: 1) try to manipulate some existing part of the implicit reward function, or 2) manipulate the world model
- often the path via 2) is easier; sometimes the hijack/rewrite is so blunt it's almost funny: for example, there is a certain set of memes claiming you will get to mate with a large number of virgin females with beautiful eyes if you serve the memeplex (the caveat is that you get this impressive boost to reproductive fitness only in the afterlife)
-- notice that in this case basically no concept of goodness is needed/invoked; the structure rests on innate genetic evolutionary values and a change in the world model
- another thing the memes can try to do is replace some S1 model/feeling with a meme-based S2 version, such as the yummines...
- Our Values are (roughly) the yumminess or yearning we feel when imagining something.
- Goodness is (roughly) whatever stuff the memes say one should value.
I do not think this matches my usage of the words "Human Values" or (especially) "Goodness" (nor the usage of the rare intelligent people whose ethical judgement I trust). The concept of yumminess/yearning is relevant; the concept of popular assertions about what one ought to yearn for is relevant. But I object to both of these rough definitions on the grounds that they miss many central aspects.
Concretely: consider a heroin addict, in a memetic environment that strongly disapproves of heroin usage. Because of their addiction, by far the greatest yumminess they feel when imagining things is more heroin (and things which may have brought their past-self feelings of yumminess no longer have that feeling, because it cannot compete). In your framework, getting more heroin is part of their Values, but not part of their culture's Goodness.
So far so good — but now compare to your example of a gay man in a memetic environment that strongly disapproves of gay romance and sex. As far as I can tell, your analytic framework treats these c...
I mostly agree with this, the part which feels off is
I’d like to say here “screw memetic egregores, follow the actual values of actual humans”
Humans already follow their actual Values[1], and always will, because their Values are the reason they do anything at all. They also construct narratives about themselves that involve Goodness, and sometimes deny the distinction between Goodness and Values altogether. This act of (self-)deception is itself motivated by the Values, at least instrumentally.
I do have a version of the "screw memetic egregores" attitude, which is: stop self-deceiving. Because deception distorts epistemics, and we cannot afford distorted epistemics right now. It's not necessarily correct advice for everyone, but I believe it's correct advice for everyone who is seriously trying to save the world, at least.
Another nuance is that, in addition to empathy and naive tit-for-tat, there is also acausal tit-for-tat. This further pushes the Value-recommended strategy in the direction of something Goodness-like (in certain respects), even though of course it doesn't coincide with the Goodness of any particular culture in any particular historical period.
As Steven Byrnes wr
This post was one of several examples of "rolling your own metaethics" that I had in mind when writing Please, Don't Roll Your Own Metaethics, because it's not just proposing or researching a new metaethical idea, but deploying it, in the sense of trying to spread it among people who the author does not expect to reflect carefully about the idea.
I don't get why this was curated, am I missing something? The piece basically says that what you want to do & what society expects you to do are 2 separate things (a topic which has been explored since time immemorial). Then it says that you should evaluate what you really want to do based on rational thinking & long-term planning (also something incredibly obvious). Is there anything more to it?
I thought the only novel bit was the passage about oxytocin, which is barely 10% of the article.
An awful lot of people, probably a majority of the population, sure do feel deep yearning to either inflict or receive pain, to take total control over another or give total control to another, to take or be taken by force, to abandon propriety and just be a total slut, to give or receive humiliation, etc.
This is rather tangential to the main thrust of the post, but a couple of people used a react to request a citation for this claim.
One noteworthy source is Aella's surveys on fetish popularity and tabooness. Here is an older one that gives the % of people...
Curated. While in my personal language, I would have treated Goodness as a synonym for Human Values[1], the distinction John is making here is correct, as is his advice on how to approach it. A very important point I have noticed is that when people ask (or anguish), "am I a good person?", this is asking according to the social-egregore sense of good: am I good in the way that will be approved of by others? Social, despite seeming like a morality thing. By extension, I wonder how much scrupulosity, as an anxiety disorder, is a social anxiety disorder.
I'd guess...
To some extent "goodness" is some ever-moving, negotiated set of norms of how one should behave.
I notice that when I use the word "good" (or invoke this concept using other words such as "should"), I don't use it to point to the existing norms, but as a bid for what I think these norms should be. This sometimes overlaps with the existing norms and sometimes not.
E.g. I might say that it's good to allow lots of different subcultures to co-exist. This is a vote for a norm where people who don't like my subculture leave me and my friends alone, in exchange for us leav...
I think the "your values" -framing itself already sneaks in assumptions which are false for a lot of minds/brains. Notably: most minds are not perfectly monolithic/unified things well-modeled as a coherent "you/I/me". And other minds are quite unified/coherent, but are in the unfortunate situation of running on a brain that also contains other (more or less adversarial) mind-like programs/wetware.
Example:
It is entirely possible to have strongly-held values such as "I reject so-and-so arbitrary/disgusting parts of the reward circuitry Evolution designed int...
Okay, but yumminess is not values. If we use an ML analogy, yumminess is a reward signal or some other training hyperparameter.
My personal operationalization of values is "the thing that helps you navigate trade-offs". You can have yummy feelings about saving the life of your son or about saving the lives of ten strangers, but we can't say what you value until you consider a situation where you need to choose between the two. And, conversely, if you have good feelings about both parties and reading books, your values direct which you choose.
Choice in the case of real, value-laden trade-offs is usually shaped by a significant amount of reflection about values, and the memetic ambience supplies known summaries of such reflection from the past.
I mostly don't seem to have anything new to say in response to this at the moment, but I figured mentioning my comment from a few weeks ago on hunches about origins of caring-for-others was in order, so there it is.
This post doesn't seem to provide reasons to have one's actions be determined by one's feelings of yumminess/yearning, or reasons to think that what one should do is in some sense ultimately specified/defined by one's feelings of yumminess/yearning, over e.g. what you call "Goodness"? I want to state an opposing position, admittedly also basically without argument: that it is right to have one's actions be determined by a whole mess of things together importantly including e.g. linguistic goodness-reasoning, object-level ethical principles stated in langua...
You mention how yumminess positively biases for novel things. I think it also negatively biases for habitual things in ways that make not being an idiot harder.
IE, a new relationship feels a lot yummier than the same relationship with the same person 10 years later - even though that relationship is much more valuable to me personally after those 10 years than at the beginning.
There's a dynamic where we don't feel yumminess for things we have and are confident that we will continue having, even when those things are very valuable to us.
Two thoughts.
I like the sharp distinction you draw between
“Our Values are (roughly) the yumminess or yearning…”
and
“Goodness is (roughly) whatever stuff the memes say one should value.”
but the post treats these as more separable than they actually are from the standpoint of how the brain acquires preferences.
You emphasize that
“we mostly don’t get to choose what triggers yumminess/yearning”
and that Goodness trying to overwrite that is “silly.” Yet a few paragraphs later you note that
“a nontrivial chunk of the memetic egregore Goodness needs to be complied with…”
before re...
I think there may be a fairly critical confusion here, but perhaps I have missed the key bit (or perhaps, by seeing this particular tree, have missed the forest the post is aiming at) that would address it. It seems that "human values" here are defined very much in terms of a specific human. However, "goodness" seems to be more about something larger -- society, the culture, humanity as a class, or even living things in general.
I suspect a lot of the potential error in treating the terms as near to one another disappears if you think of goodness for a specific person...
- Goodness is (roughly) whatever stuff the memes say one should value.
Looking at that first one, the second might seem kind of silly. After all, we mostly don’t get to choose what triggers yumminess or yearning.
A lot of goodness is about what you should do rather than what you should feel yearning for. There’s less conflict there. Even if you can’t change what you feel yearning for, you can change what you do.
Thank you for this article, I find the subject interesting.
In this article, and also in the comments, I am rather surprised by the use of the word 'value', so I wondered whether it was a language issue on my part.
However, the fact that the author wonders whether human values are good is something that fits with my initial interpretation of the word value, which is as follows: value in the deepest sense, what is most important in life.
And my initial interpretation seems to be in line with that of the Stanford Encyclopedia of Philosophy, for example: h...
feels really yummy
I believe this is also referred to as "positive affect". I really like and use the term exactly because, as you mention later, many people like to fantasize about and explore things that are normally associated with negative affect, so you can't point to any specific source of positive affect to refer to positive affect.
Pragmatically, I think people will know what you mean more often with "yumminess" than "positive affect", but I think "positive affect" might be the technically correct term.
I basically agree with the thrust of this post, namely that we need a distinction between our values and goodness. Otherwise, we would not be able to ask the question whether we want what is good, for example. Or to put it differently, there is a conceptual distinction between what is desired and what is desirable, whatever determines the latter.
Furthermore, I agree that it is rather common to see what is desirable as some kind of function of what we in fact value. For example, in economics it is rather common to identify welfare with preference-sati...
Think of the stuff that, when you imagine it, feels really yummy.
Also worth taking into consideration: things that feel anti-yummy. Fear/disgust/hate/etc are also signals about your values.
It is quite possible to hyperoptimize for that one particular yumminess, then burn out and later realize that one values other things too - as many a parent learns when the midlife crisis hits.
So true, this reminds me of Jung’s emphasis on “the shadow“—it’s important to acknowledge (and not discount) “values” you hold that are selfish or otherwise not ostensibly pro-social.
… your actual Values long term (which usually involves other people)
This is also important to note. We are often torn between selfish wants and the wants and needs of others. This can be...
A thought I get is that
"Goodness defines the boundaries within which we can optimise for our values/desires (suitably adjusted for memetics; see the discussion above)", and that "Goodness should evolve so as to allow the optimisation to occur as well as possible, for as many as possible"?
"We don’t really know what human values are"
But we might, or might begin to: I put the effort in over here :: Alignment ⑥ Values are an effort not a coin https://whyweshould.substack.com/p/alignment-values-are-an-effort-not
or in derived format: If all values are an effort, prices are a meeting of efforts https://whyweshould.substack.com/p/if-all-values-are-an-effort-prices
Even deontological positions are an effort; evolution cares about the effort, not the ideal forms.
One (over)optimistic hope I have is that something like a really good scale-free theory of intelligent agency would define a way to construct a notion of goodness that was actually aligned with the values of the members of a society to the best extent possible.
Is there a distinction to be made between different kinds of social imperatives?
e.g. I think a lot of people might feel the memetic egregore tells them they should try to look good more than it tells them to be humble, but they might still associate the latter with 'goodness' more, because when they are told to do it, it is in the context of morality or virtue.
I agree there is an important distinction, but I think the social memetic aspect of "Goodness" is not central. The central distinction is that we have access to yumminess directly, it is the only thing we "truly care about" in some sense, but as bounded and not even perfectly coherent agents, we're unable to roll our predictions forward over all possible action paths and maximize yumminess.
Instead we need to form a compact/abstracted representation of our values/yumminess to 1) make them legible to ourselves, 2) make plans to attain them, 3) communicate them, and 4) make them more coherent.
I update my moral values based on my ontology. I try to factor in epistemic uncertainty. I do not attribute goodness to human values, because I do not center my worldview around humans only. What an odd thing to do.
Ethics to me is an epistemic project. I read literature, poetry, the Upanishads, the Gita, the Gospels, Meditations, the sequences... More obscure things. I think and I update.
To me, the basic level of "goodness" is roughly "do no harm unprovoked", or "do not go against others' will unprovoked". (This level of goodness is actually much needed for humans to survive as well.)
I think some of the central models/advice in this post [1] are in an uncanny valley of being substantially correct but also deficient, in ways that are liable to lead some users of the models/advice to harm themselves. (In ways distinct from the ones addressed in the post under admonishments to "not be an idiot".)
In particular, I'm referring to the notion that
The Yumminess You Feel When Imagining Things Measures Your Values
I agree that "yumminess" is an important signal about one's values. And something like yumminess or built-in reward signals are wha...
Think you're talking about ethics here… and if so, why not call it that? "Human Values" (vs. ethics) is an unnecessary rejection that I don't really believe is moving things forward for work in AI, safety… and, drum roll… ethics.
What you've pointed out in this article is a central concern of metaethics. If you're alluding to the fact that this stuff is hard, then… great. If it's useful to how this fits in with our technologies, then please specify how, so we can drive a proper critique.
There is a temptation to simply define Goodness as Human Values, or vice versa.
Alas, we do not get to choose the definitions of commonly used words; our attempted definitions will simply be wrong. Unless we stick to mathematics, we will end up sneaking in intuitions which do not follow from our so-called definitions, and thereby mislead ourselves. People who claim that they use some standard word or phrase according to their own definition are, in nearly all cases outside of mathematics, wrong about their own usage patterns.[1]
If we want to know what words mean, we need to look at e.g. how they’re used and where the concepts come from and what mental pictures they summon. And when we look at those things for Goodness and Human Values… they don’t match. And I don’t mean that we shouldn’t pursue Human Values; I mean that the stuff people usually refer to as Goodness is a coherent thing which does not match the actual values of actual humans all that well.
The Yumminess You Feel When Imagining Things Measures Your Values
There’s this mental picture where a mind has some sort of goals inside it, stuff it wants, stuff it values, stuff which from-the-inside feels worth doing things for. In old-school AI we’d usually represent that stuff as a utility function, but we wanted some terminology for a more general kind of “values” which doesn’t commit so hard to the mathematical framework (and often-confused conceptual baggage outside the math) of utility functions. The phrase “human values” caught on.
We don’t really know what human values are, or what shape they are, or even whether they’re A Thing at all. We don’t have trivial introspective access to our own values; sometimes we think we value a thing a lot, but realize in hindsight that we value it only a little. But insofar as the mental picture is pointing to a real thing at all, it does tell us how to go look for our values within our own minds.
How do we go look for our own values?
Well, we’re looking for some sort of goals, stuff which our minds want or value, stuff which drives us, etc. What does that feel like from the inside? Think of the stuff that, when you imagine it, feels really yummy. It induces yearning and longing. It feels like you’d be more complete with it. That’s the feeling of stuff that you value a lot. Lesser versions of the same feeling come when imagining things you value less (but still positively).
Personally… I get that feeling of yumminess and yearning when I imagine having a principled mathematical framework for understanding the internal structures of minds, which actually works on e.g. image generators.[2] I also get that feeling of yumminess and yearning when I imagine a really great night of dancing, or particularly great sex, or physically fighting with friends, or my favorite immersive theater shows, or some of my favorite foods at specific restaurants. Sometimes I get a weaker version of the yumminess and yearning feeling when I imagine hanging out around a fire with friends, or just sitting out on my balcony alone at night and watching the city, or dealing with the sort of emergency which is important enough that I drop everything else from my mind and just focus
Those are my values. That’s what human values look like, and how to probe for yours.
“Goodness” Is A Memetic Egregore
I did not first learn about goodness by imagining things and checking how yummy they felt. I first learned about Goodness by my parents and teachers and religious figures and books and movies and so forth telling me that it’s Good to not steal things, Good to do unto others what I’d have them do unto me, Good to follow rules and authority figures, Good to clean up after myself, Good to share things with other kids, Good to not pick my nose, etc, etc.
In other words, I learned about Goodness mostly memetically, absorbing messages from others about what’s Good.
Some of those messages systematically follow from some general principles. Things like “don’t steal” are social rules which help build a high-trust society, making it easier for everyone to get what they want insofar as everyone else follows the rules. We want other people to follow those rules, so we teach other people the rules. Other aspects of Goodness, especially about cleanliness, seem to mostly follow humans’ purity instincts, and are memetically spread mainly by people with relatively-strong purity instincts in an attempt to get people with relatively-weaker purity instincts to be less gross (think nose picking). Still other aspects of Goodness seem rather suspiciously optimized for getting kids to be easier for their parents and teachers to manage - think following rules or respecting one’s elders. Then there are aspects of Goodness which seem to be largely political, driven by the usual political memetic forces.
The main unifying theme here is that Goodness is a memetic egregore; in practice, our shared concept of Goodness is comprised of whatever messages people spread about what other people should value.
… which sure is a different thing from what people do value, when they introspect on what feels yummy.
Aside: Loving Connection
One thing to flag at this point: you know the feeling of deep loving connection, like a parent-child bond or spousal bond or the feeling you get (to some degree) when deeply empathizing with someone or the feeling of loving connection to God or the universe which people sometimes get from religious experiences? I.e. oxytocin?
For many (most?) people, that feeling is a REALLY big chunk of their Values. It is the thing which feels yummiest, often by such a large margin that it overwhelms everything else. If that’s you, then it’s probably worth stopping to notice that there are other things you value. It is quite possible to hyperoptimize for that one particular yumminess, then burn out and later realize that one values other things too - as many a parent learns when the midlife crisis hits.
That feeling of deep loving connection is also a major component of the memetic egregore Goodness, to such an extent that people often say that Goodness just is that kind of love. Think of the songs or hippies or whoever saying that all the world’s problems would be solved if only we had more love. As with values, it is worth stopping to notice that loving connection is not the entirety of Goodness, as the term is typically used. The people saying that Goodness just is loving connection (or something along those lines) are making the same move as someone trying to define a word; in most cases their usage probably doesn’t even match their own definition on closer inspection.
It is true that deep loving connection is both an especially large chunk of Human Values and an especially large chunk of Goodness, and within that overlap Human Values and Goodness do match. But that’s not the entirety of either Human Values or Goodness, and losing track of the rest is a good way to shoot oneself in the foot eventually.
We Don’t Get To Choose Our Own Values (Mostly)
To summarize so far:
- Actual Humans' actual Values are, roughly, whatever triggers those feelings of yumminess or yearning when we introspect.
- Goodness is a memetic egregore: our shared concept of Goodness is comprised of whatever messages people spread about what other people should value.
Looking at that first one, the second might seem kind of silly. After all, we mostly don’t get to choose what triggers yumminess or yearning. There are some loopholes - e.g. sometimes we can learn to like things, or intentionally build new associations - but mostly the yumminess is not within conscious control. So it’s kind of silly for the memetic egregore to tell us what we should find yummy.
A central example: gay men mostly don’t seem to have much control over their attraction to men; that yumminess is not under their control. In many times and places the memetic egregore Goodness said that men shouldn’t be sexually attracted to men (those darn purity instincts!), which… usually isn’t all that effective at changing the underlying yumminess or yearning.
What does often happen, when the memetic egregore Goodness dictates something in conflict with actual Humans’ actual Values, is that the humans “tie themselves in knots” internally. The gay man’s attraction to men is still there, but maybe that attraction also triggers a feeling of shame or social anxiety or something. Or maybe the guy just hides his feelings, and then feels alone and stressed because he doesn’t feel safe being open with other people.
Sex, and especially BDSM, is a ripe area for this sort of thing. An awful lot of people, probably a majority of the population, sure do feel deep yearning to either inflict or receive pain, to take total control over another or give total control to another, to take or be taken by force, to abandon propriety and just be a total slut, to give or receive humiliation, etc. And man, the memetic egregore Goodness sure does not generally approve of those things. And then people tie themselves in knots, with the things that turn them on most also triggering anxiety or insecurity.
So What Do?
I’d like to say here “screw memetic egregores, follow the actual values of actual humans”, but then many people will be complete fucking idiots about it. So first let’s go over what not to do.
There’s a certain type of person… let’s call him Albert. Albert realizes that Goodness is a memetic egregore, and that the memetic egregore is not particularly well aligned with Albert’s own values. And so Albert throws out all that Goodness crap, and just queries his own feelings of yumminess in-the-moment when making decisions.
This goes badly in a few different ways.
Sometimes Albert has relatively low innate empathy, and throws out all the Goodness stuff about following the rules and spirit of high-trust communities. Albert just generally hits the “defect” button whenever it’s convenient. Then Albert goes all surprised-Pikachu-face when he’s excluded from high-trust communities.
Other times Albert is just bad at thinking far into the future, and jumps on whatever feels yummy in-the-moment without really thinking ahead. A few years down the line Albert is broke.
Or maybe Albert rejects memetic Goodness, ignores authority a little too much, and winds up unemployed or in prison. Or ignores purity instincts a little too much and winds up very sick.
Point is: there’s a Chesterton’s fence here. Don’t be an idiot. Goodness is not very well aligned with actual Humans’ actual Values, but it has been memetically selected for a long time, and you probably shouldn’t just jettison the whole thing without checking the pieces for usefulness. In particular, a nontrivial chunk of the memetic egregore Goodness needs to be complied with in order to satisfy your actual Values long term (which usually involves other people), even when it conflicts with your Values short term. Think about the consequences: what will actually happen down the line, and how well your Values will actually be satisfied long-term, not just what feels yummy in the moment.
… and then jettison the memetic egregore and pay attention to your and others' actual Values. Don’t make the opposite mistake of motivatedly hunting for clever reasons to keep the egregore around just because jettisoning it is scary.
You can quick-check this in individual cases by replacing the defined word with some made-up word wherever the person uses it - e.g. replace “Goodness” with “Bixness”.
… actually when I first try to imagine that I get a mild “ugh” because I’ve tried and failed to make such a thing before. But when I set that aside and actually imagine the end product, then I get the yummy feeling.