Thank you to Elizabeth for a great conversation which spurred me to write up this post.

Claim: moral/status/value judgements (like “we should blame X for Y”, “Z is Bad”, etc) like to sneak into epistemic models and masquerade as weight-bearing components of predictions.

Example: The Virtue Theory of Metabolism

The virtue theory of metabolism says that people who eat virtuous foods will be rewarded with a svelte physique, and those who eat sinful foods will be punished with fat. Obviously this isn’t meant to be a “real” theory; rather, it’s a tongue-in-cheek explanation of the way most laypeople actually think about diet and body weight.

Lest ye think this a strawman, let’s look at some predictions made by the virtue theory.

As a relatively unbiased first-pass test, we’ll walk through Haidt’s five moral foundations and ask what predictions each of these would make about weight loss when combined with the virtue theory. In particular, we’ll look for predictions about perceived healthiness which seem orthogonal (at least on the surface) to anything based on actual biochemistry.

  1. Care/harm: food made with care and love is healthier than food made with indifference. For instance, home cooking is less fattening than restaurant food, or factory-farmed chicken is more fattening than free-range chicken.
  2. Fairness: food made fairly is healthier than food made unfairly. For instance, “fair-trade” foods are less fattening.
  3. Loyalty/ingroup: food made by members of an ingroup is more healthy. For instance, local produce is less fattening.
  4. Authority/respect: food declared “good” by legitimate experts/authorities is more healthy. Fun fact for American readers: did you know the original food pyramid was created by the department of agriculture (as opposed to the department of health), and bears an uncanny resemblance to the distribution of American agricultural output?
  5. Sanctity/purity: impure or unnatural food is unhealthy. For instance, preservatives, artificial flavors, and GMO foods are all more fattening, whereas organic food is less fattening.

Maybe I’m cherry-picking or making a just-so story here, but… these sound like things which I think most people do believe, and they’re pretty central to the stereotypical picture of a “healthy diet”. That’s not to say that there isn’t also some legitimate biochemistry sprinkled into people’s food-beliefs, but even then it seems like the real parts are just whatever percolated out of Authorities. As a purely descriptive theory of how laypeople model metabolism, the virtue theory looks pretty darn strong.

Of course if pressed, people will come up with some biology-flavored explanation for why their food-health-heuristic makes sense, but the correlation with virtue instincts pretty strongly suggests that these explanations are post-hoc.

An Exercise

This post isn’t actually about the virtue theory of metabolism. It’s about a technique for noticing things like the virtue theory of metabolism in our own thinking. How can we detect moral/status/value judgements masquerading as components of predictive models?

The technique is simple: taboo concepts like “good”, “bad”, “should”, etc in one’s thinking. When you catch yourself thinking one “should” do X, or that X is “good”, stop and replace that with “I like X” or “X is useful for Y” or “X has consequence Z” or the like. Ideally, you keep an eye out for anything which feels suspiciously value-flavored (like “healthy” in the examples above) and taboo those words too.
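For written text, the noticing step can even be partly mechanized. Below is a toy sketch in Python - the word list and the sample journal entry are purely illustrative placeholders, not a canonical list - of a script that flags value-flavored words so you can taboo and unpack them by hand:

    import re

    # Illustrative, not exhaustive: words that often smuggle value judgements
    # into otherwise-factual statements.
    VALUE_WORDS = ["should", "good", "bad", "healthy", "unhealthy", "deserve"]

    def flag_value_words(text):
        """Return a taboo-prompt for each value-flavored word found in `text`."""
        prompts = []
        for word in VALUE_WORDS:
            for match in re.finditer(r"\b" + word + r"\b", text, re.IGNORECASE):
                # Include a little surrounding context so the prompt is readable.
                start = max(match.start() - 30, 0)
                end = match.end() + 30
                context = text[start:end].strip()
                prompts.append(f"Taboo '{word}' in: ...{context}...")
        return prompts

    if __name__ == "__main__":
        entry = "I should eat more vegetables because they are good for me."
        for prompt in flag_value_words(entry):
            print(prompt)

Running it on the sample entry prints two prompts, one for “should” and one for “good”; the actual unpacking is still up to you.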

So, for instance, I might notice myself eating vegetables because they are “good for me”. I notice that “good” appears in there, so I taboo “good”, and ask what exactly these vegetables are doing which is supposedly "good" for me. I may not know all the answers, and that’s fine - e.g. the answer may just be “my doctor says I’ll have fewer health problems if I eat lots of vegetables”. But at least I have flagged this as a thing-about-which-I-have-incomplete-knowledge-about-the-physical-gears-involved, rather than an atomic fact that “vegetables are good” in some vague undefined sense. 

In general, there is absolutely no morality or status whatsoever in any physical law of the universe, at any level of abstraction. If a subquery of the form “what is Good?” ever appears when trying to make a factual prediction, then something has gone wrong. Epistemics should contain exactly zero morality.

(Did you catch the “should” in the previous sentence? Here’s the full meaning with “should” taboo’d: if a moral statement is load-bearing in a model of the physical world, then something is factually incorrect in that model. At a bare minimum, even if the end prediction is right, the gears are definitely off.)

The usefulness of tabooing “should” is to flush out places where moral statements are quietly masquerading as facts about the world, or as gears in our world-models.

Years ago, when I first tried this exercise for a week, I found surprisingly many places where I was relying on vague morally-flavored “facts”, similar to “vegetables are good”. Food was one area, despite having already heard of the virtue theory of metabolism and already trying to avoid that mistake. The hooks of morality-infected epistemology ran deeper than I realized.

Politics-adjacent topics were, of course, another major area where the hooks ran deep.

Political Example: PS5 Sales

Matt Yglesias provides a prototypical example in What’s Wrong With The Media. He starts with an excerpt from a PlayStation 5 review:

The world is still reeling under the weight of the covid-19 pandemic. There are more Americans out of work right now than at any point in the country’s history, with no relief in sight. Our health care system is an inherently evil institution that forces people to ration life-saving medications like insulin and choose suicide over suffering with untreated mental illness.

As I’m writing this, it looks very likely that Joe Biden will be our next president. But it’s clear that the worst people aren’t going away just because a new old white man is sitting behind the Resolute desk—well, at least not this old white man. Our government is fundamentally broken in a way that necessitates radical change rather than incremental electoralism.

The harsh truth is that, for the reasons listed above and more, a lot of people simply won’t be able to buy a PlayStation 5, regardless of supply. Or if they can, concerns over increasing austerity in the United States and the growing threat of widespread political violence supersede any enthusiasm about the console’s SSD or how ray tracing makes reflections more realistic. That’s not to say you can’t be excited for those things—I certainly am, on some level—but there’s an irrefutable level of privilege attached to the ability to simply tune out the world as it burns around you.

The problem here, Yglesias argues, is that this analysis is bad - i.e. the predictions it makes are unlikely to be accurate:

What actually happened is that starting in March the household savings rate soared (people are taking fewer vacations and eating out less) and while it’s been declining from its peak as of September it was still unusually high.

[...]

The upshot of this is that no matter what you think about Biden or the American health care system, the fact is that the sales outlook for a new video game console system is very good.

Indeed, the PS5 sold out, although I don’t know whether Yglesias predicted that ahead of time.

So this is a pretty clear example of moral/status/value judgements masquerading as components of a predictive model. Abstracting away the details, the core of the original argument is “political situation is Bad -> people don’t have resources to buy PS5”. What does that look like if we taboo the value judgements? Well, the actual evidence cited is roughly:

  • Lots of people have COVID
  • American unemployment rate is at an all-time high
  • Health care system forces rationing of medication and doesn’t treat mental illness
  • Bad People aren’t going away (I’m not even sure how to Taboo this one or if there’d be anything left; it could mean any of a variety of Bad People or the author might not even have anyone in particular in mind)
  • Lots of people are concerned about austerity or political violence

Reading through that list and asking “do these actually make me think video game console sales will be down?”, the only one which stands out as directly relevant - not just mood affiliation with Bad things - is unemployment. High unemployment is a legitimate reason to expect slow console sales, but when you notice that that’s the only strong argument here, the whole thing seems a lot less weighty. (Amusing side note: the unemployment claim was false. Even at peak COVID, unemployment was far lower than during the Great Depression, and it had already fallen below the level of more recent recessions by the time the console review was published. But that’s not really fatal to the argument.)

By tabooing moral/status/value claims, we force ourselves to think about the actual gears of a prediction, not just mood-affiliate.

Now, in LessWrong circles we tend not to see really obvious examples like this one. We have implicit community norms against prediction-via-mood-affiliation, and many top authors already have a habit of tabooing “good” or “should”. (We’ll see an example of that shortly.) But I do expect that lots of people have a general-sense-of-social-Badness, with various political factors feeding into it, and quick intuitive predictions about economic performance (including console sales) downstream. There’s a vague idea that “the economy is Bad right now”, and therefore e.g. console sales will be slow or stock prices should be low. That’s the sort of thing we want to notice and taboo. Often, tabooing “Bad” in “the economy is Bad right now” will still leave useful predictors - unemployment is one example - but not always, and it’s worth checking.

Positive Example: Toxoplasma Memes

From The Toxoplasma of Rage:

Consider the war on terror. They say that every time the United States bombs Pakistan or Afghanistan or somewhere, all we’re doing is radicalizing the young people there and making more terrorists. Those terrorists then go on to kill Americans, which makes Americans get very angry and call for more bombing of Pakistan and Afghanistan.

Taken as a meme, it’s a single parasite with two hosts and two forms. In an Afghan host, it appears in a form called ‘jihad’, and hijacks its host into killing himself in order to spread it to its second, American host. In the American host it morphs into a form called ‘the war on terror’, and it hijacks the Americans into giving their own lives (and tax dollars) to spread it back to its Afghan host in the form of bombs.

From the human point of view, jihad and the War on Terror are opposing forces. From the memetic point of view, they’re as complementary as caterpillars and butterflies. Instead of judging, we just note that somehow we accidentally created a replicator, and replicators are going to replicate until something makes them stop.

Note that last sentence: “Instead of judging, we just note that somehow we accidentally created a replicator…”. This is exactly the sort of analysis which is unlikely to happen without somebody tabooing moral/status/value judgements up-front.

If we go in looking for someone to blame, then we naturally end up modelling jihadists as Evil, or interventionist foreign policymakers as Evil, or …, and that feels like it has enough predictive power to explain what’s going on. Jihadists are Evil, therefore they do Bad things like killing Americans, and then the Good Guys kill the Bad Guys - that’s exactly what Good boils down to in the movies, after all. It feels like this model explains the main facts, and there’s not much mystery left - no reason to go asking about memetics or whatever.

Interesting and useful insights about memetics are more likely if we first taboo all that. Your enemies are not innately Evil, but even if they were it would be useful to taboo that fact, unpack it, and ask how such a thing came to be. It’s not that “Bad Person does Bad Thing” always makes inaccurate predictions, it’s that humans have built-in intuitions which push us to use that model regardless of whether it’s accurate, for reasons more related to tribal signalling than to heuristic predictive power.

Example: Copenhagen Ethics

The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem, you can be blamed for it. In particular, you won’t be blamed for a problem you ignore, but you can be blamed for benefitting from a problem even if you make the problem better. This is not intended as a model for how ethics “should” work, but rather for how most people think about ethics by default.

Tabooing moral/status/value judgements tends to make Copenhagen ethics much more obviously silly. Here’s an example from the original post:

In 2010, New York randomly chose homeless applicants to participate in its Homebase program, and tracked those who were not allowed into the program as a control group. The program was helping as many people as it could; the only change was explicitly labeling a number of people it wasn’t helping as a “control group”. The response?

“They should immediately stop this experiment,” said the Manhattan borough president, Scott M. Stringer. “The city shouldn’t be making guinea pigs out of its most vulnerable.”

Let’s taboo the “should”s in Mr Stringer’s statement. We'll use the “We should X” -> “X has consequence Z” pattern: replace “they should immediately stop this experiment” with “immediately stopping this experiment would <????> for the homeless”. What goes in the <????>?

Feel free to think about it for a moment.

My answer: nothing. Nothing goes in that <????>. Stopping the experiment would not benefit any homeless people in any way whatsoever. When we try to taboo “should”, that becomes much more obvious, because we’re forced to ask how ending the experiment would benefit any homeless people.

Takeaway

Morality has no place in epistemics. If moral statements are bearing weight in world-models, then at a bare minimum the gears are wrong. Unfortunately, I have found a lot of morality-disguised-as-fact hiding within my own world-models. I expect this is the case for most other people as well, especially in politically-adjacent areas.

A useful exercise for rooting out some of these hidden-morality hooks is to taboo moral/status/value-flavored concepts like “good”, “bad”, “should”, etc in one’s thinking. Whenever you notice yourself using these concepts, dissolve them - replace them with “I want/like X” or “X is useful for Y” or “X has consequence Z”.

Three caveats to end on.

First caveat: as with the general technique of tabooing words, I don’t use this as an all-the-time exercise. It’s useful now and then to notice weak points in your world-models or habits, and it’s especially useful to try it at least once - I got most of the value out of the first week-long experiment. But it takes a fair bit of effort to maintain, and one week-long try was enough to at least install the mental move.

Second caveat: a moral statement bearing weight in a world-model means the model is wrong, but that does not mean that the model will improve if the moral components are naively thrown out. You do need to actually figure out what work those moral statements are doing (if any) in order to replace them. Bear in mind that morality is an area subject to a lot of cultural selection pressure, and those moral components may be doing something non-obviously useful.

Final caveat: before trying this exercise, I recommend you already have the skill of not giving up on morality altogether just because moral realism is out the window. Just because morality is not doing any epistemic work does not mean it’s not doing any instrumental work.

Note: I am actively looking for examples to use in a shorter exercise, in order to teach this technique. Ideally, I'd like examples where most people - regardless of political tribe - make a prediction which is obviously wrong/irrelevant after tabooing a value-loaded component. If you have examples, please leave them in the comments. I'd also be interested to hear which examples did/didn't click for people.

28 comments

A previous example I ran into was using the phrase "X is an unhealthy relationship dynamic." I found myself saying it in a few different places without actually knowing what I meant by "unhealthy." I think the word unhealthy was being slightly more descriptive than "bad", but not by much.

Unfortunately I can't remember the particular examples that I was originally noticing. Some of them might have related to power differentials, or perceived power differentials. (Which I think I now have a more gearsy model of why you should be wary of them, although I think I may not have ever really gotten clear on this point.)

Tbh switching from using "unhealthy" to "bad" can help because it removes any trace of sophistication, thereby making this kind of usage less rewarding.

Are you saying it's better to say "bad" than "unhealthy?" (because it removes the illusion of meaning?)

Not because it removes the illusion of meaning, but because it makes you sound less cool. I'm thinking of conversations here, in solitary thinking it wouldn't make much difference imo.

or, you could use unhealthy only to mean things which are likely to decrease your health (mental health included)

That's what I meant ofc.

I and some others on the Lightcone team have continued to use this exercise from time to time. Jacob Lagerros got really into it, and would ask us to taboo 'should' whenever someone made a vague claim about what we should do. In all honesty this was pretty annoying. :P

But, it highlighted another use of tabooing 'should', which is checking what assumptions are shared between people. (I.e. John's post mostly seems to be addressing "single player mode", where you notice your own shoulds and what ignorance that conceals. But sometimes, Alice totally understands what underlies her Should Statements, but Bob doesn't know Alice's model, and propagating Should Statements can result in Lost Purposes or confused, suboptimal work.)

(FYI: at some point one of us built a Slack integration that replies at you whenever you use the word 'should', with "what do you mean by should?" This was a joke that lasted about a week, but I'm listing it here to convey that the post at least made it into our longish-term culture.)
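A minimal sketch of that kind of bot (not the actual integration; just an illustration using Slack's Bolt framework for Python, with placeholder credentials) might look like this:

    import os
    import re
    from slack_bolt import App

    # Placeholder credentials: set SLACK_BOT_TOKEN and SLACK_SIGNING_SECRET
    # in the environment before running.
    app = App(
        token=os.environ["SLACK_BOT_TOKEN"],
        signing_secret=os.environ["SLACK_SIGNING_SECRET"],
    )

    # Reply in-thread whenever a message contains the word "should".
    @app.message(re.compile(r"\bshould\b", re.IGNORECASE))
    def ask_about_should(message, say):
        say("What do you mean by 'should'?", thread_ts=message["ts"])

    if __name__ == "__main__":
        app.start(port=3000)

In practice you'd probably also want to exempt some channels or add a cooldown, or it gets annoying fast (as noted above).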

I don't think "Taboo-ing 'should'" isn't always the right thing to do, but I think it's a valuable exercise to do for a week or so, to flesh out the mental muscles.

My gut reaction to this post is that it's importantly wrong. This is just my babbled response, and I don't have time to engage in a back-and-forth. Hope you find it valuable though!

My issue is with the idea that any of your examples have successfully tabooed "should."

In fact, what's happening here is that the idea that we "should taboo 'should'" is being used to advocate for a different policy conclusion.

Let's use Toxoplasma Memes as an example. Well, just for starters, framing Jihad vs. War on Terror as "toxoplasma" works by choosing a concept handle that evokes a disgust reaction to effect an ethical reframing of the issue. Both Jihad/War on Terror theorists and "Toxoplasma" theorists have causal models that are inseparable from their ethical models. To deny that is not to accomplish a cleaner separation between epistemology and ethics, it's to disguise reality to give cover to one's own ethical/epistemic combination. You can do it, sure, it's the oldest trick in the book. If I were to say you shouldn't, "because it disguises the truth," I'm just being a hypocrite.

Likewise, the fact that "tabooing 'should'" makes the Copenhagen interpretation of ethics seem silly also illustrates how "tabooing 'should'" is a moral as much as an epistemic move. The point is to make an idea and its advocates look silly. It's to manipulate their social status and tickle people's jimmies until they agree with you. It might not work, but that's the intent.

Yglesias simply misrepresented the claim made by at least the snippet of the PS5 review that you cite.

The review said:

a lot of people simply won’t be able to buy a PlayStation 5, regardless of supply. Or if they can, concerns over increasing austerity in the United States and the growing threat of widespread political violence supersede any enthusiasm

Yglesias said:

the sales outlook for a new video game console system is very good.

"Regardless of supply" is a colloquialism. I think a more charitable reading of this statement, which obviously isn't meant primarily as a projection of console sales, would be that there will be people who want a PS5 who can't afford it, and that we have bigger issues in the world than being excited about the PS5. This is obviously true.

Yglesias's rhetoric isn't just meant to refocus the discussion on the sales outlook for the PS5. It's to smack down the status of the author of the PS5 review and those in the same camp, as thoughtless nincompoops who don't understand reality and therefore aren't qualified to be moral authorities either.

Now, that's all fine, because it's really all there is to do once you're getting into the realm of policy. If your goals and values aren't axiomatic, but if instead you're debating some conjunction of epistemics, values, and goals, as we usually are, then it might be a great rhetorical move to pretend like you're just having a "facts and logic and predictive abilities" debate, but that's rarely true.

Like, if Yglesias really was interested in that, then why would he ever, ever pick out such an obviously stupid piece of writing to address in the first place? He has the greatest thinkers and writers available to him to engage with! Why pick out the dumbest thing anybody ever wrote about the PS5?

Well, you know why already. There is a contest over facts/status/virtue/goals going on known as the "Culture Wars," and he's participating in it while pretending like he's not.

And maybe this is a culture war that needs to be fought. Maybe we really are ruled by the dumbest things anybody ever wrote on Medium and social media, and it's time to change that. And maybe there's value in pretending like you're performing a purely technical analysis when considering economics or the war on terror or how to address homelessness. I'm not a subjectivist. I do think that, although it may be impossible to prove what the truth is in some perfectly self-satisfying fashion, there's such a thing as being "more wrong" and "less wrong," and that it's virtuous to strive for the latter.

But I think that here, in our community of practice where we strive for the latter, we should strive to be skeptical about claims to objectivity. It's not that it's impossible. It's that it's a great rhetorical move to advance a subjective position, and how would you tell the difference?

Well, just for starters, framing Jihad vs. War on Terror as "toxoplasma" works by choosing a concept handle that evokes a disgust reaction to effect an ethical reframing of the issue.

This part is just objectively wrong. I don't have a disgust reaction to toxoplasma (possibly because I have no particular visual image associated with it). Do you?

The point is to make an idea and its advocates look silly. It's to manipulate their social status and tickle people's jimmies until they agree with you. It might not work, but that's the intent.

This also seems objectively wrong. I mean, I first did this exercise years ago as a way to make my own beliefs more accurate. Manipulating other people's social status presumably wasn't the intent, because I didn't share it with anyone for years. I didn't care about making other people look silly; I generally kept the exercise and its results to myself. I wrote this post figuring that other people would find it helpful; the event which prodded me to write it up now was someone else finding the exercise useful for themselves.

Also, if I were trying to make other people's ideas look silly, tabooing "should" looks like a wildly suboptimal way to do that, since so many people have implicitly-moral-realist worldviews. This is a technique which would only seem sensible in the first place to people with unusually firm philosophical foundations. I expect many such people will want to use the technique themselves, for their own benefit, entirely independent of wanting to make other people look silly.

Like, if Yglesias really was interested in that, then why would he ever, ever pick out such an obviously stupid piece of writing to address in the first place? He has the greatest thinkers and writers available to him to engage with! Why pick out the dumbest thing anybody ever wrote about the PS5?

Because he was specifically trying to talk about problems with modern journalism. The title of the piece was "What's Wrong With The Media".

Now, you could certainly interpret that as Yglesias administering a status slap-down to some journalist(s), but if his goal was a status slap-down there are far more effective ways to do that. He could just write an actual hit-piece.

More generally, in all of these examples: you are pointing out that an argument has status/ethics/politics impact, and claiming that the status/ethics/politics impact must therefore have motivated the argument. That's simply not true. There are non-status/ethics/politics reasons to think about things which have status/ethics/politics implications.

Being skeptical about claims to objectivity is an epistemically-sensible policy. (Matt Yglesias in particular has probably earned an extra helping of such skepticism.) But I think your prior on "argument was chosen for its status/ethics/politics implications" is far too high, and you are making objectively inaccurate predictions as a result (including the first two quotes above).

For the record I do get some disgust reaction out of the toxoplasma thing, and think it was at least somewhat intentional (and I think I may also endorse it? With significant caveats)

Regardless of the details of why he picked the piece, it's pretty clear from a clear-headed reading of the review that Yglesias is attributing to the reviewer a claim they did not make. I think the claims "PS5 will have outstanding sales" and "there are many people who won't buy a PS5 due to some impact of the pandemic" can both be true, and likely are.

Sorry, I know that this runs the risk of being an exchange of essays, making it hard to respond to each point.

In Toxoplasma of Rage, the part just prior to the reference to the war on terror goes like this:

Toxoplasma is a neat little parasite that is implicated in a couple of human diseases including schizophrenia. Its life cycle goes like this: it starts in a cat. The cat poops it out. The poop and the toxoplasma get in the water supply, where they are consumed by some other animal, often a rat. The toxoplasma morphs into a rat-compatible form and starts reproducing. Once it has strength in numbers, it hijacks the rat’s brain, convincing the rat to hang out conspicuously in areas where cats can eat it. After a cat eats the rat, the toxoplasma morphs back into its cat compatible form and reproduces some more. Finally, it gets pooped back out by the cat, completing the cycle.

[Lion King image] It’s the ciiiiiircle of life!

What would it mean for a meme to have a life cycle as complicated as toxoplasma?

Consider the war on terror.

Now, maybe Scott's description of Toxoplasma doesn't evoke the same visceral disgust reaction you might have if you were scooping out the litter box of a cat that you knew was infected with toxoplasma.

But it seems clear that Scott's conscious intent here was to evoke that feeling. The point is not to use toxoplasma as intellectual scaffolding to explain the cause-and-effect model of violence-begets-violence. 

Instead, it was to link the violence-begets-violence model with disgusting imagery. Read that article, and when somebody talks about the War on Terror, if your previous association was with the proud image of soldiers going off to a noble battle with a determined enemy, now it's with cat shit.

Likewise, read enough "problem with media" articles that selectively reference the silliest media pieces -- classic cherry picking on a conceptual level -- then slowly but surely, when you think of "media" you think of the worst stuff out there, rather than the average or the best. Is Matt Yglesias looking for the silliest takes that Scott Alexander ever wrote and excoriating them? No, because Scott's on his team in the blogosphere.

Now, you could certainly interpret that as Yglesias administering a status slap-down to some journalist(s), but if his goal was a status slap-down there are far more effective ways to do that. He could just write an actual hit-piece.

I disagree with this. In fact, his methods are an extremely effective way of administering a status slap-down. If he wrote an actual hit-piece, that's what his readership would interpret it as: a hit-piece. But when he writes this article, his readership interprets it as a virtuous piece advocating good epistemic standards. It's a Trojan Horse.

There are non-status/ethics/politics reasons to think about things which have status/ethics/politics implications.

This is true. But what I'm saying is that advocacy of epistemic accuracy as opposed to virtue signaling is in many contexts primarily motivated by status/ethics/politics implications.

And that is fine. There is nothing inherently wrong with an argument about the nature of reality also having status/ethics/politics implications. It's even fine to pretend, for vague "this makes us think better" reasons, that for the sake of a discussion those status/ethics/politics implications are irrelevant.

But those implications are always present. They've just been temporarily pushed to the side for a moment. Often, they come swooping back in, right after the "epistemic discussion" is concluded. That's the nature of a policy debate.

I apologize if it's a bit tangential, but I want to point out that "should" statements are a common source of distortions (in cognitive behavioral therapy). It is often good to clarify the meaning of "should" in that context to see if it's valid (is it a law of the universe? is it a human law? ...). It often just means "I would prefer if", which doesn't bear as much weight...

David Burns explains this clearly and I was struck when he pointed out the linguistic heritage of "should" and how it connects to "scold". Here's one podcast episode on the topic (there are more, easy to find).

I wonder if some of the other distortions (such as "labelling" to sneak a morality judgement) could be subject to a similar treatment. For example, Scott Alexander talks about debates on labelling something a "disease".

Nice parallel to CBT!

On a meta level, I really like comments which take the idea from a post and show a parallel idea from some other area.

This strikes me as a core application of rationality. Learning to notice implicit "should"s and tabooing them. The example set is great.

Some of the richness is in the comments. Raemon's in particular highlights an element that strikes me as missing: The point is to notice the feeling of judging part of the territory as inherently good or bad, as opposed to recognizing the judgment as about your assessment of how you and/or others relate to the territory.

But it's an awful lot to ask of a rationality technique to cover all cases related to its domain.

If all that people did as a result of reading this post was notice the word "should" in their thoughts and start tabooing it, that would be a huge boon IMO.

Curated. 

I actually think this post could be improved a lot – the examples feel a bit weird. I didn't actually understand the intent of the PS5 example, and I think I agree with another commenter that something about the Toxoplasma example feels off. 

Nonetheless, this concept has been pretty important to my thinking in recent weeks. I think being able to taboo moral language and focus on explicit gears is a key skill.

Regarding the toxoplasma example: it sounds like some people have different mental images associated with toxoplasmas than I do. For me, it's "that virus that hijacks rat brains to remove their fear response and be attracted to cat-smell". The most interesting thing about it, in my head, is that study which found that it does something similar in humans, and in fact a large chunk of cat-owners have evidence of toxoplasma hijack in their brains. That makes it a remarkably wonderful analogy for a meme.

It sounds like some other people associate it mainly with cat poop (and therefore the main reaction is "gross!").

Anyway, I agree the post could be improved a lot, and I'd really like to find more/better examples. The main difficulty is finding examples which aren't highly specific to one person.

Protozoa, not virus.

My reaction to this post is something like "yes, this is a good practice", but I've done it, and it pushed me out the other side to believe I can say nothing at all without saying some kind of "should", because if you taboo "should" enough you run into grounding problems.

Cf. no free lunch in value learning

This is only to add nuance to your point, though, and I think the practice is worthwhile, because until you've done it you likely have a lot of big "should"s in your beliefs gumming up the works. I just think it's worth pointing out that the category of motivated reasons can't be made to disappear from thought without giving up thinking altogether, even if they can be examined and significantly reduced, and even if the right starting advice is just to try to remove them all.

Generally, I let things ground out in "I want X". There's value in further unpacking the different flavors of "wanting" things, but that's orthogonal to what this exercise is meant to achieve.

replace “they should immediately stop this experiment” with “immediately stopping this experiment would <????> for the homeless”. What goes in the <????>?

Immediately stopping this experiment would make me think less about the homeless. :P

(English is not my first language; there is probably a way to preserve the "for the homeless" ending.)

If you don't do the experiment, I can imagine that all homeless are somehow taken care of. If you keep doing it, it makes it obvious that many of them are ignored.

For me, I got the most use out of tabooing 'should' when it comes to guilt and motivation, as in "I should be better at this", etc. For lots on this topic, see Nate Soares' Replacing Guilt sequence (https://replacingguilt.com/).

It's a great approach, to avoid moral-carrying connotations unless explicitly talking about morals. I've been practicing it for some time, and it makes things clearer. And when explicitly talking about moral judgments, it's good to specify the moral framework first. 

A (much) harder exercise is to taboo the words like "truth", "exist", "real", "fact", and replace them with something like "accurate model", "useful heuristic", "commonly accepted shortcut", etc.

Separating morality and epistemics is not possible because the universe contains agents who use morality to influence epistemological claims, and the speaker is one of them. I wrote up a response to this post with a precise account of what "should" is and how it should be used. Precise definitions also solve these problems. Looking back today, I think my post introduces new problems of its own. I don't know when I will finish it. For now, in case I never do finish it, I should mention the best parts here. I don't believe or endorse all these claims, but mixed up with the false claims are important ones that must be passed along.


I used to play with tabooing "should" to avoid tying myself to unrealistic hopes and expectations of moral realism, but found a better way.

A previous post was written under the assumption that morality has nothing to do with epistemics. That is, in fact, broadly wrong. Our desires must impact our predictions. You might want to read The Parable of Predict-o-matic to learn how, but I'll summarize the principle: There are many things that become true as a result of being believed to be true. The assignment of names, declarations of war, predictions of the changes in value of a currency, beliefs about whether a plan will be carried out. (Aside from that last one, most of these are for collective epistemology, rather than individual, but language is for collective reasoning and we're talking about language so!!)

Many of the parts of reality in which believing agents are involved are being constructed by those agents' beliefs. If our selection of beliefs about the future does not answer to our values, we cede much of our agency over the future; we construct a wretched, arbitrary reality where the values of currencies, the relationships between factions, and the meaning of our words are all mostly determined by random forces. We see ourselves as being caught in a threshing tide that is beyond our control, but the tide actually consists of us; it was always in our control, and when people miss that, tragedy occurs.

We need "should" to tame the tide. "Should" is the modality that associates belief and desire.
"X Should Be So" means "X Will Be So, Because We Want it to Be."

We can call this account of 'should' the realist should. We can call the old, naive account of standard English speakers the idealist should. We can be prescriptivists about the meaning of should because the idealist should is problematic and useless.

The idealist should is often spoken dishonestly. It does not care whether the things it claims can actually happen. It rails against reality. It fatally underestimates the strength of its opponent. It rallies around plans that will not work.

The realist should is more or less logically interchangeable with an enlightened optimist's "will". Anything that you accept will happen, should happen, and anything that should happen, will happen. It has a different connotation to "will", it is saying "think about how our desires influence this", but it can't be more than a connotative difference, if your should and will diverge, the dialog spirals into the idealism of fixating on plans that you know will not work.

 

The realist should is used in a few ways that an idealist will find unfamiliar. Exercise: Let's use the realist "should" in the proper way until you get used to it.

Idealist: Nobody should ever have to die of cancer.

Realist: About a billion people should die of cancer before the cure is found.

 

Idealist: You should have known everything we knew, and read the same books we had read. Instead you failed and are bad.

Realist: You should have acted under bounded rationality and read whatever seemed important to you at the time given what you knew. We must work with the sad reality that different people will have read different books.

 

The realist "should" has an implicit parameter. A plan.

What "should" be, is part of an implied plan. Different plans use "should" differently. When a person says "should", you will now be consciously aware that they have some specific plan in mind, just as you realize that when someone says "they" they have a particular person in mind, or when a person says "the thing" they have assumed that there is only one salient instance of a thing and you'll know what they mean.
It's perfectly normal for words to have somewhat complicated implicit bindings like this, it becomes unproblematic as soon as you learn to consciously interrogate the bindings, to question, "Who's 'they'?", "There is more than one instance of that class.", "'Should', under which plan? Tell me about your aims and your assumptions about how the world works."

With these adjustments we will have no further difficulties.

This morning I came across a paper reported on a 'science' website. The paper was on heteronormativity and was laced with value judgements on the moral perils & injustices that the authors seem to believe flow from refusing to recognise biological sex (not just gender) as a 'spectrum'. It read like a high school debate presentation. They need to read your post.

Making factory farmed eggs available to customers isn't objectively ethical. An appeal to choice only works if the choice decreases the amount of suffering that there is in the world.

Since that appears to not be the case, you need to find another angle. Maybe factory farmed eggs decrease the amount of suffering in the world because egg eaters can eat more eggs for less money? Even if that were the case, that's only a small amount of suffering removed from the world, and it is outweighed by the suffering of factory farmed chickens. Moreover, egg eaters can still explore other foods. So you'd need to show the data.

Basing morality off choice in and of itself isn't viable. Just because something causes a decision state does not magically make that thing moral or immoral. You also run into problems when basing morality off of choice like the libertarians who want to destroy the government because regulations technically cause more harm today than ever before. Basing morality off of choice is a moralistic fallacy and frankly dualistic.

Taking choice into account does matter though since choice affects the amount of suffering there is in the world. For example, you can choose to avoid causing agony. But that doesn't make the choice good in and of itself, only good instrumentally so, since it has caused a decrease in the amount of suffering.

The more decisions available to members, the more ethical that society is.

When I go to the grocery store, a decision is available to me to purchase factory farmed eggs or pasture-raised eggs. Let's consider this from exclusively the issue of animal suffering.

  • If our society started with only factory farmed eggs, then added pasture-raised eggs, it has given me the opportunity to make another decision and has also become more ethical.
  • By contrast, if our society started with only pasture-raised eggs, then added factory farmed eggs, it has given me the opportunity to make another decision and has also become less ethical.

In general, it doesn't seem like giving people more decisions makes a society inherently better or worse. Maybe as a rule of thumb, more options is better? But mostly it just seems irrelevant to the ethical level of a society.

Note: I don't have time to continue this discussion further, but I thought you might find this useful as you continue to explore this line of thinking.

I kind of get behind the "self-grown produce is less fattening" - you put in work, it takes out fat. Work, in many people's minds, is "good". Also, try chewing through a free-range hen, not a chicken. Builds muscles, hen.