Consider two claims:

  • “broccoli is good for you”
  • “broccoli decreases cholesterol”

Even though the former might be considered a lossy summary of the latter, the two feel very different; they pull very different levers in my brain. “Broccoli decreases cholesterol” pulls levers like:

  • Is the claim even true? Does broccoli really decrease cholesterol? Would I expect to hear people claim this even in worlds where it is false?
  • How much does broccoli decrease cholesterol? Is it a tiny effect size? Also how much broccoli?
  • Where did this information come from? Was it perhaps among the endless stream of bullshit nutrition studies?
  • Relative to what baselines? Is broccoli substituted for something, or added? What’s the population?
  • Do I want lower cholesterol? Do I want it more than I want to eat food tastier than broccoli?

(Probably other people will not have these exact same levers, but I expect most people instinctively respond to “eating broccoli decreases cholesterol” with some kind of guess about where that information came from and how trustworthy it is.)

The other version, “Eating broccoli is good for you”, not only doesn’t pull those levers, it feels like… the sentence is making a bid to actively suppress those levers? Like, those levers are all part of my value-judgement machinery, and the sentence “broccoli is good for you” is making a bid to circumvent that machinery entirely and just write a result into my value-cache.

This is a “bid to defer on a value judgement”: the sentence is a bid to directly write a value-judgement into cache, without going through my own internal value-judgement machinery. If I accept that bid, then I’m effectively deferring to the speaker’s value-judgement.

The Memetic Parasite Model

If broccoli is good for you (and presumably for most other humans, in general), then sharing that information is a friendly, helpful, prosocial action.

More generally: if a value judgement is correct, then passing it along is typically a friendly, helpful, prosocial action. After all, it will help other people to make more “good” decisions if they have more correct information cached about what’s “good”/”bad”.

But this gives rise to a potential parasitic meme dynamic:

  • Alice, at some point, hears that broccoli is good for you. She caches that value judgement.
  • When talking to Bob, Alice notices that it would be helpful and prosocial for her to tell Bob that broccoli is good for you. After all, according to her cached value judgement, broccoli is in fact good for you, so it would be prosocial to pass that information along.
  • Now Bob hears from Alice that broccoli is good for you and, unless he actively disbelieves what he’s hearing, caches that value judgement.
  • … and that memetic loop can run just fine regardless of what benefits broccoli does or does not have.
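The loop above can be sketched as a toy simulation (all parameters here are made up for illustration). The point is structural: the claim’s actual truth value never appears in the update rule, so the spread looks identical whether or not broccoli helps.

```python
import random

def simulate(claim_is_true, n_agents=1000, meetings=5000, p_accept=0.9, seed=0):
    """Toy model of the parasitic loop: a cached value judgement spreads
    through pairwise conversations. Note that claim_is_true is deliberately
    never consulted -- the loop runs the same way regardless."""
    rng = random.Random(seed)
    believes = [False] * n_agents
    believes[0] = True  # Alice starts with the cached judgement
    for _ in range(meetings):
        speaker = rng.randrange(n_agents)
        listener = rng.randrange(n_agents)
        # A believer prosocially passes the judgement along; the listener
        # caches it unless they actively disbelieve (probability 1 - p_accept).
        if believes[speaker] and rng.random() < p_accept:
            believes[listener] = True
    return sum(believes) / n_agents

# Same seed, opposite ground truth: identical spread.
assert simulate(claim_is_true=True) == simulate(claim_is_true=False)
```

(A less barebones model might make `p_accept` depend on evidence about the claim; the parasitic regime is exactly the one where it doesn’t.)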

Note one difference from more general information cascades: information has to be salient for some reason to be passed along. Value judgements tend to be inherently salient; they lend themselves directly to use, since they directly say what would be good or bad.

Another difference from more general information cascades: value judgements naturally lend themselves to black-boxing. They don’t need to interact much with gears, because they circumvent the gearsy machinery of value judgement.

Now, at first glance this model seems rather maxentropic; one could claim that anything at all is “good” or “bad”, and the same dynamic will propagate it, so at first glance the model makes no predictions about which value judgements we will or won’t see propagating memetically. But there are factors which favor the memeticity of some such claims over others.

  • Value judgements producing outcomes which are actually “good” by their users’ lights will still probably spread more (all else equal), so there is nonzero pressure in favor of “true” judgements here.
  • … but also signalling is a big factor. Broccoli, notably, is not the tastiest food. It’s no ice cream. And that means that eating broccoli is a costly signal that Alice believes it’s good for some other reason, which provides stronger evidence to Bob when he’s unconsciously considering whether to cache the received value-judgement.
  • Generalizing that signalling pattern: things which are bad in some obvious way have a systematic memetic advantage in being labelled “good for nonobvious reasons”.

To What Extent Is Value-Deferral Unavoidable?

Epistemic deferral is notoriously unavoidable to a large extent; humans just don’t have the capacity to fact-check everything, or even a very large fraction of the information we receive from other humans. (Though that doesn’t mean there are no gains to be had - e.g. Inadequate Equilibria is largely about how to defer better.) To what extent does the unavoidability of epistemic deferral carry over to value deferral?

First, a lot of value deferral is “built in” to the environment, in a way I probably won’t notice. For instance, products I use every day have safety standards, and I don’t have the time to study them all, so I’m de facto deferring to the value judgements embodied in those safety standards. I’ll call that sort of value deferral “implicit”, in contrast to value deferral involving my explicit verbal attention (like the broccoli example).

Claim: Explicit value deferral is both unusually prone to parasitic memeticity, and relatively tractable to avoid. Simply tabooing explicit use of “good”/”bad”/”should”/etc. is in fact pretty tractable, and basically nullifies bids for explicit value deferral.


6 comments

I initially misread the title because "defer judgment" is often used to mean putting off the judgment until later. But here you meant "defer" as in taking on the other person's value judgment as your own (and were arguing against that), rather than as waiting to make the value judgment later (and arguing for that).

I guess "defer" is an autoantonym, in some contexts. When someone makes a claim which you aren’t in a good position to evaluate immediately, then to decide what you think about it you can either defer (to them) or defer (till later).

I think of it as deferring to future me vs. deferring to someone else.

and the sentence “broccoli is good for you” is making a bid to circumvent that machinery entirely and just write a result into my value-cache.

I think this has some truth to it, but that it misses important nuance.

When I imagine e.g. my mom telling me that broccoli is good for you, I imagine her having read it on some unreliable magazine’s cover. Or maybe she heard it from some unreliable friend of hers.

But when I imagine a smart friend of mine telling me that broccoli is good for you, I start making some educated guesses about the gears. Maybe it is because broccoli has a lot of fiber. Or because of some micronutrients.

In the latter scenario, I think a relevant follow-up question is about the extent to which it bypasses the gear-level machinery. And I think the answer is an unfortunate “it depends”. In the broccoli example, I have enough knowledge about the domain that I think I can make some pretty good educated guesses, so it actually doesn’t bypass the gears too much; maybe we can say it bypasses them a “moderate amount”. In other contexts, though, where I don’t have much domain knowledge, I think it’d frequently bypass the gears “a lot”.

(All of that said, I agree with the broad gist of this post. In particular, with things like "value judgements usually pull on the wrong levers.")

It undermines the ability to prioritize tradeoffs among goods, which I think is a bigger deal than it might seem. A substantial fraction of life problems seem to boil down to ambiguous prioritization. Tradeoffs are as complex as your values, so deferring doesn’t really work: you can’t practically check in on all the fine-tuning and complexity of life. This commonly comes up in interactions with the medical system. I also see insufficiently granular systems that produce a lot of grinding, with adversarially installed buckets like ‘have to’, ‘want to’, and ‘should’, and ad hoc Rube Goldberg machines to deal with all the collisions.

Epistemic deferral is notoriously unavoidable to a large extent; a human just doesn’t have the capacity to fact-check everything or even a very large fraction of the information we receive from other humans.

Avoiding it is possible to a very large extent, e.g. for senior nomenklatura in the late Soviet Union, who were notorious for disregarding anything and everything that didn’t pass through multiple committees, secretaries, etc…

Of course bogus information could still get through, but at least it was verified to be superficially plausible enough to fool a sufficient number of serious folks into signing their names on it, such that the final decision maker was largely insulated from any resulting mistakes.

Which, for at least one person, is almost as good as fact-checking literally everything.

Though the cost of setting up and maintaining such a system is very high, likely to destroy even the US in a few decades, so it's not recommended.

The two statements are different in content, in important ways.  “Broccoli is good for you” can encompass MANY dimensions and mechanisms of goodness, and asserts that the good parts outweigh any bad parts.  “Broccoli reduces cholesterol” is much more specific, and implies (but does not explicitly state) that this is the primary benefit, and that if you don’t particularly care about your cholesterol, you shouldn’t seek out broccoli.

I think you’re reading more into the framing differences than is there for most conversations or food decisions.  My standard recommendation: “if it matters, use more words”.  The times I’ve had similar experiences, it was never (as far as I could tell) intentional about value vs. fact, but simply an attempt to speak at a useful level of abstraction with the listener.  And again, when there was confusion or disagreement, it required more depth.