[All the usual disclaimers. Wanders dangerously close to moral relativism. Cross-posted from Grand, Unified, Empty.]

I.

Postel’s Principle (also known as the Robustness Principle) is an obscure little guideline somewhat popular among computer programmers, particularly those working on network protocols. The original goes like this:

Be conservative in what you do, be liberal in what you accept from others.

My parents were both computer programmers, as am I, and my first job as a programmer was working on network protocols, so it shouldn’t be too surprising that I ran across this principle a long, long time ago. I suspect I heard it while still a teenager, before finishing high school, but I honestly don’t remember. Suffice it to say that it’s been kicking around my brain for a long time.

As a rule of thumb in computer programming, Postel’s Principle has some basic advantages. You should be conservative in what you do because producing output that isn’t strictly compliant with the specification risks other programs being unable to read your data. Conversely, you should be liberal in what you accept because other programs might occasionally produce non-compliant data, and ideally your program should be robust and keep working in the face of data that isn’t quite 100% right.
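To make that concrete, here’s a toy sketch in Python (the protocol, header names, and tolerances are all invented for illustration, not taken from any real spec):

```python
# A toy line-based protocol, invented purely for illustration.

def emit_header(name: str, value: str) -> str:
    """Be conservative in what you do: emit one canonical form,
    with a single space after the colon and a CRLF line ending."""
    return f"{name}: {value}\r\n"

def parse_header(line: str) -> tuple[str, str]:
    """Be liberal in what you accept: tolerate odd whitespace,
    mixed casing, and a bare LF where the spec demands CRLF."""
    name, _, value = line.rstrip("\r\n").partition(":")
    return name.strip().lower(), value.strip()

# A sloppy peer's output still parses cleanly:
assert parse_header("Content-Length :  42 \n") == ("content-length", "42")
```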

While in recent years the long-term effects of Postel’s Principle on software ecosystems have led to some pushback, I’m more interested in the fact that Postel’s Principle seems to apply just as well as a moral aphorism as it does in programming. Context matters a lot when reading, so here’s a list of other aphorisms and popular moral phrases to get your brain in the right frame:

  • What would Jesus do?
  • Actions speak louder than words.
  • If you can’t say something nice, don’t say anything at all.
  • Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.
  • Be conservative in what you do, and liberal in what you accept from others.

II.

I am, by nature, a fairly conservative person. I’m also, whether by nature or past experience, somewhat socially subordinate; I’m usually much happier in a secondary position than in any role of real authority, and my self-image tends to be fairly fragile. The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.

This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour. Rather than trying to seriously enforce my own worldview or argue aggressively for my own preferences, I endeavour not to make waves. The more people who like me, the more secure my situation, and the surest way to get people to like me is to follow Postel’s Principle: be conservative in my own actions (or else I might do something they disapprove of or dislike), and be liberal in what I accept from others (being judgemental is a sure way to lose friends).

[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try and restrict my judginess (judgmentality? judgementalism?) to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]

Unfortunately, of course, the world is a mind-bogglingly huge place with an annoyingly large number of people, each of whom has their own slightly different set of moral intuitions. There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble” (in the pre-COVID sense). If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.

Talking about the “set” of people on whom to apply Postel’s Principle provides a nice segue into the formal definitions that are implicit in the English aphorism. For my own behaviour, it makes sense to think of it like the intersection operation in set theory, or the universal quantifier in predicate logic: something is only morally permissible for me if it is permissible for all of the people I am likely to interact with regularly. Conversely, of course, the values I must accept without judgment are the union of the values of the people I know; it is morally permissible if it is permissible for any of the people I am likely to interact with regularly.
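For the formally inclined, here’s the same idea as a toy Python sketch (the people and their permitted behaviours are invented):

```python
# Invented people, and invented sets of behaviours each one considers permissible.
friends = {
    "alice": {"drinking", "swearing", "eating meat"},
    "bob":   {"swearing", "eating meat"},
    "carol": {"drinking", "eating meat", "gambling"},
}

# Conservative in what I do: permissible for me only if permissible
# for ALL of my circle (intersection / universal quantifier).
permissible_for_me = set.intersection(*friends.values())

# Liberal in what I accept: I withhold judgement on anything that is
# permissible for ANY of my circle (union / existential quantifier).
accepted_from_others = set.union(*friends.values())

print(permissible_for_me)    # {'eating meat'}
print(accepted_from_others)  # all four behaviours
```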

III.

Since the set of actions that are considered morally permissible for me is defined effectively by my social circle, it becomes of some importance to intentionally manage my social circle. It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing. In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.

Unfortunately, it sometimes happens that people change their moral stances, especially when under pressure from other people who I may not be interacting with directly. Even if I have a stable social circle and behavioural manoeuvring space today, tomorrow one of my friends could decide they’re suddenly a radical Islamist and face me with a choice. While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so ultimately how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.

Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. Memes spread incredibly fast, and a small undercurrent of change can rapidly become a torrent when one person in a position of power or status chooses a side. This sort of situation also faces me with a choice, and often a much more difficult one. Apart from the necessity of weighing and balancing friend groups against each other, there’s also a predictive aspect. If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.

Being forced to choose between two social groups with incompatible moral stances is, unsurprisingly, stressful. Social alienation is a painful process, as any Amish person who has been shunned can attest. However, what may be worse than any clean break is the moment just before, trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.

IV. (PostScript)

I wrote this focused mostly on myself. Having finished, I cannot help but wonder how much an approximation of Postel’s Principle guides the moral principles of most people, whether they would acknowledge it or not. Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.

Comments (3)

Having worked on APIs at very large software companies, the critiques of the principle resonate strongly for me. Your contract is not what you say, but what you do - as soon as you allow a broken usage, you can NEVER upgrade or tighten it up without breaking your customer. The best time to deny a broken request is before people are taking it for granted - the very first time (and every time) you see it.

It's absolutely allowed to EXPAND the spec to allow new variations. Doing so, you recognize and accept the complexity cost, and think about any security or ambiguity issues being introduced. But don't silently guess the intent (or accept the mistake) and just carry on.
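Concretely, that discipline might look something like this (a toy sketch; the field names are invented):

```python
# Reject broken requests the very first time, rather than guessing.
# Field names are invented for illustration.

ALLOWED_FIELDS = {"user_id", "amount"}

def handle_request(payload: dict) -> dict:
    unknown = set(payload) - ALLOWED_FIELDS
    if unknown:
        # Fail loudly now; once a broken usage is tolerated, clients
        # depend on it and the contract can never be tightened again.
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {"status": "ok"}

# Expanding ALLOWED_FIELDS later is a deliberate spec change, made with
# its complexity and security costs in mind - never a silent guess.
```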

IMO, this carries forth to social and moral topics. If you disagree with something, you _may_ have the privilege of ignoring it. But you shouldn't blindly just accept it - you should recognize that it's wrong, and actively decide what to do about it. Don't update on malformed beliefs, don't nod in agreement with incorrect generalizations, don't let bad LessWrong arguments pass without inspection.

"silence is consent" is a common phrase, with which I do not fully agree, and would prefer that's not how people work. But many times, it's a good predictor of how your acceptance will be seen, even if it's not your wish.

"What would Jesus do?" Not just stand there and accept a broken situation. He'd tear down the moneychangers at the temple and get himself crucified by the Romans.

[ Note: in reality, I'm both passive and uncertain of myself, and I tend NOT to challenge things very strongly. I have great luck in having enough resources that it's generally a matter of intellectual interest when I disagree, rather than my identity or survival being threatened. But I don't think this policy generalizes, and I'm not sure what parameters can be distilled on LW to say when to resist and when to accept.]

[anonymous]:

Fully agree with the technical critiques. I'm less certain that equivalent critiques apply to the social/moral, but I can see the argument. I think it depends on how close you care to wander to true moral relativism. Thanks for this, I'm going to think about it some more.

Here are some (mostly critical) notes I made reading the post. Hope it helps you figure things out.

> * If you can’t say something nice, don’t say anything at all.

This is such bad advice (I realise eapache is giving it as an example of common moral sayings; though I note it's neither an aphorism nor a phrase). Like maybe it applies when talking to a widow at her husband's funeral?

"You're going to fast", "you're hurting me", "your habit of overreaching hurts your ability to learn", etc. These are good things to say in the right context, and not saying them allows bad things to keep happening.

> The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.

I don't know why this is in here, particularly the second clause -- I'm not sure it helps with anything. It's also mean.

> This combination of personality traits makes ...

The last thing you talk about is what Peterson might say, not your own personality. Sounds like you're talking about the personality trait(s) of "[having] something weird to say about lobsters and serotonin".

> This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour.

I presume you mean "guiding" more than "defining". It could define standards you hold for your own behaviour.

> *[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try and restrict my judginess ... to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]*

This is problematic, e.g. 'I will judge you for being an ineffective, inconsistent nazi, but never for holding or not holding nazi values'. Making moral judgements is important. That said, judging things poorly is (possibly very) harmful. (Examples: treating all moral inconsistencies as equally bad, or treating some racism as acceptable b/c of the target race)

> annoyingly large number of people

I think it's annoyingly few. A greater population is generally good.

> There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble”

How do you know you aren't just friends with people who approve of this?

What do you do WRT everyone else? (e.g. shop-keeps, the mailman, taxi drivers)

> If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.

Are you using Postel's Principle *solely* for approval? (You say "The more people who like me, the more secure my situation" earlier, but is there another reason?)

> Talking about the “set” of people on whom to apply Postel’s Principle provides a nice segue into the formal definitions that are implicit in the English aphorism.

How can formal definitions be implicit?

Which aphorism? You provided 5 things you called aphorisms, but you haven't called Postel's Principle that.

> ... [within the context of your own behaviour] something is only morally permissible for me if it is permissible for *all* of the people I am likely to interact with regularly.

What about people you are friends with for particular purposes? Example: a friend you play tennis with but wouldn't introduce to your parents.

What if one of those people decides that Postel's Principle is not morally permissible?

> ... [within the context of other ppl's behaviour] it is morally permissible if it is permissible for any of the people I am likely to interact with regularly.

You're basing your idea on which things are generally morally permissible on what other people think. (Note: you do acknowledge this later which is good)

This cannot deal with contradictions between people's moral views (a case where neither of those people necessarily has contradictions, but you do).

It also isn't an idea that works in isolation. Other people might have moral views b/c they have principles from which they derive those views. They could be mistaken about the principles or their application. In such a case would you - even if you realised they were mistaken - still hold their views as permissible? How is that rational?

> Since the set of actions that are considered morally permissible for me is defined effectively by my social circle, it becomes of some importance to intentionally manage my social circle.

This is a moral choice, by what moral knowledge can you make such a choice? I presume you see how using Postel's Principle here might lead you into a recursive trap (like an echo-chamber), and how it limits your ability to error correct if something goes wrong. Ultimately you're not in control of what your social circle becomes (or who's in and who's out).

> It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing.

What? Why?

Your use of 'untenable' is unclear; is it just impractical but something you'd do if it were practical, or is it unthinkable to do so, or is it just so difficult it would never happen? (Note: I think option 3 is not true, btw)

> (since inaction is of course its own kind of action)

It's good you realise this.

> In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.

I can see the logic of why you'd *want* to do this, but I can't see *how* you'd do it. Also, I don't see why you'd care to if it wasn't causing problems. I have friends and associates I value whom I'd have to cut loose if I were to follow Postel's P. That would harm me, so how could it be moral to do so?

It would harm you too, unless the friends are a) collectively and individually not very meaningful (but then why be friends at all?) or b) not providing value to your life anyway (so why be friends at all?). Maybe there are other options?

> Unfortunately, it sometimes happens that people change their moral stances, ...

Why is this a bad thing!??!? It's **good** to learn you were wrong and improve your values to reflect that.

You expand the above with "especially when under pressure from other people who I may not be interacting with directly" -- I'd argue that's not *necessarily* changing one's preference, it's just that the person is behaving like that to please someone else. Hard to see why that would matter unless it was like within the friend group itself or impacted the person so much that they couldn't spend time with you (the second example being something that happens alongside moral pressure with e.g. domestic abuse, so might be something to seriously consider).

> tomorrow one of my friends could decide they’re suddenly a radical Islamist and face me with a choice.

You bring up a decent problem with your philosophy, but then say:

> While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so ultimately how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.

First, "many" is not "all" so you still have undefined behaviour (like what to do in these situations). Secondly, who cares if you have an interest in radical islam? A friend of yours suddenly began adhering to a pro-violence anti-reason philosophy. I don't think you need Postel’s P. to know you don't want to casually hang with them again.

So I think this is a bad example for two reasons:
1. You dismiss the problem because "many of these choices end up being rather easy", but that's a bad reason to dismiss it, and I really hope many of those choices are not because a friend has recently decided terrorism might be a good hobby.
2. If you do it just b/c you don't have an interest, that doesn't cover all cases, but more importantly to do so for that reason is to reject deeper moral explanations. How do you know you're "on the right side of history" if you can't judge it and refuse the available moral knowledge we have?

> Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. ... This sort of situation also faces me with a choice, and often a much more difficult one. ... If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.

I agree that you shouldn't take your friends' moral conclusions into account when thinking about big societal stuff. But the thing about the "right side of history" is that you can't predict it. Take the US civil war - with your Postel’s P. inspired morality, your judgements would depend on which state you were in. Leading up to things, you'd probably judge the dominant local view to be the one that would endure. If you didn't judge the situation like that, it means you would have used some other moral knowledge that isn't part of Postel’s P.

> However, what may be worse than any clean break is the moment just before, trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.

I agree, that sounds like a very uncomfortable situation.

> Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.

Why is this not by design? I think it's natural for ppl to mostly agree with their friend group on particular moral judgements (moral explanations can be a whole different ball game). I don't think Postel’s P. need be involved.

Additionally: social dynamics are such that a group can be very *restrictive* in regards to what's acceptable, and often treat harshly those members who are too liberal in what they accept. (Think Catholics in like the 1600s or w/e)

----

I think the programmingisterrible post is good.

> If some data means two different things to different parts of your program or network, it can be exploited—Interoperability is achieved at the expense of security.

Is something like *moral security* important to you? Maybe it's moot because you don't have anyone trying to maliciously manipulate you, but worth thinking about if you hold the keys to any accounts, servers, etc.

> The paper, and other work and talks from the LANGSEC group, outlines a manifesto for language based security—be precise and consistent in the face of ambiguity

Here tef (the author) points out that preciseness and consistency (e.g. having and adhering to well formed specs) are a way to avoid the bad things about Postel’s P. Do you agree with this? Are your own moral views "precise and consistent"?

> Instead of just specifying a grammar, you should specify a parsing algorithm, including the error correction behaviour (if any).

This is good, and I think applies to morality: you should be able to handle any moral situation, know the "why" behind any decision you make, and know how you avoid errors in moral judgements/reasoning.

Note: "any moral situation" is fine for me to say here b/c "don't make judgements on extreme or wacky moral hypotheticals" can be part of your moral knowledge.