I propose that it is altruistic to be replaceable, and that therefore those who strive to be altruistic should strive to be replaceable.

As far as I can Google, this does not seem to have been proposed before. LW should be a good place to discuss it. A community interested in rational and ethical behavior, and in how superintelligent machines may decide to replace mankind, should at least bother to refute the following argument.

Replaceability

Replaceability is "the state of being replaceable". It isn't binary. The price of the replacement matters: so a cookie is more replaceable than a big wedding cake. Adequacy of the replacement also makes a difference: a piston for an ancient Rolls Royce is less replaceable than one in a modern car, because it has to be hand-crafted and will be distinguishable. So something is more or less replaceable depending on the price and quality of its replacement.

Replaceability could be thought of as the inverse of the cost of having to replace something. Something that's very replaceable has a low cost of replacement, while something that lacks replaceability has a high (up to unfeasible) cost of replacement. The cost of replacement plays into Total Cost of Ownership, and everything economists know about that applies. It seems pretty obvious that replaceability of possessions is good, much like cheap availability is good.
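
To make the definition slightly more concrete, here is a rough formalization; this is my own sketch, not an established formula:

```latex
% Replaceability R as a function of the best available replacement:
% it rises with the adequacy Q of that replacement and falls with
% its cost C. The functional form is illustrative, not canonical.
\[
  R \;\propto\; \frac{Q}{C}
\]
% A cookie (cheap, near-perfect substitute) scores high; a
% hand-crafted piston for an ancient Rolls-Royce (expensive,
% imperfect substitute) scores low.
```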

Some things (historical artifacts, art pieces) are valued highly precisely because of their irreplaceability. Although a few things could be said about the resale value of such objects, I'll simplify and contend these valuations are not rational.

The practical example

Anne manages the central database of Beth's company. She's the only one who has access to that database, the skillset required for managing it, and an understanding of how it all works; she has a monopoly on that combination.

This monopoly gives Anne control over her own replacement cost. If she works according to the state of the art, writes extensive and up-to-date documentation, makes proper backups, etc., she can be very replaceable, because her monopoly will be easily broken. If she refuses to explain what she's doing, creates weird and fragile workarounds, and documents the database badly, she can reduce her replaceability and defend her monopoly. (A well-obfuscated database can take a replacement database manager months to handle confidently.)

So Beth may still choose to replace Anne, but Anne can influence how expensive that'll be for Beth. She can at least make sure her replacement needs to be shown the ropes, so she can't be fired on a whim. But she might go further and practically hold the database hostage, which would certainly help her in salary negotiations if she does it right.

This makes it pretty clear how Anne can act altruistically in this situation, and how she can act selfishly. Doesn't it?

The moral argument

To Anne, her replacement cost is an externality and an influence on the length and terms of her employment. To maximize the length of her employment and her salary, she would want her replacement cost to be high.

To Beth, Anne's replacement cost is part of the cost of employing her and of course she wants it to be low. This is true for any pair of employer and employee: Anne is unusual only in that she has a great degree of influence on her replacement cost.

Therefore, if Anne documents her database properly, etc., this increases her replaceability and constitutes altruistic behavior. Unless she values the positive feeling of doing her employer a favor more highly than she values the money she might make by avoiding replacement, this might even be true altruism.

Unless I suck at Google, replaceability doesn't seem to have been discussed as an aspect of altruism. The two reasons for that I can see are:

  • replacing people is painful to think about
  • and it seems futile as long as people aren't replaceable in more than very specific functions anyway.

But we don't want or get the choice to kill one person to save the lives of five, either, and such practical improbabilities shouldn't stop us from considering our moral decisions. This is especially true in a world where copies, and hence replacements, of people are starting to look possible at least in principle.

  1. In some reasonably near future, software is getting better at modeling people. We still don't know what makes a process intelligent, but we can feed a couple of videos and a bunch of psychological data points into a people modeler, extrapolate everything else from a standard population, and the resulting model can have a conversation that could fool a four-year-old. The technology is already good enough for models of pets. While convincing models of complex personalities are at least another decade away, the tech is starting to become good enough for senile grandmothers.

    Obviously no-one wants granny to die. But the kids would like to keep a model of granny, and they'd like to make the model before the Alzheimer's gets any worse, while granny is terrified she'll get no more visits to her retirement home.

    What's the ethical thing to do here? Surely the relatives should keep visiting granny. Could granny maybe have a model made, but keep it to herself, for release only through her Last Will and Testament? And wouldn't it be truly awful of her to refuse to do that?
  2. Only slightly further into the future, we're still mortal, but cryonics does appear to be working. Unfrozen people need regular medical aid, but the technology is only getting better, and anyway the point is this: something we can believe to be them can indeed come back.

    Some refuse to wait out these Dark Ages; they get themselves frozen for nonmedical reasons, to fast-forward across decades or centuries into a time when the really awesome stuff will be happening, and to get the immortality technologies they hope will be developed by then.

    In this scenario, wouldn't fast-forwarders be considered selfish, because they impose on their friends the pain of their absence? And wouldn't their friends mind it less if the fast-forwarders went to the trouble of having a good model (see above) made first?
  3. On some distant future Earth, minds can be uploaded completely. Brains can be modeled and recreated so effectively that people can make living, breathing copies of themselves and experience the inability to tell which instance is the copy and which is the original.

    Of course many adherents of soul theories reject this as blasphemous. A few more sophisticated thinkers worry that this devalues individuals to the point where superhuman AIs might conclude that, as long as copies of everyone are stored on some hard drive orbiting Pluto, nothing of value is lost if every meatbody gets devoured into more hardware. The bottom line is: effective immortality is available, but some refuse it on principle.

    In this world, wouldn't those who make themselves fully and infinitely replaceable want the same for everyone they love? Wouldn't they consider it a dreadful imposition if a friend or relative refused immortality? After all, wasn't not having to say goodbye anymore kind of the point?

These questions haven't come up in the real world because people have never been replaceable in more than very specific functions. But I hope you'll agree that if and when people become more replaceable, that will be regarded as a good thing, and it will be regarded as virtuous to use these technologies as they become available, because it spares one's friends and family some or all of the cost of replacing oneself.

Replaceability as an altruist virtue

And if replaceability is altruistic in this hypothetical future, as well as in the limited sense of Anne and Beth, that implies replaceability is altruistic now. And even now, there are things we can do to increase our replaceability, i.e., to reduce the cost our bereaved will incur when they have to replace us. We can teach all our (valuable) skills, so others can replace us as providers of those skills. We can decline to keep (relevant) secrets, so others can learn what we know and replace us as sources of that knowledge. We can endeavour to live as long as possible, to postpone the cost. We can sign up for cryonics. There are surely other things each of us could do to increase our replaceability, but I can't think of any an altruist wouldn't consider virtuous.

As an altruist, I conclude that replaceability is a prosocial, unselfish trait, something we'd want our friends to have - in other words, a virtue. I'd go as far as to say that even bothering to set up a good Last Will and Testament is virtuous precisely because it reduces the cost my bereaved will incur when they have to replace me. And although none of us can be truly easily replaceable as yet, I suggest we honor those who make themselves replaceable, and take pride in whatever replaceability we ourselves attain.

So, how replaceable are you?

Comments

TL;DR

Being replaceable reduces the cost to others of replacing you, should that become necessary. This is altruistic.

Simple, perhaps, but a very underappreciated point. All too often I see people make things unnecessarily hard for others to do, sometimes consciously, sometimes not. This point ties in with my upcoming article (thanks, NancyLebovitz!) about giving good explanations and passing on your knowledge, since one of the major problems is fighting the urge to keep yourself "irreplaceable".

From that comes the widespread problem of fields being impenetrable and not getting the external review that they need from outsiders.

This (common) insistence that something takes "years" to understand is a great example of motivated cognition, as people are deeply invested in others not being able to do what they can, even and especially when that would be trivial to bring about. "It's hard to make a man understand something when his salary depends on his not understanding [how to pass on his own understanding]."

Naturally, I'm biased in self-review, but I try to fall on the side of making myself replaceable.

In theory, that will make you "do yourself out of a job", so find a field where the work never ends.


I enjoyed reading your post, and I agree with what I think you are trying to say.

However, replaceability by itself is not a virtue. If only five people on Earth have a certain skill that is useful to humanity, and then ten more people learn the skill, those ten people have become less replaceable (while the original five have become more replaceable), but they are now able to do more good for humanity. So I think a better moral for your post than "Make yourself replaceable" is "Become stronger, but don't be so prideful as to withhold from others the same opportunity that was given to you." And perhaps this is reducible in some respects to "Cooperate in the Prisoner's Dilemma."

I'd learn a different lesson from your example: Make things in general replaceable. Yourself just happens to be the most common case where there is a selfish motivation to do otherwise.

For example, high quality reproductions of art are a very good thing.

Good correction, thanks. Of course we shouldn't simply maximize replaceability. After all, the most replaceable person is also the most superfluous.

That's the lesson I got out of the post too, that to cooperate in a prisoner's dilemma is a good thing

That's the lesson I got out of the post too, that to cooperate in a prisoner's dilemma is a good thing

I hope the lesson put conditions on that. If not, the lesson is evil (i.e., holding the belief would result in destroying everything humanity holds dear if given the right circumstances).

Ok, I'll amend my previous statement to be more specific. I'd cooperate in a prisoner's dilemma where: cooperating means both entities get warm fuzzies (and in warm fuzzies I include all my preferences, so if cooperating would result in 100 people dying and me getting $100, I'd count that as a net loss); defecting while the other cooperates gets me more warm fuzzies, but not over a certain limit (as a rule of thumb, less than double what I'd get for cooperating, though of course this goes case by case); and both of us defecting gets us fewer warm fuzzies.
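
For concreteness, here is a minimal sketch of that decision rule in code, using the standard prisoner's-dilemma payoff labels; the numbers in the examples are invented:

```python
# Standard prisoner's-dilemma payoff labels, measured here in the
# commenter's "warm fuzzies": T = temptation (I defect, you cooperate),
# R = reward (mutual cooperation), P = punishment (mutual defection).

def would_cooperate(T: float, R: float, P: float) -> bool:
    """Cooperate when the temptation payoff stays under the commenter's
    rule-of-thumb limit (less than double the mutual-cooperation payoff)
    and mutual defection is worse than mutual cooperation."""
    return T < 2 * R and P < R

print(would_cooperate(T=5, R=3, P=1))   # True: temptation is modest, so cooperate
print(would_cooperate(T=10, R=3, P=1))  # False: temptation exceeds the 2x limit
```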

I propose that it is altruistic to be replaceable, and that therefore those who strive to be altruistic should strive to be replaceable.

Non sequitur: there may be other considerations that outweigh the altruistic gain from being replaceable, such as acquiring a unique skillset - even if gaining that skillset may by definition entail making yourself much harder to replace.

I.e., it may be more altruistic to become less replaceable, even if replaceability itself is altruistic.

TL;DR - rent-seeking is bad, m'kay?

This was an interesting read, but it's rather narrowly focused. If Anne were a doctor, then the greater her skill at surgery, the less replaceable she would be. For any occupation, the more skilled a person, the less replaceable she becomes. Replaceability isn't really the relevant metric. Rather, Dr. Anne may have the option to teach other people her surgical skill, increasing her replaceability and reducing theirs. But teaching people a useful skill is obviously altruistic; this doesn't turn on replaceability. Likewise, doing a good job is more altruistic than doing a bad job (when there's no reward). Hence, complex-database Anne is less altruistic than friendly-database Anne because she's doing her job worse. The reason replaceability isn't discussed is that I don't think it really adds much, especially since one should, generally, act to become more skilled and thus less replaceable.

Being able to teach others is itself a useful skill that might increase Dr. Anne's irreplaceability at the same time. Maybe her role in the institution will change and she will do fewer surgeries & more teaching; maybe she will do fewer simple surgeries (because she has trained others to do them) while she takes on more challenging ones more suited to her because of her experience.

I think there is a significant difference between being irreplaceable because of skills you have (i.e., Einstein or Da Vinci were, and still are, irreplaceable) and being irreplaceable because you have artificially made yourself so (as Anne does in your example). The first may be undesirable, but the only way to overcome it would be to not have geniuses at all, and I don't think that would do humanity any good. Einstein and Da Vinci shouldn't have limited themselves so as to stay replaceable.

The second one (the Anne of your example) is definitely a problem, a subset of the more general case of people doing bad work for their own personal benefit.

Don't be irreplaceable. If you can't be replaced, you can't be promoted.

-- Scott Adams

I've not heard it specifically in the sense of altruism, but having a "hit by a bus" plan seems to be a standard part of programmer ethics. I'm finding your reframing very useful, even though the basic idea is very familiar to me from numerous such ethical discussions :)

Otherwise known as a project's "Bus factor", a fairly googlable term.
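
As a hypothetical illustration of the metric (the function and the ownership data below are invented, not taken from any real tool):

```python
from collections import Counter

def bus_factor(file_owners: dict[str, str], threshold: float = 0.5) -> int:
    """Smallest number of top contributors whose loss would orphan
    more than `threshold` of the project's files."""
    counts = Counter(file_owners.values())
    total = len(file_owners)
    orphaned = 0
    for n, (_, owned) in enumerate(counts.most_common(), start=1):
        orphaned += owned
        if orphaned / total > threshold:
            return n
    return len(counts)

# Invented example: Anne solely owns most of the critical files.
owners = {"db.py": "anne", "api.py": "anne", "backup.py": "anne",
          "ui.py": "beth", "docs.md": "carol"}
print(bus_factor(owners))  # 1 -- losing Anne alone orphans most of the project
```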

"The graveyards are full of indispensable men"

  • De Gaulle.

"The graveyards are full of indispensable men" - De Gaulle.

History is full of collapsed empires, failed projects and lost battles. The loss of an indispensable person does not itself prove they were not indispensable. It may just mean you're screwed.

I interpret the quote as meaning, more or less, that people kid themselves about how irreplaceable they are.

"The graveyards are full of indispensable men" - De Gaulle.

The graveyards are full of everyone who has ever lived up until ~1850 (and a lot of people born after that).

I kind of assumed that was the anonymous author's point. (I've seen it attributed to many different Frenchmen.)

It may have been the original author's point, but that makes it kind of a non sequitur comment to this post, at least to my first reading of it. Yes, indispensable people die. People who try to make themselves replaceable also die. The people left behind are better off if the deceased is "more replaceable" in at least a practical sense.

...On further thought, I think I automatically interpreted the comment as disagreeing with the article, because my brain seems to assume that most one-line comments are going to be disagreements. If I interpret it as agreeing with the article, then it makes perfect sense.

Therefore, if Anne documents her database properly, etc., this increases her replaceability and constitutes altruistic behavior. Unless she values the positive feeling of doing her employer a favor more highly than she values the money she might make by avoiding replacement, this might even be true altruism.

Anne's payoff matrix looks more like a Prisoner's Dilemma than straightforward altruism to me. She can take steps to make herself less replaceable, but most of them appear to come at the cost of reducing the speed, fault-tolerance, and expansibility of the system she's working on, or at least letting it slowly stagnate into obsolescence. Since the company's performance depends to a large extent on the aggregate behavior of employees like Anne, and since well-performing companies offer much better job security than faltering ones, a cooperation strategy isn't an obvious loss even from a self-interested perspective.

You could probably model employee morale to a large extent as propensity to defect in situations like this one, actually.
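
To make the prisoner's-dilemma reading concrete, here is a toy payoff table for Anne's choice; all numbers are invented for illustration, and "invest" vs. "replace" stands in for how the company treats her:

```python
# Toy payoffs (utility units) for the documentation dilemma described
# above. "cooperate" = document the database; "defect" = obfuscate it.
# Note Anne's defection payoffs are only slightly better up front: a
# faltering company erodes the job security defection was meant to buy.
PAYOFFS = {
    # (anne_move, company_move): (anne_utility, company_utility)
    ("cooperate", "invest"):  (8, 8),  # healthy system, mutual trust
    ("cooperate", "replace"): (2, 6),  # Anne documented herself out of a job
    ("defect",    "invest"):  (9, 2),  # hostage database, salary leverage
    ("defect",    "replace"): (3, 1),  # months of pain on both sides
}

for (anne_move, company_move), (anne_u, company_u) in PAYOFFS.items():
    print(f"Anne {anne_move}, company {company_move}: "
          f"Anne={anne_u}, company={company_u}")
```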

Documentation seems largely altruistic:

I'm currently writing a novel. I have a ton of scattered plot notes, some written, some in my head. Since the novel is an active project, I don't need a ton of notes - I remember most of it. In 5 years, these notes would be less useful to me. If someone else took over and didn't have the benefit of my presence, they'd probably be totally lost.


Equally, I use a lot of archaic programs on my home computer. I could spend a week converting my entire music collection to MP3, but right now it only works with Winamp and a suite of custom plugins to handle obscure formats like "raw SNES music files". For me, this is trivial upkeep, since I replace computers every ~5 years, and it takes 15 minutes to re-install the plugins. There's no reason to convert the collection UNTIL I decide to switch away from Winamp, or hand it off to someone else.

In other words, "mainstream, up to date tools" are not necessarily better, even if they're more likely to be FAMILIAR to someone new to the project. Writing code in COBOL might make it 10x more maintainable and faster, as long as you have a COBOL programmer on staff to handle it... The whole Y2K bug speaks well to that.


So, no, I don't see any reason to conclude that replaceability and usability go hand in hand :)

Eliezer Yudkowsky gets hit by a bus. Do you want his unfinished ideas to be in a text file on his cellphone, in a pile of handwritten sheets, or in his head?

Eliezer keeping his notes where I can access them is altruism. It does NOT make him more efficient as Nornagest was claiming, and might actually waste quite a lot of his time.

I'd want him to keep them in whatever format he feels is best - keeping them in his head might drastically increase his productivity, and we'd all benefit. Even being altruistic, it's still a risk-reward tradeoff, and I trust him to make that decision better than me (he knows the factors better, and I'd wager that he's smarter and more rational than I am about such things anyway)

Documentation in a corporate environment, or even in something like an open-source project, serves several purposes: to make it easier to get new team members up to speed (which can be used to train replacements, but also serves an expansibility purpose), to reduce coordination overhead, and to make it easier to remember what the heck you were doing after spending five months tasked with something else. Most of these motivations aren't purely altruistic.

Obviously this is going to be quite different for a solo project, and it does look like the downsides to defection are less severe in situations where you have a unique skillset and your company isn't expecting a need for other people with the same skills. But the point remains that there are performance-oriented reasons for doing a lot of things the OP describes strictly in terms of affecting your replaceability, and in any case the problematic situations seem too limited for cooperation in their context to be called a major virtue.

Can we agree that if your goals are "don't get replaced, but help the company grow", it's a risk-reward tradeoff to do things like documentation? And sometimes documenting won't help growth at all, and sometimes it won't affect how replaceable you are?

My point wasn't meant to be a generalized "documentation NEVER helps", just that it's entirely possible for an action to be primarily a risk of getting yourself replaced, without much personal gain :)

Yeah, that seems reasonable.

Isn't it somehow the manager's job to make sure that what is good for Anne is aligned with what is good for the company?

If Anne makes a great documentation and teaches Beth everything she knows, it may save the company a lot of money. How about giving a part of that money to Anne as a reward for being helpful?

Or is it just Anne's moral obligation to ignore her own utility function for the benefit of the company? If so, then in addition to making the documentation and teaching Beth, she should also ask the company to give her as small a salary as legally possible. (She could also sell her kidney, and donate the money to the company. Perhaps do the same thing with the second kidney on the day that Beth is ready to replace her.)

Also, it is the manager's job to decide whether it benefits the company more if Anne works on making herself replaceable, or if she fully concentrates on doing what she does best. Maybe making Anne spend part of her time writing and maintaining the documentation would cost the company $500 per month; the risk of losing Anne in any given month is 2%; and the cost of replacing her without documentation is $10,000. Then the rational choice is not to have her work on the documentation, because the expected monthly cost of an undocumented departure is only 2% × $10,000 = $200. Anne does not know all the relevant numbers.
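
Running the commenter's hypothetical numbers makes the conclusion explicit:

```python
# All figures are the hypothetical ones from the comment above.
documentation_cost_per_month = 500      # $ of Anne's time spent on docs
monthly_departure_risk = 0.02           # chance Anne leaves in a given month
undocumented_replacement_cost = 10_000  # $ to replace her with no docs

expected_monthly_loss = monthly_departure_risk * undocumented_replacement_cost
print(expected_monthly_loss)  # 200.0 -- less than the $500/month the docs cost,
                              # so on these numbers the documentation doesn't pay.
```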

First: Your use of replaceability in regard to people is absurdly different from the standard meaning of the word, e.g., someone who is unexciting and less than capable. This is probably why you found it hard to Google anything relevant.

Second: replaceability as a form of altruism has to compete with all other forms of altruism, as several commenters have mentioned. I put it pretty far down on the list of things someone should strive for if they're trying to do good in the world.

How would this theory interact with polyamory?

I think that this article may have a bit of a level problem. Many good actions have the effect of making one more replaceable, but that's not the reason why they are good. That said, it's good to notice the common cluster, which I probably would not have noticed without this post.

I agree with what you say, and would like to point out that being partially replaceable is also a virtue.

It is said that a good manager is judged in his absence. Furthermore, really good ones don't seem to be doing much at all. My point is that while being wholly replaceable is a virtue, as you described, being partially replaceable is also a virtue, to any degree of replaceability, however small.

Anne could either obfuscate the DB to gain job security and create pain for her replacement, or clearly document it and put her job at risk - either extreme is problematic. What if she has a family? Supporting your family is also virtuous.

What she can do is find the sweet spot and mostly-win on both counts.

Anyhow, in my experience, having the capability to make yourself replaceable if required tends to make you much more valuable to an organization - and would actually raise your job security. So it's usually a win-win to be replaceable, and I wouldn't hire someone who thinks otherwise.

Quick comment: I would say that the altruistic thing is to minimize the merely replaceable.

The value in historic artifacts lies largely in the information they convey about ancient times. Replaceable things mostly share their information with their replacements (high mutual information), so a complete ancient bowl set, say, would be a lot less informative than the same number of bowls from different sets, except on the matter of the existence of such sets.

Uniqueness/rarity of such things doesn't increase total value, but it does increase marginal value (the value to gain or lose one more item).
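
One way to make that marginal-value point precise (my formalization, not the commenter's):

```latex
% Let V(n) be the total value of n surviving copies of an item,
% with V increasing and concave (diminishing returns). The marginal
% value of one copy is
\[
  MV(n) = V(n) - V(n-1).
\]
% Concavity gives MV(1) > MV(2) > \dots, so each copy is worth more
% at the margin when few survive, even though rarity itself adds
% nothing to the total V(n).
```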

What if by making myself irreplaceable I maximize my income and donate more money to x-risk reduction? I don't buy the concept of altruism towards people who don't care about x-risk.

I don't buy the concept of altruism towards people who don't care about x-risk.

Can you clarify what you mean by this?

On reflection I think I misused the word altruism. I'm an egoist, but my actions towards others may seem altruistic in cases where those others are optimizing for futures I prefer. For example, Aubrey de Grey is better at optimizing for a future in which I don't die than I am, so I prefer to give him some of my optimization power.

OK, so what you're trying to say is "I, personally, act 'altruistic' towards people who are working on x-risk, and don't buy the concept of being 'altruistic' towards anyone else." Is that right? I was reading it as "a given person's behaviour can't be called altruistic unless they are working on x-risk."