This is a special post for quick takes by tivelen.

Supererogatory morality had never made sense to me before. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn't, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good regardless. You cannot simply buy a video game instead of mosquito nets because the latter is "optional", right?

I read about slack recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that surely had use in the pursuit of rationality. Obviously we cannot be at 100% at all times, for all these good reasons and in all these good cases! I then clicked off and found another cool concept on LessWrong.

I then randomly stumbled upon an article that offhandedly made a supererogatory moral claim. Something clicked in my brain and I thought, "That's just slack applied to morality, isn't it?". Enthralled by the insight, I decided this was as good an opportunity as any to make my first Shortform. I had failed to think deeply enough about slack to actually integrate it into my beliefs. This was something to work on in the future to up my rationalist game, but I also got to pat myself on the back for realizing it.

Isn't my acceptance of slack still in direct conflict with my current non-acceptance of supererogatory morality? And wasn't I just about to conclude without actually reconciling the two positions?

Oh. Looks like I still have some actual work ahead of me, and some more learning to do.

I suspect that it's a combination of a lot of things. Slack, yes. Also Goodhart's law, in that optimizing directly for any particular expression of morality is liable to collapse it.

There are also second- and higher-order effects from such moral principles: people who truly believe that everyone must always do the single most moral thing are likely to fail to convince others to live the same way, and so reduce the total amount of good that could be done. They may also disagree about what the single most moral thing is, and suffer from factionalization and other serious breakdowns of coordination that would be less likely among people who are less dogmatic about moral necessity.

It's a difficult problem, and certainly not one that we are going to solve any time soon.

Second-order effects, indeed.

Instead of being (the only) moral agent, imagine yourself in the role of a coordinator of moral agents. You specify their algorithm; they execute it. Your goal is to create maximum good, but you must do it indirectly, through the agents.

Your constraints on maximizing good are the following: doing good requires spending some resources, sometimes a trivial amount, sometimes a significant one. So you aim at a balance between spending resources to do good now and saving resources to keep your agents alive and strong for a long time, so they can do more good over their entire lifetimes. Furthermore, the agents do, in a sense, volunteer for the role, and stricter rules statistically make them less likely to volunteer. So you again aim at a balance between more good done per agent and more agents doing good.

The result is a heuristic where some rules are mandatory to follow and others are optional. The optional rules do not slow down your recruitment of moral agents, while the agents who do not mind strict rules still have an option to do more good.
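If it helps to see the tradeoff as numbers: the Python sketch below is my own toy illustration, not anything from the comment above. The functions recruits, good_per_agent, and total_good and every constant in them are made-up assumptions; the only point is that, when strictness deters volunteers and burns agents out, total good peaks at an interior level of mandatory strictness rather than at the maximum.

```python
# Toy model of the coordinator's tradeoff (all numbers are arbitrary
# assumptions for illustration, not claims about real moral agents).

def recruits(strictness: float) -> float:
    """How many agents volunteer; stricter mandatory rules deter them."""
    return 1000 * (1 - strictness)

def good_per_agent(strictness: float) -> float:
    """Good done per agent over a lifetime: stricter rules demand more
    effort, but the sustainability cost grows faster than the effort."""
    return strictness - 0.5 * strictness ** 2

def total_good(strictness: float) -> float:
    """Total good is recruitment times per-agent output."""
    return recruits(strictness) * good_per_agent(strictness)

# Sweep mandatory-rule strictness from 0 to 1. Under these toy
# assumptions the optimum lands around 0.42 -- an interior point,
# which is the comment's conclusion: moderate mandatory rules plus
# optional (supererogatory) extras beat demanding the maximum.
best = max((s / 100 for s in range(101)), key=total_good)
print(f"optimal mandatory strictness ~ {best:.2f}")
```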

It's useful, but likely not valuable-in-itself, for people to strive to be primarily morality optimizers. Thus the optimally moral thing could be to care about the optimally moral thing substantially less than is sustainably feasible.

TAG:

Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn’t, in which case you should instead do the optimally moral thing.

That's all downstream of an implicit definition of "what I am obliged to do" as "the optimally moral thing". If what you are obliged to do is less demanding, then there is space for the supererogatory.

If I am not obliged to do something, then why ought I do it, exactly? If it's morally optimal, then how could I justify not doing it?

Many systems of morality are built more like "do no harm" than "do the best possible good at all times".

That is, you are morally obliged to choose an action from a particular set in some circumstances, but the system does not prescribe which action from that set.

Such a system doesn't prescribe which action from that set to take, but in order for it to contain supererogatory actions, it has to say that some are more "morally virtuous" than others, even within that narrowed set. These are not prescriptive moral claims, though. Even if you follow this moral system, the statement "X is more morally virtuous but not prescribed" coming from it is not relevant to you. The system might as well say "X is more fribble". You won't care either way, unless the moral system also prescribes X, in which case X isn't supererogatory.

TAG:

There are things that are good to do, but not obligatory.

As of now, we cannot unfreeze people who have been cryogenically frozen and successfully revive them. However, we can freeze 5-day-old fertilized eggs and revive them successfully years later. When exactly does an embryo become unrevivable?

Identical twins split at around one week after fertilization, so if it were possible to freeze and revive embryos past that point, we could freeze one twin and let the other gestate, effectively letting us clone the gestated twin whenever desired. Since we can artificially induce twinning, we could give every newly born person the ability to be cloned, seemingly with none of the downsides of current methods of cloning, although with the substantial overhead of IVF treatment.

Is this possible under current scientific understanding? Is it more ethical than other methods of cloning? What ethical issues remain? Would anyone even want to do this if it were legally available?

When exactly does an embryo become unrevivable?

I think the answer is when it becomes roughly rabbit-kidney-sized, ~12g, so maybe around week 10?

Since we can artificially induce twinning, we could give every newly born person the ability to be cloned

Sure, they could be 'cloned' (once). But it's a weird scenario. If you freeze development of one embryo but not the other, what motivation would you or the grown-up one have to implant it later? (Outside, of course, of agricultural applications like cattle, where one could use "sib testing with embryo transfer": the point would be sib-testing to see how the first sibling performs compared to predictions, to decide whether to implant more embryos/clones of it into surrogate mothers.)

Human sib-testing seems like it would be useful, for one thing. There was a post here about cloning great people from the past. We will be able to do that in the future if most moderately-well-off people keep pre-emptive copies.

In theory, this would have the same use cases as typical cloning, with an upfront cost and time delay. The main benefit it has over current cloning tech is that it avoids the health issues for the clones, which currently make it unviable.

We could clone people successfully with no further advances in science, and at no unusual cost. If true, the ethical issues with cloning are no longer theoretical. This seems like a big deal, but maybe cloning really isn't much of an ethical issue at all, and few people will be interested.

We will be able to do that in the future if most moderately-well-off people keep pre-emptive copies.

You would get one copy, but why not just use embryo selection instead to shift the population up? The point of cloning in breeding programs is typically to enable a single elite donor to make a huge and disproportionate contribution to the population, when you've already started using selection strategies like truncation selection to the top 10%. I hardly need to explain why no such things are at all viable for human populations.

The main benefit it has over current cloning tech is that it avoids the health issues for the clones, which currently make it unviable.

That didn't seem like it was all that big a deal. Even Dolly's siblings were fine.

Human cloning hasn't been tried and found hard; it hasn't been tried. It's like CRISPR editing human babies: if your inference from the absence of CRISPR babies pre-He Jiankui was that "gosh, this must be very hard", you were very wrong. The true problems with human cloning were never any objections about it being super-difficult or hard to figure out. We could get human cloning to work with very little collective research effort. (How much did those Chinese primates cost? A few million bucks?) Compared to many things researchers accomplish... This is one reason why the Raelians weren't laughed out of the building when they announced they had cloned someone, because the experts knew deep down that a serious effort would definitely succeed. It's just that no one wants to look like a Raelian or a eugenicist, and there is little demand compared to more ordinary ways of having kids. (Who wants a clone of themself? Gay couples? No, because then one is left out; they'd rather have gametogenesis so they can have a true genetic child of them both.) So it doesn't happen. And it'll continue not happening for who knows how long. Lots of things are like that.