Edit to add:

I've read more about compatibilism, including the comments on the original post, and I think it's all been quite helpful in enhancing my understanding. The post and comments here were particularly illuminating: https://whyevolutionistrue.com/2021/04/30/why-do-we-need-free-will-compatibilism/

I'm increasingly inclined to think that the terms "free will" and "moral responsibility" are not helpful words in getting at what I'm interested in.

What I'm really interested in is, similar to my 4th question in the original post, whether certain kinds of "reactive attitudes" (to borrow a term from P. Strawson), e.g. praise and blame, make any sense. And I think these and many other reactive attitudes really depend on the ability to do otherwise. But everyone agrees that determinism rules out the ability to do otherwise. So I'm inclined to think it never makes sense to hold these specific attitudes.

Take, for example, the person who coolly and calmly deliberates and then decides to kill someone. Maybe he stood to gain financially if he murdered his victim. We are reflexively inclined to say he is morally blameworthy. But I think there is a deep sense in which "his brain made him do it." The neurons in his brain fired in such a way that he found the argument "kill X for money" more persuasive than the alternative. Of course, I might still rationally adopt a range of negative attitudes toward the murderer in the same way that I might feel negatively toward a hurricane that killed someone. I might even use some of the same words--e.g., I might say that I "hate" the murderer and the hurricane. But just as it would be incoherent to "blame" the hurricane (in a certain sense), it is similarly incoherent to blame the murderer in this sense (assuming determinism is true).

In other words, the important point, at least for me, is that I (used to) feel differently about the murderer and the hurricane. There is a certain range of reactive attitudes (of which a certain flavor of blame is one) that I would never be tempted to ascribe to the hurricane. And, on reflection, the reason I wouldn't be tempted to "blame" a hurricane (or a child, or a person with a brain tumor, or an animal, or a killer robot) really does have something to do, at bottom, with the ability to do otherwise. When I reflect on my intuition, I feel like the hurricane "couldn't help it." (Of course, I might still incarcerate the murderer because the probability of his committing another murder, conditional on his having murdered once, is unacceptably high; or for more general deterrence.)

But I think I've been programmed by natural selection to think the murderer could help it. And I now think that this is just a defect in my programming. This might not be a "defect" in every sense; maybe society functions better if we all go around thinking that the murderer could help it. But in true rationalist spirit, I'm interested in whether the intuitive feeling I have that there is a meaningful difference in blameworthiness between the murderer and the hurricane is well justified. We can argue about the definition of "blame" in much the same way you can argue about how to define "free will" and "moral responsibility." That's why I'm trying to point to the difference in our intuitive (moral?) judgments between the murderer and the hurricane to get at whether there is a real difference there or not.

Now, does any of this make a difference to how I feel or act? Actually, yes. Since I've started thinking this way, I'm less hard on myself. In the past, I might have looked at a wildly successful person and felt bad about the comparatively little I've managed to achieve. But now, even if I might still be instinctively inclined to feel that way, I reflect: It makes no sense to feel bad because everything I've done (and everything the wildly successful person has done) is just the product of genes and environment. Our brains made us do it. Similarly, if I feel proud about what I've achieved, I think to myself: I can't really take credit for any of this. My brain made me do it.

I'm less likely to get (or maybe just stay) angry with people in my life. If John does something that makes me mad, I genuinely stop and say to myself "John couldn't help it." I might still be frustrated in the same way I might be frustrated if a person hit me as the result of an involuntary spasm. Again, there are some negative reactive attitudes that I would feel quite justified in feeling toward any hardship imposed on me via an involuntary mechanism. But there is a certain kind of anger toward competent adults (rooted, I think, in the sense that they chose their own actions) that I used to feel and am now less likely to feel.

I'm also more compassionate and empathetic. If someone wrongs me, I think to myself that it is literally true that if I were in their position, I would've done the exact same thing.

These are all positive changes, but I won't deny that there are also negative things about this worldview. For instance, certain types (though not necessarily all types) of gratitude seem not to make sense in a deterministic universe. And yes (because I might as well go here first), I feel less negatively toward Hitler than I would if I felt like Hitler made voluntary choices.

I'm not interested in which set of beliefs would be instrumentally useful (or otherwise "better") for me or for society to adopt. I'm interested in what's actually true: whether my instinctive reaction that there is a meaningful sense of (e.g.) blame that it makes sense to ascribe to a competent adult, but not to something that manifestly lacks agency, is well justified.

I haven't fully internalized this worldview, and I doubt I ever could. Some of our hard-wired instincts are impossible to fully overcome. I haven't fully grappled with what it would mean to live according to the principle that we are all meat robots.

But I'd still like to know: Have I missed something? I take it that at least some compatibilists want to say that the instinct we have to feel differently about the murderer and the hurricane is well justified. If you think so, please explain where I've gone wrong!

A final (and somewhat disconnected) point that may serve as an interesting focal point for disagreements between compatibilists and incompatibilists: I think on my view, I basically have to accept the Minority Report thought experiment in which we punish people before they've actually done anything wrong. Assuming we can unerringly predict whether they will commit a crime or not, there's no difference between ex ante and ex post (except that if we wait for ex post, there's a crime victim; so we are probably required to act ex ante). Of course, it's a weird thought experiment because it assumes perfect foreknowledge, etc. But even on this framing, would a compatibilist disagree that we can punish the criminal before the crime?

The original post is preserved below.

****

I believe in determinism and think (a la Sam Harris) that this means that my will is (at least in some important sense) unfree or (what amounts to the same thing) that I'm not morally responsible for my actions. But I take it from reading posts on LW that most people around these parts, following EY, are compatibilists about free will. I'm trying to understand compatibilism better, so I have some questions I hope someone can answer.

First, how would a compatibilist explain why the mentally insane (or hypnotized etc.) are not morally responsible? Consider two people, Smith and Jones, who are both murderers. Smith has a brain tumor such that he couldn't have done otherwise; the brain tumor "made" him do it. Jones is a "normal" person. I think that all would say that Smith isn't morally responsible for the murder. If asked to explain why, I think my explanation would invoke something like Smith's inability to do otherwise. "He had no other choice," etc. Why is Jones different? If determinism is true, it seems to me that his brain "made" him do it just the same as Smith's did. So how would a compatibilist differentiate between these cases?

Second, would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions? It evaluates a range of options and then makes a decision without external coercion. I think humans are basically just computers running algorithms programmed into us by natural selection. But I also think that the computer lacks responsibility in the sense I care about. If the computer lost a game, I don't think it would make sense for me to get angry with the computer in much the same way that it wouldn't make sense for me to get mad at my car for breaking down. Again, if asked to explain why, I think I would say something like "the computer couldn't help it." Would a compatibilist agree?

Third, and related to the second question, how about animals? Do they have free will? Is my dog in any sense "responsible" for peeing on the carpet? Does it make sense to say that the bear "freely chose" to eat the hiker? If not, what makes humans different?

Fourth, does it ever make sense to feel regret/remorse/guilt on a compatibilist view? Suppose I read a bunch of posts on LessWrong and think that preventing the robot apocalypse is the most important thing I could be working on. But I like reading novels, so I decide to get a PhD in English. I then think to myself "I really should've studied computer science instead." And so, I tell my friend that I regret choosing to study English instead of computer science. But my friend responds, "Your beliefs/desires, and ultimately your choice to study English, were determined by the state of your brain, which was determined by the laws of physics. So don't feel bad! You were forced to study English by the laws of physics."

Is my friend wrong? If so, why? At least intuitively, when I regret something, it seems to be because I feel like I could've done something differently. If someone held a gun to my head and forced me to study English, then I wouldn't regret studying English because I had no choice. But if determinism is true, it seems to me I had no more "choice" in studying English than I would have if someone was holding a gun to my head. And it seems to me that this logic suggests that regret is always illogical.

Finally, I think at least some versions of compatibilism want to distinguish between "internal" motivations and "external" motivations. Why is this difference important? If determinism is true, then it seems to me that I was just as constrained by the laws of physics (and the circumstances of my birth, etc.) to do X as if there were some external force requiring me to do X.

****

7 Answers

shminux

Jan 25, 2023

I believe in determinism and think (a la Sam Harris) that this means that my will is (at least in some important sense) unfree or (what amounts to the same thing) that I'm not morally responsible for my actions.

You are "physically responsible", in the sense that your actions are determined by the laws of physics, whether deterministic or stochastic. The "moral responsibility" part has nothing to do with physics and everything to do with emergence: if you compare successful, thriving societies with less successful ones, you will notice that apparent defections are punished in one way or another. What counts as a defection from societal norms depends heavily on the society in question.

In your example, Smith and Jones are both compelled to murder by the laws of physics; however, a society where Jones is not punished is less thriving on average than one where Smith is not punished, assuming brain tumors of that kind are rare and easily diagnosable. Compatibilism is an emergent belief that helps society function once the more dualist ideas are falsified by experiment. There is nothing deep about it.

FeepingCreature

Jan 25, 2023

I'll cheat and give you the ontological answer upfront: you're confusing the alternate worlds simulated in your decision algorithm with physically real worlds. And the practical answer: free will is a tool for predicting whether a person is amenable to persuasion.

Smith has a brain tumor such that he couldn’t have done otherwise

Smith either didn't simulate alternate worlds, didn't evaluate them correctly, or the evaluation didn't impact his decisionmaking; there is no process flow through outcome simulation that led to his action. Instead of "I want X dead -> murder" it went "Tumor -> murder". Smith is unfree, even though both cases are physically determined.
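
To make the process-flow distinction concrete, here is a minimal sketch (an editorial illustration, not code from this answer; the option names and toy utility values are assumed). Both agents below are fully deterministic, but only the first produces its action by evaluating simulated outcomes.

```python
def evaluate(outcome: str) -> int:
    """Toy utility over imagined outcomes (values assumed for illustration)."""
    return {"X dead, I get money": 10, "X alive, no money": 0}[outcome]

def deliberating_agent(options: dict) -> str:
    """Jones-style agent: simulates the outcome of each option and picks the
    best one, so the action is caused *through* the evaluation step."""
    return max(options, key=lambda action: evaluate(options[action]))

def tumor_agent(options: dict) -> str:
    """Smith-style agent: the 'tumor' forces the output directly; the outcome
    simulation never influences the action."""
    return "murder"

options = {"murder": "X dead, I get money", "walk away": "X alive, no money"}
print(deliberating_agent(options))  # 'murder', reached via outcome evaluation
print(tumor_agent(options))         # 'murder', not reached via outcome evaluation
```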

Second, would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions?

Does the algorithm morally evaluate the outcomes of its moves? No. Hence it is not morally responsible. The algorithm does evaluate the outcomes of its moves for chess quality; hence it is responsible for its victory.

Is my dog in any sense “responsible” for peeing on the carpet?

Dogs can be trained to associate bad actions with guilt. There is a flow that leads from action prediction to moral judgment prediction; the dog is morally responsible. Animals that cannot do this are not.

Fourth, does it ever make sense to feel regret/remorse/guilt on a compatibilist view?

Sure. First off, note that our ontological restatement upfront completely removed the contradiction between free will and determinism, so the standard counterfactual arguments are back on the table. But also, I think the better approach is to think of these feelings as adaptations and social tools. "Does it make sense" = "is it coherent" + "is it useful". It is coherent in the "counterfactuals exist in the prediction of the agent" model; it is useful in the "push game theory players into cooperate/cooperate" sense.
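
As a toy illustration of the "useful" half of that claim (an editorial sketch, not code from this answer; the payoffs are standard Prisoner's Dilemma values and the guilt cost is assumed): an agent that charges itself an internal guilt cost for defecting ends up preferring cooperation against a cooperator, while an otherwise identical guilt-free agent defects.

```python
# Toy payoff table for a one-shot Prisoner's Dilemma:
# (my move, their move) -> my payoff. "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_move(their_move: str, guilt_penalty: float) -> str:
    """Pick the move with the highest payoff, after subtracting an internal
    'guilt' cost whenever the candidate move is defection."""
    def value(my_move: str) -> float:
        cost = guilt_penalty if my_move == "D" else 0.0
        return PAYOFF[(my_move, their_move)] - cost
    return max(("C", "D"), key=value)

# Against a cooperator: a guilt-free agent exploits, a guilt-prone one cooperates.
print(best_move("C", guilt_penalty=0.0))  # 'D'  (5 > 3)
print(best_move("C", guilt_penalty=3.0))  # 'C'  (3 > 5 - 3)
```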

[anonymous]

So essentially it's a question of "COULD the actor have considered societal rules and consequences before acting".

This makes sense for the brain-tumor vs. no-tumor cases.

But what about "they got drunk and then committed murder"? They were unable to consider the consequences (or to not murder) while drunk.

Hypothetically they don't know that becoming drunk makes you homicidal.

Or the "ignorance of the law is not an excuse".

A lot of these interpretations end up being "we know they probably didn't have any form of ability to not commit this crime, but we are going to punish anyway"...

JBlack
It does still make sense for drunk murder, because it is well known that getting drunk impairs judgement, and the person chose to accept the consequences of operating with impaired judgement. They may not specifically have known that they would murder (and almost everyone doesn't), but they are still held responsible for the consequences of choosing to knowingly impair their own judgement. As a comparison, we (as a society) do not generally hold responsible those who become drunk or drugged involuntarily. It's the same principle of responsibility, just applied to an earlier action: the decision to impair their own judgement.

The same sort of thing applies to "ignorance of the law is no excuse", though with thinner justification than when the principle was first established. It is the responsibility of everyone in a state to pay enough attention to the laws governing their actions to know which actions are certainly illegal and which may be dubious. If you are importing goods, then it is your responsibility to know the laws relating to the goods you are importing. If you do not or cannot, then it is your responsibility not to import goods. Again, it is the same principle of moral responsibility applied to an earlier action: the decision to import goods while knowing that you are not fully aware of the laws governing it. The problem is that the volume of law has increased to such an extent that keeping up with it even in one sub-field has become a full-time specialized occupation.
[anonymous]
Right. Similarly, what if regular people don't usually murder when drunk, but YOU have neurological faults that make you drunkenly homicidal? See what I mean? It's just like the law thing: it's one thing if the law is simple, clear, and well known; it's another if you're just helping out a friend by carrying a few live crayfish, or an endangered species, through customs. The legal judgements are nonsense and unjust; the penalties are imposed for societal convenience.
JBlack
I'm not actually sure what any of this discussion sub-branch has to do with free will and moral responsibility. It seems to have gone off on a tangent about legal minutiae, as opposed to whether moral responsibility is compatible with determinism. But I guess topic drift happens.

It may happen to be true that in some specific case, with much better medical technology, it could be proven that a specific person had neurological differences which meant that they would have no hesitation in murdering when drunk even though they would never do so while sober, and that it wasn't known that this could be possible. In that case, sure, moral responsibility for this specific act seems to be reduced, apart from the general responsibility due to getting drunk while knowing that getting drunk impairs judgement (including moral judgement). But they absolutely should be held very strongly responsible if they ever willingly get drunk knowing this, even if they don't commit murder on that occasion! This holds regardless of whether the world is deterministic or not.

Dagon

Jan 25, 2023

If you do NOT believe in some form of compatibilism, you have no expectation that this question changes anything - asking it is just what you do, and responding is just what some of us do, with no motivation-level abstractions to the causality.  Likewise, your use of the term "responsible" is meaningless - there's no way to change any behaviors regarding whether an adult, child, or insane person is punished.  There are noises coming out of the judge's mouth that claim it's in response to a crime, but really it's just what is preordained.  

In other words, your question assumes some form of reasoning is possible over behaviors.

I also try to reason over behaviors, and it feels a lot like I make decisions.  It may or may not be true, and I don't fully understand the mechanism of branching (in fact, I don't even start to understand).  But if it's wrong, it doesn't matter what I believe, and if there IS such a mechanism, then I will experience better outcomes if I take motivation into account.

I mostly agree but partly disagree with one point: even in a fully deterministic world without compatibilism, behaviour can be both predetermined and a meaningful response to previous actions in the world. It's just not meaningful to say that it could have been different.

It's also still possible to observe and reason about behaviour in such a world, to test a model of whether an agent's behaviour changed after punishment, and to form policies and act on the results (it all adds up to normality, after all). Strong determinism doesn't mean nothing changes; it just means that it couldn't have changed in any other way.

JBlack

Jan 25, 2023

A major part of "responsibility" is knowing whether an action is wrong or not, and choosing an action one knows to be wrong. This does not depend upon free will of any kind. If a person is not capable of knowing whether an action is wrong or not, then they have diminished responsibility. Note that in such cases the person often still faces legal and societal consequences.

Furthermore, we usually only hold actions to be "wrong" where we (as a society) believe that almost everyone in the corresponding situation would be capable of doing the right thing. This too does not require free will to exist: it is a hypothetical and therefore in the map, not the territory.

These principles can be based in consequentialist models that hold just as well in completely deterministic universes as free-willed ones: agents complex enough to have moral models of behaviour at all, and capable of updating those models, can be trained to not do things harmful to society.

With the example of a chess computer losing a game that you wanted it to win, is it plausible that the computer knew that losing was wrong? Maybe in the case of a very advanced computer that has such things as part of its model, but almost certainly not in general. Did it choose to lose, knowing that it was wrong to do so? Again, almost certainly not. Even in hypothetical cases where it knew that losing was wrong, it likely operated according to a model in which it believed that the moves it made gave it an increased chance of winning over the moves it did not make. Would almost everyone in a comparable situation have been capable of taking the right action (in the scenario presented, winning)? Almost certainly not. So all three criteria here fail, and by the failure of the third, mere loss of a chess game is not even a morally culpable action.

Is it likely that a dog peeing on the carpet knows that it is wrong? An adult, house-trained dog probably does, while a puppy almost certainly does not. Most of the other answers are likewise situation-dependent. An adult, house-trained dog with no reasonable excuse (such as being prevented from accessing the "proper" place to urinate or suffering from medical incontinence) probably could be held morally responsible. A puppy peeing on the carpet, or a bear eating a hiker, almost certainly could not.

I'm not sure where the problem with regret's "irrationality" lies. It's an emotion, not a decision or belief, and rationality does not apply. It may help you update in the right direction though, and communicate that update to others.

Dentin

Jan 24, 2023

The problem here is that you're using undefined words all over the place. That's why this is confusing. Examples:

  1. "how would a compatibilist explain why the mentally insane (or hypnotized etc.) are not morally responsible?"

What is 'morally' in this context? What's the objective, "down at the quantum mechanical state function level" definition of 'moral'?

What exactly do you mean by 'responsible'?

  2. "would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions?"

What is a 'decision' here? Does that concept even apply to algorithms?

What does 'free will' mean here? Does 'free will' even make sense in this context?

  3. "how about animals? Do they have free will? Is my dog in any sense "responsible" for peeing on the carpet?"

Again, same questions: What do you mean by 'free will'? What do you mean by 'responsible'? The definitions you choose, are they objective, based on the territory, or are they labels on the map that we're free to reassign as we see fit?

The rest of the post continues in a similar vein. You're running into issues because you're mistaking the words for the reality, and saying "hey, these words don't match up". That's not a problem with reality; that's a problem with the words.

My advice would be to remember that ultimately, at the bottom of physics, there's only particles/forces/fields/probabilities - and nowhere in the rules governing physics is there a fundamental force for 'free will' or a particle for 'responsibility'.

TAG

and nowhere in the rules governing physics is there a fundamental force for ‘free will’ or a particle for ‘responsibility’.

Nor is there a "toothbrush". You are confusing physicalism with mereological nihilism.

lalaithion

Jan 25, 2023

Reductionism means that "Jones" and "Smith" are not platonic concepts. They're made out of parts, and you can look at the parts and ask how they contribute to moral responsibility.

When you say "Smith has a brain tumor that made him do it", you are conceiving of Smith and the brain tumor as different parts, and concluding that the non-tumor part isn't responsible. If you ask "Is the Smith-and-brain-tumor system responsible for the murder", the answer is yes. If you break the mind of Jones into parts, you could similarly ask "Is the visual cortex of Jones responsible for the murder", and the answer would be no.

So, why do we conceive of "Smith" and "the brain tumor" as more naturally separable? Because we have an understanding that the person Smith was before the brain tumor has continuity with the current Smith+brain tumor system, that the brain tumor is separable using surgery, that the post-surgery Smith would have continuity with the current Smith+brain tumor system, that the post-surgery Smith would approve of the removal of the brain tumor, and the pre-tumor Smith would approve of the removal of the brain tumor.

Whereas we don't have an understanding of Jones in that way. If we did, maybe if we were visited by aliens which had much more advanced neuroscience, and they pointed out that lead exposure had changed a neural circuit so that it operated at 85% efficiency instead of the normal 100%, and they could easily change that, and the post-change Jones didn't want to commit any murders, we might not consider the new Jones morally responsible. But the old Jones-minus-efficiency was responsible.

TAG

Jan 25, 2023

But I take it from reading posts on LW that most people around these parts, following EY, are compatibilists about free will

EY isn't a standard compatibilist about FW. He believes that determinism is compatible with the feeling of free will.

I believe in determinism and think (a la Sam Harris) that this means that my will is (at least in some important sense) unfree or (what amounts to the same thing) that I'm not morally responsible for my actions.

It's not obvious that some degree of libertarian free will is the only thing that could explain moral responsibility. Harris explicitly claims that people, or at least people in the USA, are punished excessively because of a widespread belief in moral responsibility. But he doesn't believe in emptying the prisons. He seems to believe in punishing people less, more constructively, more humanely, etc., like they do in some European countries. But that is a middle-of-the-road position... and compatibilism is a middling position.

First, how would a compatibilist explain why the mentally insane (or hypnotized etc.) are not morally responsible?

How would they say an avalanche or tornado is not morally responsible? For a compatibilist, agency and responsibility are the outcomes of a certain kind of complex mechanism that not every entity has, and which humans don't necessarily have in a fully functioning form, for instance if they have brain tumours or other impairments.

Second, would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions? It evaluates a range of options and then makes a decision without external coercion. I think humans are basically just computers running algorithms programmed into us by natural selection. But I also think that the computer lacks responsibility in the sense I care about. If the computer lost a game, I don’t think it would make sense for me to get angry with the computer in much the same way that it wouldn’t make sense for me to get mad at my car for breaking down.

So you are asking exactly what the mechanism is? Or arguing that you haven't seen a convincing one?

Fourth, does it ever make sense to feel regret/remorse/guilt on a compatibilist view?

Maybe not. That could be a reason for thinking libertarian free will is the default concept.

I believe Harris's view is that we are still justified in imprisoning people for consequentialist reasons, just not based on a "moral desert" theory.

TAG
I know, but that has problems of its own: there isn't much practical difference between imprisonment-for-consequentialist-reasons and imprisonment-for-moralistic-reasons, so there's not much basis for a crusade.

****

6 comments

Framing free will as a question of responsibility feels noncentral as LW talk, because this judgement doesn't seem decision relevant.

I'm not sure I follow. But in any event, I was thinking of "free will" and "moral responsibility" as definitionally connected (following a convention I picked up reading about the topic). That is, I'm morally responsible for my actions if I undertook them because of my free will.  And my will is "free" in the relevant sense if I can be held morally responsible for my actions. But I'm not attached to the "responsibility" talk; everything in the above can just be reframed in "free will" terms.

So the usual question for LW is "How to make good decisions?", with many variations of what "good" or "decisions" might mean. These are not necessarily good actions: it could turn out that a bad action results from following a good policy, and the decision was about the policy.

In that context, asking if something is "free will" or "moral responsibility" is not obviously informative. Trying to find a clearer meaning for such terms is still a fine task, but it needs some motivation that makes assignment of such meaning not too arbitrary. I think "free will" does OK as simply a reference to decision making considerations, to decision making algorithms and immediately surrounding theory that gives them meaning, but that's hardly standard.

Moral responsibility is harder to place; perhaps it's a measure of how well an instance of an agent channels their idealized decision algorithm? Then things like brain damage disrupt moral responsibility by making the physical body follow something other than the intended decision algorithm, thus making that algorithm not responsible for what the body does, since the body is no longer under the algorithm's control.

TAG

So the usual question for LW is “How to make good decisions?”,

In the sense of "beneficial to me"... But that doesn't mean other issues vanish. You might like oranges, but apples still exist. And moral responsibility isn't a trivial issue, since it leads to people being jailed and executed.

my friend responds, "Your beliefs/desires, and ultimately your choice to study English, were determined by the state of your brain, which was determined by the laws of physics. So don't feel bad! You were forced to study English by the laws of physics."

The joke I always told was

"I choose to believe in free will, because if free will does exist, then I've chosen correctly, and if free will doesn't exist, then I was determined to choose this way since the beginning of the universe, and it's not my fault that I'm wrong!"

It's a question of where the information is combined, and what options the system considered. Yes, the chess engine does have a specific slice of free will about its actions; but clearly, it doesn't have meta-level free will about what kind of actions to take. It is relatively deterministic in its decisionmaking; we have lots of incremental steps of noise on a saddle point.

Saddle points are at the edge of chaos; balancing on them is a delicate act that cells perform at all times by staying alive, so in this sense, life almost always has at least a little bit of free will at the cellular scale. When neurons communicate successfully at scale, they are able to form large networks that represent the shape of the outside world in great detail, and then balance on the decision and make it incrementally, thereby integrating large amounts of information. That means free will is the chaos-resolution process, able to integrate large amounts of information while still retaining the self-trajectory.

Deterministic or not, it's highly chaotic - even if the RNG is pseudorandom instead of truly random, it looks random from where we sit, many levels of nesting larger than it. Because of that, even if the future is in some sense pseudorandomly predetermined, it is not known to us, and so we have the hyperreal decisionmaking process of writing some portion of the future; as far as we are concerned, the future is logically underdetermined until we decide what it is, via our integration of information into prediction and decisionmaking about which paths through the decision to diffuse away.

This is a narrow form of the sparse-multiverse hypothesis: we are bubbles in a potentially mostly-dense multiverse, but within the bubble where we continue to exist, we decide what is most probable by denoising towards it. When someone has a brain injury of the kinds discussed above, they can lose some capability to combine information, because some paths through the network of brain cells no longer maintain the various parts of being a hybrid model-based reinforcement learner as well. But they retain free will within what they're able to model.

In the brain tumor case, I would say that the defection against other life clearly came from the brain tumor. However, I don't like the penalty example; my view is that a decision to murder made in a clear mind should also be considered an unwellness that one may be asked to self-modify away, just like cancers that make one a danger to others. And I would say that in both the case of cancer and of "mere" decision, the question should be one of decisionmaking after the fact: will you allow society to request that you prove you've self-modified away from the decisionmaking matter-process that caused you to do this last time?