Followup to: Thou Art Physics, Timeless Control, Hand vs. Fingers, Explaining vs. Explaining Away

I know (or could readily rediscover) how to build a binary adder from logic gates.  If I can figure out how to make individual logic gates from Legos or ant trails or rolling ping-pong balls, then I can add two 32-bit unsigned integers using Legos or ant trails or ping-pong balls.
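To make "the structure that is addition" concrete, here is a minimal sketch (mine, not part of the original trick) of a 32-bit unsigned adder built from nothing but simulated logic gates; the gate functions stand in for the Legos, ant trails, or ping-pong balls:

```python
# Hypothetical illustration: addition built purely from AND/OR/XOR gates.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder composed of five gates."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add_u32(x, y):
    """Ripple-carry addition of two 32-bit unsigned integers."""
    result, carry = 0, 0
    for i in range(32):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # the final carry is discarded, i.e. addition mod 2**32

assert add_u32(1234567, 89101112) == (1234567 + 89101112) % 2**32
```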

Someone who had no idea how I'd just done the trick might accuse me of having created "artificial addition" rather than "real addition".

But once you see the essence, the structure that is addition, then you will automatically see addition whenever you see that structure.  Legos, ant trails, or ping-pong balls.

Even if the system is - gasp! - deterministic, you will see a system that, lo and behold, deterministically adds numbers.  Even if someone - gasp! - designed the system, you will see that it was designed to add numbers.  Even if the system was - gasp! - caused, you will see that it was caused to add numbers.

Let's say that John is standing in front of an orphanage which is on fire, but not quite an inferno yet, trying to decide whether to run in and grab a baby or two.  Let us suppose two slightly different versions of John - slightly different initial conditions.  They both agonize.  They both are torn between fear and duty.  Both are tempted to run away, and know how guilty they would feel, for the rest of their lives, if they ran.  Both feel the call to save the children.  And finally, in the end, John-1 runs away, and John-2 runs in and grabs a toddler, getting out moments before the flames consume the entranceway.

This, it seems to me, is the very essence of moral responsibility - in the one case, for a cowardly choice; in the other case, for a heroic one.  And I don't see what difference it makes, if John's decision was physically deterministic given his initial conditions, or if John's decision was preplanned by some alien creator that built him out of carbon atoms, or even if - worst of all - there exists some set of understandable psychological factors that were the very substance of John and caused his decision.

Imagine yourself caught in an agonizing moral dilemma.  If the burning orphanage doesn't work for you - if you wouldn't feel conflicted about that, one way or the other - then substitute something else.  Maybe something where you weren't even sure what the "good" option was.

Maybe you're deciding whether to invest your money in a startup that seems like it might pay off 50-to-1, or donate it to your-favorite-Cause; if you invest, you might be able to donate later... but is that what really moves you, or do you just want to retain the possibility of fabulous wealth?  Should you donate regularly now, to ensure that you keep your good-guy status later?  And if so, how much?

I'm not proposing a general answer to this problem, just offering it as an example of something else that might count as a real moral dilemma, even if you wouldn't feel conflicted about a burning orphanage.

For me, the analogous painful dilemma might be how much time to spend on relatively easy and fun things that might help set up more AI researchers in the future - like writing about rationality - versus just forgetting about the outside world and trying to work strictly on AI.

Imagine yourself caught in an agonizing moral dilemma.  If my examples don't work, make something up.  Imagine having not yet made your decision.  Imagine yourself not yet knowing which decision you will make.  Imagine that you care, that you feel a weight of moral responsibility; so that it seems to you that, by this choice, you might condemn or redeem yourself.

Okay, now imagine that someone comes along and says, "You're a physically deterministic system."

I don't see how that makes the dilemma of the burning orphanage, or the ticking clock of AI, any less agonizing.  I don't see how that diminishes the moral responsibility, at all.  It just says that if you take a hundred identical copies of me, they will all make the same decision.  But which decision will we all make?  That will be determined by my agonizing, my weighing of duties, my self-doubts, and my final effort to be good.  (This is the idea of timeless control:  If the result is deterministic, it is still caused and controlled by that portion of the deterministic physics which is myself.  To cry "determinism" is only to step outside Time and see that the control is lawful.)  So, not yet knowing the output of the deterministic process that is myself, and being duty-bound to determine it as best I can, I bear no less a weight of moral responsibility.

Someone comes along and says, "An alien built you, and it built you to make a particular decision in this case, but I won't tell you what it is."

Imagine a zillion possible people, perhaps slight variants of me, floating in the Platonic space of computations.  Ignore quantum mechanics for the moment, so that each possible variant of me comes to only one decision.  (Perhaps we can approximate a true quantum human as a deterministic machine plus a prerecorded tape containing the results of quantum branches.)  Then each of these computations must agonize, and must choose, and must determine their deterministic output as best they can.  Now an alien reaches into this space, and plucks out one person, and instantiates them.  How does this change anything about the moral responsibility that attaches to how this person made their choice, out there in Platonic space... if you see what I'm trying to get at here?
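As a toy picture of the "deterministic machine plus a prerecorded tape" approximation (a rough sketch with made-up numbers, not anything from the post), note that once the program and the tape are fixed, the decision is fixed too; re-running the computation a hundred times gives the same answer a hundred times:

```python
# Hypothetical sketch: a deterministic decision procedure whose would-be
# quantum coin flips are supplied by a prerecorded tape.

def decide(agent, tape):
    """Given the same agent state and the same tape, the output never varies."""
    noise = tape[0] if tape else 0.0   # prerecorded "quantum branch" outcome
    return "runs in" if agent["duty"] + noise > agent["fear"] else "runs away"

john_1 = {"fear": 0.70, "duty": 0.65}  # slightly different initial conditions
john_2 = {"fear": 0.60, "duty": 0.65}
tape = [0.0]

print("John-1", decide(john_1, tape))  # John-1 runs away, on every re-run
print("John-2", decide(john_2, tape))  # John-2 runs in, on every re-run
```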

The alien can choose which mind design to make real, but that doesn't necessarily change the moral responsibility within the mind.

There are plenty of possible mind designs that wouldn't make agonizing moral decisions, and wouldn't be their own bearers of moral responsibility.  There are mind designs that would just play back one decision like a tape recorder, without weighing alternatives or consequences, without evaluating duties or being tempted or making sacrifices.  But if the mind design happens to be you... and you know your duties, but you don't yet know your decision... then surely, that is the substance of moral responsibility, if responsibility can be instantiated in anything real at all?

We could think of this as an extremely generalized, Generalized Anti-Zombie Principle:  If you are going to talk about moral responsibility, it ought not to be affected by anything that plays no role in your brain.  It shouldn't matter whether I came into existence as a result of natural selection, or whether an alien built me up from scratch five minutes ago, presuming that the result is physically identical.  I, at least, regard myself as having moral responsibility.  I am responsible here and now; not knowing my future decisions, I must determine them.  What difference does the alien in my past make, if the past is screened off from my present?

Am I suggesting that if an alien had created Lenin, knowing that Lenin would enslave millions, then Lenin would still be a jerk?  Yes, that's exactly what I'm suggesting.  The alien would be a bigger jerk.  But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before.

And as for there being psychological factors that determine your decision - well, you've got to be something, and you're too big to be an atom.  If you're going to talk about moral responsibility at all - and I do regard myself as responsible, when I confront my dilemmas - then you've got to be able to be something, and that-which-is-you must be able to do something that comes to a decision, while still being morally responsible.

Just like a calculator is adding, even though it adds deterministically, and even though it was designed to add, and even though it is made of quarks rather than tiny digits.

55 comments

Artificial addition is not intrinsically addition, any more than a particular string of shapes on a page intrinsically means anything. There is no "structure that is addition", but there are "structures" that can represent addition.

What is addition, primordially? The root concept is one of combination or juxtaposition of actual entities. The intellectual process consists of reasoning about and identifying the changes in quantity that result from such juxtaposition. And artificial addition is anything that allows one to skip some or all of the actual reasoning, and proceed directly to the result.

Husserl's Logical Investigations has a lot about the phenomenology of arithmetic. That's where I'd go for a phenomenological ontology of addition. Ironically, through the exactness of its analyses the book played a distant role in launching cognitive science and the mechanization of thought, even while its metaphysics of mind was rejected.

The basic distinction is between intrinsic intentionality and derived intentionality. Thoughts have intrinsic intentionality, they are intrinsically about what they are about; words and "computations" have derived intentionality, they are convention-dependent assignments of meaning to certain physical things and processes. Artificial addition only has derived intentionality. If something has "the structure of addition", that means it can consistently be interpreted as implementing addition, not that it inherently does so.

The problem, of course, is that in the physical world it seems like nothing has intrinsic intentionality; everything is just a pile of atoms, nothing inherently refers to anything, nothing is inherently about anything. But there are causal relations, and so we have theories of meaning which try to reduce it to causal relations. B is about A if A has the right sort of effects on B. I think that's backwards, and superficial: if B is about A, that implies, among other things, that A has a certain causal relation to B, but the reverse does not hold. It's one of those things that needs a new ontology to be solved.

This perspective does not alter Eliezer's point. Even if thou art monadic intrinsic intentionality, rather than "physics", you're still something, and decisions still involve causation acting through you.

Mitchell: "If something has "the structure of addition", that means it can consistently be interpreted as implementing addition, not that it inherently does so."

Hmm. If a machine can be consistently interpreted as "doing addition", doesn't that indicate that there are intrinsic facts about the machine that have something to do with addition?

Lukas, I don't understand your objection at all. How does disrupting the physical adding machine mid-process prove that it isn't doing addition? One can also disrupt electronic computers mid-process...

Artificial addition is not intrinsically addition
On the contrary, 'addition' is entirely a relationship between inputs and outputs. As long as that relationship has the required properties, it IS addition.

We don't need a mind to perceive meaning in a pattern of electrical impulses generated by a circuit for that circuit to perform arithmetic. As long as the circuit enforces the correct relationship between input and output values, it implements the mathematical operation defined by that relationship.

'Intention' is irrelevant!

"Lukas, I don't understand your objection at all. How does disrupting the physical adding machine mid-process prove that it isn't doing addition? One can also disrupt electronic computers mid-process..."

I didn't say it doesn't do addition; I said that it doesn't do the same addition that the 'theoretical' adder is doing. That's what Eliezer called 'artificial addition'.

All you can say is that the physical adder will most of the time do a 'physical addition' that corresponds to the 'theoretical addition'; but you need to make a lot of assumptions about the environment of the physical adder (it doesn't melt, it doesn't explode etc.), and those assumptions don't need to hold.

You don't need to make assumptions for the theoretical adder: You define it to do addition.

To be clear then, your objection is that any physical device that seems to add is doing something different from a "theoretical" device that doesn't actually get built.

(The device made of lego is a red herring, since your argument also applies to real computers.)

I suppose I would say that if the physical device is working properly, it is indeed doing addition. But as a physical device, it always has the potential to fail.

"To be clear then, your objection is that any physical device that seems to add is doing something different from a "theoretical" device that doesn't actually get built."

As long as the domain on which they act is different, they are doing different things. If your theoretical device includes the whole universe as the domain on which it acts, then you generally cannot prove that they are doing different things (Halting problem).

But I do not say that you cannot create an AI that seems to act like a human, and I am not saying that it wouldn't be thinking like a human or that it wouldn't be conscious.

Re your moral dilemma: you've stated that you think your approach needs a half-dozen or so supergeniuses (on the level of the titans of physics). Unless they have already been found -- and only history can judge that -- some recruitment seems necessary. Whether these essays capture supergeniuses is the question.

Demonstrated (published) tangible and rigorous progress on your AI theory seems more likely to attract brilliant productive people to your cause.

iwdw

bambi: You're taking the very short-term view. Eliezer has stated previously that the plan is to popularize the topic (presumably via projects like this blog and popular science books) with the intent of getting highly intelligent teenagers or college students interested. The desired result would be that a sufficient quantity of them will go and work for him after graduating.

There's lots of ways to implement something as simple as addition, but how many ways are there to implement a man? Currently, the slightest mutation can cause a miscarriage, and a few molecules of poison can kill a man; that's how sensitive the implementation is.

What is the math on this? As the complexity of a thing goes up, surely the number of ways to implement it goes down (given each building block has a limited number of ways of combining)? To the point where, with something as complex as man, the way he is is the only way he can be; i.e., artificial man is impossible.

I know the claim was that morality was implementation-independent, but I am just bothered by the idea that there can be multiple implementations of John.

iwdw
I know the claim was that morality was implementation-independent, but I am just bothered by the idea that there can be multiple implementations of John.

Aren't there routinely multiple implementations of John?

John at 1213371457 (epoch time)
John at 1213371458
John at 1213371459
John at 1213371460
John at 1213371461
John at 1213371462

The difference between John and the John in a slightly different branch of reality is probably much smaller than the difference between John and the John five seconds later in a given branch of reality (I'm not sure of the correct grammar).

If "moral responsibility" is just moral response-ability, then sure, no problem. But I'd be careful to distinguish that sense of the term from the more common notion of moral responsibility as being morally praise- or blameworthy.

Hopefully, even if you're going to take the evopsych view--which is entirely beside the point--don't you think there's more to morality than status? And even if there weren't, Eliezer's "fluff" is no more in conflict with objective reality than "maximization of persistence odds" is.

Ian, you're being choosy with your examples. There are lots of mutations that do nothing; a few molecules of nonpoison will have no noticeable effect on a woman.


"or donate it to your-favorite-Cause"

I was almost expecting this to be a link to SIAI!

iwdw: you could be right -- perhaps the truly top talented members of the "next generation" are better attracted to AI by wandering internet blog sequences on "rationality" than some actual progress on AI. I am neither a supergenius nor young so I can't say for sure.

Careful Eliezer, the soberishim, the 'open conspiracy' or the 'brights' may be watching. You seem to be coming dangerously close to revealing the Fully Generalized Principle Against Undead that was secretly inscribed by the sane Jew Septimus Browne, in the Vivonumericon.

poke

I agree that determinism doesn't undermine morality in the way you describe. I remain, however, a moral skeptic (or, perhaps more accurately, a moral eliminativist). I'm skeptical that moral dilemmas exist outside of thought experiments and the pages of philosophy books and I'm skeptical that moral deliberation achieves anything. Since people are bound to play out their own psychology, and since we're inherently social animals and exist in a social environment, I find it unlikely that people would behave substantially differently if we eliminated "morality" from our concept space. In that respect I think morality is an epiphenomenon.

Some people want to take part of our psychology and label it "morality" or take the sorts of diplomacy that lead us to cooperate for our mutual benefit and label it "morality" but they're essentially moral skeptics. They're just flexible with labels.

Anyway, is the Alien obviously only a jerk for creating Lenin? It seems to me that by the generalized anti-zombie principle, just having a complete enough model of Lenin to be certain of his actions is as good as creating him. OTOH, I agree that the Alien is a jerk to create Lenin if it's merely pretty confident of his expected behavior, but in this case maybe less of a jerk than Lenin. Anyway, if the alien's knowledge is jerkiness it's very plausibly not the case that "that which can be destroyed should be".

poke: cultures differ in the moralities that they affirm and this does lead to substantially different behaviors such as honor killings. Moral reflection has historically been one cause, along with economic selection, noise due to charismatic genius, cultural admixture and others, of changes within a culture in what morality it affirms. That moral reflection is mostly done by people with a concept space that contains morality as a concept and the reflection would probably follow different (I'd guess mostly 'better' in the sense of moving towards more stable states faster, but it might not have happened at all) dynamics if the same people had done it without a concept of morality.

poke

michael vassar,

I'm skeptical as to whether the affirmed moralities play a causal role in their behavior. I don't think this is obvious. Cultures that differ in what we call moral behavior also differ in culinary tastes but we don't think one causes the other; it's possible that they have their behaviors and they have their explanations of their behaviors and the two do not coincide (just as astrology doesn't coincide with astronomy). I'm also therefore skeptical that changes over time are caused by moral deliberation; obviously if morality plays no causal role in behavior it cannot change behavior.

What anthropologists call moral behavior and what most non-philosophers would recognize as moral behavior tends to coincide with superstitions more than weighty philosophical issues. Most cultures are very concerned with what you eat, how you dress, who you talk to, and so forth, and take these to be moral issues. Whether one should rescue a drowning child if one is a cancer researcher is not as big a concern as who you have sex with and how you do it. How much genuine moral deliberation is really going on in society? How much influence do those who engage in genuine moral deliberation (i.e., moral philosophers) have on society? I think the answers are close to "none" and "not at all."

As far as the Lego adder goes, it introduces a very tricky question, of what constitutes an implementation or instantiation of an abstract machine or algorithm. This is an issue which has been of some dispute among philosophers. One approach is to see if you can create a map between the physical system and the abstract system, where the algorithmic complexity of the map is small compared to the complexity of the system. Unfortunately this does not give an absolute answer, because of ambiguity in numerical measures of algorithmic complexity.

As far as morality, one could imagine a person, or machine, who made decisions that we would call moral, but without the emotional overtones that we would recognize as part of what makes moral decisions so difficult. He might have trouble deciding whether to save the orphans, because two high priority goals are thrown into an unusual state of conflict, but he would not feel guilt over his decision. It would just be that making the decision took a little longer than most decisions, because of the unusual structure of the situation with regard to satisfying his goals. It would not feel different than playing cards and struggling with a difficult probabilistic calculation in a borderline case where determining the optimal decision took a little longer than normal.

One might imagine, perhaps, an "altruistic psychopath" who was incapable of feeling emotional about these kinds of issues, but who had decided on abstract logical grounds to make helping others be a relatively high priority. And of course ordinary psychopaths are unfortunately not too rare, people who cause harm to others without feeling that they are behaving immorally.

I wonder whether it makes sense to impute moral weight to such individuals, who it seems are acting without a sense of morality. It calls into question the purpose of our sense of morality and why it seems so important to us.

it introduces a very tricky question, of what constitutes an implementation or instantiation of an abstract machine or algorithm.

That's not a tricky question. You simply determine whether the defining relationships hold in the system - if they do, that system is an implementation.
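As a minimal sketch of that test (my illustration; the helper names are hypothetical), one can treat the device as a black box and spot-check whether the defining relationships of addition hold over its observable input/output behavior:

```python
# Hypothetical spot check: does a black-box device satisfy the laws of addition?
import random

def satisfies_addition_laws(black_box, trials=1000, bound=2**16):
    """Sample identity, commutativity, associativity, and the successor law."""
    for _ in range(trials):
        a, b, c = (random.randrange(bound) for _ in range(3))
        if black_box(a, 0) != a:
            return False                                   # identity
        if black_box(a, b) != black_box(b, a):
            return False                                   # commutativity
        if black_box(black_box(a, b), c) != black_box(a, black_box(b, c)):
            return False                                   # associativity
        if black_box(a, 1) != black_box(a, 0) + 1:
            return False                                   # successor
    return True

print(satisfies_addition_laws(lambda x, y: x + y))  # True
print(satisfies_addition_laws(lambda x, y: x | y))  # almost surely False: OR is not addition
```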

Even compatibilists about moral responsibility and determinism tend to distinguish between deliberative and reason-responsive mechanisms which have (1) a deviant history, as in implantation or manipulation cases (such as those you describe), and those merely involving (2) determinism. You appear to run the two together. Of course, some philosophers argue that cases involving (1) and (2) are indistinguishable, or at least are not relevantly different in terms of our making ascriptions of moral responsibility on their basis. But these philosophers are usually incompatibilists, who question whether or to what extent we could be morally responsible under determinism. I take it that you want to be a compatibilist. But then it seems that you will have to grant that there is a difference between cases involving (1) and (2).

Anyway, is the Alien obviously only a jerk for creating Lenin? It seems to me that by the generalized anti-zombie principle, just having a complete enough model of Lenin to be certain of his actions is as good as creating him.

The jerkiness is not in creating Lenin per se, but in letting him loose on Earth. Keeping him in the Matrix unable to harm anyone might also be jerky, but for a completely different reason, and much more mild.

Vassar:

It seems to me that by the generalized anti-zombie principle, just having a complete enough model of Lenin to be certain of his actions is as good as creating him.

As Tarleton said, the jerkiness is for the resultant of Lenin's actions, which, being predictable, is a resultant of the Alien's actions just as much as of Lenin's.

It does not seem to me that predicting people is the same as creating them. Let's say you ask me for a glass of orange juice. I go to the refrigerator and find that there is no orange juice, so I try bringing you lemonade instead. When, in a highly sophisticated form of helpfulness, I project that you would-want lemonade if you knew everything I knew about the contents of the refrigerator, I do not thereby create a copy of Michael Vassar who screams that it is trapped inside my head.

Though I'm not absolutely sure of this, and I did once wonder what would happen if, after the Singularity, we were told that everyone we had ever imagined was themselves a person - including our imaginations of other people and our imaginations of ourselves - and they were all liberated from the confines of our heads, and set loose on the world.

Careful Eliezer, the soberishim, the 'open conspiracy' or the 'brights' may be watching. You seem to be coming dangerously close to revealing the Fully Generalized Principle Against Undead that was secretly inscribed by the sane Jew Septimus Browne, in the Vivonumericon.

I understood every word in that but not the message, unless the message was "I liked your post."

Lukas:

All you can say is that the physical adder will most of the time do a 'physical addition' that corresponds to the 'theoretical addition'; but you need to make a lot of assumptions about the environment of the physical adder (it doesn't melt, it doesn't explode etc.), and those assumptions don't need to hold.

I think you are confusing knowing that a system will perform arithmetic, with the system actually performing arithmetic. The latter does happen sometimes, despite all fallible assumptions.

The posters in this thread aren't even trying to make a good faith attempt to examine these topics in an unbiased way. This seems to me to be a clear example of a group in interaction, sharing some common biases (Lenin bad, us better, "personal moral responsibility" must be defended as existing for us to make these status constructions) working overtime to try to hide the bias. I suppose as much from yourselves as any 3rd party.

I'd recommend a simpler approach. (1) We may or may not have individual agency. (2) We may or may not be capable of making choices, even though we may experience what feels like making choices, anguishing over choices, etc. Kids playing videogames on autoplay seem to experience what feels like making choices, too. (3) Let's try to work together not to die - like Tim Russert just did - in the next 100 years, and onward. Let's not try to save everyone alive. Let's not try to save everyone who ever lived. Let's not try to save everyone who will be born. Let's focus on working together with those of us who want to persist and have something to contribute to the rest, and do our best to make it happen.

As for "moral responsibility", with regards to evaluating how smart people treat each other it's just a layer of straussian inefficiency, with regards to how smart people treat everybody else, it's a costly status game smart people play with each other. Let's reward status directly based on what a given person is doing to maximize persistence odds for the rest of us.

Am I suggesting that if an alien had created Lenin, knowing that Lenin would enslave millions, then Lenin would still be a jerk? Yes, that's exactly what I'm suggesting.

Eliezer, sorry but you fell into the correspondence bias trap.

I agree with your post if you substitute "moral responsibility" with "consequences". We all make decisions and we will have to face the consequences. Lenin enslaved millions, now people call him a jerk. But I don't think he is a worse person than any other human.

Consider that brain tumors can cause aggressive behaviors.

Consider the statement "This mess is your fault." We typically interpret such a statement to endow the actor with freedom of choice, which he has exercised badly. Furthermore, we typically characterize the nature of morality as requiring that freedom. But that interpretation should be taken figuratively. If taken literally, it is superfluous. It is more correct to interpret the statement like "The glitch is in you." This sense of the original statement reflects the determinism in the process of choosing. Matters of reward and punishment are not affected. That they should be is a misguided intuition, because they are choices too, to be interpreted similarly.

('Glitch' is defined as whatever causes the actor to produce undesirable results. The judgement of what is undesirable is wholly outside the scope of this topic. Conflating this with primate social status is not only incoherent, but irrelevant.)

After I wrote my comment I continued to think about it and I guess I might be wrong.

I no longer think that Eliezer fell into the correspondence bias trap. In fact, Lenin's actions seem to show his basic disposition. Another person in his situation would probably act differently.

What I still don't like is the idea of moral responsibility. Who is gonna be the judge on that? That's why I prefer to think of consequences of actions. Although I guess that morality is just another way of evaluating those so in the end it might be the same.

Roland: I would suggest that you might be associating the phrase "moral responsibility" with more baggage (which it admittedly carries) than you need to. I find I can discard the baggage without discarding the phrase. That we call behavior caused by, for example, power lust, "worse" than behavior caused by a tumor, is like a convention. It may not be strictly rational, but it is based on a distinction. Perhaps it is more practical. Perhaps there are other reasons.

Imagine two cars, one of which is 1/5 as efficient as the other. We can call the less efficient one "worse", because we have defined inefficiency as bad relative to our interests. We do not require that the car have freedom of choice about its efficiency before we pass judgement. Many mistakes in philosophy sprout from the strong intuitive wish to endow humans with extra non-physical ingredients that very complicated machines would not have.

I think I agree with perhaps some of the overall point, but I'm not sure.

How does one apply this whole idea to the notion of "Alien reaches into already existing person's brain and tweaks their minds/inclinations just enough to make the person later on do something bad where they otherwise would have done good."

Yes, obviously the alien did something bad. But then do we hold the person who did the bad thing morally responsible too? I guess one could say "well, the person who did the bad thing isn't the same person after the alien did the tweak", and that particular new person, which the alien instantiated by doing the tweak, is morally responsible for the bad decision... but that seems itself like a bit of a stretch. On the other hand, if we are chucking out the notions of punishment/justice/etc, and just keeping responsibility, maybe it can all be made to work.

Alternately, maybe we need a tweak in the notion of moral responsibility, so that morality remains, but moral responsibility, as such, isn't as strong a concept. "Make good things happen and bad things not happen. 'Moral responsibility' is just bookkeeping after the fact to figure out who to applaud or boo."

I'm not sure about that position, but it seems like it could simplify some of the reasoning here. Maybe.

Doesn't it depend on what was changed, and how?

And I don't see that what the alien did was necessarily bad, either. If the 'tweaking' involved simply having a conversation, how can the alien be considered responsible for the person's actions? Especially if the alien has no means to anticipate the outcome of its intervention.

Andy, you seem to be close to the same place on this topic as I am (at least relative to most other commenters in this thread). It would be great to get critical feedback from you on my blog.

I lean towards starting with desired outcomes (such as maximizing my personal odds of persistence as a subjective conscious entity) and then look at all of reality and determine how I can best influence its configuration to accomplish that outcome. So then the whole process becomes an evaluation of things like efficiencies at achieving that outcome. Calling Lenin "bad", using the phrase "moral responsibility" in any of the various ways one could, these all seem to me to be at most propaganda tools to attempt to influence reality to achieve my desired outcomes, rather than best models of that reality.

Psy-Kosh: I think of locating responsibility as a convention. My favorite convention is to locate responsibility in the actor who carries out the deed deemed "bad." For example, suppose that I got mugged last night while walking home. My actions and choices were factors in the mugging, but we locate responsibility squarely within the attacker. Even if another person instructed him to attack me, I still locate responsibility in the attacker, because it was his decision to do it. However, I might assign a separate crime to his boss for the specific act of instructing (but not for the act of attacking). The reason that I prefer this convention is that it seems elegant, and it simplifies tasks like writing laws and formulating personal life-approaches. It is not that I think it is "right" in some universal sense. I view the attitudes adopted by societies similarly - as conventions of varying usefulness.

Hopefully: I call Lenin "bad," not to influence anything, but because I mean to say that he really is bad relative to a defined set of assumptions. This framework includes rules such as "torturing and killing is bad." The question of where, exactly, we get rules like this, and whether they are universal or arbitrary is not one that I find particularly interesting to debate. I will briefly state that my own concept of such rules derives mostly from empathy - from being able to imagine the agony of being tortured and killed.

"But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before."

I must admit that I still don't really understand this. It seems to violate what we usually mean by moral responsibility.

"When, in a highly sophisticated form of helpfulness, I project that you would-want lemonade if you knew everything I knew about the contents of the refrigerator, I do not thereby create a copy of Michael Vassar who screams that it is trapped inside my head."

This is, I think, because humans are a tiny subset of all possible computers, and not because there's a qualitative difference between predicting and creating. It is, for instance, possible to look at a variety of factorial algorithms, and rearrange them to predictably compute triangular numbers. This, of course, doesn't mean that you can look at an arbitrary algorithm and determine whether it computes triangular numbers. I conjecture that, in the general case, it's impossible to predict the output of an arbitrary Turing machine at any point along its computation without doing a calculation at least as long as the calculations the original Turing machine does. Hence, predicting the output of a mind-in-general would require at least as much computing power as running the mind-in-general.
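A small sketch of the rearrangement described here (my example, not the commenter's): the same recursive skeleton computes factorials or triangular numbers, depending only on the combining step and the base case.

```python
# Hypothetical illustration: swap the combining operation and the base case.

def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

def triangular(n):
    return 0 if n < 1 else n + triangular(n - 1)

assert factorial(5) == 120
assert triangular(5) == 15
```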

Incidentally, I think that there's a selection bias at work here due to our limited technology. Since we don't yet know how to copy or create a human, all of the predictions about humans that we come up with are, by necessity, easier than creating a human. However, for most predictions on most minds, the reverse should be true. Taking Michael Vassar and creating an electronic copy (uploading), or creating a human from scratch with a set of prespecified characteristics, are both technologically feasible with tools we know how to build. Creating a quantum simulation of Michael Vassar or a generic human to predict their behavior would be utterly beyond the processing power of any classical computer.

Of course your origins are only screened off by who you are to the extent you know who you are. When you are uncertain about your makeup, knowing about your origins can be relevant to your decisions.

"I will briefly state that my own concept of such rules derives mostly from empathy - from being able to imagine the agony of being tortured and killed."

I think I see a standard commission/omission bias here. To a degree, building such biases into policy formation can increase instances of suffering and death in the world, relative to policy denuded of such biases. Do you consider yourself morally responsible if suffering and death are increased as a result of promoting these biases? Although personally, this matters to me more in terms of how it affects my personal persistence odds. Still, I think I'd be helped somewhat by more rational policy than that which leans so heavily on eww bias rather than on trying to more rationally minimize total instances of suffering and death in the human population (rather than trying to do so in a way that conforms with popular biases and aesthetics).

Hopefully: I'm not sure how the part that you quoted relates to omission bias. If you were referring to the rest of my comment, feel free to include harmful inaction in the same category as harmful action when deciding where to locate responsibility, and pardon me for not explicitly calling out the negative case.

I am unsure about whether your meaning is perhaps something more general. For example, the question of exactly what effect all of my decisions today had on the global economy and population is one that appears intractable to me, so I don't lose sleep over it. For the same reason, I would be suspicious of pragmatic policies unless their effects could be reasonably shown to be unambiguous.

All that said, I do not claim to have any special knowledge or interest in moral philosophy. Also, I make no claim that my preferred way of locating responsibility is optimal, or "true" in any sense - only that it is the most appealing to me. If you think there is a more optimal way of looking at it, I would be interested to hear it. What I do have strong views on is the compatibility between choice and determinism, which is really the only thing I originally intended to comment on here.

poke: I'm so tired of exaggerated cynicism. Just because almost all people's intuitions about the age of the universe are too low doesn't mean its age is actually infinite, nor even that this produces a more correct model or approximation than the average model. 100M years and powered by gravitational collapse gave better results than 3^^^3 years and thermal noise. Arguably even 6000 years and anthropomorphically constructed was more accurate.

"Whether one should rescue a drowning child if one is a cancer researcher is not as big a concern as who you have sex with and how you do it."

Sure sounds to me like I'm writing to a moral realist who thinks that whether one rescues a drowning child is a bigger concern than who you have sex with.

"How much genuine moral deliberation is really going on in society? How much influence do those who engage in genuine moral deliberation (i.e., moral philosophers) have on society? I think the answers are close to "none" and "not at all.""

I think that it's obvious that the answers are "disappointingly little but some" and "disappointingly little but some" even if you remove the word "moral" from the above question each time it appears. (Of course, most people seem to think real moral deliberation where one doesn't know one's eventual conclusion to be intrinsically immoral.) It's equally obvious both that inadequate deliberation prevents vast numbers of pareto improvements in economic efficiency from taking place AND that deliberation has allowed the production of complex useful engineered systems. Likewise, that people in modern cultures have deliberated enough to substantially increase the impact of Haidt's harm and fairness moral dimensions on decision making relative to purity, hierarchy and loyalty, AND that they are still so afflicted by non-reflective moral intuitions that most members of advanced cultures feel little revulsion at laws that deny the poorest of the poor fair access to customers in developed markets.

Allan Crossman: "If a machine can be consistently interpreted as "doing addition", doesn't that indicate that there are intrinsic facts about the machine that have something to do with addition?"

The same physical process, as a computation, can have entirely different semantics depending on interpretation. That already tells you that none of those interpretations is intrinsic to the physical process.

Caledonian: "We don't need a mind to perceive meaning in a pattern of electrical impulses generated by a circuit for that circuit to perform arithmetic. As long as the circuit enforces the correct relationship between input and output values, it implements the mathematical operation defined by that relationship."

See previous comment. There is a physical relationship between inputs and outputs, and then there is a plethora of mathematical (and other) relationships which can be mapped onto the physical relationship.

One may as well say that the words in a natural language intrinsically have certain meanings. If that were true, it would literally be impossible to utilize them in some inverted or nonstandard way, which is false.

Eliezer: why do you say John-1 (the "coward") is morally responsible if under your scenario it was physically impossible for him to act as John-2 did given his initial physical conditions? (If it were not impossible, then his actions wouldn't have been fully determined by his initial physical condition.)

To possibly confuse matters a little more, here's a thought experiment that occurred to me for some reason. I'd be curious to hear what anybody who says that determinism does not undermine moral responsibility, or who makes the even stronger claim that there is absolutely no conflict between determinism and moral responsibility, has to say about the following:

You wake up in front of a schoolhouse, where you've just taken a nap, and discover that your body has been encased in thick metal armor that has actuators at all the joints and is covered in sensors (nobody else did it; this just happens to be the one in a googol^^^googol chance of this spontaneously happening -- so there's nobody else to "blame"). You are not strong enough to move the armor or break free, but the sensors and actuators are wired up to your brain such that the sensors send their data to certain parts of your brain that you have no conscious awareness of, and the actuators respond to signals from some (perhaps the same) part of your brain whose happenings you are also not conscious of.

The schoolhouse is burning, cherubic youth are screaming, and you could probably save a child or two. But of course, you are not physically capable of doing anything except going along for the ride and doing whatever the armor does based on the firings in your brain that you have no control over or awareness of.

Let's say that the armor turns and runs. Are you -- the person inside -- morally responsible?

If under normal circumstances one's actions were totally predetermined, does one have any more ability to choose than the individual in the armor does? If not, how do you assert that the John-1 would be morally responsible but armored John-A1 would not be morally responsible?

I'm not sure what I think about determinism and moral responsibility, but I have a difficult time understanding how these two topics could have no relation to each other, as some people in this thread seem to believe.

Joseph, that's easy. Even though there is technically a causal relationship between my brain and the armor's actions, its randomness means for all practical purposes there might as well be no relationship. I am no more responsible for the armor's actions than I am for some disastrous hurricane that almost certainly wouldn't have happened if I had wiggled my finger differently a year before. Under determinism, my thoughts predictably affect my body's actions.

Nick, I don't understand what you mean by random. There is nothing in the slightest random (as I understand the term) in the scenario I gave. The primary difference between the two cases is that in one case you believe you are effecting action via your conscious thoughts (but you aren't) and in the other you do not believe you are effecting action via your conscious thoughts. In both cases, your actions are fully determined by what is going on in your brain; it's just that your conscious thoughts are irrelevant to what happens (and you are deluded about this in one of the two scenarios).

Um, what is "thought" if not "what is going on in my brain"?

By "random" I mean subjectively unpredictable. Sorry.

See previous comment. There is a physical relationship between inputs and outputs, and then there is a plethora of mathematical (and other) relationships which can be mapped onto the physical relationship.

What you're not grasping is that there are relationships which cannot be mapped onto the inputs and outputs. The circuit determines what relationships are possible. If it constrains them in certain ways, then the behavior of the circuit will be compatible with the laws of addition, and we can say that it implements those laws.

One may as well say that the words in a natural language intrinsically have certain meanings. If that were true, it would literally be impossible to utilize them in some inverted or nonstandard way, which is false.

Totally irrelevant. If I take a normal, 'functional' calculator, and interpret the symbols it produces in a way contrary to convention, that doesn't change the nature of the circuit within it. It still produces outputs whose relationship to the inputs implement the rules of addition. The meaning of the symbols comes from their use, and the circuit uses them in a particular way. If we assume that the calculator is consistent in its behavior, I could examine the relationships between the symbols it displays and determine what they mean, determine in what ways my interpretation fails to encompass the nature of the circuit's rules.
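As a rough sketch of that last point (my illustration; the glyphs and observed displays are invented), one can take a consistent black-box adder that prints unfamiliar symbols and brute-force which symbol stands for which digit by requiring the observed equations to come out true under addition; note that more than one mapping may survive a finite set of observations.

```python
# Hypothetical decoding of an adder's unknown digit glyphs from its behavior.
from itertools import permutations

# Each observed triple means: entering x and y made the calculator display z.
observations = [("bc", "bc", "ce"), ("b", "j", "ba"), ("cf", "b", "cg")]
used_glyphs = sorted(set("".join(part for triple in observations for part in triple)))

def decode(word, mapping):
    return int("".join(str(mapping[g]) for g in word))

solutions = []
for digits in permutations(range(10), len(used_glyphs)):
    mapping = dict(zip(used_glyphs, digits))
    if all(decode(x, mapping) + decode(y, mapping) == decode(z, mapping)
           for x, y, z in observations):
        solutions.append(mapping)

print(len(solutions), "consistent readings, e.g.", solutions[0])
```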

poke

michael vassar,

I think you misunderstand me. I'm not being cynical; I'm trying to demonstrate that moral dilemmas and moral deliberation aren't empirically established. I tried to do this, first, by pointing out that what most people consider the subject of morality differs substantially from the subject of academic philosophers and, second, by arguing that the type of moral reasoning found in philosophy isn't found in society at large and doesn't influence it. People really do heroically rescue orphans from burning buildings in real life and they do it without viewing the situation as a moral dilemma and without moral deliberation. I don't think a world where moral philosophy turns out to be perfectly worthless is necessarily a bad one.

If I was the author of this series, I tend to think I would have this post be about causality and goals or causality and utility functions rather than causality and moral responsibility. I did not spend the time to see if Eliezer's post still makes sense if "moral responsibility" is everywhere replaced with "goal" but that would be a worthwhile thing to try IMHO. (The definition of "goal" would have to be expanded a bit to include what in normal speech is usually referred to as systems of goals, preferences or desires.)

I consider my moral responsibilities or my moral obligations to be more or less the same thing as my goals, but other readers will not make that identification, and since everyone has goals but not everyone sees themselves as having moral responsibilities, speaking of goals would IMHO make the post more general and probably acceptable to more people.

In my conception of the laws of rationality, "minds have goals," comes right after, "cause-and-effect relationships are real." Free will is in there somewhere too.

poke, I agree that michael vassar misread you, but I think his last paragraph is concrete engagement. If you're worried that moral philosophers are too abstract, I would stress the moral deliberation of political philosophers. The Enlightenment, eg, the abolition of slavery, seems to me a pretty clear-cut case.

Here are the possible objections I can see:

  1. I have the timeline wrong and the philosophers jumped on a bandwagon
  1b. philosophers reflect the elites, but it takes time for elite morality to affect the world
  2. we only remember the philosophers on the winning side
  3. moral deliberation is a good guess about which way society is going to go, but philosophers have no impact (but this suggests that the masses are doing moral deliberation!)

Surely the place to look for the effect (if any) of moral philosophy on the real world is in jurisprudence?

Nick: note that I said "conscious thoughts" and not "thoughts", and I specified that the individual is not aware of the inputs/outputs from/to the actuators/sensors and has no control over them.

@Eliezer

"I think you are confusing knowing that a system will perform arithmetic, with the system actually performing arithmetic. The latter does happen sometimes, despite all fallible assumptions."

I think you didn't understand my argument: when you say that a physical system does perform arithmetic, then your theory of arithmetic is wrong as soon as you have a contradicting result. Therefore the system is not allowed to perform arithmetic only sometimes; it is required to do it always!

Let's consider this: I find a machine I don't know anything about. I soon find out it has two input dials and what looks like an output register.
By experimenting with the inputs and noting the outputs, I find that the inputs are presumably decimal numbers and the output looks like the arithmetic sum of these numbers.
I say, 'Oh, looks like it adds two numbers.' Now I use this machine many, many times, until one day the result isn't the arithmetic sum (let's assume there is no overflow).

'Bugger, seems this machine is broken...'

Now, the $1M question: 'At which point did the machine get broken?' Did it break exactly at the point when it printed the wrong result? But what if the inner workings had the defect long before, and it only prints 'wrong' results for specific inputs?

You clearly don't want to question your theory of arithmetic (because your theory doesn't have any contradictions). But let's assume that the creator of this machine didn't want it to perform addition, but he wanted it to calculate the Foobar-value of two 'numbers'. The Foobar-value looks like addition for a majority of values, but for some combinations it's something completely different.

Of course you can examine the inner workings of the machine; but if you don't know the intentions of the engineer, perhaps you will find the special part that is responsible for the 'wrong' results. You can 'fix' the machine by replacing that part (assuming it's an engineering mistake), or you can assume it doesn't calculate the sum of two numbers and try to find out what this Foobar-value might be for.

But we can avoid this problem by knowing that theoretical devices work on different domains than physical devices. And this is what technology/engineering is all about: finding a mapping between the two that is reasonable under real-world constraints.
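A tiny sketch of the Foobar machine (my example; the special-cased inputs are arbitrary): it agrees with addition on almost every input, so any finite set of spot checks will very likely pass, yet it was never intended to be an adder.

```python
# Hypothetical Foobar machine: looks like addition except on one designed-in case.

def foobar(x, y):
    if (x, y) == (1729, 4104):   # the engineer's special combination
        return 42
    return x + y                 # indistinguishable from addition everywhere else

checks = [(3, 4), (10, 250), (65535, 1)]
print(all(foobar(a, b) == a + b for a, b in checks))  # True: the tester concludes "it adds"
print(foobar(1729, 4104) == 1729 + 4104)              # False: the Foobar-value differs here
```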

Sorry I'm late, but this is really a great opportunity to plug "For the law, neuroscience changes nothing and everything."

Am I responsible for my moral choices?
Yes.

Is John in front of the burning orphanage responsible for his moral choices?
Also yes.

But can I be angry at John-1 if he runs away?
I find that I can't. Not when my anti-correspondence-bias heuristics kick in, when I envision his situation, when I realize he is the product of a specific set of understandable environmental and psychological factors, which are the product of a specific combination of nature and nurture. Yes, some babies dying is John-1's fault. But John-1 is the "fault" of his upbringing.

I find I can't honestly be angry, and I can't honestly blame, when I have considered this reasoning. I can be sad, sure, but that's different.

For myself, it doesn't give me a catch-all excuse. I have a choice, I make it, and I am responsible for making it, even if I am a product of nature and nurture. This agrees with the viewpoint expressed in the article, as far as I understand it.

As for law, these views unfold like this: Most people still need to be punished for transgressions, in order to conserve the law's pre-commitment that produces the negative expected utility upon transgressions. For some people, there's also a sense of "justice" involved with it, but that doesn't come into play for my rational reasoning.
As it turns out, these views are also predicted and recommended as the future of law in the paper that TGGP2 linked. I've only read the abstract so far though.

I don't think you've established that Lenin was a jerk, in the sense of moral responsibility.

I think people usually have little control (and little illusion of freedom) over what options, consequences and (moral) reasons they consider, as well as what reasons and emotions they find compelling, and how much. Therefore, they can't be blamed for an error in (moral) judgement unless they were under the illusion they could have come to a different judgement. It seems you've only established the possibility that someone is morally culpable for a wrong act that they themselves believed was wrong before acting. How often is that actually the case, even for the acts you find repugnant?

Lenin might have thought he was doing the right thing. Psychopaths may not adequately consider the consequences of their actions and recognize much strength in moral reasons.

There are no universally compelling arguments, after all.