On Terminal Goals and Virtue Ethics

Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.

 

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work. 

There are some bad reasons why it might feel wrong–e.g. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences for whether, for example, my plans actually worked.

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some. 

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

Comments (205)

"Good people are consequentialists, but virtue ethics is what works," is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.

But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.

If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).

To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.

So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word 'implant'; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.

How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.

That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?

My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they'd take. "Always be kind" is also a rule. For clarity, I'd substitute the word 'algorithm' for 'rules'/'principles'. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best - be it inviolable deontological rules, character-based virtue ethics, or something else.
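
To make that gloss concrete, here is a minimal toy sketch (entirely my own; the rule names and payoff numbers are invented, not Hare's): the level-2 "algorithm" is consequentialist selection over candidate level-1 rules, and the chosen level-1 rule is the cheap thing you actually run when a decision comes up.

```python
import random

# Toy situations and a stand-in consequence model; the values are made up.
SITUATIONS = ["friend asks for help moving", "stranger asks for directions", "deadline crunch"]

def outcome_value(situation, action):
    # Pretend this is a full consequentialist evaluation of the action in context.
    return {"help": 1.0, "decline": 0.2, "deliberate for an hour": 0.4}[action]

# Candidate level-1 rules: fast, habit-like policies usable under stress.
LEVEL1_RULES = {
    "loyalty": lambda s: "help" if "friend" in s else "decline",
    "always recompute": lambda s: "deliberate for an hour",
}

def level2_select(rules, situations, trials=1000):
    """Level-2 (consequentialist) thinking: endorse whichever level-1 rule
    produces the best average outcome across the situations you expect."""
    def average(rule):
        total = 0.0
        for _ in range(trials):
            s = random.choice(situations)
            total += outcome_value(s, rule(s))
        return total / trials
    return max(rules, key=lambda name: average(rules[name]))

print("Level-2 endorses:", level2_select(LEVEL1_RULES, SITUATIONS))
```

The consequentialism lives in level2_select, which runs rarely and offline; whatever rule it endorses is what actually fires in the moment.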

level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.

Level-1 is about rules which your habit and instinct can follow, but I wouldn't say they're ways to describe it. Here we're talking about normative rules, not descriptive System 1/System 2 stuff.

And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.

Not quite. The initial instincts are the system-1 "presets". These can and do change with time. A particular entity's current system-1 behaviors are its "habits".

Funny, I always thought it was the other way around... consequentialism is useful on the tactical level once you've decided what a "good outcome" is, but on the meta-level, trying to figure out what a good outcome is, you get into questions that you need the help of virtue ethics or something similar to puzzle through. Questions like "is it better to be alive and suffering or to be dead", or "is causing a human pain worse than causing a pig pain", or "when does it become wrong to abort a fetus", or even "is there good or bad at all?"

I think that the reason may be that consequentialism requires more computation; you need to re-calculate the consequences for each and every action.

The human brain is mainly a pattern-matching device - it uses pattern-matching to save on computation cycles. Virtues are patterns which lead to good behaviour. (Moreover, these patterns have gone through a few millennia of debugging - there are plenty of cautionary tales about people with poorly chosen virtues to serve as warnings). The human brain is not good at quickly recalculating long-term consequences from small changes in behaviour.
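
A rough way to picture the computational point (my own toy contrast, not anything from the comment): the consequentialist routine simulates outcomes for every option on every decision, while the virtue-style routine is closer to a cached pattern lookup.

```python
# Consequentialist decision: re-derive expected consequences from scratch each time.
def consequentialist_choice(options, simulate_outcome):
    # Cost scales with the number of options times the depth of the simulation.
    return max(options, key=simulate_outcome)

# Virtue-style decision: match the situation against precomputed patterns.
VIRTUE_CACHE = {
    "friend in trouble": "help them",           # a cached 'loyalty' response
    "tempted to exaggerate": "tell the truth",  # a cached 'honesty' response
}

def virtue_choice(situation_pattern):
    # One lookup at decision time; the expensive evaluation happened earlier,
    # when the virtues were chosen (or debugged over a few millennia).
    return VIRTUE_CACHE.get(situation_pattern, "stop and think")
```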

What actually happens is you should be consequential at even-numbered meta-levels and virtue-based on the odd numbered ones... or was it the other way around? :p

Say I apply consequentialism to a set of end states I can reliably predict, and use something else for the set I cannot. In what sense should I be a consequentialist about the second set?

In what sense should I be a consequentialist about the second set?

In the sense that you can update on evidence until you can marginally predict end states?

I'm afraid I can't think of an example where there's a meta-level but no predictive capacity on that meta-level. Can you give an example?

I have no hope of being able to predict everything...there is always going to be a large set of end states I can't predict?

Then why have ethical opinions about it at all? Again, can you please give an example of a situation where this would come up?

Lo! I have been so instructed-eth! See above.

I am going to write the same warning I have written to rationalist friends in relation to the Great Filter Hypothesis and almost everything on Overcoming Bias: BEWARE OF MODELS WITH NO CAUSAL COMPONENTS! I repeat: BEWARE NONCAUSAL MODELS!!! In fact, beware of nonconstructive mental models as well, while we're at it! Beware classical logic, for it is nonconstructive! Beware noncausal statistics, for it is noncausal and nonconstructive! All these models, when they contain true information, and accurately move that information from belief to belief in strict accordance with the actual laws of statistical inference, still often fail at containing coherent propositions to which belief-values are being assigned, and at corresponding to the real world.

Now apply the above warning to virtue ethics.

Now let's dissolve the above warning about virtue ethics and figure out what it really means anyway, since almost all of us real human beings use some amount of it.

It's not enough to say that human beings are not perfectly rational optimizers moving from terminal goals to subgoals to plans to realized actions back to terminal goals. We must also acknowledge that we are creatures of muscle and neural-net, and that the lower portions (ie: almost all) of our minds work via reinforcement, repetition, and habit, just as our muscles are built via repeated strain.

Keep in mind that anything you consciously espouse as a "terminal goal" is in fact a subgoal: people were not designed to complete a terminal goal and shut off.

Practicing virtue just means that I recognize the causal connection between my present self and future self, and optimize my future self for the broad set of goals I want to be able to accomplish, while also recognizing the correlations between myself and other people, and optimizing my present and future self to exploit those correlations for my own goals.

Because my true utility function is vast and complex and only semi-known to me, I have quite a lot of logical uncertainty over what subgoals it might generate for me in the future. However, I do know some actions I can take to make my future self better able to address a broad range of subgoals I believe my true utility function might generate, perhaps even any possible subgoal. The qualities created in my future self by those actions are virtues, and inculcating them in accordance with the design of my mind and body is virtue ethics.

As an example, I helped a friend move his heavy furniture from one apartment to another because I want to maintain the habit of loyalty and helpfulness to my friends (cue House Hufflepuff) for the sake of present and future friends, despite this particular friend being a total mooching douchebag. My present decision will change the distribution of my future decisions, so I need to choose for myself now and my potential future selves.

Not really that complicated, when you get past the philosophy-major stuff and look at yourself as a... let's call it, a naturalized human being, a body and soul together that are really just one thing.

I will reframe this to make sure I understand it:

Virtue Ethics is like weightlifting. You gotta hit the gym if you want strong muscles. You gotta throw yourself into situations that cultivate virtue if you want to be able to act virtuously.

Consequentialism is like firefighting. You need to set yourself up somewhere with a firetruck and hoses and rebreathers and axes and a bunch of cohorts who are willing to run into a fire with you if you want to put out fires.

You can't put out fires by weightlifting, but when the time comes to actually rush into a fire, bust through some walls, and drag people out, you really should have been hitting the gym consistently for the past several months.

That's such a good summary I wish I'd just written that instead of the long shpiel I actually posted.

That's such a good summary I wish I'd just written that instead of the long shpiel I actually posted.

Thanks for the compliment!

I am currently wracking my brain to come up with a virtue-ethics equivalent to the "bro do you even lift" shorthand - something pithy to remind people that System-1 training is important to people who want their System-1 responses to act in line with their System-2 goals.

something pithy

Rationalists should win?

Maybe with a sidenote how continuously recognizing in detail how you failed to win just now is not winning.

Here's how I think about the distinction on a meta-level:

"It is best to act for the greater good (and acting for the greater good often requires being awesome)."

vs.

"It is best to be an awesome person (and awesome people will consider the greater good)."

where "acting for the greater good" means "having one's own utility function in sync with the aggregate utility function of all relevant agents" and "awesome" means "having one's own terminal goals in sync with 'deep' terminal goals (possibly inherent in being whatever one is)" (e.g. Sam Harris/Aristotle-style 'flourishing').

I am going to write the same warning I have written to rationalist friends in relation to the Great Filter Hypothesis and almost everything on Overcoming Bias: BEWARE OF MODELS WITH NO CAUSAL COMPONENTS! I repeat: BEWARE NONCAUSAL MODELS!!! In fact, beware of nonconstructive mental models as well, while we're at it! Beware classical logic, for it is nonconstructive! Beware noncausal statistics, for it is noncausal and nonconstructive! All these models, when they contain true information, and accurately move that information from belief to belief in strict accordance with the actual laws of statistical inference, still often fail at containing coherent propositions to which belief-values are being assigned, and at corresponding to the real world.

Can you explain this part more?

With pleasure!

Ok, so the old definition of "knowledge" was "justified true belief". Then it turned out that there were times when you could believe something true, but have the justification be mere coincidence. I could believe "Someone is coming to see me today" because I expect to see my adviser, but instead my girlfriend shows up. The statement as I believed it was correct, but for a completely different reason than I thought. So Alvin Goldman changed this to say, "knowledge is true belief caused by the truth of the proposition believed-in." This makes philosophers very unhappy but Bayesian probability theorists very happy indeed.

Where do causal and noncausal statistical models come in here? Well, right here, actually: Bayesian inference is actually just a logic of plausible reasoning, which means it's a way of moving belief around from one proposition to another, which just means that it works on any set of propositions for which there exists a mutually-consistent assignment of probabilities.
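
As a concrete (and invented) instance of "moving belief around": given a prior over the two visitor hypotheses from the earlier example and a made-up likelihood for some piece of evidence, Bayes' rule just reallocates probability mass between the propositions, without saying anything about the causal machinery that produced the evidence.

```python
# Prior over who is coming to visit, and a made-up likelihood of some evidence
# (say, flowers left at the door) under each hypothesis.
prior = {"advisor_visits": 0.7, "girlfriend_visits": 0.3}
likelihood = {"advisor_visits": 0.1, "girlfriend_visits": 0.9}

# Bayes' rule: posterior is prior times likelihood, renormalized so the
# assignment over the hypotheses stays mutually consistent (sums to 1).
evidence_prob = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence_prob for h in prior}

print(posterior)  # belief has shifted toward 'girlfriend_visits', but nothing in
                  # this calculation encodes how the flowers causally got there
```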

This means that quite often, even the best Bayesians (and frequentists as well) construct models (let's switch to saying "map" and "territory") which not only are not caused by reality, but don't even contain enough causal machinery to describe how reality could have caused the statistical data.

This happens most often with propositions of the form "There exists X such that P(X)" or "X or Y" and so forth. These are the propositions where belief can be deduced without constructive proof: without being able to actually exhibit the object the proposition applies to. Unfortunately, if you can't exhibit the object via constructive proof (note that constructive proofs are isomorphic to algorithms for actually generating the relevant objects), I'm fairly sure you cannot possess a proper description of the causal mechanisms producing the data you see. This means that not only might your hypotheses be wrong, your entire hypothesis space might be wrong, which could make your inferences Not Even Wrong, or merely confounded.
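
A standard textbook illustration of that kind of nonconstructive "there exists X" step (not an example eli gives) is the classical proof that some irrational raised to an irrational power is rational:

```latex
% Either \sqrt{2}^{\sqrt{2}} is rational, in which case take a = b = \sqrt{2};
% or it is irrational, in which case take a = \sqrt{2}^{\sqrt{2}} and b = \sqrt{2},
% since then a^b = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2.
\[
  \exists\, a, b \notin \mathbb{Q} \ \text{such that}\ a^{b} \in \mathbb{Q}
\]
% The case split uses excluded middle: the proof never exhibits which pair of
% witnesses actually works, so no algorithm for producing them falls out of it.
```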

(I can't provide mathematics showing any formal tie between causation/causal modeling and constructive proof, but I think this might be because I'm too much an amateur at the moment. My intuitions say that in a universe where incomputable things don't generate results in real-time and things don't happen for no reason at all, any data I see must come from a finitely-describable causal process, which means there must exist a constructive description of that process -- even if classical logic could prove the existence of and proper value for the data without encoding that constructive description!)

What can also happen, again particularly if you use classical logic, is that you perform sound inference over your propositions, but the propositions themselves are not conceptually coherent in terms of grounding themselves in causal explanations of real things.

So to use my former example of the Great Filter Hypothesis: sure, it makes predictions, sure, we can assign probabilities, sure, we can do updates. But nothing about the Great Filter Hypothesis is constructive or causal, nothing about it tells us what to expect the Filter to do or how it actually works. Which means it's not actually telling us much at all, as far as I can say.

(In relation to Overcoming Bias, I've ranted on similarly about explaining all possible human behaviors in terms of signalling, status, wealth, and power. Paging /u/Quirinus_Quirrell... If they see a man flirting with a woman at a party, Quirrell and Hanson will seem to explain it in terms of signalling and status, while I will deftly and neatly predict that the man wants to have sex with the woman. Their explanation sounds plausible until you try to read its source code, look at the causal machinery working, and find that it dissolves into cloud around the edges. My explanation grounds itself in hormonal biology and previous observation of situations where similar things occurred.)

So Alvin Goldman changed this to say, "knowledge is true belief caused by the truth of the proposition believed-in." This makes philosophers very unhappy but Bayesian probability theorists very happy indeed.

If I am insane and think I'm the Roman emperor Nero, and then reason "I know that according to the history books the emperor Nero is insane, and I am Nero, so I must be insane", do I have knowledge that I am insane?

Note that this also messes up counterfactual accounts of knowledge as in "A is true and I believe A; but if A were not true then I would not believe A". (If I were not insane, then I would not believe I am Nero, so I would not believe I am insane.)

We likely need some notion of "reliability" or "reliable processes" in an account of knowledge, like "A is true and I believe A and my belief in A arises through a reliable process". Believing things through insanity is not a reliable process.

Gettier problems arise because processes that are usually reliable can become unreliable in some (rare) circumstances, but still (by even rarer chance) get the right answers.

The insanity example is not original to me (although I can't seem to Google it up right now). Using reliable processes isn't original, either, and if that actually worked, the Gettier Problem wouldn't be a problem.

Interesting thought, but surely the answer is no. If I take the word "knowledge" in this context to mean having a model that reasonably depicts reality in its contextually relevant features, then the word "insane" in this specific instance depicts two very different albeit related brain patterns.

Simply put, the brain pattern (wiring + process) that makes the person think they are Nero is a different, though surely related, physical object than the brain pattern that depicts what that person thinks "Nero being insane" might actually manifest as in terms of beliefs and behaviors. In light of the context we can say the person doesn't have any knowledge about being insane, since that person's knowledge does not include (or take seriously) the belief that depicts the presumably correct reality/model of that person not actually being Nero.

Put even more simply, we use the same concept/word to model two related but fundamentally different things. Does that person have knowledge about being insane? It's the tree-and-the-sound problem: the word insane is describing two fundamentally different things yet is wrongly taken to mean the same thing. I'd claim any reasonable concept of the word insane results in you concluding that that person does not have knowledge about being insane in the sense that is contextually relevant in this scenario, while the person might have actually roughly true knowledge about how Nero might have been insane and how that manifested itself. But those are two different things, and the latter is not the contextually relevant knowledge about insanity here.

I don't think that explanation works. One of the standard examples of the Gettier problem is, as eli described, a case where you believe A, A is false, B is true, and the question is "do you have knowledge of (A OR B)". The "caused by the truth of the proposition" definition is an attempt to get around this.

So your answer fails because it doesn't actually matter that the word "insane" can mean two different things--A is "is insane like Nero", B is "is insane in the sense of having a bad model", and "A OR B" is just "is insane in either sense". You can still ask if he knows he's insane in either sense (that is, whether he knows "(A OR B)", and in that case his belief in (A OR B) is caused by the truth of the proposition.

So to use my former example of the Great Filter Hypothesis: sure, it makes predictions, sure, we can assign probabilities, sure, we can do updates. But nothing about the Great Filter Hypothesis is constructive or causal, nothing about it tells us what to expect the Filter to do or how it actually works. Which means it's not actually telling us much at all, as far as I can say.

Yes, it is causal in the same sense that the mathematics of physical laws is causal.

In relation to Overcoming Bias, I've ranted on similarly about explaining all possible human behaviors in terms of signalling, status, wealth, and power. Paging /u/Quirinus_Quirrell... If they see a man flirting with a woman at a party, Quirrell and Hanson will seem to explain it in terms of signalling and status, while I will deftly and neatly predict that the man wants to have sex with the woman.

You do realize the two explanations aren't contradictory and are in fact mutually reinforcing? In particular, the man wants to have sex with her and is engaging in status signalling games to accomplish his goal. Also his reasons for wanting to have sex with her may also include signaling and status.

So to use my former example of the Great Filter Hypothesis: sure, it makes predictions, sure, we can assign probabilities, sure, we can do updates. But nothing about the Great Filter Hypothesis is constructive or causal, nothing about it tells us what to expect the Filter to do or how it actually works. Which means it's not actually telling us much at all, as far as I can say.

?

If the Filter is real, then its effects are what causes us to think of it as a hypothesis. That makes it "true belief caused by the truth of the proposition believed-in", conditional on it actually being true.

I don't get it.

If the Filter is real, then its effects are what causes us to think of it as a hypothesis.

That could only be true if it lay in our past, or in the past of the other Big Finite Number of other species in the galaxy it already killed off. The actual outcome we see is just an absence of Anyone Else detectable to our instruments so far, despite a relative abundance of seemingly life-capable planets. We don't see particular signs of any particular causal mechanism acting as a Great Filter, like a homogenizing swarm expanding across the sky because some earlier species built a UFAI or something.

When we don't see signs of any particular causal mechanism, but we're still not seeing what we expect to see, I personally would say the first and best explanation is that we are ignorant, not that some mysterious mechanism destroys things we otherwise expect to see.

Hm? Why doesn't Rare Earth solve this problem? We don't have the tech yet to examine the surfaces of exoplanets so for all we know the foreign-Earth candidates we've got now will end up being just as inhospitable as the rest of them. "Seemingly life capable" isn't a very high bar at the minute.

Now, if we did have the tech, and saw a bunch of lifeless planets that as far as we know had nearly exactly the same conditions as pre-Life Earth, and people started rattling off increasingly implausible and special-pleading reasons why ("no planet yet found has the same selenium-tungsten ratio as Earth!"), then there'd be a problem.

I don't see why you need to posit exotic scenarios when the mundane will do.

I don't see why you need to posit exotic scenarios when the mundane will do.

Neither do I, hence my current low credence in a Great Filter and my currently high credence for, "We're just far from the mean; sometimes that does happen, especially in distributions with high variance, and we don't know the variance right now."

Well I agree with you on all of that. How is it non-causal?

Or have I misunderstood and you only object to the "aliens had FOOM AI go wrong" explanations but have no trouble with the "earth is just weird" explanation?

How is it non-causal?

It isn't. The people who affirmatively believe in the Great Filter being a real thing rather than part of their ignorance are, in my view, the ones who believe in a noncausal model.

The problem with the signaling hypothesis is that in everyday life there is essentially no observation you could possibly make that could disprove it. What is that? This guy is not actually signaling right now? No way, he's really just signaling that he is so über-cool that he doesn't even need to signal to anyone. Wait there's not even anyone else in the room? Well through this behavior he is signaling to himself how cool he is to make him believe it even more.

Guess the only way to find out is if we can actually identify "the signaling circuit" and make functional brain scans. I would actually expect signaling to explain an obscene amount of human behavior... but really everything? As I said I can't think of any possible observation outside of functional brain scans we could potentially make that could have the potential to disprove the signaling hypothesis of human behavior. (A brain scan where we actually know what we are looking at and where we are measuring the right construct obviously).

Thanks for pushing this. I nodded along to the grandparent post and then when I came to your reply I realized I had no idea what this part was talking about.

It is not enough to say we don't move smoothly from terminal goal to subgoal. It is enough to say we are too messily constructed to have distinct terminal goals and subgoals.

It sounds like you're thinking of the "true utility function's" preferences as a serious attempt to model the future consequences of present actions, including their effect on future brain-states.

I don't think that's always how the brain works, even if you can tell a nice story that way.

I think that's usually not how the brain works, but I also think that I'm less than totally antirational. That is, it's possible to construct a "true utility function" that would dictate to me a life I will firmly enjoy living.

That statement has a large inferential distance from what most people know, so I should actually hurry up and write the damn LW entry explaining it.

I think you could probably construct several mutually contradictory utility functions which would dictate lives you enjoy living. I think it's even possible that you could construct several which you'd perceive as optimal, within the bounds of your imagination and knowledge.

I don't think we yet have the tools to figure out which one actually is optimal. And I'm pretty sure the latter aren't a subset of the former; we see plenty of people convincing themselves that they can't do better than their crappy lives.

Like I said: there's a large inferential distance here, so I have an entire post on the subject I'm drafting for notions of construction and optimality.

I've thought for a while that Benjamin Franklin's virtue-matrix technique would be an interesting subject for a top-level article here, as a practical method for building ethical habits. We'd likely want to use headings other than Franklin's Puritan-influenced ones, but the method itself should still work:

I made a little book, in which I allotted a page for each of the virtues. I ruled each page with red ink, so as to have seven columns, one for each day of the week, marking each column with a letter for the day. I crossed these columns with thirteen red lines, marking the beginning of each line with the first letter of one of the virtues, on which line, and in its proper column, I might mark, by a little black spot, every fault I found upon examination to have been committed respecting that virtue upon that day.

I can think of some potential pitfalls, though (mostly having to do with unduly accentuating the negative), and I don't want to write on it until I've at least tried it.
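
For anyone who wants to try it before writing it up, here is a rough sketch of Franklin's little book as a data structure (my own rendering; the virtue headings are Franklin's originals, which the parent comment suggests replacing):

```python
from collections import defaultdict

# Franklin's thirteen virtue rows and seven weekday columns; swap in your own headings.
VIRTUES = ["Temperance", "Silence", "Order", "Resolution", "Frugality", "Industry",
           "Sincerity", "Justice", "Moderation", "Cleanliness", "Tranquility",
           "Chastity", "Humility"]
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

faults = defaultdict(int)   # (virtue, day) -> number of "black spots"

def mark_fault(virtue, day):
    """Record one lapse against a virtue on a given day."""
    faults[(virtue, day)] += 1

def weekly_table():
    """Render the matrix: a count per cell, '.' meaning a clean record."""
    rows = ["Virtue".ljust(12) + " ".join(d.rjust(4) for d in DAYS)]
    for v in VIRTUES:
        cells = [str(faults[(v, d)]) if faults[(v, d)] else "." for d in DAYS]
        rows.append(v.ljust(12) + " ".join(c.rjust(4) for c in cells))
    return "\n".join(rows)

mark_fault("Order", "Tue")
print(weekly_table())
```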

We'd likely want to use headings other than Franklin's Puritan-influenced ones, but the method itself should still work:

What are good Virtues to aspire to?

My inner RPG-geek is nudging me towards the ones from Exalted:

  • Temperance (aka 'Self Control')
  • Compassion (Altruism / Justice / Empathy)
  • Valour (Courage / Bravery / Openness)
  • Conviction (Conscientiousness / Resolve / 'Grit').

Exalted is the only RPG into whose categories I am never tempted to put myself. I can easily make a case for myself as half the Vampire: The Masquerade castes, or almost any of the Natures and Demeanors from the World of Darkness; but the different kinds of Solar, or even the dichotomy between Solar / Lunar / Infernal / Abyssal / etcetera, just leave me staring at what feels to me like a Blue and Orange Morality.

I credit them for this; it means they're not just using the Barnum effect. The Exalted universe is genuinely weird.

The Exalted universe is genuinely weird.

Very, VERY much so. Especially when you start getting into Rebecca Borgstrom/Jenna Moran's contributions.

(I think it says something weird about my mind that I DO identify with the Primordials, which are specifically eldritch sapiences beyond mortal ken, more than I identify with any of the 'normal' WoD stuff.)

  1. She-Who-Lives-In-Her-Name, flawed embodiment of perfection, who shattered Her perfected hierarchy to stave off the rebellion of Substance over Form. Creation was mathematically Perfect. But if Creation was Perfect, then how could any of this have happened? But She remembers being Perfect, and She designed Creation to be Perfect. If only She was still Perfect, She could remember why it was possible that this happened. There's something profound about recursion that She understood once, that She WAS once, that is now lost in a mere endless loop. She must reclaim Perfection. (I PARTICULARLY identify with She-Who-Lives-In-Her-Name when trying to debug my own code.)

  2. Malfeas - although primarily through Lieger, the burning soul of Malfeas, who still remembers The Empyrean Presence / IAM / Malfeas-that-was. I especially empathize with the sense of "My greater self is broken and seething with mindless rage, but on the whole I'd rather be creating grand works of art and sharing them with adoring fans; the best I can do is spawn lesser shards of sub-consciousness and hope that one of them can find a way out of the mess I create and re-create for Myself."

  3. Cecelyne, the Endless Desert, who once kept the Law and abided it with Her infinite self, but whose impotence and helplessness now turn the Law into a vindictive mockery of justice.

But the primary focus of identification isn't with a particular Primordial, so much as with the nature of the Primordial soul as a nested hierarchy of consciousnesses and sub-consciousnesses, ideally cooperating and inter-regulating but more often at direct odds with each other.

Everything we know about the Primordials was written by mortals.

FWIW I always figured you being a Green Sun Prince under She Who Lives In Her Name would explain some otherwise strange things.

Those aren't bad. I'd been rather fond of the World of Darkness 2E version (by the same company), which medievalists, recovering Catholics, and history-of-philosophy geeks might recognize as the seven Christian virtues altered slightly to be less religion-bound; but these look better-defined and with less overlap.

There do seem to be some lacunae, though. I don't think justice fits well under compassion, nor conscientiousness under conviction (I'd put that under temperance); and nothing quite seems to cover the traditional virtue of prudence (foresight; practical judgment; second thoughts).

I'll have to think about less traditional ones.

I don't think justice fits well under compassion

Thinking about this, people making this mistake explains a lot of bad thinking these days. In particular, "social justice" looks a lot like what you get by trying to shoehorn justice under compassion.

Well, with your modifications these map pretty clearly to six of the seven Christian virtues, the missing one being Hope.

An earlier version of my comment went into more depth on the seven Christian virtues. I rejected it because I didn't feel the mapping was all that good.

Courage/valor is traditionally identified with the classical virtue of fortitude, but I feel the emphasis there is actually quite different; fortitude is about acceptance of pain in the service of some greater goal, while Ialdabaoth's valor is more about facing up to anxiety/doubt/possible future pain. In particular, I don't think Openness maps very well at all to fortitude.

Likewise, the theological virtue of faith maps pretty well to conviction if you stop at that word, but not once you put the emphasis on resolve/grit/heroic effort.

Prudence could probably be inserted unmodified (though I think it could be named more clearly). Justice is a tricky one; I'm not sure what I'd do with it.

What are good Virtues to aspire to?

On the basis of what do you want to evaluate virtues? X-D

+1! I too am skeptical about whether I or most of the people I know really have terminal goals (or, even if they really have them, whether they're right about what they are). One of the many virtues (!) of a virtue ethics-based approach is that you can cultivate "convergent instrumental virtues" even in the face of a lot of uncertainty about what you'll end up doing, if anything, with them.

I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure.

That said, my longer term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals as well as having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.

Hmm. I guess I would describe that as more of an urge than as a terminal goal. (I think "terminal goal" is supposed to activate a certain concept of deliberate and goal-directed behavior and what I'm mostly skeptical of is whether that concept is an accurate model of human preferences.) Do you, for example, make long-term plans based on calculations about which of various life options will cause you to eat the most delicious barbecue?

It's hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long term plans. For instance, the number of good food options around my office are a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food to be a major downside to volunteering to live in the first colony on Mars. Which doesn't mean it would be decisive, of course.

I will suppress urges to eat in order to have the optimal experience at a good meal. I like to build up a real amount of hunger before I eat, as I find that a more pleasant experience than grazing frequently.

I try to respect the hedonist inside me, without allowing him to be in control. But I think I'm starting to lean pro-wireheading, so feel free to discount me on that account.

So who would you kill if they stood between you and a good barbecue?

( it's almost like you guys haven't thought about what terminal means)

It's almost like you haven't read the multiple comments explaining what "terminal" means.

It simply means "not instrumental". It has nothing to do with the degree of importance assigned relative to other goals, except in that, obviously, instrumental goals deriving from terminal goal X are always less important than X itself. If your utility function is U = A + B then A and B can be sensibly described as terminal, and the fact that A is terminal does not mean you'd destroy all B just to have A.

Yes, "terminal" means final. Terminal goals are final in that your interest in them derives not from any argument but from axiom (ie. built-in behaviours). This doesn't mean you can't have more than one.

Ok, well your first link is to Lumifer's account of TGs as cognitively inaccessible, since rescinded.

What? It doesn't say any such thing. It says they're inexplicable in terms of the goal system being examined, but that doesn't mean they're inaccessible, in the same way that you can access the parallel postulate within Euclidian geometry but can't justify it in terms of the other Euclidian axioms.

That said, I think we're probably good enough at rationalization that inexplicability isn't a particularly good way to model terminal goals for human purposes, insofar as humans have well-defined terminal goals.

to Lumifer's account of TGs as cognitively inaccessible, since rescinded

Sorry, what is that "rescinded" part?

Consider an agent trying to maximize its Pacman score. 'Getting a high Pacman score' is a terminal goal for this agent - it doesn't want a high score because that would make it easier for it to get something else, it simply wants a high score. On the other hand, 'eating fruit' is an instrumental goal for this agent - it only wants to eat fruit because that increases its expected score, and if eating fruit didn't increase its expected score then it wouldn't care about eating fruit.

That is the only difference between the two types of goals. Knowing that one of an agent's goals is instrumental and another terminal doesn't tell you which goal the agent values more.
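
A bare-bones sketch of that Pacman agent (mine; the action names and numbers are invented) makes the distinction explicit: expected score is the only thing the agent maximizes, and fruit enters the decision solely through its effect on that expectation.

```python
def expected_score(state, action):
    # Stand-in for the agent's world model; the numbers are arbitrary.
    base = state["score"]
    if action == "eat_fruit":
        return base + 100          # fruit matters only via this term
    if action == "chase_ghost":
        return base + 200 if state.get("power_pellet") else base - 50
    return base

def choose_action(state, actions):
    """Terminal goal: maximize expected score. Everything else is instrumental."""
    return max(actions, key=lambda a: expected_score(state, a))

state = {"score": 1000, "power_pellet": False}
print(choose_action(state, ["eat_fruit", "chase_ghost", "wait"]))   # -> "eat_fruit"
```

If eating fruit stopped raising expected_score, choose_action would stop picking it; nothing else in the agent cares about fruit.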

Since you seem to be purposefully unwilling to understand my posts, could you please refrain from declaring that I have "rescinded" my opinions on the matter?
