Stupid Questions Open Thread Round 2

From Costanza's original thread (entire text):

This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well.  Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent.  If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.

Meta:

 

  • How often should these be made? I think one every three months is the correct frequency.
  • Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.

 


What practical things should everyone be doing to extend their lifetimes?

Good question.

It's probably easier to list things they shouldn't be doing that are known to significantly reduce life expectancy (e.g. smoking). I would guess it would mainly be obvious things like exercise and diet, but it would be interesting to see the effects quantified.

What about vitamins/medication? Isn't Ray Kurzweil on like fifty different pills? Why isn't everyone?

It's unclear whether taking vitamin supplements would actually help. (See also the Quantified Health Prize post army1987 linked.)

Regarding medication, I'll add that for people over 40, aspirin seems to be a decent all-purpose death reducer. The effect's on the order of a 10% reduction in death rate after taking 75mg of aspirin daily for 5-10 years. (Don't try to take more to enhance the effect, as it doesn't seem to work. And you have to take it daily; only taking it on alternating days appears to kill the effect too.)

Michaelcurzi's How to avoid dying in a car crash is relevant. Bentarm's comment on that thread makes an excellent point regarding coronary heart disease.

There is also Eliezer Yudkowsky's You Only Live Twice and Robin Hanson's We Agree: Get Froze on cryonics.

Basically, any effective plan boils down to diligence and clean living. But here are changes I've made for longevity reasons:

You can retain nervous control of your muscles with regular exercise; this is a good place to start on specifically anti-aging exercise.

Abdominal breathing can significantly reduce your risk of heart attacks. (The previously linked book contains one way to switch styles.)

Intermittent fasting (only eating in a 4-8 hour window, or on alternating days, or a few other plans) is surprisingly easy to adopt and maintain, and may have some (or all) of the health benefits of calorie restriction, which is strongly suspected to lengthen human lifespans (and known to lengthen many different mammal lifespans).

In general, I am skeptical of vitamin supplements as compared to eating diets high in various good things; for example, calcium pills are more likely to give you kidney stones than significantly improve bone health, but eating lots of vegetables / milk / clay is unlikely to give you kidney stones and likely to help your bones. There are exceptions: taking regular low doses of lithium can reduce your chance of suicide and may have noticeable mood benefits, and finding food with high lithium content is difficult (plants absorb it from dirt at varying rates, but knowing that the plant you're buying came from high-lithium dirt is generally hard).

Can you cite a source for your claim about lithium? It sounds interesting.

Ah, yes. Sounds like it. Interestingly, the Quantified Health Prize winner also recommends low-dose lithium, but for a different reason: its effect on long-term neural health.

I don't think it's really a different reason; also, AFAIK I copied all the QHP citations into my section.

Gwern's research, as linked here, is better than anything I could put together.

Are there studies to support the abdominal breathing bit? If so, how were they conducted?

The one I heard about, but have not been able to find the last few times I looked for it, investigated how cardiac arrest patients at a particular hospital breathed. All (nearly all?) of them were chest breathers, while about 25% of the general adult population breathes abdominally. I don't think I've seen a randomized trial that taught some subjects how to breathe abdominally and then compared their rates, which is what would give clearer evidence. My understanding of why is that abdominal breathing increases oxygen absorbed per breath, lowering total lung/heart effort.

I don't know the terms to do a proper search of the medical literature, and would be interested in the results of someone with more domain-specific expertise investigating the issue.

What is your method of intermittent fasting?

Don't eat before noon or after 8 PM. Typically, that cashes out as eating between 1 and 7 because it's rarely convenient for me to start prepping food before noon, and I have a long habit of eating dinner at 5 to 6. On various days of the week (mostly for convenience reasons), I eat one huge meal, a big meal and a moderately sized meal, or three moderately sized meals, so my fasting period stretches from 16 hours at the shortest to ~21 hours at the longest.

I'm not a particularly good storehouse for information on IF; I would look to people like Leangains or Precision Nutrition for more info.

Thank you. It seems like there are a lot of contradictory opinions on the subject :(

lots of [...] milk

I seem to recall a study suggesting that it can be bad for adults to drink lots of milk (more than a cup a day).

Bad in what way? The majority of humanity is lactose intolerant and should not drink milk for that reason. And milk contains a bunch of fat and sugar which isn't exactly good for you if you drink extreme amounts. Is that what you are talking about, or is it something new?

I've found it: it was in “Fear of a Vegan Planet” by Mickey Z. It suggests that milk can lower the pH of the blood, which the body compensates for by taking calcium from the bones, citing the 1995 radio show “Natural Living”. (It doesn't look like as reliable a source to me now as I remembered it being.)

I've found materials both supporting and refuting this idea. It IS possible for diet to affect your blood pH, but whether or not that affects the bones is not clear. Here are two research papers that discuss the topic: http://www.ncbi.nlm.nih.gov/pubmed/21529374 and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3195546/?tool=pubmed

“Everyone” is tricky, since the main causes of mortality vary with your age. Anyway, I'd say, not smoking, exercising, not being obese (nor emaciated, but in the parts of the world where most Internet users are, short of anorexia nervosa this isn't likely to be a problem), driving less and in a less aggressive way, not committing suicide... Don't they teach this stuff in high school?

The last sentence is patronizing, and especially inappropriate in a thread about asking stupid questions.

Don't they teach this stuff in high school?

To the extent that a given fact about life extension can be sneered at like that, I would assume that the question was intended to encompass facts at least one degree less obvious. I.e., "What practical things should everyone be doing to extend their lifetimes apart from, you know, breathing, eating, sleeping, drinking?" is implicit.

Given the huge number of smokers and obese people, I daresay the things I said in the grandparent are not that obvious (or most people aren't interested in living longer).

"Obvious to the population as a whole" and "obvious to a LessWrong reader" probably differ dramatically. I don't think repeating the advice is necessarily bad, since those are common points of failure, but the value of the advice is probably fairly minimal.

Don't they teach this stuff in high school?

Yes, they do teach this stuff in high school (and middle school and elementary school, for that matter), but they generally have an agenda significantly different from "give students the most accurate possible information about how to be healthy." Based on my admittedly anecdotal recollections, the main goals were to scare us as much as possible about sex and drugs and to avoid having to explain anything complicated. As such, I would trust the LW community far more than what I was taught in school.

Of course, if you want to get your health advice from DARE and the Food Pyramid, I guess that's your right.

What do people mean here when they say "acausal"?

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

"Acausal" is used as a contrast to Causal Decision Theory (CDT). CDT states that decisions should be evaluated with respect to their causal consequences; ie if there's no way for a decision to have a causal impact on something, then it is ignored. (More precisely, in terms of Pearl's Causality, CDT is equivalent to having your decision conduct a counterfactual surgery on a Directed Acyclic Graph that represents the world, with the directions representing causality, then updating nodes affected by the decision.) However, there is a class of decisions for which your decision literally does have an acausal impact. The classic example is Newcomb's Problem, in which another agent uses a simulation of your decision to decide whether or not to put money in a box; however, the simulation took place before your actual decision, and so the money is already in the box or not by the time you're making your decision.

"Acausal" refers to anything falling in this category of decisions that have impacts that do not result causally from your decisions or actions. One example is, as above, Newcomb's Problem; other examples include:

  • Acausal romance: romances where interaction is impossible
  • The Prisoner's Dilemma, or any other symmetrical game, when played against the same algorithm you are running. You know that the other player will make the same choice as you, but your choice has no causal impact on their choice.
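
A toy sketch of that second case, assuming both players literally run the same function (the function name and payoffs are made up):

```python
# Prisoner's Dilemma against an exact copy of your own decision algorithm.

PAYOFFS = {  # (my move, their move) -> my payoff, standard PD ordering
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def my_algorithm():
    # Whatever this returns, the copy returns the same thing, because it is
    # literally the same code. Neither call causes the other.
    return "C"

me, copy_of_me = my_algorithm(), my_algorithm()
print(PAYOFFS[(me, copy_of_me)])  # only (C,C)=3 or (D,D)=1 can ever happen, never (C,D) or (D,C)
```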

There are a number of acausal decision theories: Evidential Decision Theory (EDT), Updateless Decision Theory (UDT), Timeless Decision Theory (TDT), and Ambient Decision Theory (ADT).

In EDT, which originates in academia, causality is completely ignored and only correlations are used. This leads to the correct answer on Newcomb's Problem, but fails on others, for example the Smoking Lesion. UDT is essentially EDT, but with an agent that has access to its own code. (There's a video and transcript explaining this in more detail here.)
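
A rough sketch of why a purely correlational evaluation goes wrong on the Smoking Lesion; the numbers and the population model are made up for illustration:

```python
# Smoking Lesion toy model: a lesion causes both smoking and cancer;
# smoking itself is harmless (and mildly enjoyable) in this world.
import random
random.seed(0)

population = []
for _ in range(100_000):
    lesion = random.random() < 0.5
    smokes = random.random() < (0.9 if lesion else 0.1)   # lesion -> smoking
    cancer = random.random() < (0.8 if lesion else 0.05)  # lesion -> cancer
    utility = (1 if smokes else 0) - (100 if cancer else 0)
    population.append((smokes, utility))

def avg_utility(smokes_value):
    xs = [u for s, u in population if s == smokes_value]
    return sum(xs) / len(xs)

# EDT compares E[utility | I smoke] with E[utility | I don't smoke]:
print("smokers:    ", round(avg_utility(True), 1))   # much worse, since smoking is evidence of the lesion
print("non-smokers:", round(avg_utility(False), 1))
# So EDT says "don't smoke", even though intervening on smoking would not
# change anyone's cancer risk in this model -- correlation, not causation.
```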

TDT, like CDT, relies on causality instead of correlation; however, instead of having agents choose a decision that is directly implemented, it has agents first choose a platonic computation that is instantiated in, among other things, the actual decision maker. That computation is also instantiated in every other algorithm that is acausally equal to the decision maker's algorithm, including simulations, other agents, etc. Given all of these instantiations, the agent then chooses the utility-maximizing algorithm.

ADT...I don't really know, although the wiki says that it is "variant of updateless decision theory that uses first order logic instead of mathematical intuition module (MIM), emphasizing the way an agent can control which mathematical structure a fixed definition defines, an aspect of UDT separate from its own emphasis on not making the mistake of updating away things one can still acausally control."

One must distinguish different varieties of MWI. There is an old version of the interpretation (which, I think, is basically what most informed laypeople think of when they hear "MWI") according to which worlds cannot interact. This is because "world-splitting" is a postulate that is added to the Schrodinger dynamics. Whenever a quantum measurement occurs, the entire universe (the ordinary 3+1-dimensional universe we are all familiar with) duplicates (except that the two versions have different outcomes for the measurement). It's basically as mysterious a process as collapse, perhaps even more mysterious.

This is different from the MWI most contemporary proponents accept. This MWI (also called "Everettianism" or "The Theory of the Universal Wavefunction" or...) does not actually have full-fledged separate universes. The fundamental ontology is just a single wavefunction. When macroscopic branches of the wavefunction are sufficiently separate in configuration space, one can loosely describe it as world-splitting. But there is nothing preventing these branches from interfering in principle, just as microscopic branches interfere in the two-slit experiment. There is no magical threshold of branch size/separation where the Schrodinger equation no longer permits interference. And in MWI understood this way, the Schrodinger equation is all the dynamics there are. So yeah, MWI does allow for the interaction of "worlds" in principle. The reason we never see this happening at a macroscopic scale is usually explained by appeal to special initial conditions (just like the thermodynamic arrow of time).

ETA: And in some sense, all the separate worlds are actually interacting all the time, just at a scale that is impossible for our instruments to detect.

Suppose I use the luck of Mat Cauthon to pick a random direction to fly my perfect spaceship. Assuming I live forever, do I eventually end up in a world that split from this world?

No. The splitting is not in physical space (the space through which you travel in a spaceship), but in configuration space. Each point in configuration space represents a particular arrangement of fundamental particles in real space.

Moving in real space changes your position in configuration space of course, but this doesn't mean you'll eventually move out of your old branch into a new one. After all, the branches aren't static. You moving in real space is a particular aspect of the evolution of the universal wavefunction. Specifically, your branch (your world) is moving in configuration space.

Don't think of the "worlds" in MWI as places. It's more accurate to think of them as different (evolving) narratives or histories. Splitting of worlds is a bifurcation of narratives. Moving around in real space doesn't change the narrative you're a part of, it just adds a little more to it. Narratives can collide, as in the double slit experiment, which leads to things appearing as if both (apparently incompatible) narratives are true -- the particle went through both slits. But we don't see this collision of narratives at the macroscopic level.

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

To expand on what pragmatist said: The wavefunction started off concentrated in a tiny corner of a ridiculously high-dimensional space (configuration space has several dimensions for every particle), and then spread out in a very non-uniform way as time passed.

In many cases, the wavefunction's rule for spreading out (the Schrödinger equation) allows for two "blobs" to "separate" and then "collide again" (thus the two-slit experiment, Feynman paths, and all sorts of wavelike behavior). The quote marks are there because it's never like perfect physical separation; it's more like the way that the pointwise sum of two Gaussian functions with very different means looks like two "separated" blobs.
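
A minimal numerical illustration of that "sum of two Gaussians" picture (the means, width, and sample points are arbitrary):

```python
# Two Gaussian "blobs" with very different means: well separated, but the
# amplitude between them is small rather than exactly zero.
import math

def gaussian(x, mean, width=1.0):
    return math.exp(-((x - mean) ** 2) / (2 * width ** 2))

for x in range(-5, 26, 5):
    total = gaussian(x, 0.0) + gaussian(x, 20.0)
    print(f"x={x:>3}  amplitude={total:.3e}")
# Near x=0 and x=20 the amplitude is ~1; in between it is tiny but nonzero,
# which is the sense in which "separated" branches can still interfere in principle.
```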

But certain kinds of interactions (especially those that lead to a cascade of other interactions) correspond to those blobs "losing" each other. And if they do so, then it's highly unlikely they'll accidentally "collide" again later. (A random walk in a high-dimensional space never finds its way back, heuristically speaking.)

So, as long as the universe has relatively low entropy (as it will until what we would call the end of the universe), significant interference with "blobs" of wavefunction that "split off of our blob" in macroscopic ways a long time ago would be fantastically unlikely. Not impossible, just "a whale and a petunia spontaneously appear out of quantum noise" degree of improbability.

What do people mean here when they say "acausal"?

As I understand it: If you draw out events as a DAG with arrows representing causality, then A acausally affects B in the case that there is no path from A to B, and yet a change to A necessitates a change to B, normally because of either a shared ancestor or a logical property.

I most often use it informally to mean "contrary to our intuitive notions of causality, such as the idea that causality must run forward in time", instead of something formal having to do with DAGs. Because from what I understand, causality theorists still disagree on how to formalize causality (e.g., what constitutes a DAG that correctly represents causality in a given situation), and it seems possible to have a decision theory (like UDT) that doesn't make use of any formal definition of causality at all.

Broadly, branches become less likely to interact as they become more dissimilar; dissimilarity tends to bring about dissimilarity, so they can quickly diverge to a point where the probability of further interaction is negligible. Note that at least as Eliezer tells it, branches aren't ontologically basic objects in MWI any more than chairs are; they're rough high-level abstractions we imagine when we think about MWI. If you want a less confused understanding than this, I don't have a better suggestion than actually reading the quantum physics sequence!

Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don't think that's an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right?

Basically, if two of them evolve into the same "world", they interfere. It could be constructive or destructive. On average this happens about as often as you'd expect, so outside of stuff like the double-slit experiment, they won't really interact.

Hmm. I'm also pretty sure that the double-slit experiments are not evidence of MWI vs. waveform collapse.

They are evidence against wave-form collapse, in that they give a lower bound as to when it must occur. Since, if it does exist, it's fairly likely that waveform collapse happens at a really extreme point, there's really only a fairly small amount of evidence you can get against waveform collapse without something that disproves MWI too. The reason MWI is more likely is Occam's razor, not evidence.

Well, I tried to understand some double-slit corner cases. Reading a classical Copenhagen-approach quantum physics textbook, it is hard to describe what happens if you install an imprecise particle detector that is securely protected from the experimenter's attempts to ever read it.

Of course, in some cases the Penrose model and MWI are hard to distinguish, because gravitons are hard to screen off and can cause entanglement over large distances.

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

I think it depends on perspective. We notice worlds interfering with each other in the double slit experiment. I think that maybe once you are in a world you no longer see evidence of it interfering with other worlds? I'm not really sure.

Pretty sure double slit stuff is an effect of wave-particle duality, which is just as consistent with MWI as with waveform collapse.

"Acausal" means formal or final as opposed to efficient causality.

Also, a monad is just a monoid in the category of endofunctors.

That looks like precise jargon. Where should I look up the words you just used?

Philosophy sources, e.g. the online Stanford Encyclopedia thereof or some work on Aristotle. But I don't recommend you bother. "Formal" means a logical implication. "Final" suggests a purpose, which makes sense in the context of decision theory.

Aristotelian tradition. I'm sure you could find a lot of similarly motivated classifications in the cybernetics and complex systems literature if you're not into old school metaphysics.

I've read the metaethics sequence twice and am still unclear on what the basic points it's trying to get across are. (I read it and get to the end and wonder where the "there" is there. What I got from it is "our morality is what we evolved, and humans are all we have therefore it is fundamentally good and therefore it deserves to control the entire future", which sounds silly when I put it like that.) Would anyone dare summarise it?

Morality is good because goals like joy and beauty are good. (For qualifications, see Appendices A through OmegaOne.) This seems like a tautology, meaning that if we figure out the definition of morality it will contain a list of "good" goals like those. We evolved to care about goodness because of events that could easily have turned out differently, in which case "we" would care about some other list. But, and here it gets tricky, our Good function says we shouldn't care about that other list. The function does not recognize evolutionary causes as reason to care. In fact, it does not contain any representation of itself. This is a feature. We want the future to contain joy, beauty, etc, not just 'whatever humans want at the time,' because an AI or similar genie could and probably would change what we want if we told it to produce the latter.

Okay, now this definitely sounds like standard moral relativism to me. It's just got the caveat that obviously we endorse our own version of morality, and that's the ground on which we make our moral judgements. Which is known as appraiser relativism.

I must confess I do not understand what you just said at all. Specifically:

  • the second sentence: could you please expand on that?
  • I think I get that the function does not evaluate itself at all, and if you ask it just says "it's just good 'cos it is, all right?"
  • Why is this a feature? (I suspect the password is "Löb's theorem", and only almost understand why.)
  • The last bit appears to be what I meant by "therefore it deserves to control the entire future." It strikes me as insufficient reason to conclude that this can in no way be improved, ever.

Does the sequence show a map of how to build metamorality from the ground up, much as writing the friendly AI will need to work from the ground up?

the second sentence: could you please expand on that?

I'll try: any claim that a fundamental/terminal moral goal 'is good' reduces to a tautology on this view, because "good" doesn't have anything to it besides these goals. The speaker's definition of goodness makes every true claim of this kind true by definition. (Though the more practical statements involve inference. I started to say it must be all logical inference, realized EY could not possibly have said that, and confirmed that in fact he did not.)

I get that the function does not evaluate itself at all,

Though technically it may see the act of caring about goodness as good. So I have to qualify what I said before that way.

Why is this a feature?

Because if the function could look at the mechanical, causal steps it takes, and declare them perfectly reliable, it would lead to a flat self-contradiction by Löb's Theorem. The other way looks like a contradiction but isn't. (We think.)
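
For reference, this is Löb's theorem; its application to a "Good function" judging its own steps is the informal sketch above, not part of the theorem itself:

```latex
\[
\text{If } \mathrm{PA} \vdash (\Box P \rightarrow P) \text{, then } \mathrm{PA} \vdash P,
\qquad \text{internalized as } \Box(\Box P \rightarrow P) \rightarrow \Box P .
\]
```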

Thank you, this helps a lot.

Though technically it may see the act of caring about goodness as good. So I have to qualify what I said before that way.

Ooh yeah, didn't spot that one. (As someone who spent a lot of time when younger thinking about this and trying to be a good person, I certainly should have spotted this.)

This comment by Richard Chappell explained clearly and concisely Eliezer's metaethical views. It was very highly upvoted, so apparently the collective wisdom of the community considered it accurate. It didn't receive an explicit endorsement by Eliezer, though.

From the comment by Richard Chappell:

(namely, whatever terminal values the speaker happens to hold, on some appropriate [if somewhat mysterious] idealization).

(i) 'Right' means, roughly, 'promotes external goods X, Y and Z' (ii) claim i above is true because I desire X, Y, and Z.

People really think EY is saying this? It looks to me like a basic Egoist stance, where "your values" also include your moral preferences. That is my position, but I don't think EY is on board.

"Shut up and multiply" implies a symmetry in value between different people that isn't implied by the above. Similarly, the diversion into mathematical idealization seemed like a maneuver toward Objective Morality - One Algorithm to Bind Them, One Algorithm to Rule them All. Everyone gets their own algorithm as the standard of right and wrong? Fantastic, if it were true, but that's not how I read EY.

It's strange, because Richard seems to say that EY agrees with me, while I think EY agrees with him.

I think you are mixing up object-level ethics and metaethics here. You seem to be contrasting an Egoist position ("everyone should do what they want") with an impersonal utilitarian one ("everyone should do what is good for everyone, shutting up and multiplying"). But the dispute is about what "should", "right" and related words mean, not about what should be done.

Eliezer (in Richard's interpretation) says that when someone says "Action A is right" (or "should be done"), the meaning of this is roughly "A promotes ultimate goals XYZ". Here XYZ is in fact the outcome of a complicated computation based on the speaker's state of mind, which can be translated roughly as "the speaker's terminal values" (for example, for a sincere philanthropist XYZ might be "everyone gets joy, happiness, freedom, etc."). But the fact that XYZ are the speaker's terminal values is not part of the meaning of "right", so it is not inconsistent for someone to say "Everyone should promote XYZ, even if they don't want it" (e.g. "Babyeaters should not eat babies"). And needless to say, XYZ might include generalized utilitarian values like "everyone gets their preferences satisfied", in which case impersonal, shut-up-and-multiply utilitarianism is what is needed to make actual decisions in concrete cases.

But the dispute is about what "should", "right" and related words mean, not about what should be done.

Of course it's about both. You can define labels in any way you like. In the end, your definition better be useful for communicating concepts with other people, or it's not a good definition.

Let's define "yummy". I put food in my mouth. Taste buds fire, neural impulses propagate fro neuron to neuron, and eventually my mind evaluates how yummy it is. Similar events happen for you. Your taste buds fire, your neural impulses propagate, and your mind evaluates how yummy it is. Your taste buds are not mine, and your neural networks are not mine, so your response and my response are not identical. If I make a definition of "yummy" that entails that what you find yummy is not in fact yummy, I've created a definition that is useless for dealing with the reality of what you find yummy.

From my inside view of yummy, of course you're just wrong if you think root beer isn't yummy - I taste root beer, and it is yummy. But being a conceptual creature, I have more than the inside view, I have an outside view as well, of you, and him, and her, and ultimately of me too. So when I talk about yummy with other people, I recognize that their inside view is not identical to mine, and so use a definition based on the outside view, so that we can actually be talking about the same thing, instead of throwing our differing inside views at each other.

Discussion with the inside view: "Let's get root beer." "What? Root beer sucks!" "Root beer is yummy!" "Is not!" "Is too!"

Discussion with the outside view: "Let's get root beer." "What? Root beer sucks!" "You don't find root beer yummy?" "No. Blech." "OK, I'm getting a root beer." "And I pick pepsi."

If you've tied yourself up in conceptual knots, and concluded that root beer really isn't yummy for me, even though my yummy detector fires whenever I have root beer, you're just confused and not talking about reality.

But the fact that XYZ are the speaker's terminal values is not part of the meaning of "right"

This is the problem. You've divorced your definition from the relevant part of reality - the speaker's terminal values - and somehow twisted it around to where what he *should* do is at odds with his terminal values. This definition is not useful for discussing moral issues with the given speaker. He's a machine that maximizes his terminal values. If his algorithms are functioning properly, he'll disregard your definition as irrelevant to achieving his ends. Whether from the inside view of morality for that speaker, or his outside view, you're just wrong. And you're also wrong from any outside view that accurately models what terminal values people actually have.

Rational discussions of morality start with the observation that people have differing terminal values. Our terminal values are our ultimate biases. Recognizing that my biases are mine, and not identical to yours, is the first step away from the usual useless babble in moral philosophy.

Shifting the lump under the rug but not getting rid of it is how it looks to me too. But I don't understand the rest of that comment and will need to think harder about it (when I'm less sleep-deprived).

I note that that's the comment Lukeprog flagged as his favourite answer, but of course I can't tell if it got the upvotes before or after he did so.

Let me try...

Something is green if it emits or scatters much more light between 520 and 570 nm than between 400 and 520 nm or between 570 and 700 nm. That's what green means, and it also applies to places where there are no humans: it still makes sense to ask whether the skin of tyrannosaurs was green even though there were no humans back then. On the other hand, the reason why we find the concept of ‘something which emits or scatters much more light between 520 and 570 nm than between 400 and 520 nm or between 570 and 700 nm’ important enough to have a word (green) for it is that for evolutionary reasons we have cone cells which work in those ranges; if we saw in the ultraviolet, we might have a word, say breen, for ‘something which emits or scatters much more light between 260 and 285 nm than between 200 and 260 nm or between 285 and 350 nm’. This doesn't mean that greenness is relative, though.
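
A toy sketch of that definition-by-wavelength-bands; the spectrum format is made up and "much more" is arbitrarily taken to mean "at least twice as much":

```python
# "Green" and the hypothetical "breen" each pick out a fixed, observer-independent
# property of a spectrum, even though which property got a word depends on our eyes.

def band_power(spectrum, lo, hi):
    """Sum the emitted/scattered power between lo and hi nm (spectrum: {wavelength_nm: power})."""
    return sum(p for wl, p in spectrum.items() if lo <= wl < hi)

def is_green(spectrum):
    mid = band_power(spectrum, 520, 570)
    return mid > 2 * band_power(spectrum, 400, 520) and mid > 2 * band_power(spectrum, 570, 700)

def is_breen(spectrum):  # the word ultraviolet-seeing beings might have coined instead
    mid = band_power(spectrum, 260, 285)
    return mid > 2 * band_power(spectrum, 200, 260) and mid > 2 * band_power(spectrum, 285, 350)

leaf = {450: 0.1, 550: 1.0, 650: 0.1}
print(is_green(leaf), is_breen(leaf))  # True False -- and it stays true with no humans around
```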

Likewise, something is good if it leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc. That's what good means, and it also applies to places where there are no humans: it still makes sense to ask whether it's good for Babyeaters to eat their children even though there are no humans on that planet. On the other hand, the reason why we find the concept of ‘something which leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc.’ important enough to have a word (good) for it is that for evolutionary reasons we value such kind of things; if we valued heaps composed of prime numbers of pebbles, we might have a word, say pood, for ‘something which leads to lots of heaps with a prime number of pebbles in each’. This doesn't mean that goodness is relative, though.

I have recently read this post and thought it describes very well how I always thought about morality, even though it talks about 'sexiness'.

Would reading the metaethics sequence explain to me that it would be wrong to view morality in a similar fashion as sexiness?

One part of it that did turn out well, in my opinion, is Probability is Objectively Subjective and related posts. Eliezer's metaethical theory is, unless I'm mistaken, an effort to do for naive moral intuitions what Bayesianism should do for naive probabilistic intuitions.

"I am not a moral relativist." http://lesswrong.com/lw/t9/no_license_to_be_human/

"I am not a meta-ethical relativist" http://lesswrong.com/lw/t3/the_bedrock_of_morality_arbitrary/mj4

"what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain." http://lesswrong.com/lw/sm/the_meaning_of_right/

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that MER is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-Ethical Relativism, at least as summarized by Wikipedia, describes this fact with the sentence "terms such as 'good,' 'bad,' 'right' and 'wrong' do not stand subject to universal truth conditions at all." Eliezer doesn't like that because in each speaker's language, terms like "good" do stand subject to universal truth conditions. But each speaker speaks a slightly different language in which the word represented by the string "good" stands subject to a slightly different set of universal truth conditions.

For an analogy: I apparently consistently define "blonde" differently from almost everyone I know. But it has an actual definition. When I call someone "blonde" I know what I mean, and people who know me well know what I mean. But it's a different thing from what almost everyone else means when they say "blonde." (I don't know why I can't fix this; I think my color perception is kinda screwed up). An MER guy would say that whether someone is "blonde" isn't objectively true or false because what it means varies from speaker to speaker. Eliezer would say that "blonde" has a meaning in my language and a different meaning in my friends' language, but in either language whether a person is "blonde" is in fact an objective fact.

And, you know, he's right. But we're not very good at discussing phenomena where two different people speak the same language except one or two words have different meanings; it's actually a thing that's hard to talk about. So in practice, "'good' doesn't have an objective definition" conveys my meaning more accurately to the average listener than "'good' has one objective meaning in my language and a different objective meaning in your language."

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings

In http://lesswrong.com/lw/t0/abstracted_idealized_dynamics/mgr, user steven wrote (and Eliezer agreed): "When X (an agent) judges that Y (another agent) should Z (take some action, make some decision), X is judging that Z is the solution to the problem W (perhaps increasing a world's measure under some optimization criterion), where W is a rigid designator for the problem structure implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments. (Or at least X is asserting that it's shared.) Due to the nature of W, becoming informed will cause X and Y to get closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral."

This means that, even though people might presently have different things in mind when they say something is "good", Eliezer does not regard their/our/his present ideas as either the meaning of their-form-of-good or his-form-of-good. The meaning of good is not "the things someone/anyone personally, presently finds morally compelling", but something like "the fixed facts that are found but not defined by clarifying the result of applying the shared human evaluative cognitive machinery to a wide variety of situations under reflectively ideal conditions of information." That is to say, Eliezer thinks, not only that moral questions are well defined, "objective", in a realist or cognitivist way, but that our present explicit-moralities all have a single, fixed, external referent which is constructively revealed via the moral computations that weigh our many criteria.

I haven't finished reading CEV, but here's a quote from Levels of Organization that seems relevant: "The target matter of Artificial Intelligence is not the surface variation that makes one human slightly smarter than another human, but rather the vast store of complexity that separates a human from an amoeba". Similarly, the target matter of inferences that figure out the content of morality is not the surface variation of moral intuitions and beliefs under partial information which result in moral disagreements, but the vast store of neural complexity that allows humans to disagree at all, rather than merely be asking different questions.

So the meaning of presently-acted-upon-and-explicitly-stated-rightness in your language, and the meaning of it in my language might be different, but one of the many points of the meta-ethics sequence is that the expanded-enlightened-mature-unfolding of those present usages gives us a single, shared, expanded-meaning in both our languages.

If you still think that moral relativism is a good way to convey that in daily language, fine. It seems the most charitable way in which he could be interpreted as a relativist is if "good" is always in quotes, to denote the present meaning a person attaches to the word. He is a "moral" relativist, and a moral realist/cognitivist/constructivist.

Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate; people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio on aggregate utilitarianism as a philosophy and I'm pretty sure Eliezer agrees with him, and has more or less explicitly identified it as a failure mode of AI development. I doubt these are beliefs that really converge with more information and reflection.

Or in steven's formulation, I suspect that relatively few agents actually have Ws in common; his definition presupposes that there's a problem structure "implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments". I'm arguing that many agents have sufficiently different implicit problem structures that, for instance, by that definition Eliezer and Robin Hanson can't really make "should" statements to each other.

Just getting citations out of the way, Eliezer talked about the repugnant conclusion here and here. He argues for shared W in Psychological Unity and Moral Disagreement. Kaj Sotala wrote a notable reply to Psychological Unity, Psychological Diversity. Finally Coherent Extrapolated Volition is all about finding a way to unfold present-explicit-moralities into that shared-should that he believes in, so I'd expect to see some arguments there.

Now, doesn't the state of the world today suggest that human explicit-moralities are close enough that we can live together in a Hubble volume without too many wars, without a thousand broken coalitions of support over sides of irreconcilable differences, without blowing ourselves up because the universe would be better with no life than with the evil monsters in that tribe on the other side of the river?

Human concepts are similar enough that we can talk to each other. Human aesthetics are similar enough that there's a billion dollar video game industry. Human emotions are similar enough that Macbeth is still being produced three hundred years later on the other side of the globe. We have the same anatomical and functional regions in our brains. Parents everywhere use baby talk. On all six populated continents there are countries in which more than half of the population identifies with the Christian religions.

For all those similarities, is humanity really going to be split over the Repugnant Conclusion? Even if the Repugnant Conclusion is more of a challenge than muscling past a few cognitive biases (scope insensitivity and the attribute substitution heuristic are also universal), I think we have some decent prospect for a future in which you don't have to kill me. Whatever will help us to get to that future, that's what I'm looking for when I say "right". No matter how small our shared values are once we've felt the weight of relevant moral arguments, that's what we need to find.

This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist, about which see my last paragraph).

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real.

In a number of places Eliezer commented that he's not too worried about, say, two systems morality_1 and morality_2 that differ in the third decimal place. I think it's actually really interesting when they differ in the third decimal place; it's probably not important to the project of designing an AI but I don't find that project terribly interesting so that doesn't bother me.

But I'm also more willing to say to someone, ""We have nothing to argue about [on this subject], we are only different optimization processes." With most of my friends I really do have to say this, as far as I can tell, on at least one subject.

However, I really truly don't think this is as all-or-nothing as you or Eliezer seem to paint it. First, because while morality may be a compact algorithm relative to its output, it can still be pretty big, and disagreeing seriously about one component doesn't mean you don't agree about the other several hundred. (A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy; and as far as I can tell this is more or less irreducible in the specification for all of us). But I can still talk to these people and have rewarding conversations on other subjects.

Second, because I realize there are other means of persuasion than argument. You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se, and there's no particular reason to expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe. (See Rorty's Contingency, Irony, and Solidarity for discussion of this.)

Humans have a lot of psychological similarity. They also have some very interesting and deep psychological variation (see e.g. Haidt's work on the five moral systems). And it's actually useful to a lot of societies to have variation in moral systems--it's really useful to have some altruistic punishers, but not really for everyone to be an altruistic punisher.

But really, this is beside the point of the original question, whether Eliezer is really a meta-ethical relativist, because the limit of this sequence which he claims converges isn't what anyone else is talking about when they say "morality". Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes. Eliezer clearly doesn't believe any such thing exists. And he's right.

"We have nothing to argue about [on this subject], we are only different optimization processes."

Calling something a terminal value is the default behavior when humans look for a justification and don't find anything. This happens because we perceive little of our own mental processes and in the absence of that information we form post-hoc rationalizations. In short, we know very little about our own values. But that lack of retrieved / constructed justification doesn't mean it's impossible to unpack moral intuitions into algorithms so that we can more fully debate which factors we recognize and find relevant.

A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy

Your friends can understand why humans have positive personality descriptors for people who don't get angry in various situations: descriptors like reflective, charming, polite, solemn, respecting, humble, tranquil, agreeable, open-minded, approachable, cooperative, curious, hospitable, sensitive, sympathetic, trusting, merciful, gracious.

You can understand why we have positive personality descriptors for people who get angry in various situations: descriptors like impartial, loyal, decent, passionate, courageous, boldness, leadership, strength, resilience, candor, vigilance, independence, reputation, and dignity.

Both you and your friends can see how either group could pattern match their behavioral bias as being friendly, supportive, mature, disciplined, or prudent.

These are not deep variations, they are relative strengths of reliance on the exact same intuitions.

You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se and has no particular reason one would expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe.

Stories strengthen our associations of different emotions in response to analogous situations, which doesn't have much of a converging effect (Edit: unless, you know, it's something like the bible that a billion people read. That certainly pushes humanity in some direction), but they can also create associations to moral evaluative machinery that previously wasn't doing its job. There's nothing arational about this: neurons firing in the inferior frontal gyrus are evidence relevant to a certain useful categorizing inference, "things which are sentient".

Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes

I'm not in a mood to argue definitions, but "optimization process" is a very new concept, so I'd lean toward "less".

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition they should converge sufficiently other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same.

As for the last bit--I don't want to argue definitions either. It's a stupid pastime. But to the extent Eliezer claims not to be a meta-ethical relativist he's doing it purely through a definitional argument.

He does intend to convey something real and nontrivial (well, some people might find it trivial, but enough people don't that it is important to be explicit) by saying that he is not a meta-ethical relativist. The basic idea is that, while his brain is the causal reason for him wanting to do certain things, it is not referenced in the abstract computation that defines what is right. To use a metaphor from the meta-ethics sequence, it is a fact about a calculator that it is computing 1234 * 5678, but the fact that 1234 * 5678 = 7,006,652 is not a fact about that calculator.

This distinguishes him from some types of relativism, which I would guess to be the most common types. I am unsure whether people understand that he is trying to draw this distinction and still think that it is misleading to say that he is not a moral relativist or whether people are confused/have a different explanation for why he does not identify as a relativist.

In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

Do you know anyone who never makes computational errors? If 'mistakes' happen at all, we would expect to see them in cases involving tribal loyalties. See von Neumann and those who trusted him on hidden variables.

The claim wasn't that it happens too often to attribute to computation error, but that the types of differences seem unlikely to stem from computational errors.

The problem is, EY may just be contradicting himself, or he may be being ambiguous, and even deliberately so.

"what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain."

I think his views could be clarified in a moment if he stated clearly whether this abstract computation is identical for everyone. Is it AC_219387209 for all of us, or AC_42398732 for you, and AC_23479843 for me, with the proviso that it might be the case that AC_42398732 = AC_23479843?

Your quote makes it appear to be the former. Other quotes in this thread about a "shared W" point to that as well.

Then again, quotes in the same article make it appear the latter, as in:

If you hoped that morality would be universalizable—sorry, that one I really can't give back. Well, unless we're just talking about humans. Between neurologically intact humans, there is indeed much cause to hope for overlap and coherence;

We're all busy playing EY Exegesis. Doesn't that strike anyone else as peculiar? He's not dead. He's on the list. And he knows enough about communication and conceptualization to have been clear in the first place. And yet on such a basic point, what he writes seems to go round and round and we're not clear what the answer is. And this, after years of opportunity for clarification.

It brings to mind Quirrell:

“But if your question is why I told them that, Mr. Potter, the answer is that you will find ambiguity a great ally on your road to power. Give a sign of Slytherin on one day, and contradict it with a sign of Gryffindor the next; and the Slytherins will be enabled to believe what they wish, while the Gryffindors argue themselves into supporting you as well. So long as there is uncertainty, people can believe whatever seems to be to their own advantage. And so long as you appear strong, so long as you appear to be winning, their instincts will tell them that their advantage lies with you. Walk always in the shadow, and light and darkness both will follow.”

If you're trying to convince people of your morality, and they have already picked teams, there is an advantage in letting it appear to each that they haven't really changed sides.

Ah, neat, you found exactly what it is. Although the LW version is a bit stronger, since it involves thoughts like "the cause of me thinking some things are moral does not come from interacting with some mysterious substance of moralness."

That's it? That's the whole takeaway?

I mean, I can accept "the answer is there is no answer" (just as there is no point to existence in and of itself; we're just here and have to work out what to do for ourselves). It just seems like rather a lot of text to get that across.