From Costanza's original thread (entire text):

This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well.  Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent.  If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.



  • How often should these be made? I think one every three months is the correct frequency.
  • Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.


209 comments

What practical things should everyone be doing to extend their lifetimes?

Michaelcurzi's How to avoid dying in a car crash is relevant. Bentarm's comment on that thread makes an excellent point regarding coronary heart disease. There is also Eliezer Yudkowsky's You Only Live Twice and Robin Hanson's We Agree: Get Froze on cryonics.
Good question. It's probably easier to list things they shouldn't be doing that are known to significantly reduce life expectancy (e.g. smoking). I would guess it would mainly be obvious things like exercise and diet, but it would be interesting to see the effects quantified.
Joshua Hobbes
What about vitamins/medication? Isn't Ray Kurzweil on like fifty different pills? Why isn't everyone?
And Aubrey de Grey doesn't take any.
It's unclear whether taking vitamin supplements would actually help. (See also the Quantified Health Prize post army1987 linked.) Regarding medication, I'll add that for people over 40, aspirin seems to be a decent all-purpose death reducer. The effect's on the order of a 10% reduction in death rate after taking 75mg of aspirin daily for 5-10 years. (Don't try to take more to enhance the effect, as it doesn't seem to work. And you have to take it daily; only taking it on alternating days appears to kill the effect too.)
Laziness and lack of information
Joshua Hobbes
Isn't Less Wrong supposed to be partially about counteracting those? The topic must have come up at some point in the sequences.
I follow the "Bulletproof" diet.
Donate to SENS.
Basically, any effective plan boils down to diligence and clean living. But here are changes I've made for longevity reasons:

You can retain nervous control of your muscles with regular exercise; this is a good place to start on specifically anti-aging exercise. Abdominal breathing can significantly reduce your risk of heart attacks. (The previously linked book contains one way to switch styles.) Intermittent fasting (only eating in a 4-8 hour window, or on alternating days, or a few other plans) is surprisingly easy to adopt and maintain, and may have some (or all) of the health benefits of calorie restriction, which is strongly suspected to lengthen human lifespans (and known to lengthen many different mammal lifespans).

In general, I am skeptical of vitamin supplements as compared to eating diets high in various good things; for example, calcium pills are more likely to give you kidney stones than significantly improve bone health, but eating lots of vegetables / milk / clay is unlikely to give you kidney stones and likely to help your bones. There are exceptions: taking regular low doses of lithium can reduce your chance of suicide and may have noticeable mood benefits, and finding food with high lithium content is difficult (plants absorb it from dirt at varying rates, but knowing that the plant you're buying came from high-lithium dirt is generally hard).
Can you cite a source for your claim about lithium? It sounds interesting.
He's probably going off my section on lithium:
Ah, yes. Sounds like it. Interestingly, the Quantified Health Prize winner also recommends low-dose lithium, but for a different reason: its effect on long-term neural health.
I don't think it's really a different reason; also, AFAIK I copied all the QHP citations into my section.
Gwern's research, as linked here, is better than anything I could put together.
Are there studies to support the abdominal breathing bit? If so, how were they conducted?
The one I heard about, but have not been able to find the last few times I looked for it, investigated how cardiac arrest patients at a particular hospital breathed. All (nearly all?) of them were chest breathers, while about 25% of the general adult population breathes abdominally. I don't think I've seen a randomized trial that taught some subjects how to breathe abdominally and then compared their rates, which is what would give clearer evidence. My understanding of why is that abdominal breathing increases oxygen absorbed per breath, lowering total lung/heart effort. I don't know the terms to do a proper search of the medical literature, and would be interested in the results of someone with more domain-specific expertise investigating the issue.
What is your method of intermittent fasting?
Don't eat before noon or after 8 PM. Typically, that cashes out as eating between 1 and 7 because it's rarely convenient for me to start prepping food before noon, and I have a long habit of eating dinner at 5 to 6. On various days of the week (mostly for convenience reasons), I eat one huge meal, a big meal and a moderately sized meal, or three moderately sized meals, so my fasting period stretches from 16 hours at the shortest to ~21 hours at the longest. I'm not a particularly good storehouse for information on IF- I would look to people like Leangains or Precision Nutrition for more info.
Thank you. It seems like there are a lot of contradictory opinions on the subject :(
I seem to recall a study suggesting that it can be bad for adults to drink lots of milk (more than a cup a day).
Bad in what way? The majority of humanity is lactose intolerant and should not drink milk for that reason. And milk contains a bunch of fat and sugar which isn't exactly good for you if you drink extreme amounts. Is that what you are talking about, or is it something new?
I've found it: it was in “Fear of a Vegan Planet” by Mickey Z. It suggests milk can lower the pH of the blood, which the body then compensates for by taking calcium from the bones, citing the 1995 radio show “Natural Living”. (It doesn't look like as reliable a source to me now as I remembered.)
I've found materials both supporting and refuting this idea. It IS possible for diet to affect your blood pH, but whether or not that affects the bones is not clear. Here are two research papers that discuss the topic: and
Thank you

Is there anywhere I can find a decent analysis of the effectiveness and feasibility of our current methods of cryonic preservation?

(one that doesn't originate with a cryonics institute)


Well, that doesn't seem too difficult -

(one that doesn't originate with a cryonics institute)


So, who exactly do you expect to be doing this analysis? The most competent candidates are the cryobiologists, and they are ideologically committed* to cryonics not working and have in the past demonstrated their dishonesty**.

* Literally; I understand the bylaw banning any cryonicists from the main cryobiology association is still in effect.
** E.g. by claiming on TV that cryonics couldn't work because of the 'exploding lysosomes post-death' theory, even after experiments had disproven the theory.

Cryonicists have the same incentive to lie. Reading Mike Darwin's current article series on makes a good case that cryonics is currently broken.
I hope you appreciate the irony of bringing up Darwin's articles on the quality of cryopreservation in the context of someone demanding articles on quality by someone not associated with cryonics institutes.
No, since his articles make the case against current cryonics organisations, despite coming from a strong supporter of the idea.
Do you have a specific example of a pro-cryonics lie? Because as far as I can tell, Mike is arguing for incompetence and not dishonesty or ideological bias as the culprit.
Incompetence is at least as bad as dishonesty. Not sure if it can be distinguished.
No! The distinction not only exists but is incredibly important in this context. Incompetence is a problem of an unqualified person doing the job. It can be fixed by many things, e.g. better on-the-job training, better education, or experience. Replacing them with a more qualified candidate is also an option, assuming you can find one. With a dishonest person, you have a problem of values; they are likely to defect rather than behave superrationally in game-theoretic situations. The only way to deal with that is to keep them out of positions that require trust.

Dishonesty can be used to cover one's tracks when one is incompetent. (Bob Nelson was doing this.) I'm not arguing that incompetence isn't Bayesian evidence for dishonesty -- it is. However, there are plenty of other explanations for incompetence as well: cognitive bias (e.g. near/far bias), lack of relevant experience, personality not suited to the job, extreme difficulty of the job, lack of information and feedback to learn from mistakes, lack of time spent learning the job... Of all these, why did your mental pattern-matching algorithms choose to privilege dishonesty as likely to be prevalent?

Doesn't the fact that there is all this public information about their failings strike you as evidence that they are generally more interested in learning from their mistakes than in covering their tracks? I've even seen Max More (Alcor's current CEO) saying positive things about Chronosphere, despite having been personally named and criticized in several of Darwin's articles. The culture surrounding cryonics during the few years I've been observing it actually seems to be one of skeptical reserve and indeed hunger for criticism.

Moreover, the distinction cuts both ways: multiple cryobiologists who are highly competent in their field have repeatedly made demonstrably false statements about cryonics, and have demonstrated willingness to use political force to silence the opposition. There is no inherent contr
No idea. Particularly if all cryobiologists are so committed to discrediting cryonics that they'll ignore/distort the relevant science. I'm not sure how banning cryonicists* from the cryobiology association is a bad thing, though. Personally I think organisations like the American Psychiatric Association should follow suit and ban all those with financial ties to pharmaceutical companies. I just want to know how far cryonics needs to go in preventing information-theoretic death in order to allow people to be "brought back to life", and to what extent current cryonics can fulfil that criterion.

* This is assuming that by cryonicists you mean people who work for cryonics institutes or people who support cryonics without having an academic background in cryobiology.
There are cryobiologists who are cryonicists, e.g. the authors of this paper.
The paper does not mention cryonics, nor does the lead author's bio mention being a member of the Society for Cryobiology.
So the by-law bans anyone sympathetic to cryonics?
See this article by Mike Darwin.
Thanks! I'm starting to suspect that my dream of finding an impartial analysis of cryonics is doomed to be forever unfulfilled...
handoflixue
I've been finding to be a very insightful blog. I can't speak to the degree of bias of the author, but most of the posts I've read so far have been reasonably well cited.

I found it sort of terrifying to read the case reports he links in that comment - I read 101, 102, and 103, and they largely spoke to this being a distinctly amateur organization that is still running everything on hope and guesswork, not precise engineering/scientific principles. Case 101 in particular sort of horrifies me for the aspects of preserving someone who committed suicide, without any consent from the individual in question. I can't help but feel that "patient must be held in dry ice for at least two weeks" is also a rather bad sign.

Feel free to read them for yourself and draw your own conclusions - these reports are straight from CI itself, so you can reasonably assume that, if anything, they have a bias towards portraying themselves favorably.
Paul Crowley
Only one technical analysis of cryonics which concludes it won't work has ever been written:
Interesting, thanks! Have you come across any analysis that establishes cryonics as something that prevents information-theoretic death?
Paul Crowley
We don't know whether it does or not. The current most in-depth discussion is Scientific Justification of Cryonics Practice

I've read the metaethics sequence twice and am still unclear on what the basic points it's trying to get across are. (I read it and get to the end and wonder where the "there" is there. What I got from it is "our morality is what we evolved, and humans are all we have therefore it is fundamentally good and therefore it deserves to control the entire future", which sounds silly when I put it like that.) Would anyone dare summarise it?

Morality is good because goals like joy and beauty are good. (For qualifications, see Appendices A through OmegaOne.) This seems like a tautology, meaning that if we figure out the definition of morality it will contain a list of "good" goals like those. We evolved to care about goodness because of events that could easily have turned out differently, in which case "we" would care about some other list. But, and here it gets tricky, our Good function says we shouldn't care about that other list. The function does not recognize evolutionary causes as reason to care. In fact, it does not contain any representation of itself. This is a feature. We want the future to contain joy, beauty, etc, not just 'whatever humans want at the time,' because an AI or similar genie could and probably would change what we want if we told it to produce the latter.

Okay, now this definitely sounds like standard moral relativism to me. It's just got the caveat that obviously we endorse our own version of morality, and that's the ground on which we make our moral judgements. Which is known as appraiser relativism.
I must confess I do not understand what you just said at all. Specifically:
  • the second sentence: could you please expand on that?
  • I think I get that the function does not evaluate itself at all, and if you ask it just says "it's just good 'cos it is, all right?"
  • Why is this a feature? (I suspect the password is "Löb's theorem", and only almost understand why.)
  • The last bit appears to be what I meant by "therefore it deserves to control the entire future." It strikes me as insufficient reason to conclude that this can in no way be improved, ever.
Does the sequence show a map of how to build metamorality from the ground up, much as writing the friendly AI will need to work from the ground up?
I'll try: any claim that a fundamental/terminal moral goal 'is good' reduces to a tautology on this view, because "good" doesn't have anything to it besides these goals. The speaker's definition of goodness makes every true claim of this kind true by definition. (Though the more practical statements involve inference. I started to say it must be all logical inference, realized EY could not possibly have said that, and confirmed that in fact he did not.) Though technically it may see the act of caring about goodness as good. So I have to qualify what I said before that way. Because if the function could look at the mechanical, causal steps it takes, and declare them perfectly reliable, it would lead to a flat self-contradiction by Löb's Theorem. The other way looks like a contradiction but isn't. (We think.)
Thank you, this helps a lot. Ooh yeah, didn't spot that one. (As someone who spent a lot of time when younger thinking about this and trying to be a good person, I certainly should have spotted this.)

This comment by Richard Chappell explained clearly and concisely Eliezer's metaethical views. It was very highly upvoted, so apparently the collective wisdom of the community considered it accurate. It didn't receive an explicit endorsement by Eliezer, though.

From the comment by Richard Chappell: People really think EY is saying this? It looks to me like a basic Egoist stance, where "your values" also include your moral preferences. That is my position, but I don't think EY is on board. "Shut up and multiply" implies a symmetry in value between different people that isn't implied by the above. Similarly, the diversion into mathematical idealization seemed like a maneuver toward Objective Morality - One Algorithm to Bind Them, One Algorithm to Rule them All. Everyone gets their own algorithm as the standard of right and wrong? Fantastic, if it were true, but that's not how I read EY. It's strange, because Richard seems to say that EY agrees with me, while I think EY agrees with him.
I think you are mixing up object-level ethics and metaethics here. You seem to be contrasting an Egoist position ("everyone should do what they want") with an impersonal utilitarian one ("everyone should do what is good for everyone, shutting up and multiplying"). But the dispute is about what "should", "right" and related words mean, not about what should be done. Eliezer (in Richard's interpretation) says that when someone says "Action A is right" (or "should be done"), the meaning of this is roughly "A promotes ultimate goals XYZ". Here XYZ is in fact the outcome of a complicated computation based on the speaker's state of mind, which can be translated roughly as "the speaker's terminal values" (for example, for a sincere philanthropist XYZ might be "everyone gets joy, happiness, freedom, etc"). But the fact that XYZ are the speaker's terminal values is not part of the meaning of "right", so it is not inconsistent for someone to say "Everyone should promote XYZ, even if they don't want it" (e.g. "Babyeaters should not eat babies"). And needless to say, XYZ might include generalized utilitarian values like "everyone gets their preferences satisfied", in which case impersonal, shut-up-and-multiply utilitarianism is what is needed to make actual decisions for concrete cases.
Of course it's about both. You can define labels in any way you like. In the end, your definition better be useful for communicating concepts with other people, or it's not a good definition.

Let's define "yummy". I put food in my mouth. Taste buds fire, neural impulses propagate from neuron to neuron, and eventually my mind evaluates how yummy it is. Similar events happen for you. Your taste buds fire, your neural impulses propagate, and your mind evaluates how yummy it is. Your taste buds are not mine, and your neural networks are not mine, so your response and my response are not identical. If I make a definition of "yummy" that entails that what you find yummy is not in fact yummy, I've created a definition that is useless for dealing with the reality of what you find yummy.

From my inside view of yummy, of course you're just wrong if you think root beer isn't yummy - I taste root beer, and it is yummy. But being a conceptual creature, I have more than the inside view; I have an outside view as well, of you, and him, and her, and ultimately of me too. So when I talk about yummy with other people, I recognize that their inside view is not identical to mine, and so use a definition based on the outside view, so that we can actually be talking about the same thing, instead of throwing our differing inside views at each other.

Discussion with the inside view: "Let's get root beer." "What? Root beer sucks!" "Root beer is yummy!" "Is not!" "Is too!"

Discussion with the outside view: "Let's get root beer." "What? Root beer sucks!" "You don't find root beer yummy?" "No. Blech." "OK, I'm getting a root beer." "And I pick pepsi."

If you've tied yourself up in conceptual knots, and concluded that root beer really isn't yummy for me, even though my yummy detector fires whenever I have root beer, you're just confused and not talking about reality. This is the problem. You've divorced your definition from the relevant part of reality - the speaker's terminal values, and
Shifting the lump under the rug but not getting rid of it is how it looks to me too. But I don't understand the rest of that comment and will need to think harder about it (when I'm less sleep-deprived). I note that that's the comment Lukeprog flagged as his favourite answer, but of course I can't tell if it got the upvotes before or after he did so.
Let me try... Something is green if it emits or scatters much more light between 520 and 570 nm than between 400 and 520 nm or between 570 and 700 nm. That's what green means, and it also applies to places where there are no humans: it still makes sense to ask whether the skin of tyrannosaurs was green even though there were no humans back then. On the other hand, the reason why we find the concept of ‘something which emits or scatters much more light between 520 and 570 nm than between 400 and 520 nm or between 570 and 700 nm’ important enough to have a word (green) for it is that for evolutionary reasons we have cone cells which work in those ranges; if we saw in the ultraviolet, we might have a word, say breen, for ‘something which emits or scatters much more light between 260 and 285 nm than between 200 and 260 nm or between 285 and 350 nm’. This doesn't mean that greenness is relative, though.

Likewise, something is good if it leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc. That's what good means, and it also applies to places where there are no humans: it still makes sense to ask whether it's good for Babyeaters to eat their children even though there are no humans on that planet. On the other hand, the reason why we find the concept of ‘something which leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc.’ important enough to have a word (good) for it is that for evolutionary reasons we value such kind of things; if we valued heaps composed of prime numbers of pebbles, we might have a word, say pood, for
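To make the rigid-designator point concrete, the wavelength criterion can be written as a toy sketch. The band boundaries (520-570 nm vs. the neighbouring bands) come from the comment above; the `is_green` name, the dict-of-bands spectrum representation, and the 2x threshold for "much more" light are illustrative assumptions, not anything the comment specifies:

```python
# Toy sketch: 'green' as a fixed physical criterion, independent of observers.
# Band boundaries are from the comment; the spectrum format and the 2x
# "much more" threshold are assumed for illustration.
def is_green(spectrum, ratio=2.0):
    """spectrum: dict mapping (lo_nm, hi_nm) wavelength bands to intensity."""
    green = sum(v for (lo, hi), v in spectrum.items() if lo >= 520 and hi <= 570)
    other = sum(v for (lo, hi), v in spectrum.items() if hi <= 520 or lo >= 570)
    # "Much more" light inside 520-570 nm than outside it (threshold assumed).
    return green > ratio * other

# Hypothetical measured spectra: a leaf and a patch of blue sky.
leaf = {(520, 570): 9.0, (400, 520): 1.5, (570, 700): 2.0}
sky = {(400, 520): 8.0, (520, 570): 2.0, (570, 700): 1.0}
```

The function returns the same verdict for a tyrannosaur's skin whether or not any observer with green-sensitive cones exists; only the fact that we bothered naming this particular criterion depends on our evolved hardware.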
I have recently read this post and thought it describes very well how I always thought about morality, even though it talks about 'sexiness'. Would reading the metaethics sequence explain to me that it would be wrong to view morality in a similar fashion as sexiness?
One part of it that did turn out well, in my opinion, is Probability is Objectively Subjective and related posts. Eliezer's metaethical theory is, unless I'm mistaken, an effort to do for naive moral intuitions what Bayesianism should do for naive probabilistic intuitions.
I think it's just Meta-ethical moral relativism.
"I am not a moral relativist."
"I am not a meta-ethical relativist."
"what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain."

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that MER is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-Ethical Relativism, at least as summarized by Wikipedia, describes this fact with the sentence "terms such as "good," "bad," "right"...

In, user steven wrote "When X (an agent) judges that Y (another agent) should Z (take some action, make some decision), X is judging that Z is the solution to the problem W (perhaps increasing a world's measure under some optimization criterion), where W is a rigid designator for the problem structure implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments. (Or at least X is asserting that it's shared.) Due to the nature of W, becoming informed will cause X and Y to get closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral." with which Eliezer agreed.

This means that, even though people might presently have different things in mind when they say something is "good", Eliezer does not regard their/our/his present ideas as either the meaning of their-form-of-good or his-form-of-good. The meaning of good is not "the things someone/anyone personally, presently finds morally compelling", but something like "the fixed facts that are found but not defined by clarifying the result of applying the shared human evaluative cognitive machinery to a wide variety of situations under reflectively ideal conditions of information." That is to say, Eliezer thinks, not only that moral questions are well defined, "objective", in a realist or cognitivist way, but that our present explicit-moralities all have a single, fixed, external referent which is constructively revealed via the moral computations that weigh our many criteria.

I haven't finished reading CEV, but here's a quote from Levels of Organization that seems relevant: "The target matter of Artificial Intelligence is not the surface variation that makes one human slightly smarter than another human, but rather the vast store of complexity that separates a human from an amoeba". Similarly, the target matter of inferences that figure out the content of morality is not the surfac
Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate; people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio on aggregate utilitarianism as a philosophy and I'm pretty sure Eliezer agrees with him, and has more or less explicitly identified it as a failure mode of AI development. I doubt these are beliefs that really converge with more information and reflection. Or in steven's formulation, I suspect that relatively few agents actually have Ws in common; his definition presupposes that there's a problem structure "implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments". I'm arguing that many agents have sufficiently different implicit problem structures that, for instance, by that definition Eliezer and Robin Hanson can't really make "should" statements to each other.
Just getting citations out of the way, Eliezer talked about the repugnant conclusion here and here. He argues for shared W in Psychological Unity and Moral Disagreement. Kaj Sotala wrote a notable reply to Psychological Unity, Psychological Diversity. Finally Coherent Extrapolated Volition is all about finding a way to unfold present-explicit-moralities into that shared-should that he believes in, so I'd expect to see some arguments there. Now, doesn't the state of the world today suggest that human explicit-moralities are close enough that we can live together in a Hubble volume without too many wars, without a thousand broken coalitions of support over sides of irreconcilable differences, without blowing ourselves up because the universe would be better with no life than with the evil monsters in that tribe on the other side of the river? Human concepts are similar enough that we can talk to each other. Human aesthetics are similar enough that there's a billion dollar video game industry. Human emotions are similar enough that Macbeth is still being produced three hundred years later on the other side of the globe. We have the same anatomical and functional regions in our brains. Parents everywhere use baby talk. On all six populated continents there are countries in which more than half of the population identifies with the Christian religions. For all those similarities, is humanity really going to be split over the Repugnant Conclusion? Even if the Repugnant Conclusion is more of a challenge than muscling past a few inductive biases (scope insensitivity and the attribute substitution heuristic are also universal), I think we have some decent prospect for a future in which you don't have to kill me. Whatever will help us to get to that future, that's what I'm looking for when I say "right". No matter how small our shared values are once we've felt the weight of relevant moral arguments, that's what we need to find.
This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist, about which see my last paragraph.)

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real. In a number of places Eliezer commented that he's not too worried about, say, two systems morality_1 and morality_2 that differ in the third decimal place. I think it's actually really interesting when they differ in the third decimal place; it's probably not important to the project of designing an AI, but I don't find that project terribly interesting so that doesn't bother me. But I'm also more willing to say to someone, "We have nothing to argue about [on this subject], we are only different optimization processes." With most of my friends I really do have to say this, as far as I can tell, on at least one subject.

However, I really truly don't think this is as all-or-nothing as you or Eliezer seem to paint it. First, because while morality may be a compact algorithm relative to its output, it can still be pretty big, and disagreeing seriously about one component doesn't mean you don't agree about the other several hundred. (A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy; and as far as I can tell this is more or less irreducible in the specification for all of us.) But I can still talk to these people and have rewarding conversations on other subjects. Second, because I realize there are other means of persuasion than argument. You can't argue someone into changing their terminal values, but yo
Calling something a terminal value is the default behavior when humans look for a justification and don't find anything. This happens because we perceive little of our own mental processes and in the absence of that information we form post-hoc rationalizations. In short, we know very little about our own values. But that lack of retrieved / constructed justification doesn't mean it's impossible to unpack moral intuitions into algorithms so that we can more fully debate which factors we recognize and find relevant. Your friends can understand why humans have positive personality descriptors for people who don't get angry in various situations: descriptors like reflective, charming, polite, solemn, respecting, humble, tranquil, agreeable, open-minded, approachable, cooperative, curious, hospitable, sensitive, sympathetic, trusting, merciful, gracious. You can understand why we have positive personality descriptors for people who get angry in various situations: descriptors like impartial, loyal, decent, passionate, courageous, boldness, leadership, strength, resilience, candor, vigilance, independence, reputation, and dignity. Both you and your friends can see how either group could pattern match their behavioral bias as being friendly, supportive, mature, disciplined, or prudent. These are not deep variations, they are relative strengths of reliance on the exact same intuitions. Stories strengthen our associations of different emotions in response to analogous situations, which doesn't have much of a converging effect (Edit: unless, you know, it's something like the bible that a billion people read. That certainly pushes humanity in some direction), but they can also create associations to moral evaluative machinery that previously wasn't doing its job. There's nothing arational about this: neurons firing in the inferior frontal gyrus are evidence relevant to a certain useful categorizing inference, "things which are sentient". I'm not in a mood to argue defin
You're...very certain of what I understand. And of the implications of that understanding. More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition they should converge sufficiently other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error. I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same. As for the last bit--I don't want to argue definitions either. It's a stupid pastime. But to the extent Eliezer claims not to be a meta-ethical relativist he's doing it purely through a definitional argument.
He does intend to convey something real and nontrivial (well, some people might find it trivial, but enough people don't that it is important to be explicit) by saying that he is not a meta-ethical relativist. The basic idea is that, while his brain is the causal reason for him wanting to do certain things, it is not referenced in the abstract computation that defines what is right. To use a metaphor from the meta-ethics sequence, it is a fact about a calculator that it is computing 1234 * 5678, but the fact that 1234 * 5678 = 7 006 652 is not a fact about that calculator. This distinguishes him from some types of relativism, which I would guess to be the most common types. I am unsure whether people understand that he is trying to draw this distinction and still think that it is misleading to say that he is not a moral relativist, or whether people are confused/have a different explanation for why he does not identify as a relativist.
Do you know anyone who never makes computational errors? If 'mistakes' happen at all, we would expect to see them in cases involving tribal loyalties. See von Neumann and those who trusted him on hidden variables.
The claim wasn't that it happens too often to attribute to computation error, but that the types of differences seem unlikely to stem from computational errors.
The problem is, EY may just be contradicting himself, or he may be being ambiguous, even deliberately so. I think his views could be clarified in a moment if he stated clearly whether this abstract computation is identical for everyone. Is it AC_219387209 for all of us, or AC_42398732 for you and AC_23479843 for me, with the proviso that it might turn out that AC_42398732 = AC_23479843? Your quote makes it appear the former. Other quotes in this thread about a "shared W" point to that as well. Then again, quotes in the same article make it appear the latter, as in: We're all busy playing EY Exegesis. Doesn't that strike anyone else as peculiar? He's not dead. He's on the list. And he knows enough about communication and conceptualization to have been clear in the first place. And yet on such a basic point, what he writes seems to go round and round and we're not clear what the answer is. And this, after years of opportunity for clarification. It brings to mind Quirrell: If you're trying to convince people of your morality, and they have already picked teams, there is an advantage in letting it appear to each that they haven't really changed sides.
Ah, neat, you found exactly what it is. Although the LW version is a bit stronger, since it involves thoughts like "the cause of me thinking some things are moral does not come from interacting with some mysterious substance of moralness."
That's it? That's the whole takeaway? I mean, I can accept "the answer is there is no answer" (just as there is no point to existence of itself, we're just here and have to work out what to do for ourselves). It just seems rather a lot of text to get that across.
Well, just because there is no moral argument that will convince any possible intelligence doesn't mean there's nothing left to explore. For example, you might apply the "what words mean" posts to explore what people mean when they say "do the right thing," and how to program that into an AI :P
My summary is pretty close to yours. I would summarize it as:

* All questions about the morality of actions can be restated as questions about the moral value of the states of the world that those actions give rise to.
* All questions about the moral value of the states of the world can in principle be answered by evaluating those world-states in terms of the various things we've evolved to value, although actually performing that evaluation is difficult.
* Questions about whether the moral value of states of the world should be evaluated in terms of the things we've evolved to value, as opposed to evaluated in terms of something else, can be answered by pointing out that the set of things we've evolved to value is what right means and is therefore definitionally the right set of things to use.

I consider that third point kind of silly, incidentally.
Yeah, that's the bit that looks like begging the question. The sequence seems to me to fail to build its results from atoms.
Well, it works OK if you give up on the idea that "right" has some other meaning, which he spent rather a long time in that sequence trying to convince people to give up on. So perhaps that's the piece that failed to work. I mean, once you get rid of that idea, then saying that "right" means the values we all happen to have (positing that there actually is some set of values X such that we all have X) is rather a lot like saying a meter is the distance light travels in 1 ⁄ 299,792,458 of a second... it's arbitrary, sure, but it's not unreasonable. Personally, I would approach it from the other direction. "Maybe X is right, maybe it isn't, maybe both, maybe neither. What does it matter? How would you ever tell? What is added to the discussion by talking about it? X is what we value; it would be absurd to optimize for anything else. We evaluate in terms of what we care about because we care about it; to talk about it being "right" or "not right," insofar as those words don't mean "what we value" and "what we don't value", adds nothing to the discussion." But saying that requires me to embrace a certain kind of pragmatism that is, er, socially problematic to be seen embracing.
Morality is a sense, similar to taste or vision. If I eat a food, I can react by going 'yummy' or 'blech'. If I observe an action, I can react by going 'good' or 'evil'. Just like your other senses, it's not 100% reliable. Kids eventually learn that while candy is 'yummy', eating nothing but candy is 'blech' - your first-order sensory data is being corrected by a higher-order understanding (whether this be "eating candy is nutritionally bad" or "I get a stomach ache on days I just eat candy"). The above paragraph ties in with the idea of "The lens that sees its flaws". We can't build a model of "right and wrong" from scratch any more than we could build a sense of yumminess from scratch; you have to work with the actual sensory input you have. To return to the food analogy, a diet consisting of ostensibly ideal food, but which lacks 'yumminess', will fail because almost no one can actually keep to it. Equally, our morality has to be based in our actual gut reaction of 'goodness' - you can't just define a mathematical model and expect people to follow it. Finally, and most important to the idea of "CEV", is the idea that, just as science leads us to a greater understanding of nutrition and what actually works for us, we can also work towards a scientific understanding of morality. As an example, while 'revenge' is a very emotionally-satisfying tactic, it's not always an effective tactic; just like candy, it's something that needs to be understood and used in moderation. Part of growing up as a kid is learning to eat right. Part of growing up as a society is learning to moralize correctly :)
Having flawed vision means that you might, for example, fail to see an object. What does having flawed morality cause you to be incorrect about?
From Bury the Chains, the idea that slavery was wrong hit England as a surprise. Quakers and Evangelicals were opposed to slavery, but the general public went from oblivious to involved very quickly.
It can mean you value short-term reactions instead of long-term consequences. A better analogy would be flavor: candy tastes delicious, but its long-term consequences are undesirable. In this case, a flawed morality leads you to conclude that because something registers as 'righteous' (say, slaying all the unbelievers), you should go ahead and do it, without realizing the consequences ("because this made everyone hate us, we have even less ability to slay/convert future infidels"). On another level, one can also realize that values conflict ("I really like the taste of soda, but it makes my stomach upset!") -> ("I really like killing heretics, but isn't murder technically a sin?") Edit: There's obviously numerous other flaws that can occur (you might not notice that something is "evil" until you've done it and are feeling remorse, to try and more tightly parallel your example). This isn't meant to be comprehensive :)

I am wondering if risk analysis and mitigation is a separate "rationality" skill. I am not talking about some esoteric existential risk, just your basic garden-variety everyday stuff. While there are several related items here (ugh fields, halo effect), I do not recall EY or anyone else addressing the issue head-on, so feel free to point me to the right discussion.

Two related points that I think are very important and not dealt with: * Exploration vs. exploitation (or when to stop doing research). * Judgment under uncertainty (or how to deal with unpredictable long term consequences).

A couple of embarrassingly basic physics questions, inspired by recent discussions here:

  • On occasion people will speak of some object "exiting one's future light cone". How is it possible to escape a light cone without traveling in a spacelike direction?

  • Does any interpretation of quantum mechanics offer a satisfactory derivation of the Born rule? If so, why are interpretations that don't offer one still considered candidates? If not, why do people speak as if the lack of such a derivation were a point against MWI?

Suppose (just to fix ideas) that you are at rest, in some coordinate system. Call FLC(t) your future light cone from your space-time position at time t.

An object that is with you at t=0 cannot exit FLC(0), no matter how it moves from there on. But it can accelerate in such a way that its trajectory from some time T > 0 onward lies entirely outside FLC(T). Then it makes sense to say that the object has exited your future light cone: nothing you do after time T can affect it.
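A concrete back-of-the-envelope version of this (my own illustration, not from the comment above): in special relativity, an observer undergoing constant proper acceleration a has a "Rindler horizon" at distance c²/a behind it, so an object accelerating away at 1 g forever can outrun any light signal sent after it from beyond roughly a light-year away.

```python
# Rindler horizon distance c^2 / a for constant proper acceleration a = 1 g.
# Light signals emitted from farther behind than this never catch up, which
# is exactly the "exited your future light cone" situation described above.
C = 299_792_458.0          # speed of light, m/s
G = 9.81                   # 1 g, m/s^2
LIGHT_YEAR = 9.4607e15     # meters per light-year

horizon_m = C**2 / G
horizon_ly = horizon_m / LIGHT_YEAR
print(f"Rindler horizon for 1 g: {horizon_ly:.2f} light-years")  # ~0.97
```

So an object that starts about a light-year away and accelerates away at a comfortable 1 g is already forever out of your reach.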

Well, every object is separated from you by a spacelike interval. If some distant object starts accelerating quickly enough, it may become forever inaccessible. Also, an object distant enough (on scales far larger than galaxy superclusters) can have a Hubble recession speed greater than c relative to us.
Are you sure about this? I don't understand relativity much, but I would suspect this to be another case of "by adding speeds classically, it would be greater than c, but by applying proper relativistic calculation it turns out to be always less than c".
It looks like it is even weirder. The relativistic velocity arithmetic you mention belongs to special relativity, i.e. the local flat-space case. The Hubble recession speed is about ongoing global distortion of space, i.e. it is strictly a general-relativity effect. As far as I know, it is actually measured from the momentum change of photons (redshift), but it can be theoretically defined using the time needed for a lightspeed round trip. When this relative speed is small, everything is fine; if I understand correctly, if the Hubble constant stays constant in the long term and there are large enough distances in the universe, it would take a ray of light exponential time (not linear) to cross distances above some threshold. In the inflationary model of the early universe, there is a strange phase in which distances grow faster than light can cover them; this is possible because it is not motion of matter through space, but a change in the structure of space itself.
The primary argument in favor of MWI is that it doesn't require you to postulate additional natural laws other than the basic ones we know for quantum evolution. This argument can pretty easily be criticized on the grounds that yes, MWI does require you to know an additional fact about the universe (the Born rule) before you can actually generate correct predictions.
Usually people do include traveling in a spacelike direction as a component of the 'exiting'. But the alternative is for the objects ('you' and 'the other thing') to be at rest relative to each other but a long distance apart - while living in a universe that does stuff like this. ie. Imagine ants on the outside of a balloon that is getting blown up at an accelerating rate.
Nobody has derived the Born rule, though I think some have managed to argue that it is the only rule that makes sense (I'm not sure how successful they were). I think people may count it against MWI either out of a simple double standard, or because it's more obvious as an assumption, since it's the only one MWI needs to make. In other theories the rule may be hidden in with the other machinery, like collapse, so it doesn't seem like a single assumption but a natural part of the theory. Since MWI is so lean, the assumed rule may be more noticeable, especially to people who are seeing it from the other side of the fence.

What does FOOM stand for?

It's not an acronym. It's an onomatopoeia for what happens when an AI self-recursively improves and becomes unimaginably powerful.

(A regular explosion goes BOOM; an intelligence explosion goes FOOM.)

I added it to the jargon page.


It's the sound of an AI passing by, I guess.


Very rapid increase/acceleration. Originally it's the sound you hear if you pour gasoline on the ground and set fire to it.


What do people mean here when they say "acausal"?

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

"Acausal" is used as a contrast to Causal Decision Theory (CDT). CDT states that decisions should be evaluated with respect to their causal consequences; ie if there's no way for a decision to have a causal impact on something, then it is ignored. (More precisely, in terms of Pearl's Causality, CDT is equivalent to having your decision conduct a counterfactual surgery on a Directed Acyclic Graph that represents the world, with the directions representing causality, then updating nodes affected by the decision.) However, there is a class of decisions for which your decision literally does have an acausal impact. The classic example is Newcomb's Problem, in which another agent uses a simulation of your decision to decide whether or not to put money in a box; however, the simulation took place before your actual decision, and so the money is already in the box or not by the time you're making your decision.

"Acausal" refers to anything falling in this category of decisions that have impacts that do not result causally from your decisions or actions. One example is, as above, Newcomb's Problem; other examples include:

... (read more)

One must distinguish different varieties of MWI. There is an old version of the interpretation (which, I think, is basically what most informed laypeople think of when they hear "MWI") according to which worlds cannot interact. This is because "world-splitting" is a postulate that is added to the Schrodinger dynamics. Whenever a quantum measurement occurs, the entire universe (the ordinary 3+1-dimensional universe we are all familiar with) duplicates (except that the two versions have different outcomes for the measurement). It's basically as mysterious a process as collapse, perhaps even more mysterious.

This is different from the MWI most contemporary proponents accept. This MWI (also called "Everettianism" or "The Theory of the Universal Wavefunction" or...) does not actually have full-fledged separate universes. The fundamental ontology is just a single wavefunction. When macroscopic branches of the wavefunction are sufficiently separate in configuration space, one can loosely describe it as world-splitting. But there is nothing preventing these branches from interfering in principle, just as microscopic branches interfere in the two-slit ... (read more)

Suppose I use the luck of Mat Cauthon to pick a random direction to fly my perfect spaceship. Assuming I live forever, do I eventually end up in a world that split from this world?

No. The splitting is not in physical space (the space through which you travel in a spaceship), but in configuration space. Each point in configuration space represents a particular arrangement of fundamental particles in real space.

Moving in real space changes your position in configuration space of course, but this doesn't mean you'll eventually move out of your old branch into a new one. After all, the branches aren't static. You moving in real space is a particular aspect of the evolution of the universal wavefunction. Specifically, your branch (your world) is moving in configuration space.

Don't think of the "worlds" in MWI as places. It's more accurate to think of them as different (evolving) narratives or histories. Splitting of worlds is a bifurcation of narratives. Moving around in real space doesn't change the narrative you're a part of, it just adds a little more to it. Narratives can collide, as in the double slit experiment, which leads to things appearing as if both (apparently incompatible) narratives are true -- the particle went through both slits. But we don't see this collision of narratives at the macroscopic level.

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

To expand on what pragmatist said: The wavefunction started off concentrated in a tiny corner of a ridiculously high-dimensional space (configuration space has several dimensions for every particle), and then spread out in a very non-uniform way as time passed.

In many cases, the wavefunction's rule for spreading out (the Schrödinger equation) allows for two "blobs" to "separate" and then "collide again" (thus the two-slit experiment, Feynman paths and all sorts of wavelike behavior). The quote marks around these are because it's not ever like perfect physical separation, more like the way that the pointwise sum of two Gaussian functions with very different means looks like two "separated" blobs.
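The "sum of two Gaussians" picture can be made concrete in a few lines (illustrative numbers only): the sum is a single function that is nonzero everywhere, yet between the two means it is so tiny that the blobs look fully separated.

```python
# Pointwise sum of two Gaussians with well-separated means: one function,
# but for practical purposes it behaves like two isolated "blobs".
import math

def gaussian(x, mean, sigma=1.0):
    return math.exp(-((x - mean) ** 2) / (2 * sigma**2))

def blob_sum(x):
    return gaussian(x, mean=0.0) + gaussian(x, mean=20.0)

print(blob_sum(0.0))    # ~1.0: at the first blob, the second is negligible
print(blob_sum(10.0))   # ~3.9e-22: between the blobs -- tiny but nonzero
```

That "tiny but nonzero" middle is the sense in which branches are never perfectly separated, only effectively so.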

But certain kinds of interactions (especially those that lead to a cascade of other interactions) correspond to those blobs "losing" each other. And if they do so, then it's highly unli... (read more)

What do people mean here when they say "acausal"?

As I understand it: If you draw out events as a DAG with arrows representing causality, then A acausally affects B in the case that there is no path from A to B, and yet a change to A necessitates a change to B, normally because of either a shared ancestor or a logical property.

I most often use it informally to mean "contrary to our intuitive notions of causality, such as the idea that causality must run forward in time", instead of something formal having to do with DAGs. Because from what I understand, causality theorists still disagree on how to formalize causality (e.g., what constitutes a DAG that correctly represents causality in a given situation), and it seems possible to have a decision theory (like UDT) that doesn't make use of any formal definition of causality at all.

Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don't think that's an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.
Paul Crowley
Broadly, branches become less likely to interact as they become more dissimilar; dissimilarity tends to bring about dissimilarity, so they can quickly diverge to a point where the probability of further interaction is negligible. Note that at least as Eliezer tells it, branches aren't ontologically basic objects in MWI any more than chairs are; they're rough high-level abstractions we imagine when we think about MWI. If you want a less confused understanding than this, I don't have a better suggestion than actually reading the quantum physics sequence!
I found an excellent answer here
I think it depends on perspective. We notice worlds interfering with each other in the double slit experiment. I think that maybe once you are in a world you no longer see evidence of it interfering with other worlds? I'm not really sure.
Pretty sure double slit stuff is an effect of wave-particle duality, which is just as consistent with MWI as with waveform collapse.
Basically, if two of them evolve into the same "world", they interfere. It could be constructive or destructive. It averages out to be that it occurs as often as you'd expect, so outside of stuff like the double-slit experiment, they won't really interact.
Hmm. I'm also pretty sure that the double-slit experiments are not evidence of MWI vs. waveform collapse.
They are evidence against wave-form collapse, in that they give a lower bound as to when it must occur. Since, if it does exist, it's fairly likely that waveform collapse happens at a really extreme point, there's really only a fairly small amount of evidence you can get against waveform collapse without something that disproves MWI too. The reason MWI is more likely is Occam's razor, not evidence.
Well, I tried to understand some double-slit corner cases. If you read a classical Copenhagen-approach quantum physics textbook, it is hard to describe what happens if you install an imprecise particle detector that is securely protected from any attempt by the experimenter to ever read it. Of course, in some cases the Penrose model and MWI are hard to distinguish, because gravitons are hard to screen off and can cause entanglement over large distances.
Non-traditional notions of causality as in TDT such as causality that runs backwards in time.

People talk about using their 'mental model' of person X fairly often. Is there an actual technique for doing this or is it just a turn of phrase?

It's from psychology: it's where an intelligence develops a model of a thing and then mentally simulates what will happen to it given X. Caledonian crows are capable of developing mental models in solving problems, for example. A mental model of a person is basically where you've acquired enough information about them to approximate their intentions or actions. (Or you might use existing archetypes or your own personality for modeling--put yourself in their shoes.) For example, a lot of the off-color jokes by Jimmy Carr would seem malicious from a stranger, misogynist, or racist, whereas you can see with someone like Carr that the intention is to derive amusement from the especially offensive rather than to denigrate a group.
Not a conscious technique. When you get to know a person you form some kind of brain structure which allows you to imagine that person when they aren't around, and which makes some behaviors seem more realistic/in character for them than some other behaviors. This structure is your mental model of that person.
I use this phrase a lot; in my case it's just a turn of phrase. Can't speak for anyone else, though.
Sorta both. Basically your mental model of someone is anything internal that you use to predict their (re)actions.

Why are you called OpenThreadGuy instead of OpenThreadPerson?

I'm a guy.

Thanks for a most useful thread, by the way.
Let it be OpenThreadGuy and OpenThreadLady, by turns.

OpenThreadAgent, you speciesist.

Way to be open-minded, you personist.
Hey! My cat counts as an agent, though I'm not sure if he counts as a person. So some of my favourite domesticated dependents are agents that aren't people!
Be thankful that the ant colony nearest to you has more useful tasks than asking you how it should be divided up w.r.t. agency.
They're unlikely to post here unless they've read GEB.
Well ... a script could open threads like "open thread for a (half of) a month" and "the monthly rationality quotes" and some others automatically, driven only by the calendar.

Does ZF assert the existence of an actual formula, that one could express in ZF with a finite string of symbols, defining a well-ordering on the-real-numbers-as-we-know-them? I know it 'proves' the existence of a well-ordering on the set we'd call the real numbers if we endorsed the statement "V=L". I want to know about the nature of that set, and how much ZF can prove without V=L or any other form of choice.


ZF is consistent with many negations of strong choice. For example, ZF is consistent with Lebesgue measurability of every subset of R. A well-ordering of R is enough to create an unmeasurable set.

So, if ZF could prove the existence of such a formula, ZF+measurability would prove a contradiction; but ZF+measurability is equiconsistent with ZF, so ZF would be inconsistent.

It is very hard to say anything about any well-ordering of R; they are monster constructions...

Does a well-ordering on the constructable version of R provably do this? I fear I can't tell if you've addressed my question yet.
The constructible version of R (if it is a proper subset of R, not the whole thing) is just like Q: a dense, small part of the whole, usually of measure zero. So this construction will yield something of measure zero if you define a measure on the whole of R.
Here's a sort of related argument (very much not a math expert here): Any well ordering on the real numbers must be non-computable. If there was a computable order, you could establish a 1-1 correspondence between the natural numbers and the reals (since each real would be emitted on the nth step of the algorithm). Is that remotely right?
Well, yes, but mostly because most real numbers cannot ever be specified to an algorithm or received back from it. So you are right, ordering of R is about incomputable things. The difference here is that we talk about ZFC formulas which can include quite powerful tricks easily.
Oh right, I forgot that real numbers could be individually non-computable in the first place.
This is true, but not, I think, the correct point to focus on. The big obstacle is that the real numbers are uncountable. Of course, their uncountability is also why there exist uncomputable reals, but let's put that aside for now, because the computability of individual reals is not the point. The point is that computers operate on finite strings over finite alphabets, and there are only countably many of these. In order to do anything with a computer, you must first translate it into a problem about finite strings over a finite alphabet. (And the encoding matters, too -- "compute the diameter of a graph" is not a well-specified problem, because "compute the diameter of a graph, specified as an adjacency matrix" and "compute the diameter of a graph, specified with connectivity lists" are different problems. And of course I was implicitly assuming here the output was in binary -- if we wanted it in unary, that would again technically be a different problem.) So the whole idea of an algorithm that operates on or outputs real numbers is nonsensical, because there are only countably many finite strings over a given finite alphabet, and so no encoding is possible. Only specified subsets of real numbers with specified encodings can go into computers. Notice that the whole notion of the computability of a real number appears nowhere in the above. It's also a consequence of uncountability, but really not the point here.

I'm going to go into your original comment a bit more here: You seem to be under the impression that a countably infinite well-ordering must be isomorphic to omega, or at least that a computable one must be. I don't think that would make for a useful definition of a computable well-ordering. Normally we define a well-ordering to be computable if it is computable as a relation, i.e. there is an algorithm that given two elements of your set will compare them according to your well-ordering. We can then define a well-order type (ordinal) to be computab
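To illustrate that definition of a computable well-ordering as a computable comparison relation, here is a toy example of my own (not from the comment): a computable well-ordering of the naturals whose order type is omega + omega rather than omega.

```python
# Well-order N as: all evens first (in usual order), then all odds.
# The order type is omega + omega, yet the comparison relation is
# trivially computable -- which is all the standard definition asks for.
def precedes(a, b):
    """True iff a comes strictly before b in the evens-then-odds ordering."""
    if a % 2 != b % 2:
        return a % 2 == 0     # every even number precedes every odd number
    return a < b              # within each parity class, the usual order

print(sorted(range(8), key=lambda n: (n % 2, n)))  # [0, 2, 4, 6, 1, 3, 5, 7]
```

Note that nothing here "emits the nth element" of the ordering in any useful sense past the evens; computability of the relation is the weaker and more useful notion.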

So the whole idea of an algorithm that operates on or outputs real numbers is nonsensical

You can work with programs over infinite streams in certain situations. For example, you can write a program that divides a real number by 2, taking an infinite stream as input and producing another infinite stream as output. Similarly, you can write a program that compares two unequal real numbers.
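A minimal sketch of such a stream program (my own illustration, representing a real in [0, 1) as a lazy stream of decimal digits): halving is just long division by 2, and each output digit depends on only finitely many input digits, which is what makes the operation well-defined on an infinite stream.

```python
# Divide a real number in [0, 1), given as an infinite stream of decimal
# digits, by 2 -- digit-by-digit long division with a carry in {0, 1}.
from itertools import islice, repeat

def halve(digits):
    """Yield the decimal digits of half the number whose expansion is `digits`."""
    remainder = 0
    for d in digits:
        value = remainder * 10 + d
        yield value // 2
        remainder = value % 2

third = repeat(3)                       # 0.3333... = 1/3
print(list(islice(halve(third), 6)))    # [1, 6, 6, 6, 6, 6] -> 0.1666... = 1/6
```

Comparison of two *unequal* reals works similarly (scan until the streams differ), but equality testing never terminates, which is the usual caveat with this representation.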

True, I forgot about that. I guess that would allow extending the notion of "computable well-ordering" to sets of size 2^aleph_0. Of course that doesn't necessarily mean any of them would actually be computable, and I expect none of them would. Well, I guess it has to be the case that none of them would, or else we ought to be able to prove "R is well-orderable" without choice, but there ought to be a simpler argument than that -- just as computable ordinals in the ordinary sense are downclosed, I would expect that these too ought to be downclosed, which would immediately imply that none of the uncountable ones are computable. Now I guess I should actually check that they are downclosed...
What does it mean to be downclosed?
I mean closed downwardly -- if one is computable, then so is any smaller one. (And so the Church-Kleene ordinal is the set of all computable ordinals.)
Upvoted on reflection for the part I quoted, which either answers the question or gives us some reason to distinguish R-in-L (the set we'd call the real numbers if we endorsed the statement "V=L") from "the-real-numbers-as-we-know-them".
ZF does not imply a well-ordering of R; ZFC (ZF with the Axiom of Choice) does, strictly speaking.
Yes. In ZF one can construct an explicit well-ordering of L(alpha) for any alpha; see e.g. Kunen ch VI section 4. The natural numbers are in L(omega) and so the constructible real numbers are in L(omega+k) for some finite k whose value depends on exactly how you define the real numbers; so a well-ordering of L(omega+k) gives you a well ordering of R intersect L. I'm not convinced that R intersect L deserves the name of "the-real-numbers-as-we-know-them", though.
Separate concern: why are the constructible real numbers only finitely higher than Q? Can't it be that there are some elements of (say) 2^Q that cannot be pinpointed until a much higher ordinal? Of course, there is still a formula that specifies a high enough ordinal to contain all members of R that are actually constructible.
I figured out the following after passing the Society of Actuaries exam on probability (woot!), when I had time to follow the reference in the grandparent: the proof that |R|=|2^omega| almost certainly holds in L. And gjm may have gotten confused in part because L(omega+1) seems like a natural analog of 2^omega. It contains every subset of omega we can define using finitely many parameters from earlier stages. But every subset of omega qualifies as a subset of every later stage L(a) with a > omega, so it can exist as an element in L(a+1) if we can define it using parameters from L(a). As another likely point of confusion, we can show that each individual subset x of omega appears by some countable stage L(a), with a depending on x; so if V=L then 2^omega must stay within or equal L(omega_1). The same proof tells us that L satisfies the generalized continuum hypothesis.
Um. You might well be right. I'll have to think about that some more. It's years since I studied this stuff...
While some of the parent turns out not to hold, it helped me to find out what the theory really says (now that I have time).
Let's see. Assume the measurability axiom: every subset of R has a Lebesgue measure. As we can run the usual construction of a non-measurable set on L intersect R, our only escape option is that it has zero measure. So if we assume measurability, L intersect R is a dense measure-zero subset, just like Q. These are the reals we can know individually, but not the reals-as-a-whole that we know...
Seems reasonable to me.
Is this a correct statement of what a well-ordering of R is?

$$\text{well-order}(A) \iff (\text{total-order}(A, \mathbb{R}) \land \forall B\,(B \subseteq \mathbb{R} \land B \neq \emptyset \implies \exists x \in B\; \forall y \in B\; ((x \preceq y) \in A)))$$

$$\forall A\, \forall B\, (\text{total-order}(A, B) \iff \forall a\, \forall b\, \forall c\,(a, b, c \in B \implies (((a \preceq b) \in A \lor (b \preceq a) \in A) \land ((a \preceq b) \in A \land (b \preceq a) \in A \implies a = b) \land ((a \preceq b) \in A \land (b \preceq c) \in A \implies (a \preceq c) \in A))))$$
Looks OK to me, though I can't guarantee that there isn't a subtle oops I haven't spotted. (Of course it assumes you've got some definition for what sort of a thing (a <= b) is; you might e.g. use a Kuratowski ordered pair {{a},{a,b}} or something.)

Belief update: What you do to your beliefs, opinions and cognitive structure when new evidence comes along.

I know what it means to update your beliefs, and opinions are again beliefs. What does it mean to "update your cognitive structure"? Does it mean anything, or is it just that whoever wrote it needed a third noun for rhythm purposes?

I wrote that line in the jargon file, quoting the first line of the relevant wiki page. The phrase was there in the first revision of that page, put there by an IP. If that IP is still present, perhaps they could explain. I could come up with surmises as to what it could plausibly mean, but I'd be making them up, and it isn't actually clear to me in April 2012 either.
It's the process of changing your mind about something when new evidence comes your way. The different jargon acts as a reminder that the process ought not to be an arbitrary one, but (well, in an ideal world anyway) should follow the evidence in a way defined by Bayes' theorem. I don't think there's any particular definition of what constitutes belief, opinion and cognitive structure. It's all just beliefs, although some of it might then be practised habit.
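The Bayes'-theorem step can be made concrete with a tiny calculation (an illustration with made-up numbers, not anything from the thread):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from the prior P(H) and the two likelihoods,
    via Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Evidence with 90% sensitivity and a 10% false-positive rate,
# applied to a 1% prior, only raises the belief to about 8.3%.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(round(posterior, 3))  # -> 0.083
```

The point of the jargon: the size of the update is dictated by the likelihood ratio, not by how you feel about the hypothesis.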

What are the basic assumptions of utilitarianism and how are they justified? I was talking about ethics with a friend, and after a bunch of questions like "Why is utilitarianism good?" and "Why is it good for people to be happy?" I pretty quickly started to sound like an idiot.

I like this (long) informal explanation, written by Yvain.
See also.
* Each possible world-state has a value.
* The value of a world-state is determined by the amount of value for the individuals in it.
* The function that determines the value of a world-state is monotonic in its arguments (we often, but not always, require linearity as well).
* The function that determines the value of a world-state does not depend on the order of its arguments (a world where you are happy and I am sad is the same as one where I am happy and you are sad).
* The rightness of actions is determined wholly by the value of their (expected) consequences.

and then either

* An action is right iff no other action has better (expected) consequences, or
* An action is right in proportion to the goodness of its consequences.
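The axioms above can be sketched as code (a toy illustration of my own: the aggregation function here is a plain sum, i.e. total utilitarianism, which is symmetric and monotone as required; the action names and numbers are made up):

```python
def world_value(utilities):
    """Value of a world-state: a symmetric, monotone function of the
    individual utilities in it (here, simply their sum)."""
    return sum(utilities)

def best_action(actions):
    """actions maps an action name to a list of (probability, utilities)
    outcomes; the right action maximizes expected world value."""
    def expected_value(outcomes):
        return sum(p * world_value(us) for p, us in outcomes)
    return max(actions, key=lambda a: expected_value(actions[a]))

choice = best_action({
    "concentrate": [(1.0, [5, 0])],  # one person much better off
    "share":       [(1.0, [3, 3])],  # both somewhat better off
})
print(choice)  # -> share (total 6 beats total 5)
```

Note how the symmetry axiom is visible directly: `world_value([5, 0]) == world_value([0, 5])`, so *who* is happy doesn't matter, only how much happiness there is.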

I've been aware of the concept of cognitive biases going back to 1972 or so, when I was a college freshman. I think I've done a decent job of avoiding the worst of them -- or at least better than a lot of people -- though there is an enormous amount I don't know and I'm sure I mess up. Less Wrong is a very impressive site for looking into nooks and crannies and really following things through to their conclusions.

My initial question is perhaps about the social psychology of the site. Why are two popular subjects here (1) extending lifespan, including cryog... (read more)

Welcome to LW! You pose an interesting question. I think there is a purely sociological explanation. LW was started by Eliezer Yudkowsky, who is a transhumanist and an AI researcher very concerned about the singularity, and his writings at Overcoming Bias (the blog from which LW was born by splitting) naturally tended to attract people with the same interests. But as LW grows and attracts more diverse people, I don't see why transhumanism/futurism related topics must necessarily stay at the forefront, though they might (path-dependence effect). I guess time will tell. If you have something interesting to say about these topics and the application of rationality to them, by all means do! However, about topic (c) you must bear in mind that there is a community consensus to avoid political discussions, which often translates to severely downvoting any post that maps too closely to an established political/ideological position.
This is factually false. I suspect if you looked through the last 1000 Articles or Discussion posts, you'd find <5% on life extension (including cryonics) and surely <10%. Cryonics does not even command much support; in the last LW survey, 'probability cryonics will work' averaged 21%; 4% of LWers were signed up, 36% opposed, and 54% merely 'considering' it. So if you posted something criticizing cryonics (which a number of my posts could be construed as...), you would be either supported or regarded indifferently by ~90% of LW.
As I wrote in a comment to the survey results post, the interpretation of assignment of low probability to cryonics as some sort of disagreement or opposition is misleading:
Of course not. Why the low probability is important is because it defeats the simplistic non-probabilistic usual accounts of cultists as believing in dogmatic shibboleths; if Bart119 were sophisticated enough to say that 10% is still too much, then we can move the discussion to a higher plane of disagreement than simply claiming 'LW seems obsessed with cryonics', hopefully good arguments like '$250k is too much to pay for such a risky shot at future life' or 'organizational mortality implies <1% chance of cryopreservation over centuries and the LW average is shockingly optimistic' etc. To continue your existential risk analogy, this is like introducing someone to existential risks and saying it's really important stuff, and then them saying 'but all those risks have never happened to us!' This person clearly hasn't grasped the basic cost-benefit claim, so you need to start at the beginning in a way you would not with someone who immediately grasps it and makes a sophisticated counter-claim like 'anthropic arguments show that existential risks have been overestimated'.
Where can I find survey results? I had just been thinking I'd be interested in a survey, also hopefully broken down by frequency of postings and/or karma. But if they've been done, in whatever form, great.
The short answer is that the people who originally created this site (the SIAI, FHI, Yudkowsky, etc) were all people who were working on these topics as their careers, and using Bayesian rationality in order to do those things. So, the people who initially made up the community were made up, in large part, of people who were interested in those topics and rationality. There is a bit more variation in this group now, but it's still generally true.

Why shouldn't I go buy a lottery ticket with quantum-randomly chosen numbers, and then, if I win, perform 1x10^17 rapid quantum decoherence experiments, thereby creating more me-measure in the lottery-winning branch and virtually guaranteeing that any given me-observer-moment will fall within a universe where I won?

You're thinking according to what pragmatist calls the "old version" of MWI. In the modern understanding, it's not that a universe splits into several when a quantum measurement is performed -- it's more like the 'universes' were 'there' all along, but the differences between them used to be in degrees of freedom you don't care about (such as the thermal noise in the measurement apparatus).
You-measure is conserved in each branch, I believe. You can't make more of it, only more fragments of it.

Re: utilitarianism: Is there any known way to resolve the Utility Monster issue?

Yes, don't be utilitarian. Seriously. The "utility monster" isn't a weird anomaly. It is just what utilitarianism is fundamentally about, but taken to an extreme where the problem is actually obvious to those who otherwise just equate 'utilitarianism' with 'egalitarianism'.

I obviously do not understand quantum mechanics as well as I thought, because I thought this comment and this comment were saying the same thing, but karma indicates differently. Can someone explain my mistake?

The first comment says that the double-slit experiment is feasible under both hypotheses, but the second adds that it is just as likely with MWI as with waveform collapse. Analogy: there are two possible bag arrangements, one filled with 5 green balls and 5 red balls, and the other with 4 green balls and 6 red balls. It's true that drawing a green ball is consistent with both, but it's more likely with the first bag than the second.
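The bag analogy can be run through Bayes' theorem directly (a small illustration I've added; the equal priors are an assumption):

```python
def posterior_over_bags(priors, likelihoods, observation):
    """Bayes: P(bag | observation) is proportional to P(bag) * P(observation | bag)."""
    unnormalized = {bag: priors[bag] * likelihoods[bag][observation] for bag in priors}
    total = sum(unnormalized.values())
    return {bag: w / total for bag, w in unnormalized.items()}

priors = {"bag1": 0.5, "bag2": 0.5}
likelihoods = {
    "bag1": {"green": 5 / 10, "red": 5 / 10},  # 5 green, 5 red
    "bag2": {"green": 4 / 10, "red": 6 / 10},  # 4 green, 6 red
}
post = posterior_over_bags(priors, likelihoods, "green")
# Drawing green is consistent with both bags, but shifts belief toward bag1
# (posterior 5/9 vs 4/9), which is the point of the analogy.
print(post)
```

"Consistent with both hypotheses" and "equally likely under both hypotheses" are different claims; evidence favors whichever hypothesis assigned it the higher probability.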
I see what you mean. But I thought "ripples of one wave affected the other wave" was the accepted interpretation of the double slit experiment. In other words, the double slit experiments prove the wave-particle duality. I wasn't aware that the wave-particle duality was considered evidence in favor of MWI.
In Fabric of Reality, David Deutsch claims the double-slit experiment is evidence of photons interfering with photons in other worlds.
"Wave-particle duality" pretty much just means particles that obey the Schrödinger wave equation, I think. And it could be more evidence for one theory than another if one theory was vague, ambiguous, etc. The more specific theory gets more points if it matches experiment.
It is evidence of (what we now know as) quantum mechanics. MWI is just an interpretation of QM, so there isn't really evidence for MWI that isn't also evidence for the other interpretations according to the people who favor them.
I don't know nearly enough about QM to say whether or not that's true, I was just going off what was said in response to your second comment. However, that doesn't seem to have any upvotes, so it may not be correct either.

I did not understand Wei Dai's explanation of how UDT can reproduce updating when necessary. Can somebody explain this to me in smaller words?

(Showing the actual code that output the predictions in the example, instead of shunting it off in "prediction = S(history)," would probably also be useful. I also don't understand how UDT would react to a simpler example: a quantum coinflip, where U(action A|heads)=0, U(action B|heads)=1, U(action A|tails)=1, U(action B|tails)=0.)
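One way to make the coinflip example concrete (my own illustrative sketch, not anyone's actual UDT code): rather than updating on the observed flip, a UDT-style agent scores whole *policies* (maps from observation to action) by their expected utility over all branches, then follows the best policy. With the utilities quoted above, the winner is "B on heads, A on tails."

```python
from itertools import product

# Utilities from the example: U(A|heads)=0, U(B|heads)=1, U(A|tails)=1, U(B|tails)=0.
utility = {("heads", "A"): 0, ("heads", "B"): 1, ("tails", "A"): 1, ("tails", "B"): 0}
p_world = {"heads": 0.5, "tails": 0.5}

# Enumerate all policies (observation -> action) and pick the one with the
# highest expected utility summed across both branches of the coinflip.
best_policy = max(
    (dict(zip(["heads", "tails"], acts)) for acts in product("AB", repeat=2)),
    key=lambda pol: sum(p_world[w] * utility[(w, pol[w])] for w in p_world),
)
print(best_policy)  # -> {'heads': 'B', 'tails': 'A'}
```

The policy ends up acting exactly as an updating agent would, which is (as I understand it) the sense in which UDT "reproduces updating when necessary."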

What's 3^^^3?

Is this Knuth's arrow notation?

wait, that was easier to search than I thought.

Yes, it is Knuth's arrow notation.
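Knuth's up-arrow notation is easy to define recursively (a short sketch; only the small cases are actually computable in practice):

```python
def arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ^(n) b: one arrow is exponentiation,
    and each additional arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))  # 3^3  -> 27
print(arrow(3, 2, 3))  # 3^^3 = 3^(3^3) -> 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s about 7.6 trillion levels tall.
# Don't try arrow(3, 3, 3) -- the result can't be written down in this universe.
```

So 3^^^3 is not a number you compute; it's a stand-in for "unimaginably large."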

How come, in some of the posts which were imported from Overcoming Bias, even when the "Sort By:" setting is locked to "Old", some of the comments are out of sequence? The same applies to karma scores: right now, setting the filter to "This week", the second comment is at 32 whereas the third is at 33, and sometimes when I set the filter to "Today" I even get a few negative-score comments.

Is there a better place to ask similar questions?

I have a question:

If I'm talking with someone about something that they're likely to disbelieve at first, is it correct to say that the longer the conversation goes on, the more likely they are to believe me? The reasoning goes that after each pause or opportunity to interrupt, they can either interrupt and disagree, or not do anything (perhaps nod their head, but it's not required). If they interrupt and disagree, then obviously that's evidence in favor of them disbelieving. However, if they don't, then is that evidence in favor of them believing?

If X is evidence of A, then ~X (not-X) is evidence of ~A. They are two ways of looking at the same thing - it's the same evidence. This is called conservation of expected evidence. So if your premise is true, then your conclusion is necessarily also true. Please note that this says nothing about whether your premise is indeed true. If you have doubts that "not disagreeing indicates belief", that is exactly the same as having doubts that "disagreeing indicates disbelief". The two propositions may sound different, one may sound more correct than the other, but that is an accident of phrasing: from a Bayesian point of view the two are strictly equivalent.
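Conservation of expected evidence can be checked numerically (my own toy numbers for the conversation example, purely illustrative):

```python
p_a = 0.3          # prior P(they believe you)
p_x_given_a = 0.9  # P(no interruption | they believe)      -- assumed
p_x_given_not_a = 0.6  # P(no interruption | they disbelieve) -- assumed

p_x = p_x_given_a * p_a + p_x_given_not_a * (1 - p_a)
post_if_x = p_x_given_a * p_a / p_x                  # belief rises on silence
post_if_not_x = (1 - p_x_given_a) * p_a / (1 - p_x)  # belief falls on interruption

# Conservation of expected evidence: the prior equals the
# probability-weighted average of the two possible posteriors.
expected_posterior = post_if_x * p_x + post_if_not_x * (1 - p_x)
print(post_if_x > p_a > post_if_not_x)        # -> True
print(abs(expected_posterior - p_a) < 1e-12)  # -> True
```

So if an interruption would count against belief, silence must count (perhaps only slightly) for it: the two updates necessarily balance out in expectation.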
Thanks -- I knew that this was conservation of expected evidence, I just wasn't sure if I was using it correctly.

The standard utilitarian argument for pursuing knowledge, even when it is unpleasant to know, is that greater knowledge makes us more able to take actions that fulfil our desires, and hence make us happy.

However the psychological evidence is that our introspective access to our desires and our ability to predict what circumstances will make us happy is terrible.

So why should we seek additional knowledge if we can't use it to make ourselves happier? Surely we should live in a state of blissful ignorance as much as possible.

Because I'm not utilitarian. I'll care about being happy to whatever extent I damn well please. Which is "a fair bit but it is not the most important thing".

About Decision Theory, specifically DT relevant to LessWrong.

Since there is quite a lot of advanced material already on LW that seems to me as if it would be very helpful if one is perhaps near to finishing or beyond an intermediate stage:

Various articles:

And the recent video (and great transcript):

And there are a handful of books that seem relevant to overall decision ... (read more)

LWers are almost all atheists. Me too, but I've rubbed shoulders with lots of liberal religious people in my day. Given that studies show religious people are happier than the non-religious (which might not generalize to LWers but might apply to religious people who give up their religion), I wonder if all we really should ask of them is that they subscribe to the basic liberal principle of letting everyone believe what they want as long as they also live by shared secular rules of morality. All we need is for some humility on their part -- not being total... (read more)

What boat-rocking are you talking about? Do you know a lot of people who "ask of" religious people that they do something?
It seems that implicit in any discussion of the kind is, "What do you think I ought to do if you are right?". For theists, the answer might be something leading to, "Accept Jesus as your personal savior", etc. For atheists, it might be, "Give up the irrational illusion of God." I'm questioning whether such an answer is a good idea if they are at least humble and uncertain enough to respect others' views -- if their goal is comfort and happiness as opposed to placing a high value on literal truth. But do recall, I'm placing this in the "stupid questions" thread because I am woefully ignorant of the debate and am looking for pointers to relevant discussions.
That is implicit in any discussion of this type. But it doesn't go without saying that we should be trying to have a conversation of this type. In fact, it is totally unfair of you to assume that having this conversation is so pressing that it goes without saying. After all, not all theists proselytize. For a more substantive response, I'll say only that I'm not convinced that believing unpleasant but true things is inherently inconsistent with being happier. But there is a substantial minority in this community that disagrees with me.
I remain quite confused. OK. This seems to imply that there is some serious downside about starting such a conversation. What would it be? It would seem conciliatory to theists, if some (naturally enough) assume that what atheists want is for them to embrace atheism. I hope I've parsed the negatives correctly: Certainly believing unpleasant but true things is highly advantageous to being happier if it leads to effective actions (I sure hope that pain isn't cancer -- what an unpleasant thing to believe... but I'll go to the doctor anyway and maybe there will be an effective treatment). If it means unpleasant things that can't be changed, then that's not inherently inconsistent with being happier either, for instance if your personal happiness function includes that discovering that you are deceiving yourself will make you very unhappy. The question is more whether it is a valid choice for a person to say they value pleasant illusions when there is no effective way to change the underlying unpleasant reality. We object when someone else wants to infringe on our liberties (contraception, consensual sexual practices), and my suggestion was that a mild dose of doubt in one's faith might be enough to defang efforts to restrict other people's liberties. I knew a devout Catholic who was also a devout libertarian, and his position on abortion was that it was a grave sin, but it should not be illegal. I'm not sure if that position required a measure of doubt about the absolute truth of Catholicism, but it seems possible.