If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


More of my research for Luke, this time looking into the polyamory literature.

I read Opening Up and The Ethical Slut; the former was useful, the latter was not. My general impression of the research is that:

  1. it's all hard to get, as the journals are marginal and the relevant academics have bad habits of publishing their work as book chapters, prefaces, or entire books
  2. the studied polyamorists are distinctly white, educated, urban or coastal, professional, older (how odd), and middle/upper-class.

    This means there is zero generalizability to whether polyamory would work in other groups, and massive selection biases (few other groups are so well-equipped to leave a community that isn't working for them). And even if a survey finds that polyamorists are 'average' in various dysfunctions or pathologies, one needs to check that the average is the right average (i.e. non-amorous educated professional whites).

    These two points do not seem to be appreciated at all by many advocates (e.g. the ones saying STDs are not a problem).

  3. the one academic doing good work in the area is Sheffer, who is running a longitudinal survey which may or may not have enough statistical power to rule out particularly dramatic variances
…

Why did Luke ask you to research polyamory?

You know, I never asked. Maybe someone was thinking about using it as an example of broadened possibilities in a transhumanist utopia along with the usual cat-girl-servants-in-a-volcano-lair examples like augmented senses and wanted to know if there were crippling defeaters to the suggestion?
2 points · Paul Crowley · 12y
Only just noticed this. If you or Luke would find it useful to talk to Dr Meg Barker about this I can put you in touch.
No, that's not necessary. I think we went as far as was profitable: I wouldn't expect Barker to tell me about any major study which I missed.
Was there any follow-up here?
No idea. Some Google Scholar checks turn up nothing.

After pondering for a while on why I'm so fixated on making meaningless numbers (such as LW karma or Khan Academy points) go up as a result of my actions, I came up with a hypothesis: the brain uses such numbers as a proxy for social status. A testable consequence of this idea is that status-seeking people and low-status people try harder (possibly also for longer periods of time) to achieve video-game-style "points".

Just a thought, really. If this experiment has actually been done, it'd be cool to read about it if anyone has a link. I don't have the resources to do it myself. Anyone reading this comment who can do it is certainly free to, but I doubt that's the case.

Came up with the idea while responding to a question on Formspring.

Good idea. Maybe net worth works the same way, except less meaninglessly? Other examples: number of Facebook friends, high score in Temple Run.

Has Michael Vassar published any essays or articles or general things to read anywhere (other than this)? I get the impression that he's this supreme phenomenon from the people who describe his conversational ability. I've watched his Singularity Summit talk, and it was incredible, so I know it's not just in person that he's formidable.

I catch glimpses of his mighty cleverness in comments with phrases like "I always advocate increased comfort with lying." But there's no link to the Michael Vassar essay on All The Very Convincing Reasons You Should Be Comfortable With Lying. I'm assuming that's because it doesn't exist. That's not fair. I want to read his non-existent work.

I thought LWers might be interested in the work of Vi Hart (I did a quick search to make sure she hadn't been mentioned before). I think she is a great resource for recruiting people towards rationality. In terms of finding Joy in the Merely Real, she explains natural phenomena rationally, but in a way that literally can bring tears to my eyes.

Here is an example: Doodling in Math: Spirals, Fibonacci, and Being a Plant- Part 3 of 3

This is the final video of a three-parter, but I think most LWers can infer the background knowledge. If you enjoy it, you can go back and watch the first two parts.

A quote that sums up what her vlogs are often all about: "This is why science and mathematics are so much fun; you discover things that seem impossible to be true, and then get to figure out why it's impossible for them NOT to be."

Those videos are great. That thing you quoted is worthy of a rationality quote.
Thank you! Consider them quoted.
She was mentioned in a quotes thread about a year ago (where I first discovered her), and has recently joined the team at Khan Academy. I likened this for my non-nerd friends to Tom Morello from Rage Against The Machine teaming up with Billy Joel to fight dragons.

FYI: I quit my day job last Monday and flew to San Francisco to start the program described at devbootcamp.com, which starts in full next Monday. Background.

Good luck!

I've pondered setting up some kind of "Mindkiller discussion mailing list" and trying to recruit people.

The idea would be to try to practice discussing these topics in a private forum without getting mindkilled. For example, we could try to have a sensible conversation about politics.

The main thing stopping me is that I think to work it'd need an excellent moderator who'd probably burn out quickly.

I'm mildly frightened by the prospect, because I'm mildly frightened by the possibility that my political beliefs so far might be built entirely on my ability to mindkill other people with clever argument. So, yes, I think this is a good if not vital idea.
Konkvistador has been talking about doing something like that ...
I mentioned it to him on IRC; he seemed sympathetic, but didn't mention plans of his own. It'd be interesting, whoever does it.
Indeed I am sympathetic, but people have presented pretty good counterarguments against such a mailing list being formed. There was a wide-ranging discussion on this and other ideas in the rational romance thread quite some time ago. It seems recently many people have independently come up with many of the same proposals.
If you people do make one and I'm left out, that'd be pretty ugly of you! Please, please, give me membership in the event of you ever getting up to it. I generally disengage from social situations I can't succeed at, but here I'd beg and grovel to be let in. :D

Too bad this isn't part of any sequence, or else I'd put it up as a rerun:

Can't say no to spending

I'm pretty sure most new posters are not familiar with the data and arguments presented here unless they have started reading LW's sister site Overcoming Bias (which, by the way, I think more LW users should). In any case, an updated discussion of this 4 years later seems appropriate.

Edit: Made a rerun post of this, please discuss it there.

11 points so far says you should make this a rerun post in Discussion.
Well, I don't think it's technically part of any sequence, but people here seem to think it's OK to rerun, and since there was some interest I guess I may as well do it.

How many people knew that evidential decision theory recommends cooperating in a one-shot prisoner's dilemma where the choices of the two agents playing are highly (positively) correlated?

I apparently just independently invented evidential decision theory while bored in my micro class by thinking "why wouldn't you condition your uncertainty about others' choices on what you choose? The cooperation between rational players in PDs can clearly happen." This sounded suspiciously like what evidential decision theory should be, and lo and behold, after class I found out that it is.
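The evidential calculation is easy to sketch numerically. A minimal example, where the 0.9 correlation and the payoff matrix are my own illustrative assumptions, not anything from the comment above:

```python
# EDT scores each action by E[utility | I take this action]. With highly
# correlated players, conditioning on "I cooperate" raises the probability
# that the other player cooperates too.

# Row player's payoffs for (my_move, their_move); standard PD values.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

p_match = 0.9  # assumed: P(other plays X | I play X)

def edt_value(my_move):
    """Conditional expected utility of my_move under the assumed correlation."""
    likely, unlikely = ("C", "D") if my_move == "C" else ("D", "C")
    return (p_match * payoff[(my_move, likely)]
            + (1 - p_match) * payoff[(my_move, unlikely)])

# E[u | C] = 0.9*3 + 0.1*0 = 2.7 beats E[u | D] = 0.9*1 + 0.1*5 = 1.4,
# so EDT cooperates here; with independent players (p_match = 0.5),
# defection dominates again.
```

With these numbers the evidential expectation favors cooperation exactly as the comment describes, and the recommendation flips back to defection once the correlation is removed.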

There is a new Stack Exchange Q&A site in public beta. It seems quite relevant to our interests.

Cognitive Sciences beta - Stack Exchange

Social dark arts and plotting are one of the main themes of HPMoR and I think they should have more of a place on LessWrong. "Bad signalling" and negative-from-baseline dark arts failures are often pointed out, but not lost opportunities for manipulation.

What are people's thoughts on this?

These two discussions might be of some interest to you if you haven't read them already.
Also, this and this. But 3 posts and a thread are hardly satisfying, dammit, the Dark Arts are more important than this! We need a community of Dark Wizards! .... unless the PUA guys are a community of Dark Wizards. Do they use their Arts for non-dating-related matters?

Feminism and the Disposable Male

Overall this particular video isn't that well made, but I think the basic argument is more or less correct. 7:00 to 8:00 is especially relevant to ethical thinking.

I agreed with the basic idea, although I did have a slight [citation needed] feel. She jumped around a bit without justifying things as much as I'd like. Although perhaps that's ok for what's basically a youtube rant.
Is there any reason people should watch this rather than read Roy Baumeister's (excellent, IMO) 2010 book Is There Anything Good About Men? (which is available online in the usual sub rosa places)?
If I thought the video spectacular, I would have made a separate post in the discussion section. I clearly think it's not. So why did I post this? Because I don't recall this specific topic being discussed on LessWrong, so when I saw this video, I wondered how posters would respond to it. If you have read the book, might I suggest writing a review for this site?
Having been born and raised in Russia, this seems so alien to me. I'd say that here we have a sort of make-do gender equality in many respects, partly a heritage of the USSR.
Oops. On reflection, I misinterpreted her point. I really can't endorse any of the following, except: I do think her description of the actual success of female-first-ism is not very accurate. Wrong statements preserved for clarity of thread.

----------------------------------------

Synopsis: Women are more valuable in society than men because women get lifeboat seats and men don't. This different valuation is justified because women can have children and men cannot. First, this is equivalent to saying that women's first social purpose is child-rearing. Second, the Youtube video acknowledges that a society geared towards women-as-reproduction-machines requires a lot of restrictions on female autonomy. Would you trade a substantial portion of your autonomy for increased priority of your life being protected in high-risk situations? I wouldn't, for the same reason that I think an AI implementing the zeroth law of robotics is not Friendly. Third, one might conclude that restricting female autonomy was necessary for social continuation purposes, but why would individual women want the world to be that way? Society might respond with genuine regret that this is how things must be, but in practice, I've never met anyone who thought that (1) women's autonomy should be restricted, and (2) this was something to regret. In other words, terminal values are not justified by (and don't need justification from) instrumental-value arguments.

----------------------------------------

I also think that the Youtuber's sex-based child-rearing advice is terrible. As she says, we are teaching men and women to be certain ways. Why should we need to teach what is inherently true? Also, I don't think female autonomy is consistent with the political female-first distribution of benefits she identifies.
Uh, she was describing the child-rearing practices in an unsympathetic way quite deliberately. It wasn't advice, it was descriptive. I think you missed the point.
Edit: as discussed below, this is an incorrect interpretation of her comments.

----------------------------------------

Wasn't she saying that the child-rearing practices were a net good? She talked about preparing men to be the solitary guardian with the rifle and women to take the lifeboat seat at the cost of her beloved's life. I thought she was saying the sex-based child-rearing techniques (like being more attentive to female than male crying) advanced that goal. From my point of view, no child-rearing advice should suggest treating babies less than a year old differently based on the sex of the child, UNLESS the advice is about diapering.
No. She said it was what we used to need. Her entire video is about how little we value male life and how we indoctrinate males to sacrifice themselves for others.
Edit: Yeah, this is all wrong. See my discussion below.

----------------------------------------

She NEVER said we should stop that kind of indoctrination. She barely acknowledged it was indoctrination.
I think you need to re-watch the video. It is not explicitly stated, yet it is very hard to miss.

I looked again.

9:00 to 11:50 - She's saying the child-rearing techniques she describes lead to the "disposable man" attitudes in men and women.

11:50 to 13:10 - Attack on "dismantlers of gender roles": set-aside programs, women-first policies, etc. reinforce "disposable man."

13:10 to 14:00 - And women-firsters get what they ask for. Feminism ONLY exploits the disposable-man dynamic. Feminism = enforced chivalry.

14:00 to 15:00 - Society succeeded because women were put first, and we don't need that dynamic any more. Call to action: what's the worst that would happen if women were no more valuable than men, and men no more valuable than women? If we keep following feminism, society will end up unbalanced.

15:00 to end - We should celebrate manhood, and feminists don't want to. Instead, men come in "dead last, every time."

You are correct, in that I misread her call to action. Mostly because I was mindkilled about her definition of feminist. I'm not saying that no one acts the way she describes from 11:50 to 14:00, but it's just not an inherent property of feminism to act and believe that way.

For example, I don't want to ignore male victims of domestic violence, and I doubt most other feminists want to either. I like her call to action, but I think it is a feminist call, and I think her factual assertions from 14:00 to the end (especially "men come in dead last, every time") are almost entirely false.

I've lately been experimenting with taking different amounts of vitamin D. While I have found a definite improvement in mood and energy during the day when taking vitamin D first thing in the morning, I haven't found much impact on my excessive night-owlishness, such that I still don't get enough sleep and mood/energy are not yet optimal. It occurred to me that I might be subverting the effect by spending too much time at the computer in the evenings, since the monitor emits a lot of blue light.

And lo and behold, I've discovered that you can download a f…

Yes, I'm a fan of an f.lux equivalent for Linux, Redshift. (Have you considered melatonin?) Incidentally, as far as vitamin D goes, I think it may be harmful for sleep when taken in the evening.
Awesome. Dunno how much this is due to placebo, but using it immediately made me feel more sleepy. (Of course, at 8 p.m. it's too early for that; maybe I'll lie to it about my longitude so it lags a few hours.)
Thanks for the download link! I also have "night owl" issues I am currently experimenting to fix. I just installed the f.lux program and will report back on its usefulness by the end of the month. (Immediate reaction- It looked REALLY red for about 5 minutes. Now it just looks a little red.) Another new hack I'm trying is to take a sleep aid when I think I should go to bed soon. My impetus for doing this was a mix of seeing people post about melatonin here, and also realizing that when I was sick, I LIKED being able to take PM meds which would knock me out and force me to go to sleep when I thought I should. (but which I of course do NOT want to take when I am not sick!) Is there a reason LWers recommend melatonin rather than other non-addictive sleep aids? Also, I would like to thank the LessWrong community in general for giving me the idea to even try these types of self-improvement mods.
Here is a link to the update. My general view was that it (f.lux) didn't make a noticeable change (I wasn't recording sleep data at the time, though; I am now), but that the cost was so low (about 2 minutes of time) that it was still worth it for people to try. The sleep aid I had started out trying gave me headaches. I use melatonin now, and it is much better. Still don't manage to get to bed before 1 a.m., though.
I tried melatonin for several days and it really didn't seem to do anything for me. Sometimes I'll take Benadryl (but only half the recommended dose for my weight, so it wears off before morning), which does help but seems like not a good thing to be taking long-term.
Tolerance to the sleep-inducing effects of Benadryl builds up fairly quickly. In this double-blind study, people given 50 mg of diphenhydramine (the active ingredient in Benadryl) got really sleepy the first few times, but after doing this for four days in a row, the effects were indistinguishable from a placebo. Benadryl usually comes in 25 mg tablets, so that's two pills per night.

Is programming a bad career to get into? Is it true that you can't work in it more than a couple of decades because all your skills will go obsolete and you'll be replaced by someone younger?

Are you serious? If you have an aptitude for coding/design/software architecture, and no other burning passion, programming is an excellent choice. While the field does change rapidly, it is an easy discipline in which to update your skills cheaply and with almost no red tape. Besides, most people change careers on average more often than every 20 years, so there's no point looking that far ahead.

Just Don't Call Yourself A Programmer.

If this happens (and yes, to some people this happens), then you are doing it wrong. Getting older usually brings some problems, like accumulated bad experience, loss of illusions, less enthusiasm, possible burnout, and starting a family, which means that you are less willing to work overtime, etc. But this happens in any profession.

What exactly are your programming skills? (The "larger picture" is already mentioned in shminux's comment, so I focus here only on programming.) If you have memorized a few keywords and function names, then honestly you don't know anything about programming, and a new programming language or technology will make your skills obsolete. Even for a good programmer, having the important keywords in your "memory cache" is useful, but switching to another language is just a matter of time.

After the "memorizing the keywords" level you get to real programming -- you design algorithms, you understand design patterns (which simply means: you will need to solve thousands of problems, but then you will see that 99% of them can be classified as belonging to one of about a dozen templates, and once you are familiar with the templates, solving these problems becomes very easy), and you will see something really new only once in a while (even most of the new things are just reinventing the wheel). And even when you do see a new thing, it still helps to know the old things, because you will understand why the new thing was designed this way.

You have to develop some meta-skills to make learning easier. For example, if you work in multiple programming languages, you often use the same or a similar thing with a different syntax. So why not make yourself a cheat-sheet per language per topic? Then if you have to learn a new language, you spend one day constructing a new cheat-sheet, and you are fluent in the new language. Using Google and parsing official documentation are important skills. This can make your learning curve incr…
Thanks for the detailed response :)
"Programming" isn't really a coherent vocation any more, and will probably become even less so as time passes. By way of analogy, being a scribe was once a trade in its own right, but any contemporary job you're ever likely to want will demand literacy.
Are you saying that all jobs will soon require coding literacy?

Jobs might not require coding literacy, but knowing how to write rudimentary code (in a scripting language like Python) makes a computer another tool at your disposal (a very very powerful one!). e.g.

  • one can use a regular expression to find all the telephone numbers in a text document
  • if one has a list of 20 files to download, then knowing how to write a 4 or 5 line script that takes the list and downloads the files will make it much faster.
  • [edit] scripts are reusable, so an hour investment of time writing a script that cuts 5 minutes off a common task pays for itself quickly

(Also, being able to clarify one's thoughts enough to convey them unambiguously to a computer is possibly a useful skill in itself.)
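Both bullet points fit in a few lines of Python. A sketch, where the sample text, the pattern, and the URL source are made-up illustrations (and the pattern is deliberately loose, not a robust phone-number matcher):

```python
import re

text = "Call 555-867-5309 or (02) 9371 4000; fax +1 212 555 0123."
# Loose pattern: any run of 7+ digits, spaces, and common phone punctuation.
phones = [p.strip() for p in re.findall(r"[0-9+\-() ]{7,}", text)]
print(phones)  # ['555-867-5309', '(02) 9371 4000', '+1 212 555 0123']

# And the "list of 20 files" case, with only the standard library:
from urllib.request import urlretrieve
urls = []  # e.g. read from a file: open("urls.txt").read().split()
for url in urls:
    urlretrieve(url, url.rsplit("/", 1)[-1])  # save under the file's own name
```

As the reply below notes, real phone numbers come in too many formats for any short regex to catch cleanly; the point is only how little code the first pass takes.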

Recognizing phone numbers is actually a non-trivial problem, because people write them in so many crazy ways. It's easier if you have a list of phone numbers all formatted in roughly the same way, but that's not always the case.
Ah, good point, but something very general like /[0-9+\-() ]{4,}/ will at least reduce the amount of manual filtering required! In a neat coincidence, I was just reading this article, of which the first 3 paragraphs are most relevant:
This more or less would have been my response. It may not be worth your while becoming a software developer, but it's definitely worth your while learning to code.
That makes sense. Programming as a side dish.
It depends on other things about you that we don't know. What do you want? What's your skill/ability profile like?

If you're most interested in money, working as a salaried programmer can take you into the six-figure range (the average for Silicon Valley has passed that now). Your skills will obsolesce faster than in other disciplines, and you'll actually be called on it (doctors' skills vary a lot by time of graduation, with older being worse, but the patients don't do anything about it), but that's manageable. Unfortunately, as you get older you lose fluid intelligence and so can't learn new skills as easily. You can make much more money in startups in expectation (from tail outcomes) if you're good, but note that one can be an entrepreneur in other fields (software/web startups are nice in terms of low barriers to entry, low capital requirements, etc., but that also means more competition).

With a long time horizon, if you're smart enough to reliably graduate medical school and find medicine tolerable, you'll make more money as a doctor than as an engineer. Likewise for elite law schools, if you both have the credentials to get in and go to the high-end places (although that carries more risk). Finance (investment banking, hedge funds, etc.) has substantially better financial prospects if you can get into it, although again with nontrivial risk. Other technically demanding jobs (other types of engineering, actuaries, etc.) have similar or better aggregate compensation statistics.

In terms of quality of life, some people really like coding, at least compared to the demands of higher-paying fields (risk, self-motivation, management/sales/schmoozing, intense hours, many years of costly schooling, etc.). Others don't.

What are the (reasonably) low-cost high-return lifehacks most people probably haven't heard about?

Spaced repetition comes immediately to mind. So do nootropics.

What about speed-reading? It seems to get a bad rap or be dismissed as pseudoscience. So... is it real, and if it is, how useful is it?

These are the three I can think of... Are there any more?

(I seem to remember seeing 'mindfulness meditation' mentioned on LW a few times... No idea what it's actually good for, though.)

[Edited to fix weird propositional slip-up. Dismissed as pseudoscience, not by pse…

Old thread.
(sigh) No Rings of Power on that thread, I'm afraid. Thanks for the link though.

Reading some of Robin Hanson's older writing: If Uploads Come First

What if uploads decide to take over by force, refusing to pay back their loans and grabbing other forms of capital? Well for comparison, consider the question: What if our children take over, refusing to pay back their student loans or to pay for Social Security? Or consider: What if short people revolt tonight, and kill all the tall people?

In general, most societies have many potential subgroups who could plausibly take over by force, if they could coordinate among themselves. But such

…
Indeed. This is why I have a hard time thinking of ems as "friendly", even as I concede they would be fully human - we have considerable historical precedent as to what happens when one group of humans is much more powerful than a colocated other group of humans. Frankly, humans aren't human-friendly intelligences. As such, it's not clear to me that "human-friendly intelligence" is even a sufficiently coherent concept to make predictions from; much as "God" isn't a coherent concept.

Apparently, Fuyuki City (the setting of Fate/stay night) is based on Kobe.

Also, the setting of Haruhi is based on Nishinomiya.

Kobe is here. And Nishinomiya is here.

Wonder if Eliezer's planning any epic fanfics after MoR...

Maesters of the Citadel from GRRM's world of Ice and Fire are basically crying out to be remade into a Bayesian conspiracy, or better yet an organization fighting magic and banishing it from the world in order to reduce existential risk. The destruction of Valyria, the Others beyond the Wall, the sheer destruction that dragons can wreak on human lands, the ability- and possibly intelligence-enhancing capability of the network of Godswoods... if we were living in that universe, wouldn't we perhaps do the same? Now that I think about it, maybe I'd rather start writing that.
Fanfics of GRRM's works aren't hosted on Fanfiction.net. Finding readers will be difficult. (Also, the man himself seems likely to send a C&D.)
It seems you are right. Too bad, the universe seemed made for it. Fan fiction is srs bzns apparently.

A new science journal recently published a seriously crackpot paper; this link has the abstract and a link to the PDF. I first heard about it from Derek Lowe, who has also written two follow-up posts. The first has a couple of links discussing how news of the paper spread, while the second includes a link to the journal making excuses for why they published it.

Moreover, members of the Editorial Board have objected to these papers; some have resigned, and others have questioned the scientific validity of the contributions. In response I want to first state some

…

I recently learned of the startup Knewton. They're an education company that focuses on developing computer-based courses out of textbooks in a manner that lets each student progress at their own pace and learn with methods that have proven successful for them in the past. This project seems like a good way to grab some low-hanging fruit in the education sphere and to start the process of computer-driven personalization of education, which strikes me as potentially quite powerful.

Some other details: my understanding is that they are creating efficient mea…

After 40 days and 40 nights (plus a few), I've finished my little randomized double-blind placebo-controlled vitamin D sleep experiment: http://www.gwern.net/Zeo#vitamin-d

Conclusion: it probably hurts sleep when you take it at night.

Followup: it helps morning mood when taken in the morning: http://www.gwern.net/Zeo#vitamin-d-at-morn-helps
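For readers curious what the analysis of such a blinded self-experiment can look like, here is a toy permutation test on invented nightly scores (not gwern's actual data; a real analysis would use the Zeo records linked above):

```python
import random

# Hypothetical nightly sleep-quality scores, invented for illustration.
vitd_nights    = [72, 68, 75, 70, 66, 71, 69, 73]  # vitamin D taken at night
placebo_nights = [74, 77, 73, 78, 72, 76, 75, 79]

def mean(xs):
    return sum(xs) / len(xs)

# Observed effect: placebo nights scored higher, consistent with
# "vitamin D at night hurts sleep".
observed = mean(placebo_nights) - mean(vitd_nights)

# Permutation test: shuffle the condition labels and count how often a
# difference at least this large arises by chance under the null.
rng = random.Random(0)
pooled = vitd_nights + placebo_nights
n_iter, extreme = 10_000, 0
for _ in range(n_iter):
    rng.shuffle(pooled)
    if mean(pooled[8:]) - mean(pooled[:8]) >= observed:
        extreme += 1

p_value = extreme / n_iter  # small -> the split is unlikely to be luck
```

The attraction of a permutation test here is that it needs no distributional assumptions, which suits the small, oddly distributed samples a 40-night experiment produces.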

Accidental anti-akrasia effect I've recently discovered: I recently set my watch to an hourly chime (first time I've used it in over 5 years) so that I could get up at least once an hour and walk around a bit. That's met with some success, but what I've found is that whenever the chime goes off, my sympathetic nervous system takes a jolt, and if I was in the middle of something unproductive, I start to berate myself with statements like "You're going to die someday, what have you got to show for it? Reading your RSS feeds? Writing emails? Come on, ya pan…

In my twenties (late '80s, early '90s), my friends and I used to talk about having a mid-life crisis every six months. Oh, the angst of Generation X, in the now-lost-to-history last years of the pre-Internet era. (I'm quite enjoying my actual middle age.)

Intellectual Interests Genetically Predetermined? via FuturePundit.

From personality to neuropsychiatric disorders, individual differences in brain function are known to have a strong heritable component. Here we report that between close relatives, a variety of neuropsychiatric disorders covary strongly with intellectual interests. We surveyed an entire class of high-functioning young adults at an elite university for prospective major, familial incidence of neuropsychiatric disorders, and demographic and attitudinal questions. Students aspiring to techn

…

Our brains are paranoid. The feeling illustrated by this comic is, I must unfortunately admit, pretty familiar.

That's funny. My rationality has conquered my paranoia to the point that I don't fear murderers hiding in my house. I fear Cthulhu-ish monstrosities. Such fears have the virtue of being internally consistent, even if they have the vice that they seem inconsistent with our understanding of the physical laws (and thus are extremely improbable). :)

HP:MoR is now not merely "fanfic", but an example of deconstruction-by-example:

So where do you go when all avenues explored with character and theme? You start tearing down the previous work. Good Fanfiction is a model for this. Harry Potter and the Methods of Rationality good example. Responds to the work by telling a new story while analyzing the nature of the old one, in this case by picking apart the nature of Wizard society. Hell, it's what Watchmen did to comics in the first place.

Does anyone know of any studies about the average life expectancy of Native Americans pre-Columbus? Or any information at all better than a post on Yahoo Answers.

Hi! This is basically a question about sloppiness. I've recently noticed that I tend not to check the reports I do as part of my work sufficiently: I recently sent one to a coworker/supervisor, and he criticised it for having too many careless mistakes. I then remembered that the supervisor for my diploma thesis had the same criticism. It may be connected to the overconfidence bias: I noticed that when finishing work, it doesn't occur to me to double-check; I just assume I didn't make any mistakes.

Is there any hack that could help me to consistently remember avo…

In HPMoR chapter 60, at the very end of the chapter Quirrell is about to tell Harry why he thinks he is different but I cannot find the rest of the text in the subsequent context switches. Where does he answer Harry?

You can find it in chapter 63:

I will say this much, Mr. Potter: You are already an Occlumens, and I think you will become a perfect Occlumens before long. Identity does not mean, to such as us, what it means to other people. Anyone we can imagine, we can be; and the true difference about you, Mr. Potter, is that you have an unusually good imagination. A playwright must contain his characters, he must be larger than them in order to enact them within his mind. To an actor or spy or politician, the limit of his own diameter is the limit of who he can pretend to be, the limit of which face he may wear as a mask. But for such as you and I, anyone we can imagine, we can be, in reality and not pretense. While you imagined yourself a child, Mr. Potter, you were a child. Yet there are other existences you could support, larger existences, if you wished. Why are you so free, and so great in your circumference, when other children your age are small and constrained? Why can you imagine and become selves more adult than a mere child of a playwright should be able to compose? That I do not know, and I must not say what I guess. But what you have, Mr. Potter, is freedom.

Why I think that the MWI is belief in belief: buy a lottery ticket and commit suicide if you lose (a version of the quantum suicide/immortality setup), thereby creating an outcome pump for the subset of branches where you survive (the only ones that matter). Thus, if you subscribe to the MWI, this is one of the most rational ways to make money. So, if you need money and don't follow this strategy, you are either irrational or don't really believe what you say you believe (most likely both).

(I'm not claiming that this is a novel idea, just bringing it up for discussion.)... (read more)

That's not many worlds, that's quantum immortality. It's true that the latter depends on the former (or would if there weren't other big-world theories, cf. Tegmark), but one can subscribe to the former and still think the latter is just a form of confusion.
You're correct that with that outcome pump, some copies of you would win the lottery. However, I disagree that you should kill yourself upon noticing that you'd lost. This has been discussed on LW before here.
Seems like more cop-outs instead of LCPWs: the Failure Amplification does not happen in a properly constructed experiment (it is easy to devise a way to die reliably enough, with fewer side effects in case of failure). Even if you can only find a 99.9%-reliable method, you can still accept bets of up to 1000:1. The Quantum Sour Grapes is a math error: the (implicit) expected utility is taken over all branches in the case of a win, instead of only those where you survive, as was pointed out in the comments, though the author refuses to acknowledge it. There are more convenient worlds in some of the comments.
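To make the 1000:1 arithmetic concrete, here is a toy branch-counting simulation. It is a sketch only: it treats "branches" as independent Monte Carlo samples, and every number in it (lottery odds, method reliability) is illustrative, not taken from the thread.

```python
import random

def outcome_pump(n_branches, p_win, kill_reliability, seed=0):
    """Simulate the proposed pump: every branch buys a ticket and the
    losing branches attempt suicide. Returns (survivors, winners)."""
    rng = random.Random(seed)
    survivors = winners = 0
    for _ in range(n_branches):
        if rng.random() < p_win:                 # this branch wins
            survivors += 1
            winners += 1
        elif rng.random() >= kill_reliability:   # suicide attempt fails
            survivors += 1                       # alive, holding a losing ticket
    return survivors, winners

# A 1-in-1000 lottery with a 99.9%-reliable method: the failure rate
# (0.001) equals the win rate, so roughly half the survivors are winners.
# This is the break-even point behind the "bets up to 1000:1" claim.
s, w = outcome_pump(1_000_000, 1e-3, 0.999, seed=1)
print(w / s)  # roughly 0.5
```

With a much longer-odds lottery and the same method, almost all survivors are branches where the suicide merely failed, which is the Failure Amplification worry in quantitative form.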
Maybe if you're a particularly silly average utilitarian.
This doesn't make sense. If I'm copied 5 times (in this Everett branch; nothing about other branches), and one of my copies wins the lottery, I still wouldn't want to kill myself. This doesn't mean that I wouldn't believe my copies existed -- it's just that their existence wouldn't automatically move me to suicide. Why then would I want to kill myself if the copies happen to be located in different Everett branches? What does their location have to do with anything?
I don't follow your example...
Here's an example: let's assume for a second that there's no MWI, there's only one world. Let's assume further that you're copied atom-for-atom 5 times and each copy is placed in a different city. One of your copies is guaranteed to win the lottery. A copy other than you wins. Once you find out you lost, do you kill yourself in order to become the one who won? NO! Killing yourself wouldn't magically transform you into the winning copy; it would just make you dead. So why should the logic be any different when applied to copies in different Everett branches than when applied to copies in different cities of the same Everett branch?
I must be still missing your point. 4/5 of you would be dead, but only the branches where you survive matter. No "magical transportation" required.
Did you not read the sentence where my hypothetical is placed in a single world, no "branches"? Can you for the moment answer the question in the world in which there are no branches? In fact forget the multiple copies altogether, think about a pair of twins. Should one twin kill themselves if the other twin won the lottery, just because 1/2 of them would be dead but "only the twin which survives" matters?
Ah, now I understand your setup. Thank you for simplifying it for me. So the issue here is whether to count multiple copies as one person or as separate ones, and your argument with twins is pretty compelling... as far as it goes. Now consider the following experiment (just going down the LCPW road to isolate the potential belief-in-belief component of the MWI):

The lottery is set up so that you either win big (the odds are small, but finite) or, the rest of the time, you die instantly and painlessly, with very high reliability, to avoid the "live but maimed" cop-out. Would you participate? There is no problem with twins here: no live-but-winless copies ever exist in this scenario.

Same thing in a fantasy-like setting: there are two boxes in front of you; opening one will fulfill your dreams (in the FAI way, no tricks), opening the other will destroy the world. There is no way to tell which one is which. Should you flip a coin and open a box at random? You value your life (and the world) much more highly than merely fulfilling your dreams, so if you don't believe in the MWI, you will not go for it. If you do believe the MWI, the choice is trivial: one regular world before, one happy world after. What would you do?

Again, there are many standard cop-outs: "but I only believe in the MWI with 99% probability, not enough to bet the world on it", etc. These can be removed by suitable tweaking of the odds or the outcomes. The salient feature is that there is no longer a multiple-copies argument.
I think this is where you're losing people. Why isn't it "one regular world before, 999999 horrifying wastelands and 1 happy world after"? (Or, alternately, "one horrifying wasteland with .999999 of the reality fluid and one happy world with .000001 of the reality fluid"?)
I'd need to understand how consciousness works in order to know whether "I" would continue in this sense. Until then I'm playing it cautious, even if the MWI were certain. That's not as easy as you seem to think. If I believe in the MWI with my current estimate of about 85%, and you think you can construct an appropriate scenario for me by merely adjusting the odds or outcomes, then do you think you can construct an appropriate scenario even for someone who only believes in the MWI with 10% probability, or 1%, or 0.01%? What's your estimated probability for the MWI? Plus, I think you overestimate my capacity to figure out what I would do if I didn't care whether anyone discovered me dead. There probably were times in my life when I would have killed myself if I hadn't cared about other people discovering me dead, even without the hope of a lottery-ticket reward.
I agree, certainly 85% is not nearly enough. (1 chance in 7 that I die forever? No, thanks!) I think this is the main reason no one takes quantum immortality seriously enough to set up an experiment: their (probably implicit) disutility of dying is extremely large, enough to outweigh any kind of monetary payoff. Personally, I give the MWI in some form a 50/50 chance (not enough data to argue one way or the other), and a much smaller chance to its literal interpretation of worlds branching out every time a quantum measurement happens, which is what would make quantum immortality feasible (probably 1 in a million, but the error bars are too large to make a bet). Unfortunately, you are apparently the first person who has admitted that doubt in the MWI is the reason behind their rejection of experimental quantum suicide. Most other responses are still belief-in-belief.
To who? The branches in which I end up dead one way or another certainly matter to me. (Which is fortunate, since I don't have any real hope of continuing to live for infinity.)
Why do they matter to you?
I'm supposed to know this? My thought process went:

1. The branch in which I currently live (and all its descendants) matters to me;
2. I assign a very low probability to not dying eventually;
3. Believing 2 does not seem to affect 1 at all.

Why does your life matter to you?
Lol.... no? If you really believe that, there's no need for a lottery ticket. Just kill yourself in every single world where you're not the richest person in the world. Then the only branch where you survive will be the one in which you're the richest person in the world. (Assuming an "every conceivable outcome is physically realised" version of MWI, but then, the lottery-ticket gedankenexperiment assumes that as well.)
Quantum immortality: You kill yourself and die in the vast majority of Everett branches. But you find yourself alive, because you continue to observe only the Everett branches where you survive. Lottery Ticket Win: You kill yourself if you get a losing ticket. By QI, you find yourself alive... with a losing lottery ticket. The branch where you won the lottery diverged from your current branch before you killed yourself. There's no way to transport yourself into that branch. (For the record, I believe that QI is pure BS.)
If the suicide method fails less often than the lottery pays out, then in most surviving branches you end up rich.
If the experience of the surviving copies is what's important to you, just do what Aris Katsaris suggests and call it a day. (I.e., upload yourself to a million sims and wait to see if one of the copies wins. If none of them does, delete everything and start over. If any of them does, delete all the other copies and then kill yourself. HAPPII ENDO da ze~) Just don't complain if everyone else reacts with "what an idiot". ETA: Noticed shminux's response to Aris in the sibling. Continuing the discussion there.
Yeah, but once you average net worth over reality fluid volume, you end up poorer than before.
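The averaging in the parent comment can be spelled out with a back-of-the-envelope expected-value calculation. A minimal sketch, with every number made up for illustration (the lottery odds, the jackpot, and especially the utility assigned to death are all assumptions, not anything stated in the thread):

```python
p_win   = 1e-6     # odds of winning the lottery (illustrative)
ticket  = 1.0      # ticket price
jackpot = 1e5      # net payout on a win
u_death = -1e6     # (negative) utility of dying in a losing branch

# Just buying a ticket: slightly negative expected value.
ev_ticket = p_win * (jackpot - ticket) + (1 - p_win) * (-ticket)

# The "outcome pump": the losing branches still carry nearly all of
# the measure, and each is now weighted at the utility of death.
ev_pump = p_win * (jackpot - ticket) + (1 - p_win) * u_death

# Averaged over the reality fluid, the pump is far worse than the
# already-losing ticket, and both are worse than not playing at all.
print(ev_pump < ev_ticket < 0.0)  # True
```

Conditioning on survival flips the sign, which is exactly the disagreement in this subthread: whether the dead branches belong in the average at all.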
No: if I follow that strategy, it makes it more likely that others will follow it too; so even if I do successfully end up in a world where I won the lottery, it may also be a world where all my loved ones committed suicide.
Note that I said:
When considering this, I thought of another related question. If MWI/Quantum Immortality insists that you not die, would it also insist that you come into existence earlier? If you can't ever die (because that keeps you in more branches), then the earlier you're born, the more branches in which you are alive; therefore MWI/Quantum Immortality indicates that if you exist, the most likely explanation is... (I don't know. I seem to be confused.)

MWI/Quantum Immortality feels like puddle thinking. But I'm not sure I fully understand puddle thinking either, so saying that MWI/Quantum Immortality feels like puddle thinking feels like explaining a black box with a smaller black box inside. Given those thoughts, I think my next step is to ask: "In what ways is MWI/Quantum Immortality like puddle thinking, and in what ways is it not?"

Reference to puddle thinking: http://en.wikipedia.org/wiki/Fine-tuned_Universe#In_fiction_and_popular_culture
I believe that my death has negative utility. (Not just because my family and friends would be upset; also because society has invested a lot of resources in me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.) Therefore, I believe that the outcome (I win the lottery in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds), which is itself worse than (I don't waste money on a lottery ticket in any world). The Least Convenient Possible World, I assume, would be one where I believed my life had negative utility unless I won the lottery, in which case, sure, I'd try quantum suicide. What? No! All of the worlds matter just as much, assuming your utility function is over outcomes, not experiences.
The LCPW is the one where your argument fails while mine works: suppose only the worlds where you live matter to you, so you happily suicide if you lose. So any egoist believing the MWI should use quantum immortality early and often if he/she is rational.
An egoist is generally someone who cares only about their own self-interest; that is distinct from someone whose utility function is over experiences rather than outcomes. A rational agent with a utility function only over experiences would commit quantum suicide, but only if we also assume there's minimal risk of the suicide attempt failing, of the lottery not really being random, and so on. In short, it's an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn't imply that the MWI is a belief-in-belief.
So far, most replies are of the invisible-dragon-in-your-garage type: multiple reasons why looking for the dragon could never work, so one should not even try. This is a classic signature of belief in belief. A mildly rational reply from an MWI adherent would sound like this: "While the MWI-based outcome pump has some issues, the concept is interesting enough to try to refine it and resolve them."
Except that you're the only one who's postulating the dragon, while everyone else is going "Of course dragons don't exist, why'd we look for them? We should look for unicorns, dammit, unicorns! Not fire-breathing lizards!"