Open Thread, March 1-15, 2013

by Jayson_Virissimo · 1 min read · 1st Mar 2013 · 241 comments


Personal Blog

If it's worth saying, but not worth its own post, even in Discussion, it goes here.


Why am I not signed up for cryonics?

Here's my model.

In most futures, everyone is simply dead.

There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.

What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.

I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.

I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.

Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fun... (read more)

Elithrion (8y, +8): So are you saying that P(worse-than-death|revived) and P(better-than-death|revived) are of similar magnitude? I'm having trouble imagining that. In my mind, you are most likely to be revived because the reviver feels some sort of moral obligation towards you, so the future in which this happens should, on the whole, be pretty decent. If it's a future of eternal torture, it seems much less likely that something in it will care enough to revive some cryonics patients when it could, for example, design and make a person optimised for experiencing the maximal possible amount of misery. Or, to put it differently, the very fact that something wants to revive you suggests that that something cares about a very narrow set of objectives, and if it cares about that set of objectives it's likely because they were put there with the aim of achieving a "good" outcome. (As an aside, I'm not very averse to "worse-than-death" outcomes, so my doubts definitely do arise partially from that, but at the same time I think they are reasonable in their own right.)

lukeprog (8y, +2): Yes. Like, maybe the latter probability is only 10 or 100 times greater than the former probability.

CarlShulman (7y, +3): This seems strangely averse to bad outcomes to me. Are you taking into account that the ratio between the goodness of the best possible experiences and the badness of the worst possible experiences (per second, and per year) should be much closer to 1:1 than the ratio of the most intense per-second experiences we observe today, for reasons discussed in this post [http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html]?
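The back-and-forth above is at bottom an expected-value question: how the better/worse revival ratio trades off against how heavily worse-than-death outcomes are weighted. A toy sketch (all numbers and utilities here are purely illustrative assumptions, not estimates anyone in the thread endorses):

```python
# Toy expected-value model of the cryonics decision discussed above.
# Every number below is an illustrative assumption, not a thread estimate.

def ev_sign_up(p_revived, ratio_better_to_worse, u_better, u_worse):
    """Expected utility of signing up, relative to 0 for staying dead.

    ratio_better_to_worse: how much more likely revival into a
    better-than-death future is than into a worse-than-death one.
    """
    p_worse = p_revived / (1 + ratio_better_to_worse)
    p_better = p_revived - p_worse
    return p_better * u_better + p_worse * u_worse

# With symmetric stakes, a 10:1 ratio already favours signing up:
print(ev_sign_up(0.05, 10, u_better=1.0, u_worse=-1.0))
# But if worse-than-death outcomes are weighted 20x as heavily,
# a 10:1 ratio is no longer enough:
print(ev_sign_up(0.05, 10, u_better=1.0, u_worse=-20.0))
```

The point of the sketch is only that the sign of the answer flips depending on the asymmetry of the stakes, which is exactly where the commenters disagree.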
Pablo_Stafforini (6y, +2): Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don't see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.

CarlShulman (6y, +2): For some of the same reasons depressed people take drugs to elevate their mood.

lukeprog (7y, +1): I like that post very much. I'm trying to make such an update, but it's hard to tell how much I should adjust from my intuitive impressions.

hairyfigment (8y, +1): OK, what? When you say "worse-than-death", are you including Friendship is Optimal? What about a variant of Hanson's future where:

* versions of you repeatedly come into existence, do unfulfilling work for a while, and cease to exist
* no version of you contacts any of the others
* none of these future-selves directly contribute to changing this situation, but
* your memories do make it into a mind that can act more freely than most or all of us today, and
* the experiences of people like your other selves influence the values of this mind, and
* the world stops using unhappy versions of you.

(Edited for fatigue.)

lukeprog (7y, +1): I haven't read Friendship is Optimal, because I find it difficult to enjoy reading fiction in general. Not sure how I feel about the described Hansonian future, actually.

Synaptic (6y, +2): I responded to this as a post here: http://lesswrong.com/r/discussion/lw/lrf/can_we_decrease_the_risk_of_worsethandeath/

MugaSofer (7y, +2): I ... don't think it does, actually. Well, the bit about "most possible futures are empty" does put you in conflict with Robin Hanson ("More likely than not, most folks who die today didn't have to die!"), I guess, but the actual thesis seems to fall into the category of Eliezer Yudkowsky's "you've stopped believing that human life, and your own life, is something of value" (after a certain point in history.)

ModusPonies (8y, +1): Whoa. What? I notice that I am confused. Requesting additional information. Most of the time, if I read something like that, I'd assume it was merely false—empty posturing from someone who didn't understand the implications of what they were writing. In this case, though... everything else I've seen you write is coherent and precise. I'm inclined to believe your words literally, in which case either A) I'm missing some sort of context or qualifiers or B) you really ought to see a therapist or something. Do you mean you're not averse to death decades from now? Does that feel different from the possibility of getting hit by a bus next week? (Only tangentially related, but I'm curious: what's your order-of-magnitude probability estimate that cryonics would actually work?)

you really ought to see a therapist or something.

No, I'm sorry, but there are simply many atheists who really aren't that scared of non-existence. We don't seek it out, we do prefer continuation of our lives and its many joys, but dying doesn't scare the hell out of us either.

This, in me at least, has nothing to do with depression or anything that requires therapy. I'm not suicidal in the least, even though I'd be scared of being trapped in an SF-style dystopia that didn't allow me to commit suicide.

[anonymous] (8y, +2): What's that quote that says something to the effect of "I didn't exist for billions of years before I was born, and it didn't bother me one bit"?

tut (8y, +6): "I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it." -- Mark Twain

MugaSofer (7y, -2): The difference being that those are biased, whereas lukeprog would be expected to see through once the true rejection was addressed, which it has been. I assume. I am not any of the participants in this conversation.

lukeprog (8y, +2): Sorry, I just meant that I seem to be less averse to death than other people. I'd be very sad to die and not have the chance to achieve my goals, but I'm not as terrified of death as many people seem to be. I've clarified the original comment.

James_Miller (7y, 0): If there is a high probability of these bad futures happening before you retire, this belief reduces the cost of cryonics to you in terms of the opportunity cost of instead putting money into retirement accounts. In the really bad futures you probably don't experience extra suffering if you sign up for cryonics, because all possible types of human minds get simulated.

A new comet from the Oort cloud, >10 km wide, has been discovered that is doing a flyby of Mars in October of 2014. The current orbit is rather uncertain, but it is probably passing within 100,000 km, and the maximum-likelihood close approach is ~35,000 km. There is a tiny but non-negligible chance this thing could actually hit the red planet, in which case we would get to witness an event on the same order of magnitude as the K-T event that killed off the non-avian dinosaurs! (And lose everything we have on the surface of the planet and in orbit.)

I, for one, hope it hits. That would not be a once in a lifetime opportunity. That would be a ONCE IN THE HISTORY OF HOMINID LIFE opportunity! We would get to observe a large impact on a terrestrial body as it happened and watch the aftermath as it played out for decades!

As is, the most likely situation though is one in which we get to closely sample and observe the comet with everything we have in orbit around Mars. The orbit will be nailed down better in a few months when the comet comes out from the other side of the sun.

And to quote myself towards the end of the last open thread:

I don't know if this has been brought up around here before, b

... (read more)

I saw a mention of that elsewhere, but I didn't realize that the core had a lower bound of 10km. Wow. I really hope it impacts too; we saw some chatter about the need for a space guard with a dinky little thing hitting Chelyabinsk, but imagine the effect of watching a dinosaur-killer hit Mars!

CellBioGuy (8y, +1): For future reference, the JPL small-body database entry on the comet: http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=C%2F2013%20A1;orb=1;cov=0;log=0;cad=1;rad=0#cad Different sources seem to have different orbital calculations; this one indicates a most likely close approach of ~100,000 kilometers, with the uncertainty wide enough to include a close approach of 0 km. If nothing else, we may well get pictures from the surface rovers of the head of a comet literally filling the sky.

Thomas (8y, +2): I am flabbergasted; I have no explanation for this situation. If this comet is really that big and has approximately the stated flyby orbit, how frequent are those? If one every thousand years, there have been 60,000 of those since the K-T event. How come we have had only one collision of this magnitude? Maybe they are less frequent. How lucky are we then to witness one of them right now? Too lucky, I guess. On the other hand, it looks like we have been just too lucky to have had no major collision of that kind relatively recently, if they were quite common. Maybe I am missing something odd. Like an unexpected gravity or other effect, by which an actual collision is much more difficult. Something in line with this [http://www.scholarpedia.org/article/Stability_of_the_solar_system], which makes sense, but only after careful consideration. Maybe a planet like Mars or Earth repels comets somehow? Dodges them somehow? Some weird effect like this [http://en.wikipedia.org/wiki/Magnus_effect]?

NancyLebovitz (8y, +5): I recommend Taleb's The Black Swan. The major premise is that people tend to underestimate the likelihood of weird events. It's not that they can predict any particular weird event; it's about the overall likelihood of weird events with large consequences.

CellBioGuy (8y, +2): Another way of stating it in this circumstance: there are so many different things that we would consider ourselves lucky to see, or that we would notice as unusual, that even if the probability of any one of them is low, the probability that we see something isn't that low. I second the book recommendation, by the way.

Thomas (5y, 0): Flabbergasted no more! There was no collision, of course. I should have known it immediately!

CellBioGuy (8y, 0): If you are randomly shooting a rock through the solar system, "close approach to Mars within 100,000 km" is 870 times as likely as "hitting Mars". That brings a 'once in 100 million years (really roughly guessing based on what I know of Earth's geological history)' event down to the order of 'once in a hundred thousand years', and the proper reference class of things we would be considering ourselves this lucky to see is probably more like 'close approach of a large comet to a terrestrial body' rather than singling out Mars in particular. I don't know enough about distributions of comet orbital energies to consider different likelihoods of comets having parabolic orbits that bring them closer to the center of the solar system versus further away, to compare the odds of things going near the different terrestrial planets with different orbits. The gravity of a planet actually slightly increases the fraction of randomly-shot-past-them objects that hit them over just sweeping out their surface area through space, but for something with a relative velocity of 55 km/s (!) that effect is tiny.
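The two quantitative claims in that comment are easy to check. A quick sanity-check sketch, assuming a Mars mean radius of ~3390 km and Mars escape velocity of ~5.0 km/s (standard values, not figures from the thread):

```python
import math

# Sanity-check of the geometric odds quoted above.
R_MARS_KM = 3390.0          # Mars mean radius (assumed standard value)
CLOSE_APPROACH_KM = 100_000.0
V_INF_KMS = 55.0            # comet's relative velocity, from the comment
V_ESC_KMS = 5.0             # Mars escape velocity (assumed standard value)

# For a randomly aimed rock, probability scales with cross-sectional area,
# so passing within 100,000 km is (d / R)^2 times as likely as hitting.
area_ratio = (CLOSE_APPROACH_KM / R_MARS_KM) ** 2
print(round(area_ratio))  # ~870, matching the figure in the comment

# Gravitational focusing enlarges the effective collision cross-section
# by a factor of 1 + (v_esc / v_inf)^2 -- tiny at 55 km/s, as noted.
focusing = 1 + (V_ESC_KMS / V_INF_KMS) ** 2
print(round(focusing, 4))
```

Both numbers come out as stated: the area ratio is ~870, and the focusing correction is under one percent at these speeds.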
NancyLebovitz (8y, +2): Should we bring Shoemaker-Levy [http://en.wikipedia.org/wiki/Comet_Shoemaker%E2%80%93Levy_9] into this discussion?

Thomas (8y, 0): If so, we are indeed very lucky to observe an event which happens every 100,000 years or so. OTOH, I've concluded that it is in fact less likely for a planet to be hit by a random comet than it is for a big massless balloon of the same size to be hit by the same comet. Why is that? Roughly speaking, if the comet is heading toward some future geometric meeting point, the planet will accelerate it by its own gravity, and the comet will come too early and therefore fly by. It's a very narrow set of circumstances for an actual collision to take place. A bit counterintuitive, but it explains why we have so few actual collisions despite the heavy traffic. Collisions do happen, but less often than random chance would suggest. Gravity mostly protects us.

Zeo Inc is almost certainly shutting down.

Zeo users should assume the worst and take action accordingly:

  1. Update your sleep data and then export all your sleep data from the Zeo website as a CSV (the bar on the right hand side, in tiny grey text)
  2. Upgrade your Zeo with the new firmware if you have not already done so, so it will store unencrypted data which can be accessed without the Zeo website.
  3. Depending on how long you plan to use your Zeo, you may want to buy replacement headbands (~$15 each, I think you can get a year's use out of them). Amazon still stocks the original bedside unit's replacement headbands and the cellphone/mobile unit replacement headbands but who knows how many they still have?
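Once the data is exported, step 1's CSV can be worked with entirely offline. A minimal sketch using only the standard library; the column names here ("Sleep Date", "Total Z") are hypothetical placeholders, so check the header row of your actual Zeo export:

```python
import csv
import io

# Stand-in for the contents of an exported Zeo CSV file.
# Column names are hypothetical -- inspect your own export's header row.
sample_export = """Sleep Date,Total Z,Time in REM
2013-02-27,412,95
2013-02-28,388,101
"""

rows = list(csv.DictReader(io.StringIO(sample_export)))
avg_sleep = sum(int(r["Total Z"]) for r in rows) / len(rows)
print(avg_sleep)  # average minutes of sleep per night in this toy sample
```

With a real export you would open the file with `csv.DictReader(open("zeodata.csv"))` instead of the inline string.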

I'm sad that they're closing down. I've run so many experiments with my Zeo, and there don't seem to be any successor devices on the horizon: all the other sleep devices I've read of are lame accelerometer-based gizmos.

skjonas (8y, +1): I'm sad about this as well. The Zeo has been the only QS thing that I've been able to get my girlfriend to use, and it has increased her understanding of her sleep patterns dramatically. I now look back with a twinge of anger at all the times that someone told me that they track their stages of sleep too, but with their iPhone app and "it was only a dollar." And to be clear, you can only upgrade the firmware on the Zeo bedside unit, right?

gwern (8y, 0): I don't know anything about the mobile unit.

Qiaochu_Yuan (8y, +1): What about aspiring Zeo users? Is it too late to get in on this?

gwern (8y, +2): Depends. If you know that it's shutting down, are willing to handle the data exporting yourself, and also are willing to possibly pay rising costs for a Zeo unit and replacement headbands... I know I don't intend to stop (already bought another 3 replacement headbands on Amazon), but I've already used my Zeo for a long time and seem to be pretty unusual in how much I use it.

Jayson_Virissimo (8y, +1): Thanks for the heads up.

confusionobligation (8y, 0): The firmware is no longer available on their site. I tried to email them, but I got an automated response telling me that customer service is no longer responding to emails and to check the help on their site. Can anyone share the 2.6.3R firmware? Also, Amazon is sold out of the bedside headbands. Bad timing for me - I only have one left.

gwern (8y, 0): I am not sure whether there was not some per-user customization or something, but for what it's worth, here's the copy of my firmware: http://dl.dropbox.com/u/85192141/firmware-v2.6.3R-zeo.img 2 or 3 days after I went around all Paul Revere-style, I was told that Amazon had run out. So I guess they turned out to not have many at all. (I had 3 left over from previously, and bought another 3, so I figure I should be able to get at least 3 more years out of my Zeo.)
[anonymous] (8y, +12):

I wanted to apologize for the post I made on Discussion yesterday. I hope one of the mods deletes it. I should have thought more carefully before posting something controversial like that. I made multiple errors in the process of writing the post. One of the biggest mistakes I made was mentioning the name of a certain organization in particular, in a way that might harm that organization.

In the future, before I post anything, I will ask myself, "Will this post raise or lower the sanity waterline?" The post I made clearly didn't really do much for the former, and could easily have contributed to the latter. For that I am filled with regret.

I have a part-time job, and I will be donating at least $150 of my income to the organization I mentioned and possibly harmed in the previous post I made.

I'm not making this comment for the purpose of gaining back karma; I'm making it because I still want to be taken seriously in this community as a rationalist. I know that this may never happen, now, but if that's the case, I can always just make another account. Less Wrong is amazing, and I like it here.

If you're not making mistakes, you're not taking risks, and that means you're not going anywhere. The key is to make mistakes faster than the competition, so you have more chances to learn and win.

-- John W. Holt

[anonymous] (8y, +3): Agree with the first part but not (the wording of) the second part. If you know beforehand that something would be a mistake, don't be stupid.

Qiaochu_Yuan (8y, +3): But you shouldn't necessarily trust your brain to accurately predict whether things will be mistakes.

ChristianKl (8y, +1): The question is where you cut off. What chance of making a mistake is acceptable?

[anonymous] (8y, +3): JOHN HOLT! *makes touchdown signal*

wedrifid (8y, +5): Based on your handle I assumed you already had another account. I do suggest making another one now. There is no need to take that baggage with you -- leave that kind of shit as anonymous.

[anonymous] (8y, +1): That account has been used regularly or semi-regularly for months, so despite the name it's not exactly a throwaway.

[anonymous] (8y, 0): Yes, I will be making a new account. Good idea. This is my last comment from this one.

Kawoomba (8y, +1): Can we still send you our ... you know ... merchandise?

[anonymous] (8y, 0): Great! I'll explicitly use that heuristic myself from now on (if I remember to).

Viliam_Bur (8y, +1): There could be a plugin for this. Imagine that before sending a post, you have to answer a few questions, such as: "Your certainty that this post will move the sanity waterline in a positive direction". But we are only humans. We would learn very soon to ignore it, and just check the "right" answers automatically. Maybe it would work better if it displayed only randomly, once in a few comments. And then the given comment could be sent to reviewers, who could inflict huge negative karma if they strongly disagree with the estimate. Or perhaps there could be an option to click "I am sure this comment is useful and harmless" when sending a comment. A comment without this option gets +1 karma on upvote and -1 on downvote; a comment with this option gets +2 on upvote and -5 on downvote. This could make people think before posting.

drethelin (8y, +4): I like the idea of a questionnaire that pops up randomly when making a comment, at a rate of maybe 1-10 percent. Possible example questions:

* Do you think this comment is funny?
* Do you think this comment is useful to the person you're responding to?
* Do you think this comment is useful to anyone but the person you're responding to?
* Do you think this comment will have positive karma? How much?
* Would you make this comment to anyone's face?
* etc.

Displaying one or more of these at a rate that makes you think, but not at a rate that would be super annoying, would be fun and provide some neat databases. On the other hand, I'm sure programming it would be a bitch.

[anonymous] (8y, 0): There could be something preventing you from unthinkingly clicking "Yes", akin to the option in LeechBlock whereby you have to copy a code of 32/64/128 random characters before being able to change the settings. (But that might backfire, by discouraging people from posting comments even when they would be unobjectionable.) I would love that.
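Viliam_Bur's asymmetric-karma rule above is concrete enough to sketch directly. A minimal illustration (the function name and vote encoding are my own, not from any actual LessWrong code):

```python
# Sketch of the asymmetric-karma proposal: a comment flagged
# "I am sure this comment is useful and harmless" gains more on an
# upvote but loses much more on a downvote.

def karma_delta(vote, flagged_sure):
    """vote: +1 for an upvote, -1 for a downvote."""
    if flagged_sure:
        return 2 if vote > 0 else -5
    return 1 if vote > 0 else -1

# The same 3-up / 2-down reception nets +1 unflagged but -4 flagged,
# which is the intended incentive to think before claiming certainty.
votes = [+1, +1, -1, +1, -1]
print(sum(karma_delta(v, False) for v in votes))  # 1
print(sum(karma_delta(v, True) for v in votes))   # -4
```

The asymmetry means a flagged comment only pays off if the author expects upvotes to outnumber downvotes by better than 5:2.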

Google Reader is being killed 1 July 2013. Export your OPML and start searching for a new RSS reader...

Emily (8y, 0): I finally just started using RSS feeds and it has improved my workflow dramatically. Now they're breaking my system on me?! Thanks for letting me know...

Tenoke (8y, 0): Do you suggest any particular RSS readers?

Risto_Saarelma (8y, 0): I'm already considering moving to email [http://lesswrong.com/lw/goe/open_thread_february_1528_2013/8igq] and running the whole thing on my home server.

gwern (8y, 0): No. I've seen NewsBlur and Netvibes mentioned, but I've never used them. Some discussion in:

* http://www.ghacks.net/2013/03/14/the-ultimate-google-reader-alternatives-list/
* http://blog.superfeedr.com/state-of-readers/
* http://news.ycombinator.com/item?id=5373538
* http://www.reddit.com/r/technology/comments/1a8ygk/official_google_reader_blog_powering_down_google/

Tenoke (8y, +1): Meh, I guess we have a few months to see people's reports on the alternatives.

[anonymous] (8y, +3): "Don't waste time on researching today's best feed reader. That data will be obsolete in 3 months." -- LifeProTips @ Reddit [http://www.reddit.com/r/LifeProTips/comments/1aanp2/lpt_wait_till_july_1st_to_switch_from_google/]

gwern (8y, 0): At least in the case of NewsBlur we'll have to wait to see people's reports, since they are being hammered by all the Reader refugees.

Jonathan_Graehl (8y, 0): I'm happy with Feedly [http://www.feedly.com] and haven't been asked for money yet (have only used it for 2 days).

shminux (8y, 0): I've imported my feeds into Google Currents, since it can also be used to read regular news, not just feeds, which I do anyway. Trying it out now; hopefully Google will be improving it, if they want the Reader users to stay with Google.

shminux (8y, +1): Update: So far Google Currents sucks for feeds. Totally unintuitive layout and gestures, it does not show new feeds (or I cannot find where it does), and the formatting of several items is so poor that I give up and go to the original site. Switching back to Google Reader until something better comes along.
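Whatever reader wins out, the OPML export mentioned above is just XML, so the subscription list itself is easy to salvage with the standard library. A sketch (the two feeds shown are made-up examples of a typical Reader export):

```python
import xml.etree.ElementTree as ET

# Stand-in for the contents of a Google Reader OPML export.
opml = """<opml version="1.0"><body>
  <outline text="Example blog" type="rss" xmlUrl="http://example.com/atom.xml"/>
  <outline text="Another blog" type="rss" xmlUrl="http://example.org/feed"/>
</body></opml>"""

# Every subscription is an <outline> element carrying an xmlUrl attribute.
root = ET.fromstring(opml)
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
print(len(feeds))  # 2
```

For a real export, replace `ET.fromstring(opml)` with `ET.parse("subscriptions.xml").getroot()`; the resulting URL list can be re-imported into most readers.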

I posted this in the waning days of the last open thread, but I hope no one will mind the slight repeat here.

The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.

A call for advice: I'm looking into cognitive behavioral therapy—specifically, I'm planning to use an online resource or a book to learn CBT methods in hopes of preventing my depression from recurring. It looks like these methods have a good chance of working, although the evidence isn't as strong as for in-person CBT. At this point, I'm trying to decide which resources to learn from. Any recommendations or anecdotes would be appreciated.

torekp (8y, +4): My wife's a psychologist, and depression is one of her specialties. Here are her recommendations: the book Self-Therapy for Your Inner Critic [http://www.amazon.com/Self-Therapy-Your-Inner-Critic-Self-Confidence/dp/0984392718]; the free guided meditations for "The Mindful Way Through Depression" (get some practice before using the "working with difficulty" meditation), streamable or downloadable [http://www.guilford.com/cgi-bin/cartscript.cgi?page=add/segal2/audio.html]; and the associated book [http://search.barnesandnoble.com/Mindful-Way-through-Depression/Mark-G-Williams/e/9781593851286?cm_mmc=GooglePLA-_-TextBook_InStock_Under26_PT100-_-Q000000633-_-9781593851286&cm_mmca2=pla&r=1]. Please let us know how it goes.

FiftyTwo (8y, +2): I've had success with Introducing Cognitive Behavioural Therapy: A Practical Guide [https://www.amazon.co.uk/gp/product/B005DB3J0O/ref=kinw_myk_ro_title].

David Althaus (8y, +1): I recommend Feeling Good [http://www.amazon.com/Feeling-Good-The-Mood-Therapy/dp/0380810336/ref=sr_1_1?ie=UTF8&qid=1362594604&sr=8-1&keywords=feeling+good] by David Burns. It's a very good overview of CBT, covers all types of medication, and was also recommended by Lukeprog IIRC.

coffeespoons (8y, +1): Mind Over Mood [http://www.amazon.co.uk/Mind-Over-Mood-Change-Changing/dp/0898621283] is ace!

beoShaffer (8y, +1): I am also interested in learning more about CBT.

For various reasons, I cannot make open threads anymore, ever again.

gwern (8y, +7): Message acknowledged. We appreciate your good work. And godspeed, Grognor. El psy congroo.

Over the past month, I have started taking melatonin supplements, instigated a new productivity system, implemented significant changes in diet and begun a new fitness routine. February is also a month where I anticipate changes in my mood. I find myself moderately depressed and highly irritable with no situational cause, and I have no idea which of these things, if any, are responsible.

This is not ideal.

I'd been considering breaking my calendar down into two-week blocks, and staging interventions in accordance with this. Then the restless spirit of Pau... (read more)

gwern (8y, +2): So it's a web service that would spit out a random Latin square and then run an ANOVA on the results for you? I don't think I've heard of such a thing. (Most people who would follow the balanced design and understand the results are already able to do it for themselves in R/Stata/SPSS etc.) Statwing.com might have something useful; they seemed to be headed in that direction of 'making statistics easy'.

sixes_and_sevens (8y, +2): I was imagining a site that would look at all the different things you're trying at the moment, look at all the things other people are trying, and give you a macro-schedule for starting them that works towards establishing cyclicality across all users. It could also manage your micro-schedule (prompt you to take a pill, do twenty sit-ups, squirt cold water in your right ear, etc.), ask for metrics, and let users log salient information and observations. Come to think of it, once that infrastructure is already in place, there's no reason you couldn't open it up as a platform for more legitimate and formal trials.

gwern (8y, +4): Mm. So not just scheduling your own interventions, but trying to balance across users too... No, I don't know of anything like that. CureTogether actually got some research published, but I don't think randomization or balancing was involved. (And trying to get nootropics or self-help geeks to collectively do something is like trying to herd deaf cats into pushing wet spaghetti...)
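The "spit out a random Latin square" half of the idea is the easy part: n interventions crossed with n time blocks so that each intervention appears exactly once in every row and column. A minimal sketch (randomly shuffling a cyclic square does not sample uniformly from all Latin squares, but it is adequate for scheduling):

```python
import random

def random_latin_square(treatments):
    """Return an n x n grid where each treatment appears exactly once
    in every row and every column."""
    n = len(treatments)
    # Cyclic base square: cell (i, j) holds symbol (i + j) mod n.
    base = [[(i + j) % n for j in range(n)] for i in range(n)]
    random.shuffle(base)                          # permute rows
    cols = list(range(n))
    random.shuffle(cols)                          # permute columns
    labels = treatments[:]
    random.shuffle(labels)                        # permute symbol labels
    return [[labels[row[c]] for c in cols] for row in base]

square = random_latin_square(["melatonin", "diet", "exercise", "nothing"])
for block, row in enumerate(square, 1):
    print(f"block {block}: {row}")
```

Row, column, and label permutations all preserve the Latin property, so every schedule it emits is balanced; the ANOVA half would then be run on the logged outcomes per cell.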
[anonymous] (8y, +1): When I found myself depressed and irritable on a diet, it seemed to be evidence that I was hungry. Is there any food or drink that you can try consuming to stave off that feeling, while still following the diet? As an example, my diet allowed me to consume unlimited amounts of unprocessed fruit, so if I felt depressed and irritable, I could eat that until I felt better, and not hurt my diet at all.

sixes_and_sevens (8y, 0): I've ruled out hunger/low blood sugar as a simple causal factor. I imagine it's a combination of factors, but I'm annoyed at myself for implementing so many changes at once and not being able to determine efficacy or side-effects as a result.

[anonymous] (8y, 0): If you've ruled out hunger, is there anyone like a spouse, girlfriend, roommate, relative or coworker whom you meet regularly in person? I've found that they can often help you alleviate the symptoms and talk out this kind of problem to determine possible causes. Exception: if they are themselves the cause of the problem, this may not be helpful. This is somewhat trickier over the internet, because we don't know you as well and we can't pick up as easily on emotional cues. People who know you better are more likely to have access to background information to piece things together, and would be able to judge your reactions to proposed ideas better.

sixes_and_sevens (8y, +1): I appreciate your concern, though the point of this post was to solicit discussion of intervention management, not my emotional problems :-)

[anonymous] (8y, 0): Yes, on looking at your original post again, I'm getting somewhat off track; sorry about that. Trying to go back to your original topic: my experience with Quantified Self/lifehacking-style methods is quite limited and appears to have a notable correlative factor, which is social support. All of the lifehacking methods (I can think of two so far) that I used that were accompanied with support from other people currently appear to be working well. The one that I can think of that did not have the support of others didn't. That being said, that isn't much evidence. If this is the case, then I would expect that whether or not the people who assign themselves into self-experimental cohorts get to discuss their plans/implementations with other people in their cohorts would substantially affect the results (unless you specifically had one cohort that allowed for discussion with other cohort members and one cohort that did not).

Douglas_Knight (8y, 0): As you seem to recognize in your reply to Gwern, this probably cannot function as a stand-alone feature, but needs to sit atop a Quantified Self platform. The minimal system is one that just keeps track of your data, while making data entry easier than existing systems. The next step is to figure out what things you're tracking correspond to what things I'm tracking. This is difficult to combine with the flexibility of allowing the tracking of anything. Why haven't you gotten into the Quantified Self thing? At the very least, they probably have better answers to this question.

sixes_and_sevens (8y, 0): Quantified Self seems like one of those things you have to be into, and I'm just not that into it. It seems to me that a lot of the QS types take an almost recreational pleasure in what they're doing. I understand that. I get a similar sort of pleasure from other things, but not this. I'd like the information, but there's only so much effort I'm prepared to spend on getting it.

It seems plausible to me that traditional financial advice assumes that you have traditional goals (e.g. eventually marrying, eventually owning a house, eventually raising a family, and eventually retiring). Suppose you are an aspiring effective altruist and willing to forgo one or more of these. How does that affect how closely your approach to finances should adhere to traditional financial advice?

Viliam_Bur (8y, +2): I would say that at the beginning you have to make a choice -- will you contribute financially or personally? If you want to contribute financially, you simply want to maximize your income, minimize your expenses, and donate the money to effective charities. (You only minimize your expenses to the level where it does not hurt your income. For example, if keeping the high income requires you to have a car and expensive clothes, then the car and clothes are necessary expenses. Also you need to protect your health, including your mental health: sometimes you have to relax to avoid burning out.) Focus on your professional skills and networking. If you want to contribute personally, you need to pay your living expenses, either from donated money or by retiring early (the latter is probably less effective). Focus on social skills and research. The house and family seem unnecessary (at least for the model strawman altruist).

I have been reading up on religious studies (yes, I ignored that generally sound advice never to study anything with the word 'studies' in the name) in order to better understand Chinese religion.

Unexpectedly, I have found the native concepts are useful (perhaps even more useful) outside the realm of religion. That is to say, distinctions like universalist/particularist, conversion/heritage, and concepts like orthodoxy, orthopraxy, reification, etc... are useful for thinking about apparently "non-religious" ideologies (including, to some extent, ... (read more)

2Ritalin8yAny bibliography you would like to recommend? Also, would you care to expand on how precisely you find it useful?
-1ChristianKl8yHow do you know that it's useful? What evidence do you have to support that belief in addition to feeling that it's useful?

So apparently I should be somewhat concerned about dying by poisoning. Any simple tips for avoiding this? It looks like the biggest killers are painkillers and heavy recreational drugs, neither of which I take, so I might be safe.

1beoShaffer8yPut your poisson control center on speeddial?
9gwern8yThey can't do anything but advise you to lower your lambda!

I finished Coursera "Data Analysis" last night. (It started back in January.)

It's basically "applied statistics/some machine learning in R": we get a quick tour of data cleanup and munging, basic stats, material on working with linear & logistic models, use of common visualization and clustering approaches, prediction with linear regression and trees and random forests, then uses of simulation such as bootstrapping.

There's a lot of material to cover, and while there's plenty of worked out examples in the lectures, I don't see anyon... (read more)
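The bootstrapping mentioned above is easy to sketch. Here is a minimal percentile-bootstrap confidence interval in Python (the course itself works in R; the data below is made up purely for illustration):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a statistic:
    resample the data with replacement many times, compute the
    statistic on each resample, and take the middle (1 - alpha)
    of the resulting distribution."""
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_resamples)
    )
    lo = estimates[int(alpha / 2 * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

data = [2.1, 2.5, 2.2, 3.0, 2.8, 2.6, 2.4, 2.9]
print(bootstrap_ci(data))
```

The appeal of the method is that it needs no distributional assumptions: the resampling distribution stands in for the unknown sampling distribution of the statistic.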

This comment discusses information hazards, but not in much detail.

"Don't link to possible information hazards on Less Wrong without clear warning signs."
— Eliezer, in the previous open thread.

"Information hazard" is a Nick Bostrom coinage. The previous discussion of this seems to have focused on what Bostrom calls "psychological reaction hazard" — information that will make (at least some) people unhappy by thinking about it. Going through Bostrom's paper on the subject, I wonder if these other sorts of information hazards sh... (read more)

Another thing that seems to fit this pattern that I have seen elsewhere is a Trigger Warning, which is used before people discuss topics like rape, discrimination, etc., which can remind people who have experienced them of the event, causing additional trauma.

3ModusPonies8yHas anyone here ever decided not to read something because it had a trigger warning? I can't imagine doing so myself, but that may be the typical mind fallacy. EDIT: People do use the warnings. Good to know.
6TheOtherDave8yI have chosen not to consume media (including but not limited to text) because of an explicit trigger warning. Not often, though; most trigger warnings relate to topics I don't have trauma about. More often, I have chosen to defer consuming media because of an explicit trigger warning, to a time and place when/where emotional reactions are more appropriate. I have consumed media in the absence of such warnings that, had such a warning been present, I would have likely chosen to defer. In some cases this has had consequences I would have preferred to avoid.
5tut8yI haven't, but I think that where trigger warnings are appropriate is in things that hurt a few people disproportionately. If something hurts everyone that reads it you shouldn't write it at all, and if it hurts no one more than it is worth it isn't a case for trigger warnings. But if it is something that needs to be said to many people, and there is a significant group (perhaps those that have had a certain experience) who would suffer a lot from reading it, then you put a trigger warning that would be recognized by that group at the top. TLDR If most people never care about trigger warnings, then they might work as intended.
-4Kawoomba8yTrigger warnings are stupid in general, I think they do more harm than good. Even people who fear being negatively affected will mostly read the content, if only because forbidden fruit are the sweetest and because they are curious. The trigger warning will then already have put them in a frame of mind in which they expect a bad emotional impact of some sort - clearly predisposing them to react much worse than if there had been no trigger warning in the first place. I concede that some people may in fact heed trigger warnings and not read the content, but an overall utility calculation would probably favor no trigger warnings at all.
0[anonymous]8yProbably, people for whom that is true (while constituting probably the majority of regular Internet users) are not the same people as those for whom trigger warnings are written. See e.g. this discussion [http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/] about the relationship between the openness personality trait and the memetic analogue of parasite load.
2erratio8yI have chosen not to Google something that I was warned would involve seeing particularly horrific images. I imagine that if said topic was put in blog post form with a trigger warning up the top, I would probably choose not to read it. EDIT: It's probably worth adding that I adopted this policy after discovering the hard way that there are things out there I would really prefer not to see/hear about.
2torekp8yI've decided not to listen to some radio segments because of such warnings. Similar principle.
1Qiaochu_Yuan8yHave you had an experience that might cause you to be triggered by the kind of thing that gets trigger warnings?
0[anonymous]8yI haven't, but I have never experienced a serious trauma that I don't want to be reminded to me, so I'm not the kind of person that people who write trigger warnings are thinking about.
0Decius8yI know a person who chose not to read something (MAX Punisher #1) based on my warning of explicit sexual violence. Anecdotal and incomplete, but most of an example case...
3fubarobfusco8yAgreed — Bostrom's classification "psychological reaction hazard" seems like it should include "trigger" as a subset — both the original sense of "PTSD trigger" and the more general sense that seems popular today, which might be expanded as "information that will remind you of something that it hurts to be reminded of."
8Alejandro18yAs for distraction hazards, I have often seen links to TvTropes been posted with a warning sign, both here and in other sites. (Sometimes a plain "Warning: TvTropes link", sometimes a more teasing "Warning: do not click link unless you have hours to spare today".)
2David_Gerard8yOr "Warning: Daily Mail" (or other sites working on the click-troll business model): linking to a site your readers may object to feeding even with a click. Knowledge hazard, in that even when such sites are more right than their usual level they tend to be wrong.
1[anonymous]8yI wish links to Cracked.com also had a similar warning. (Well, now that I have LeechBlock installed that's no longer so much of an issue, but still.)
7shminux8yWhy stop there? Employment hazard (NSFW), Copyright hazard (link to torrent, sharing site or a paper copied from behind a paywall), Relationship hazard (picture of a gorgeous guy/girl), dieting hazard (discussion of what goes well with bacon)...
3fubarobfusco8yWell, the ones I mentioned are drawn from Bostrom's paper (although they aren't all of his categories). Eliezer seemed to be specifically discouraging a class of psychological reaction hazards while using the more general term "information hazard" to do it; I thought to inquire into what folks thought of other classes of information hazard.
[-][anonymous]8y 4

So you guys remember soylent? I was thinking I could get similar benefits blending simple foods and adding a good multivitamin to fill in any gaps.

So I've worked on it on and off for a couple of days, and here is a shot at what a whole food soylent might contain:

http://nutritiondata.self.com/facts/recipe/2786310/2

So um if anybody wants to confirm or critique this, that would be cool

1Viliam_Bur8yI like this approach more, because... I would be more likely to try that at home. Most of the items are easy to buy anywhere. I would have the most inconvenience getting the following: Body fortress whey protein, Jamba Juice beverage, source of life liquid gold. Could they be replaced with something more generic? Also, eating the raw egg feels like a bad idea. Without these ingredients, I would be very likely to try it now.
1[anonymous]8yPhone isn't letting me press edit- I'll probably cook the egg. Don't want the raw whites to bind to the biotin
0[anonymous]8yI had body fortress at home, and jamba juice was on the website. Just use some kind of wheatgrass and whey protein. Doesn't have to be source of life either as long as it's high quality. I've seen ortho core and orange triad recommended on bodybuilding forums. The whole recipe is suggestions anyway. I also see no reason not to, say, use kale and raspberries instead of spinach and blueberries; maybe that will help if I get bored with the taste. I hope you keep me posted if you try this.
1[anonymous]8yI was taking a friend's word on how amazingly beneficial wheatgrass juice is, until he claimed I could get everything I needed from wheatgrass indefinitely, which seemed outright crazy. So I researched it myself and I didn't find compelling evidence it's any more beneficial than normal vegetables. I have some in my freezer so I'm going to use it but unless you have a cheap source I don't think it's worth it, given that it tends to be expensive and taste like lawn clippings. This is embarrassing.
0Adele_L8yApproximately how much does this cost per day? How does it taste?
1[anonymous]8yI'll let you know in a little bit by editing this to answer your question, because I haven't tried it yet
0[anonymous]8yI'll make sure to let you know when I try it in a few days
[-][anonymous]8y 4

I've just noticed that hovering the mouse pointer over a post's or comment's score now displays a balloon pop-up with information how large percentage of votes was positive. New feature or am I just really bad at noticing black stuff appearing suddenly on my screen?

Anyway, it's pretty nice. You can, for example, upvote a comment from 0 to 1, notice that the positive vote ratio changes only by a few percent and suddenly realize that there's a war going on in there.

2Qiaochu_Yuan8yNew feature.
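The inference described above can be made concrete. Given a net score n = up − down and a positive fraction p = up/(up + down), the totals follow from total = n/(2p − 1) whenever p ≠ 50%. A quick sketch, assuming (my assumption, not documented behaviour) that the tooltip percentage counts only up- and downvotes:

```python
def estimate_votes(net_score, pct_positive):
    """Back out approximate (upvotes, downvotes) from a comment's net
    score and its 'X% positive' tooltip. Undefined at exactly 50%."""
    p = pct_positive / 100.0
    total = net_score / (2 * p - 1)  # net = up - down, p = up / (up + down)
    return round(p * total), round((1 - p) * total)

# A comment sitting at +1 but only 52% positive hides a vote war:
print(estimate_votes(1, 52))  # (13, 12)
```

So a small shift in the percentage after a single vote really does reveal a large number of hidden votes.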

Does anyone know which of the books on the academic side of CFAR's recommended reading list are likely to be instrumentally useful to someone who's been around here a couple years and has read most of the Sequences? It seems likely that there's some useful material in there, but I'd rather avoid reviewing a bunch of stuff.

Gamification of productivity: https://habitrpg.com/splash.html

I haven't signed up yet because I'm still assessing whether the overhead of filling it out is going to be too much of a trivial inconvenience, but thought some others might be interested. From poking around, it looks like it has a lot of potential but is still a little raw. It has the core game elements firmly in place but lacks the public status/accountability elements of good games (through achievements/badges) and Fitocracy (through community/public accountability).

UPDATE: signed up, will report back next month

1ModusPonies8yI've been using it for something like a week and am finding it moderately useful. Its two big advantages are that it hijacks my pathological desire to watch my numbers go up, and the near-complete lack of customization. (When using a calendar, I have to think of when the task is due. When using beeminder, I have to think about how frequently I'll be doing the task. With this, for any possible task, there are no fiddly bits to get in the way of just shutting up and putting it in the list.) The drawbacks are the weak enforcement and the near-complete lack of customization.
0FiftyTwo8yI've been looking for something like this for a while after success with fitocracy. (I tried to make one myself, but failed due to lack of relevant skills and interest). Will try it for a week and report back.

Math and reading gaps between boys and girls

However, even in countries with high gender equality, sex differences in math and reading scores persisted in the 75 nations examined by a University of Missouri and University of Leeds study. Girls consistently scored higher in reading, while boys got higher scores in math, but these gaps are linked and vary with overall social and economic conditions of the nation.

2[anonymous]8yLink to original paper [http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0057988]

Saving the world though ECONOMICS

In a world of magic and fantasy, there exist two worlds: the Human World and the Demon World of fantasy creatures. Fifteen years ago, the "War of the Southern Kingdoms" broke out between both sides, each intending to conquer the other. Both sides were locked in a stalemate, until a young male human decides to do something about it. Known as the Hero, he is a skilled and powerful warrior who has traveled to the Demon World to end their evil by killing their leader, the Demon Queen.

But what surprised the Hero when he s

... (read more)
0gwern8yI found the premise really cool, but I'm still waiting for the season to finish and the anime bloggers sum up whether it managed to deliver a good plot arc or not. (It may turn out to be one of those series where you're better off just reading the novels or whatever.)

Link: This Story Stinks: article on a study showing that readers' perception of a blog post is changed when they read comments. In particular, any comments involving ad hominems or being generally rude polarize people's views. Full paper link.

I've been trying out the brain-training software from Posit science. I've definitely gotten better at some of their training material (tracking objects in a crowd of identical objects and seeing briefly shown motion), but I'm not sure whether it's improving my life.

Have any of you tried Posit's BrainHQ? If so, how has it worked out for you?

The training exercises look like they're only available as expensive software, but if you do their free exercises, they'll offer a $10/month option.

I found out about Posit from this video-- Merzenich clearly has somethi... (read more)

5gwern8yBrain training doesn't usually transfer. The Posit studies haven't been much better than any others.
2John_Maxwell8yEven working memory training?
1gwern8yLooks like it.
0Elithrion8yOkay, I played most of the free exercises, and apparently I'm like the ultimate boss at spotting different birds (aka "Brain speed - Hawk eye"), never making a single mistake at even the highest available speed, and merely mediocre/slightly above average at other things. I also noticed while playing the object tracking one that what allowed me to do better is I came up with new "algorithms" for tracking things. First time I did it, I tracked up to three objects easily, but then failed miserably at more. After practice, I learned to imagine lines between the objects, which let me track four correctly most of the time, and five occasionally. Which, setting aside me not be that good at this one, seems like a case of other-optimising. I really doubt learning to imagine lines between objects generalises well. So, from personal experience, I'm sceptical it's useful, but at the same time, listening to the video in the background (which may have reduced my performance on some of these), it does sound like there's some research to support this.

I'm having a motivation block that I'm not sure how to get around. Basically whenever I think about performing an intellectual activity, I have a sudden negative reaction that I'm incapable of succeeding. This heavily lowers my expectation that doing these activities will pay off, most destructively so the intellectual activity of figuring out why I have these negative reactions.

In particular, I worry about my memory. I feel like it's slipping from what it used to be, and I'm only 24. It's like, if only I could keep the details of the memory tricks in ... (read more)

3Qiaochu_Yuan8ySpecifically regarding memory, things don't need to be in your head for you to remember them. Start writing stuff down. All the stuff. Doesn't matter where. Anywhere is better than nowhere. I recommend Workflowy [https://workflowy.com/].
2Viliam_Bur8yYou are not specific enough about the memory. If you start forgetting your own name or something like this, you should visit a doctor. But if you only forget some details from what you learned at school, that means that you already have learned many things; so many that your day simply is not long enough to review all of them (and you also have to focus on many other different things). You have to develop the art of note taking. The more you have to know, the more critical this skill becomes. It is an illusion to try keeping everything in your head just because that strategy worked when you only knew a little. The difficulty of succeeding may mean that you have already picked most of the low-hanging fruit. Just like in a computer game, the higher levels get more difficult. The difficulty does not mean that you are less powerful; it means that you are powerful enough to work on the more difficult tasks. Also, some tasks require time and discipline; you simply cannot master them at your first attempt. I think you have to apply two kinds of fixes: psychological and organizational. Don't ignore either of them. It is important to make yourself feel better. And it is also important to use better tools. Without better tools your success is limited. But your mind remains the most important of your tools.
1CoffeeStain8yMany thanks. My memory issue certainly isn't any sort of disorder, and indeed the sort of success I'd like to have with it are of a high level. There has been a decline in the last few years of my (formerly exceptional) abilities here, and I need to find ways to increase my attention to it as a graspable and controllable challenge/problem. Generally my ability to deal with attention, focus, and memory issues correlates to my day-to-day mood and self-confidence. I've found a coach through the community here to help me find ways to combat these slightly more fundamental issues. It is good, though, to see the wide variety of talk here about improving focus, overcoming "Ugh fields," and the like. Fundamentally, my issue is one of keeping a particular skill in practice, and so I appreciate your practical suggestions. University offers an environment that more constantly practices skills such as learning, remembering, and new-paradigm thinking. My work environment offered similar challenges for a year or so, but I've since gained an expertise that is more valuable to use than to grow. Today I gave a presentation to a group of 50 software developers in my company, and I was pleasantly surprised at my abilities. Apparently all of my on-the-fly speaking skills (which I had presumed dead since school) were just latent, if out of practice until the adrenaline kicked it back online. This was in no small part due, I suppose, than some mental tricks I've learned here into convincing myself of my future success, based on previous successes. Just typing for my own benefit now. Thank you very much for your advice!
2Viliam_Bur8yGlad to be useful. In similar situations I often don't know how much the advice I would have given to myself also applies to other people [http://wiki.lesswrong.com/wiki/Other-optimizing]. For me, the greatest memory-related shock was about 1 year after finishing university. I found my old paper with notes for the final exam, and I realized I didn't understand half of the questions. Not only was I unable to answer them, but I had problem finding any related association. For the whole year in my job I was doing something completely different, and I forgot many things without even being aware that it happened. (The problem is, despite having studied computer science and working as a programmer, I never use 95-99% of what I learned at school. I know a lot of theory, I should be able to invent a new programming language and write a compiler with some basic optimization; but in real life I mostly do web interfaces for databases, over and over again.) Now I am sorry I didn't make better notes at university. But at the time, I was so proud that I understand everything. I didn't have experience with what happens when you simply never think about a topic for years. If you are 24, this may be already happening or going to happen to you, too. A few years forward, my programming career was progressing: I wrote code for two years in Java, then seven years in something else. Then I returned to Java and was like: oh, here is the forgetting again! This time I was lucky, because I simply downloaded the official documentation [http://docs.oracle.com/javase/specs/jls/se7/html/index.html], read it from the beginning to the end, and most forgotten memories returned quickly. (I didn't have the note-making skill yet, but I already had the habit of always looking at the authoritative documentation first.) But then I realized that "learning to forget" is a stupid strategy when it comes to really useful things, so I started to make notes. 
(First I spent a lot of time trying to find a good
1[anonymous]8yReassure yourself when you flinch and celebrate even the minor successes.

Apparently conscientiousness correlates strongly with a lot of positive outcomes. But unfortunately I seem to be very low on it.* Is there anything I can do to train it?

*Standard disclaimers about self assessment apply.

1beoShaffer8yYou can get actual big five tests online (see the latest LW survey for an example). The big 5 tend to be pretty stable, but putting your self in a social group that has the trait you want is relatively effective. Also, there is a whole lot of YMMV on which one(s) to use, but organizational/productivity tools like Getting Things Done can allow you to act in usefully conscientious ways without changing your personality per se.
0Barry_Cotter8yYeah, me too. I found getting a relatively structured job with standards so low that my (minimal) natural levels of professionalism exceeded their requirements successful. Conscientiousness is really, really difficult to train, but you can move further from your current base by changing the people you hang out with or work with. Industriousness, OTOH, is trainable. Last comment I saw about this, good paper linked. [http://lesswrong.com/lw/cs7/reaching_young_mathcompsci_talent/6qne] You can do better but having low conscientiousness still blows.
0beoShaffer8yLink missing.
1Barry_Cotter8yNo longer true. Cheers for the heads up.

So, I notice some of the top contributors have the "Overview" page that appears when you click on their name display their LW wiki user page instead of the standard recent comments/posts summary (gwern, for example). Is that only for super-awesome people or is there some way to enable it that I failed to find?

[edit:] Okay, apparently patience is the key. It started working for me somewhere between 24 and 48 hours after I made the wiki page for my username.

2wedrifid8yI find this feature really damn annoying. I don't want to see people's wiki profile. If I click on the name it is because I want to see the posts and comments. It would be great if this 'feature' could be disabled.
1Elithrion8yAw, but I wrote something relevant on mine! (Although most people don't seem to, admittedly.) I guess it'd be ideal if there were an option to enable/disable it for yourself and also to enable/disable skipping that page and going to comments when viewing.
1[anonymous]8yMine used to say that I was the same username on LW-wiki as on LW-Main, but I cleared it because it became redundant with this feature. Unfortunately I don't have rights to delete pages on the wiki, which is also mildly annoying for me if I want to look at my own comments.
2[anonymous]8yMake an userpage with the same name on the wiki, for example User:Gwern [http://wiki.lesswrong.com/wiki/User:Gwern].
2Elithrion8yI did [http://wiki.lesswrong.com/wiki/User:Elithrion]! That was my first guess. Does it take some time to update or something? (It's been ~20h)

It occurs to me that there is a roadblock for an AI to go foom; namely that it first has to solve the same "keep my goals constant while rewriting myself" problem that MIRI is trying to solve.

Otherwise the situation would be analogous to Gandhi being offered a pill that has a chance of making him into anti-Gandhi, and declining it.

If the superhuman - but not yet foomed! - AI is not yet orders of magnitude smarter than a hoo-mon, it may be a while before it is willing to power-up / go foom, since it would not want to jeopardize its utility function along the way.

Just because it can foom does not imply it'll want to foom (because of the above).

1drethelin8yThis is interesting, though I think it's less relevant for an entity made out of readable code. In the pill situation, if Gandhi fully understood both his own biochemistry and the pill, all chance would be removed from the equation.
0Kawoomba8yedit: More relevant reply: A human researcher would see all of the AI's code and the "pill" (the proposed change), yet even without that element of "chance" it is not yet a solved problem what the change would end up doing. If the first human-programmed foom-able AI is not yet orders of magnitude smarter than a human - and it's doubtful it would be, given that it's still human-designed, then the AI would have no advantage in understanding its own code that the human researcher wouldn't have. If the human researcher cannot yet solve keeping the utility function steady under modifications, why should the similar-magnitude-of-intelligence AI (both have full access to the code-base)? Just remember that it's the not-yet-foomed AI that has to deal with these issues, before it can go weeeeeeeeeeeeeeeeKILLHUMANS (foom).

I've just moved to the Bay Area, and, as I'm unsubscribing from all my DC-area theatre/lecture/fun event listservs, I am sad I don't yet know what to replace them with!

What mailing lists will tell me about theatre, lectures, book clubs, social dance, costuming, etc in Berkeley and environs?

Does anyone know if there any negative effects of drinking red bull or similar energy drinks regularly?

I typically use tea (caffeine) as my stimulant of choice on a day to day basis, but the effects aren't that large. During large Magic: the Gathering tournaments, I typically drink a red bull or two (depending on how deep into the tournament I go) in order to stay energetic and focused - usually pretty important/helpful since working on around 4 hours of sleep is the norm for these things.

Red bull works so well that I'm considering promoting it to semi-daily ... (read more)

[-][anonymous]8y 2

What is the purpose of the monthly quotes thread? (To post quotes, obviously.) But it seems to me that a lot of the time, it's just an excuse for applause lights.

2Qiaochu_Yuan8yBest case, someone finds a quote that expresses a rationality idea that I agree with but couldn't articulate as eloquently as the quote. This is particularly nice when it comes from an unexpected source; when I see good rationality coming from places I didn't expect, it's evidence that the corresponding ideas are good ideas rather than just, say, ideas popular on LW.
2TimS8yPrevious discussion of this issue [http://lesswrong.com/lw/g66/open_thread_january_115_2013/86pz]

How can I instantly know which articles I have already read on LW (or elsewhere)?

Well, if I have a camera on my computer, it could track my eyes over the displayed article and make some time-based guesses about what has actually been read by me. Then it should be displayed with a yellowish background next time.

Just a suggestion.

P.S.

Or at least, there should be an I HAVE READ IT! button somewhere, with a personal mark of how good it was. Independent of the up/down vote thumbs.

0FiftyTwo8yPresumably if you have browsing history stored on your computer you could have an indicator if a web address had been accessed before? (Presumably using the same function that makes links blue/purple.)
2Thomas8yI have several computers, as most people do. The user should trigger this history, not the computer.
4Qiaochu_Yuan8yChrome Sync will sync your history across devices. I am skeptical that most people have several computers.

I think it'd be nice to have a (probably monthly) "Ideas Feedback Thread", where one would be able to post even silly and dubious ideas for critique without fear of karma loss. Rules could be that you upvote top level idea comments if they sound interesting (even if wrong), and downvote only if you're really sure that it's very easy to find out they're bad (e.g. covered in core sequences). Could also be used for getting feedback on draft posts and whatnot.

The plan being that questionable ideas are put into their own thread for feedback, instead o... (read more)

5Qiaochu_Yuan8yI think open threads are in practice already this. Excessively encouraging such things could breed crackpots.
0Elithrion8yNot that I have noticed. Open Threads seem to primarily be "here's a cool thing I'd like to let you know about". If I want to post something like "The 'you are cloned and play prisoner's dilemma against yourself' example against CDT is actually pretty bad. Solving it doesn't require UDT/TDT so much as self-modification, with which even CDT would be able to easily solve it." (with a few more lines of explanation), for example, my model of Open thread predicts that if I'm wrong, I'll be downvoted a few times, and may or may not get good feedback. Also that Open Thread is meant for things that are more of interest to everyone, rather than being fairly specific. Which is why I'm not posting anything like that, even though I'm 80% sure that particular example is correct and may be of interest to at least some people. In what way? I doubt existing visitors to Less Wrong would be significantly more likely to generate crackpot ideas because of the existence of a thread, and I doubt even more that more crackpots would come to Less Wrong to participate in one thread. It may, admittedly, reduce conformity if people find unexpected support for non-mainstream ideas, however I'm not sure that most would consider that a bad thing.
4Qiaochu_Yuan8yI think downvotes would depend on how you present your idea. If you present your idea as if you're already convinced you're right, and you're not, I think that would lead to downvotes. But if you preface your idea with "hey, here's something I thought of, dunno if it works, would appreciate feedback," I think that would be fine. What people respond negatively to, I think, is not wrongness so much as arrogant wrongness. (Or at least that appears to be what I respond negatively to.) My model of the median LessWronger is closer to a crackpot than yours, maybe. Not that I think this is uniformly a bad thing; I have a vague suspicion that the brains of crackpots and the brains of curious, successful thinkers are probably pretty similar (e.g. because of stuff like this post [http://lesswrong.com/lw/j8/the_crackpot_offer/]). But it's easy to read the Sequences and think "man, I totally understand decision theory and also quantum mechanics now, I'm going to go off and have a bunch of ideas about them" and to be honest I don't want to encourage this.
1John_Maxwell8yI like this proposal. In the past, people (including me) have complained that LW doesn't get enough posts on topics where there's likely to be a lot of controversy or high variance in an item's score, 'cause people don't like getting downvoted more than they like getting upvoted.

I have a small site feature question! What are those save buttons and what do they do, if anything? (They seem to not do what I think they should do.)

2John_Maxwell8yLooks to me like you can view your saved stuff at http://lesswrong.com/saved/
1jooyous8yOhh! Awesome. Yeah, that's what I was looking for! I never expected to find that link where it turned out to be. =/ Thank you!

Recent experiences have suggested to me that there is a positive correlation between rationality and prosopagnosia. One hypothesis is that dealing with prosopagnosia requires using Bayes to recognize people, so it naturally provides a training ground for Bayesian reasoning. But I'm curious about other possible hypotheses as well as additional anecdotal evidence for or against this conclusion.

1NancyLebovitz8yWhat were the recent experiences?
1Qiaochu_Yuan8yI learned that a surprising number of people involved with CFAR / MIRI have prosopagnosia. (Well, either that or I'm miscalibrated about the prevalence of prosopagnosia.)
3beoShaffer8yHow prevalent do you think it is?
4Qiaochu_Yuan8yI know 4 (I think?) people with prosopagnosia and maybe 800 people total, so my first guess is 0.5%. Wikipedia says 2.5% and the internet says it's difficult to determine the true prevalence because many people don't realize they have it (generalizing from one example [http://lesswrong.com/lw/dr/generalizing_from_one_example/], I assume). The observed prevalence in CFAR / MIRI is something like 25%? So another plausible hypothesis is that rationalists are unusually good at diagnosing their own prosopagnosia and the actual base rate is higher than one would expect based on self-reports.
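For what it's worth, the gap between 4-in-800 and a 2.5% base rate is too large to attribute to sampling noise alone, as a quick tail-probability check shows. This is only a rough sketch in Python (the function name is mine), and it ignores the obvious selection effects in whom one happens to know:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Chance of noticing 4 or fewer prosopagnosics among ~800 acquaintances
# if the true base rate were Wikipedia's 2.5% (expected count: 20):
tail = binom_cdf(4, 800, 0.025)
print(f"P(X <= 4) = {tail:.1e}")  # a very small tail probability
```

So either the 2.5% figure is off, the sample is far from random, or (as suggested above) self-diagnosis rates differ a lot between populations.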
0beoShaffer8yThat is a big difference.
1erratio8yTheory off the top of my head: The causation is in the wrong direction. People who are rational are far more likely to be very systems-oriented, have limited social experiences as children (by having different interests and/or being too dang smart), be highly introverted, and other factors that correlate with being around other people a lot less than your typical person. There's nothing wrong with our hardware per se, it's just that we missed out on critical training data during the learning period. Anecdotal: I have mild prosopagnosia. I have a lot of trouble recognising people outside their expected context, I make heavy use of non-facial cues. I'm pretty good at putting specific names to specific faces on demand when it feels important enough, although see prev point about expected context. I don't feel like I use anything resembling Bayesian reasoning, I feel like I have the same sense of recognition that I imagine most people have, it's just less dependent on seeing their face and more on other traits (most typically voice and manner of movement).

Has anyone indexed the set of Five-Second Skill posts on Less Wrong? E.g. Get Curious, the Algorithm for Beating Procrastination, Value of Information etc.

I've been working on a little project compiling Touhou music statistics. One major database may be unavailable to me from anywhere but Toranoana, and the total cost of reshipping will be ~$25 and take several weeks to get to me. This would be annoying, expensive, and slow.

In case my other strategies fail, are there any LWers in Japan who either owe me a favor or are willing to do me a favor in buying a CD off Tora and sending me the spreadsheets etc on it? (I'd be happy to cover the purchase cost with Paypal or a donation somewhere or something.)

Does anyone have sources on active steps that can be taken to improve gender diversity in organisations?

There is a lot of writing on the subject, but I'm finding it difficult to find sources that compare the effectiveness of different measures, with figures showing change, controlling for variables etc.

0Viliam_Bur8yI would like to see the results too, but I doubt they exist (beyond the obvious: if you want to have 50% male and 50% female employees, make an internal rule to hire 50% men and 50% women). Beyond evidence... my heuristic would be to start the organization with gender diversity. It should be easier to find e.g. 3 men and 3 women to start an organization, than to have an organization of 100 men and later think about how to make it more friendly for women. EDIT: Also, you should not have a bottom line already written that a 50:50 ratio is an improvement. People do have different preferences. A ratio other than 50:50 might reflect the true level of interest in the base population.
1wedrifid8yTo be precise: Hire in the direction of 50% men and 50% women. Depending on retention rates this may need to be skewed in either direction.
0Qiaochu_Yuan8yIt's unclear to me that much can be said about this subject across all organizations. Do you have a particular organization in mind?

Quick clarification of Eliezer's Mixed Reference, intended for me from twelve hours ago:

'External reality' is assumed to mean the stuff that doesn't change when you change your mind about it. This is a pretty good fit to what people mean when they say something like "exists" and didn't preface it with "cogito ergo." It's what can be meaningfully talked about if the minds talking are close enough that "change your mind" is close to "change which mind."

External reality can be logical, because the trillionth digit of ...

Can anyone tell me the name of this subject or direct me to information on it:

Basically, I'm wondering if anyone has studied recent human evolution - the influence of our own civilized lifestyle on human traits. For example: For birth control pills to be effective, you have to take one every day. Responsible people succeed at this. Irresponsible people may not. Therefore, if the types of contraceptives that one can forget to use are popular enough methods of birth control, the irresponsible people might outnumber responsible people in a very short peri...

7NancyLebovitz8yI have a notion that driving selects for having prudence and/or fast reflexes.
0Curiouskid8yIt's also one of the leading killers of young people, so it probably is one of the strongest selection pressures, though I'm not sure how strong.
0NancyLebovitz8yYes, that's why I was thinking about it. I'm not sure what other selective pressures are in play on people before they're finished reproducing.
7Kaj_Sotala8yThe 10,000 Year Explosion [http://lesswrong.com/lw/28k/the_psychological_diversity_of_mankind/] discusses the effects that civilization has had on human evolution in the last 10,000 years. (There's also this QA with its authors [http://lesswrong.com/lw/28t/qa_with_harpending_and_cochran/].) Not sure whether you'd count that as "recent".
3Jayson_Virissimo8yGregory Clark [http://www.econ.ucdavis.edu/faculty/gclark/]'s work A Farewell to Alms [http://www.amazon.com/Farewell-Alms-Economic-History-Princeton/dp/0691141282/ref=sr_1_1?ie=UTF8&qid=1362215502&sr=8-1&keywords=farewell+to+alms] discusses human micro-evolution taking place within the last few centuries, but is highly controversial (or so I hear).
0CellBioGuy8yTo almost anyone who knows much about evolutionary biology, it's not controversial but positively laughable.
6gwern8yCites?
1Barry_Cotter8yYeah, that's like saying you could domesticate foxes in less than a human generation, or have adult lactose tolerance increase from 0% to 99.x% in some populations in under 4,000 years. Does this guy think we're completely credulous?
1[anonymous]8y-Cellbioguy, elsewhere in thread. I suspect you've misidentified his contention here; he clearly doesn't seem to think humans haven't evolved within the Holocene.
1NancyLebovitz8yDoes it look at possible effects of arranged marriages?
1Kaj_Sotala8yI don't remember it doing so, but it's two years since I read it and I did so practically in one sitting, so I don't remember much that I wouldn't have written down in the post.
0Costanza8yThe infamous Steve Sailer has written a lot about cousin marriage [http://isteve.blogspot.com/search/label/cousin%20marriage] , which, in practice, seems to be correlated with arranged marriage in many cultures (including the European royals in past centuries). Perhaps a lot of arranged marriages in practice may lead to inbreeding, with the genetic dangers that follow. I'm also wondering about the effects of anonymous sperm banks, where relatively well-off women may pay to choose a biological father on the basis of -- whatever available information they may choose to consider. What factors, in a man they will never meet, do they choose for their offspring?
0Epiphany8yWow. The article was fascinating. I devoured the whole thing. Thanks, Kaj. Do you know of additional information sources on the neurological changes?
1Kaj_Sotala8yNot offhand, but if you get the book, it has a list of references.
3Qiaochu_Yuan8yWild guess: try "human microevolution"? I'm not a domain expert, but my standing assumption is that even the last few hundred years of human history were just too short to have a noticeable effect on allele frequencies. I would be very interested to hear evidence to the contrary, though.
0Epiphany8yHuman microevolution, ooh. That sounds like a good guess. Google is showing me some results... it will take a while to parse them. Well, the first thing that comes to mind is the incredibly horrible failure rate of common contraceptives, and the unplanned pregnancy rate and birth rate that goes with them. Evidence: In not even four years, about 25% of people using condoms became pregnant. Birth control pills were similar. http://www.jfponline.com/Pages.asp?AID=2603 "49% of pregnancies in the United States were unintended" http://www.cdc.gov/reproductivehealth/UnintendedPregnancy/index.htm "These pregnancies result in 42 million induced abortions and 34 million unintended births" (world population growth was 78 million for contrast) http://www.arhp.org/publications-and-resources/contraception-journal/september-2008 If there's any trait at all that's connected with this - inability to afford more expensive methods, not caring about reliability enough to get an IUD or something more effective, dexterity level too low to correctly apply the product, impulse control issues / inability to think under pressure or when excited, forgetfulness, inability to resist temptation, etc. - those traits are likely to reproduce faster than their counterparts. Considering that half our population growth is unintended, I'm pretty concerned about it. The situation could be that (if a genetic irresponsibility trait exists and is responsible for a large portion of unintended pregnancies that go full term) even if the responsible portion of the population is larger, the irresponsible portion begins its generations sooner, and its growth outstrips that of the responsible portion of the population, overpowering it in a short time. 
We're also doing things like removing sociopaths out
1Qiaochu_Yuan8yBy "evidence" I mean evidence that allele frequencies have noticeably changed. These are all hypotheses about things that might be affecting allele frequencies but, again, my standing assumption is that the timescales are too short.
2CellBioGuy8yNot only is the timescale too short (human societies change drastically over single-digit generation times, far too short for strong evolution), but all these traits are horrifically polygenic and dependent upon the exact combination of thousands of loci all around your genome that interact. There is also the extremely strong case against genetic determinism in most human behavior. The traits that I am aware of that show strong evolution all have had thousands of years to be selected for, like lactose tolerance in people descended from herders, resistance to high altitude with a hemoglobin change in Tibet, apparent sexual selection for blue eyes in Europeans and thick hair in East Asians, smaller stature in basically all long-term agriculturalist populations... I think I read about a particular immune system polymorphism in Europe that was selected for a few hundred years ago, though, because it conveyed partial resistance to the Black Death.
2Douglas_Knight8yI can see a couple interpretations of this. One is that given observed changes in behavior, it is hard to distinguish cultural change from genetic change. The other is that the cultural environment changes rapidly, so one might not expect the direction of its selective pressure to be maintained for long enough to produce "strong evolution." Depending on the definition of "strong evolution," that is tautologous. But why did you introduce the vague qualifier "strong"? "almost anyone who knows much about evolutionary biology" would know that this does not interfere with the potential for selection, but that excludes virtually all cell biologists. Learn some quantitative genetics in the kingdom of the blind. It's true that no single allele will shift much, but an aggregate shift in thousands of genes can be measured. I have never seen a useful use of the phrase "genetic determinism," but only ever seen it used as a straw man or a sleight of hand. How much of your comments apply to height? Things that are easier to observe are observed before things that are harder to observe. A selective sweep at a single locus is the easiest thing to observe, though the faster and more recent the sweep, the easier to observe.
0Epiphany8yThis really depends on your concept of "strong evolution". If that is jargon meant to refer to a conglomeration of changes that makes the organism different all over, I would agree. If we're just talking about this in terms of "Is it possible that something of critical importance could significantly change in a few generations?" then I say "Yes, it is possible." I assume you consider responsibility to be an important trait. Even if a change to the trait of responsibility alone may not qualify as "strong evolution" to you, would you say that it would be of critical importance to prevent humanity from losing the genes required for responsibility in even half its population? In a world where 40% of the people get here by accident, and we can tell that a lot of their parents failed to use their contraceptives consistently, are you unconcerned that there could be a relationship between irresponsible use of birth control and irresponsible genes being reproduced more rapidly than responsible genes? But today's situation is not the same. We have technologies now that could result in much more powerful unintended consequences just as it results in powerful intended ones. Birth control pills, for instance, didn't exist thousands of years ago. Our lives and environments are so different now (and are continuing to change rapidly) that we should not assume that our present and future selection pressures will match the potency of the selection pressures in the past. To do so would be to make an appeal to history.
1Epiphany8yI haven't found any evidence that allele frequencies have changed - I just started to look into this, and didn't even have a search term when I started. Due to that, I thought it was obvious that I didn't have anything on micro-evolution, so I gave you the evidence I do have which, even though it does not do anything to support the idea that allele frequencies are being influenced, does support the idea that there's potential for a lot of influence. Hmm. A contraceptive and unplanned pregnancy survey by 23andme would be so interesting... I wonder if they do things like that... If I get a useful response to my request for a credible source on their accuracy, I will investigate this. (I want to get their service anyway but am demanding a credible source first.)
2Kaj_Sotala8yhttp://www.sciencedirect.com/science/article/pii/S016028961000005X
2Larks8yYou might be interested in Evolution, Fertility and the Ageing Population [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208886], which does some modelling on this.
0maia8yDepending on how recent you want... I recalled hearing that a major evolutionary shift in the past few thousand years was lactose tolerance; a quick Google search turned up this: http://www.nytimes.com/2006/12/10/science/10cnd-evolve.html?_r=0 Also, maybe a selection for particular types of earwax, which could be related to body odor: http://blogs.discovermagazine.com/gnxp/2010/10/east-asians-dry-earwax-and-adaptation/#.UTGlU9H2QgQ
0Epiphany8yThanks, Maia, but my interest in this is from the perspective of an altruist who wants to know whether humanity will improve or disintegrate. I am interested in things that might create selection pressures that affect things like ethical behavior and competence. It seems like you've read about this subject so I'm wondering if you know of any research on micro-evolution affecting traits that are important to humanity having a good future.
0Costanza8yPersonally, I'm desperately hoping for a near-term Gattaca [http://en.wikipedia.org/wiki/Gattaca] solution, by which ordinary or defective parents can, by genetic engineering, cheaply optimize their children's tendencies towards all good things, at least as determined by genotype, including ethical behavior and competence, in one generation. Screw this grossly inefficient natural selection nonsense. I know the movie presented this as a dystopia, in which the elite were apparently chosen mostly to be tall and good-looking. Ethan Hawke's character, born naturally, was short and was supposedly ugly. Only in the movies, Ethan. But he had gumption and grit and character, which (in the movie) had no genetic component, enabling him to beat out all his supposed superiors. I call shenanigans on that philosophy. I suspect that gumption and grit and character do have a genetic component, which I would wish my own descendants to have.
1Epiphany8yI am also hoping that all parents in the future have the ability to make intentional genetic improvements to their children, and I also agree with you that this would not necessarily result in some horrible dystopia. It might actually result in more diversity because you wouldn't have to wait for a mutation in order to add something new. I wonder if anyone has considered that. I doubt that this would solve all the problems in one generation. Some people would be against genetic enhancement and we'd have to wait for their children to grow up and decide for themselves whether to enhance themselves or their offspring. Some sociopaths would probably see sociopath genes as beneficial and refuse to remove them from their offspring... which means we may have to wait multiple generations before those genes would disappear (or they may never completely vanish). We also have to consider that we'd be introducing this change into a population with x number of irresponsible people who may do things like give the child a certain eye color but fail to consider things like morality or intelligence. Then we will also have the opposite problem - some people will be responsible enough to want to change the child's intelligence, but may lack the wisdom to endow the child with an appropriate intelligence level. Jacking the kid's IQ up to 300 or so would result in something along the lines of: The parents become horrified when they realize that the child has surpassed them at age three. As the child begins providing them adult level guidance on how to live and observing that their suggestions are actually better than their parents could come up with, the child has a mental breakdown and identity crisis - because they are no longer a child but are stuck in a toddler's body, and because they no longer have a relationship with anyone that can realistically be considered to play the role of a parent. 
If the parents are really unwise they'll continue to treat that person as a toddler, discou
3Viliam_Bur8yBy the way, evolution would still work in a world of genetic engineering. If someone modified their children to have a desire to have as many children as possible (well, assuming that such genes exist), that modification would spread like a wildfire. Or imagine a religious faith that requires you to modify your child for maximum religiousness; including a rule that it is ok (or even encouraged) to marry a person from a different faith as long as they agree that all your children will have this faith and this modification. The point is, some modifications may have the potential to spread exponentially. So it's not just one pair of parents making the life of their child suboptimal, but a pair of parents possibly starting a new global problem. (Actually, you don't even need a pair of parents; one woman with donated sperm is enough.)
0maia8ySorry, but I'm actually not too knowledgeable on the subject. I happened to have heard of those two evolutionary trends, and since your original post wasn't too specific, I thought you might be interested. You could try consulting some resources on evolutionary psychology. Though I haven't read it (yet - the copy is sitting on my bookshelf), I've heard good things about The Moral Animal.

I remember seeing something about Islamic law and the ability to will money to charities meant to exist in perpetuity and now I can't find it. Does anyone know what I'm talking about?

5gwern8yYo: http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs
1beoShaffer8yThank you.

From the wikipedia page, it seems that coffee has a lot of good long-term medical benefits, with only a few long-term side effects if consumed in moderation, meaning less than 4 cups a day.

(http://en.wikipedia.org/wiki/Health_effects_of_caffeine#Long-term_effects)

This includes possible reduced risk of prostate cancer, Alzheimers, dementia, Parkinson's disease, heart disease, diabetes, liver disease, cirrhosis, and gout.

It has also been taken off the list of risk factors for heart disease, and acts as an antidepressant.

Caffeine is not the cause of all of t...

1gwern8yHave you considered tea? Seems to be cheaper and the health benefits seem equal or superior in my very casual overviews of the topic.
3David_Gerard8yGreen tea is hugely beneficial in that your coworkers are less likely to nick it.
2falenas1088yInterestingly, if you go to the main wiki page on tea, it lists many benefits, including "significant protective effects of green tea against oral, pharyngeal, oesophageal, prostate, digestive, urinary tract, pancreatic, bladder, skin, lung, colon, breast, and liver cancers, and lower risk for cancer metastasis and recurrence." However, looking at the studies cited shows the ones they cite are in animals or in vitro. (http://en.wikipedia.org/wiki/Tea#Health_effects) If you look on the main page of Health effects of Tea, it says the FDA and National Cancer Institute say there are most likely no effects to reduce cancer, and the page doesn't list any other major benefits. There are also many drawbacks listed on that page. (http://en.wikipedia.org/wiki/Health_effects_of_tea) But, the FDA announcement they cite was in 2005, and I don't know if there have been major important studies since then. A quick google scholar search doesn't appear to show studies in humans, though I didn't do a detailed enough search to say anything conclusive. Bottom line, I'm not sure if tea is better, or even beneficial at all.
3gwern8yI think a better search would've helped. For example, doing a date limit to 2007 or so and searching tea human longevity OR lifespan OR mortality pulls up 2 correlational studies (what, you were expecting large RCTs? dream on). You could probably get even better results doing a human-limited search on Pubmed.
0Matt_Simpson8yAhem. (Mediocre, but it took me two minutes. I'm satisfied.)
0Emily8yYou might also take into account any possible downside from becoming caffeine dependent, i.e. unable to function optimally without it once you've gained tolerance. Caffeine dependence goes away again pretty quickly if you abstain though, so you can undo that if you don't like it.
0Elithrion8yAre you sure you trust the research in question? Without reading the literature at all, it seems to me like there may be a lot of confounding factors (e.g. maybe richer people drink more coffee). I'm especially sceptical because you list a large range of dubiously related diseases (so, richness would affect them all, but caffeine/whatever affecting them all is less expected). Beyond that, you also need to check the magnitude of effects - if it's a minuscule change, it may well not be worth bothering with (and is even more likely to be because of noise). So, yeah, very sceptical that these effects are real and worth acting on, although I suppose they could be. In theory.

I am in Berkeley for a few days, primarily Thursday march 28th. Please text me at 203-710-5337 if you'd like to catch up or have any ideas for a thing I shouldn't miss.

If computer hardware improvement slows down, will this hasten or delay AGI?

My naive hypothesis is that if hardware improvement slows, more work will be put into software improvement. Since AGI is a software problem, this will hasten AGI. But this is not an informed opinion.

0gwern8yAre you familiar with the hardware overhang argument?
0MileyCyrus8yNo, and Google is failing me. Is there somewhere I can read about it?
-1gwern8yReally? For me, the first 4 hits for "hardware overhang argument" seem relevant. Tossing in relevant keywords like "Lesswrong" make them even more so.
[-][anonymous]8y 0

ignore me; testing retraction

[This comment is no longer endorsed by its author]

I've just learned that if it is July or a later month, it is more probable that the current year began on a Friday, Sunday, Tuesday or Wednesday. If it is June or an earlier month, it is more probable that the current year began on a Monday, Saturday or Thursday.

For the Gregorian calendar, of course.

2drethelin8yThis just in: Anthropics is still useless!
1Tenoke8yHow come?
1Thomas8yThere are more days in July to December than in January to June. So it is a little more likely for a random observer to find himself in the later six months. But if he finds himself before July, it is more likely that it is a leap year, with an additional day, than it otherwise would be. This increased probability of a leap year also skews the probability distribution for the first day of the year. That is how it comes about.
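Since the Gregorian weekday pattern repeats exactly every 400 years, the claim can be checked by brute force over one full cycle. A quick sketch using Python's standard `datetime` module (the variable names and the 2000-2399 window are mine; any 400 consecutive years would do):

```python
from datetime import date
from collections import Counter

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
        "Saturday", "Sunday"]

# Tally every day of one full 400-year Gregorian cycle, bucketed by
# (weekday Jan 1 fell on, first vs. second half of the year).
first_half = Counter()   # days falling in Jan-Jun
second_half = Counter()  # days falling in Jul-Dec

for year in range(2000, 2400):
    start = DAYS[date(year, 1, 1).weekday()]
    jan_jun = (date(year, 7, 1) - date(year, 1, 1)).days    # 181 or 182
    total = (date(year + 1, 1, 1) - date(year, 1, 1)).days  # 365 or 366
    first_half[start] += jan_jun
    second_half[start] += total - jan_jun

n_first = sum(first_half.values())
n_second = sum(second_half.values())

# Weekdays that become *more* probable once you learn it is Jan-Jun:
boosted = sorted(d for d in DAYS
                 if first_half[d] / n_first > second_half[d] / n_second)
print(boosted)
```

The printed set can then be compared directly against the weekday lists claimed above; the effect exists because the 97 leap years (with their extra February day) are not spread evenly across starting weekdays.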
0[anonymous]8yCorrect me if I'm wrong, but isn't the probability of a year being a leap year approximately 25%, completely independent of what month it is? (This seems like one of those unintuitive-but-correct probability puzzles...)
2Kawoomba8yFor all intents and purposes, yes. Well, for nearly all intents and purposes, since there is in fact a very slight difference: Imagine the year only had 2 months, PraiseKawoombaMonth, and KawoombaPraiseMonth, each of those having 30 days. However, every other year the first month gets cut to 1 day to compensate for some unfortunate accident involving shortening the orbital period. Still, for any given year the probability of being a leap year is 50%. Now you get woken from cryopreservation (high demand for fresh slaves) and, asking what time it is, only get told it's PraiseKawoombaMonth (yay!). This is evidence that strongly informs you that you are probably in one of the equi-month years, since otherwise it would be very unlikely for you to find yourself in PraiseKawoombaMonth. Snap, back to reality: Same thing if you're told it's August, the chance of being in August at any given time is lower in a leap year, since the fraction of August per year is lower. There's just more February to go around! Sorry for the quality of the explanation. It's the only way I can explain things to my kids.
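The toy calendar above can be worked through explicitly. Assuming you are woken on a uniformly random day of the two-year cycle (my reading of the setup; variable names are mine), the posterior falls out of straight day-counting:

```python
from fractions import Fraction

# Toy calendar: a two-year cycle -- an "equi" year (two 30-day months)
# and a "short" year (1-day PraiseKawoombaMonth + 30-day second month).
equi_pkm, equi_len = 30, 60    # PKM days and total days in an equi-year
short_pkm, short_len = 1, 31   # ...and in a short year

# Priors for a uniformly random day of the 91-day cycle:
prior_equi = Fraction(equi_len, equi_len + short_len)    # 60/91
prior_short = Fraction(short_len, equi_len + short_len)  # 31/91

# Likelihood of "it's PraiseKawoombaMonth" under each hypothesis:
like_equi = Fraction(equi_pkm, equi_len)     # 1/2
like_short = Fraction(short_pkm, short_len)  # 1/31

posterior = (prior_equi * like_equi) / (
    prior_equi * like_equi + prior_short * like_short)
print(posterior)  # 30/31 -- learning the month is strong evidence
```

Equivalently: of the 31 PraiseKawoombaMonth days in the cycle, 30 lie in the equi-year, so the posterior is 30/31 despite each year type being "equally likely" per year.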

One day Clippy will sit in a re-cohered bar with its fellow superintelligences from around the MWI-block, each sipping on their own reality-fluid, but what a stale, static beverage it will have become for everyone. Except the superintelligence making everything bubbly. Also, at that point, Clippy's architecture will be implemented using paperclips as a substrate.