If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Open thread, September 2-8, 2013
379 comments

There has recently been some speculation that life started on Mars, and then got blasted to Earth by an asteroid or something. Molybdenum is very important to life (eukaryote evolution was delayed by 2 billion years because it was unavailable), and the origin of life is easier to explain if molybdenum is available. The problem is that molybdenum wasn't available in the right time frame on Earth, but it was on Mars.

Anyway, assuming this speculation is true, Mars had the best conditions for starting life, but Earth had the best conditions for life existing, and it is unlikely conscious life would have evolved without either of these planets being the way they are. Thus, this could be another part of the Great Filter.

Side note: I find it amusing that molybdenum is very important in the origin/evolution of life, and is also element 42.

5curiousepic
As someone pointed out to me when I mentioned this to them, for this to be a candidate for the Great Filter there would need to be something intrinsic about how planets are formed that makes these two types of environments mutually exclusive; otherwise it seems like there isn't a sufficient reduction in the probability of their availability. Is this actually the case? Perhaps user:CellBioGuy can elucidate.
3[anonymous]
Well, it took a while for that summoning to actually take effect. The point I was making is that that is not necessarily a strong contender (as in many orders of magnitude), for two reasons. One, all stars slowly increase in brightness as they age after settling into their main sequence phase, and their habitable zones thus sweep through the inner system over time (though this effect is much stronger for larger stars, because they age and change faster). Secondly, it's probable that most young smaller planets tend to have much more in the way of atmospheres than later in their lives, just because they are more geologically active then and haven't had time for the light molecules to be blasted away. And if terrestrial planets follow any sort of power law in their distribution of masses, there should be multiple Mars-sized planets for every Earth-sized planet.

At this point I would say that any speculation on the exact place and time of the origin of life is premature: there's nothing to suggest that it didn't happen on Earth, but there is little to suggest that it couldn't have happened elsewhere within our own solar system either, even if we have little reason to think it had to (besides adding the necessity that it later moved to the very clement surface of the Earth, the only place in the solar system that can support a big high-biomass biosphere like ours).

I honestly don't know much about the proposed molybdenum connection. Some cursory looking around the internet suggests that molybdenum is necessary for efficient fixation of nitrogen from the air into organic molecules by nitrogenase, the enzyme that does most of that biological activity on Earth. I would be surprised, though, if that were the only way it could go, rather than just the way it went here...

EDIT: upon further looking around, I am worried that the proposed molybdenum connection could be correlation rather than causation. Most sources claiming that the presence of lots o
-2Will_Newsome
(Are you Adele Lack Cotard?)
0Adele_L
Like in Synecdoche, New York? No... it is an abbreviation of my real name.
[-]Metus230

The ancient Stoics apparently had a lot of techniques for habituation and changing cognitive processes. Some of those live on in the form of modern CBT. One of the techniques is to write a personal handbook with advice and sayings to carry around at all times, so as never to be without guidance from a calmer self. Indeed, Epictetus advises learning this handbook by rote to further internalisation. So I plan to write such a handbook for myself, once in long form with anything relevant to my life and lifestyle, and once in a short form that I update with things that are difficult at the time, be it strong feelings or being deluded by some biases.

In this book I intend to include a list of all known cognitive biases and logical fallacies. I know that some biases are helped by simply knowing them, does anyone have a list of those? And should I complete the books or have a clear concept of their contents, are you interested in reading about the process of creating one and possible perceived benefits?

I'm also interested in hearing from you again about this project if you decide to not complete it. Rock on, negative data!

0Metus
Though lack of motivation or laziness is not a particularly interesting answer.

Though lack of motivation or laziness is not a particularly interesting answer.

I have found "I thought X would be awesome, and then on doing X realized that the costs were larger than the benefits" to be useful information for myself and others. (If your laziness isn't well modeled by that, that's also valuable information for you.)

(mild exaggeration) Has anyone else transitioned from "I only read Main posts" to "I nearly only read Discussion posts" to "actually, I'll just take a look at the open thread and the people who responded to what I wrote" during their interactions with LW?

To be more specific, is there a relevant phenomenon about LW or is it just a characteristic of my psyche and history that explain my pattern of reading LW?

I read the sequences and a bunch of other great old main posts but now mostly read discussion. It feels like Main posts these days are either repetitive of what I've read before, simply wrong or not even wrong, or decision theory/math that's above my head. Discussion posts are more likely to be novel things I'm interested in reading.

4CAE_Jones
This describes how my use of LW has wound up pretty accurately.
[-]tgb150

Selection bias alert: asking people whether they have transitioned to reading mostly discussion and then to mostly just open threads in an open thread isn't likely to give you a good perspective on the entire population, if that is in fact what you were looking for.

2private_messaging
There would be far more selection bias if he asked about it outside an open thread, though.
0blashimov
Really? Why?
6private_messaging
Because he's asking about people who only read the open thread. Here he could get response from the people who do read LW in general, inclusive of the open thread, and people who read only the open thread (he'll miss the people who don't read the open thread). Outside the open thread, he gets no response at all from people who only read the open thread.
4blashimov
I see what you mean.
9RolfAndreassen
I read the Sequences as they were posted; Main posts now rarely hold my interest the same way. Eliezer's writing is just better than most people's.
[-]Shmi450

Honestly, I don't know why Main is even an option for posting. It should really just be an automatically labeled/generated "Best of LW" section, where Discussion posts with, say, 30+ karma are linked. This is easy to implement, and easy to do manually using the Promote feature until it is. The way it is now, posting in Main is mostly driven by people thinking that they are making an important contribution to the site, which is more of a statement about their ego than about the quality of their posts.

4diegocaleiro
I predict that some people will have been through the sequences, which are Main posts, but then mainly cared about discussion. I suspect it has to do with Morning Newspaper Bias - the bias of thinking that new stuff is more relevant, when actually it is just pointless to read most of the time, only scrambles your mind, and loses value very quickly.
1David_Gerard
The lower the barrier to entry, the more the activity. Thus, more posts are on Discussion. My hypothesis is that this has worked well enough to make Discussion where stuff happens. c.f. how physics happens on arXiv these days, not in journals. (OTOH, it doesn't happen on viXra, whose barrier to entry may be too low.)
1Username
I've definitely noticed this in my use of LW. I find that the open threads/media threads with their consistent high-quality novelty in a wide range of subject areas are far more enjoyable than the more academic main threads. Decision theory is interesting, but it's going to be hard to hold my attention for a 3,000 word post when there are tasty 200-word bites of information over here.
2David_Gerard
Well, chat's always more fun.
0niceguyanon
I'll admit that much of the main sequence are too heavy to understand without prior knowledge, so I find discussions much easier to take in, and many times I end up reading a sequence because it was posted in a discussion comment. For me discussion posts are like the gateway to Main.
0ygert
My experience is similar. I read the sequences as they were published on OB, then when the move over to LW happened I just subscribed to the RSS feed and only read Promoted posts for quite a few years. Only about a year ago I actually signed up for an account here and started posting and reading Discussion and the Open Thread.
[-]tgb180

Background: "The genie knows, but doesn't care" and then this SMBC comic.

The joke in that comic annoys me (and it's a very common one on SMBC, there must be at least five there with approximately the same setup). Human values aren't determined to align with the forces of natural selection. We happen to be the product of natural selection, and, yes, that made us have some values which are approximately aligned with long-term genetic fitness. But studying biology does not make us change our values to suddenly become those of evolution!

In other words, humans are a 'genie that knows, but doesn't care'. We have understood the driving pressures that created us. We have understood what they 'want', if that can really be applied here. But we still only care about the things which the mechanics of our biology happened to have made us care about, even though we know these don't always align with the things that 'evolution cares about.'

(Please if someone can think of a good way to say this all without anthropomorphising natural selection, help me. I haven't thought enough about this subject to have the clarity of mind to do that and worry that I might mess up because of such metaphors.)

3Vladimir_Nesov
For more on this topic, see for example these posts:

* Evolutionary Psychology
* Thou Art Godshatter
* Rebelling Within Nature
* The Gift We Give To Tomorrow

Anyone tried to use the outside view on our rationalist community?

I mean, we are not the first people on this planet who tried to become more rational. Who were our predecessors, and what happened to them? Where did they succeed and where they failed? What lesson can we take from their failures?

The obvious reply will be: no one has tried doing exactly the same thing as we are doing. That's technically true, but it's a fully general excuse against using the outside view, because if you look into enough details, no two projects are exactly the same. Yet it has been experimentally demonstrated that even looking at merely similar projects gives better estimates than just using the inside view. So, if there was no one exactly like us, who was the most similar?

I admit I don't have data on this, because I don't study history, and I have no personal experience with Objectivists (which are probably the most obvious analogy). I would probably put Objectivists, various secret societies, educational institutions, or self-help groups into the reference class. Did I miss something important? The common trait is that those people are trying to make their thinking better, avoid some frequent faults, and t... (read more)

6Vaniver
I've seen a few attempts, mostly from outsiders. The danger there is that an outsider has difficulty picking the right reference class: you don't know how much they know about you, and how much they know about other things. The things that the outside view has suggested we should be worried about that I remember (in rough order of frequency):

* Being a cult.
* Being youth-loaded.
* Optimizing for time-wasting over goal-achieving.

Here are two critiques I remember from insiders that seem to rely on outside view thinking: Yvain's Extreme Rationality: It's Not That Great, and patrissimo's Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. Eliezer's old posts on Every Cause Wants To Be A Cult and Guardians of Ayn Rand also seem relevant. (Is there someone who keeps track of our current battle lines for cultishness? Do we have an air conditioner on, and are we optimizing it deliberately?)

One of the things that I find interesting is that, in response to patrissimo's comment in September 2010 that LW doesn't have enough instrumental rationality practice, Yvain proposed that we use subreddits, and the result was a "discussion" subreddit. Now in September 2013 it looks like there might finally be an instrumental rationality subreddit. That doesn't seem particularly agile. (This is perhaps an unfair comparison, as CFAR has been created in the intervening time and is a much more promising development in terms of boosting instrumental rationality, and there are far more meetups now than before, and so on.)

There have also been a handful of "here are other groups that we could try to emulate" posts, and the primary one I remember was calcsam's series on Mormons (initial post here, use the navigate-by-author links to find the others). The first post will be particularly interesting for the "outside view" analysis because he specifically discusses the features of the LDS church and LW that he thinks put them in the same reference class (for that se

The reason why I asked was not just "who can we be pattern-matched with?", but also "what can we predict from this pattern-matching?". Not merely to say "X is like Y", but to say "X is like Y, and p(Y) is true, therefore it is possible that p(X) is also true".

Here are two answers pattern-matching LW to a cult. For me, the interesting question here is: "how do cults evolve?". Because that can be used to predict how LW will evolve. Not connotations, but predictions of future experiences.

My impression of cults is that they essentially have three possible futures: Some of them become small, increasingly isolated groups, that die with their members. Others are viral enough to keep replacing the old members with new members, and grow. The most successful ones discover a way of living that does not burn out their members, and become religions. -- Extinction, virality, or symbiosis.

What determines which way a cult will go? Probably it's compatibility of long-term membership with ordinary human life. If it's too costly, if it requires too much sacrifice from members, symbiosis is impossible. The other two choices probably depend on how much ... (read more)

6Vaniver
Agreed. One of the reasons why I wrote a comment that was a bunch of links to other posts is because I think that there is a lot to say about this topic. Just "LW is like the Mormon Church" was worth ~5 posts in main. A related question: is LessWrong useful for people who are awesome, or just people who want to become awesome? This is part of patrissimo's point: if you're spending an hour a day on LW instead of an hour a day exercising, you may be losing the instrumental rationality battle. If someone who used to be part of the LW community stops posting because they've become too awesome, that has unpleasant implications for the dynamics of the community. I was interested in that because "difference between the time a good idea is suggested and the time that idea is implemented" seems like an interesting reference class.
8Viliam_Bur
Isn't this a danger that all online communities face? Those who procrastinate a lot online get a natural advantage over those who don't. Thus, unless the community is specifically designed against that (how exactly?), the procrastinators will become the elite. (It's an implication: not every procrastinator becomes a member of the elite, but all members of the elite are procrastinators.) Perhaps we could make an exception for Eliezer, because for him writing the hundreds of articles was not procrastination. But unless writing a lot of stuff online is one's goal, procrastination is almost a necessity for achieving celebrity status on a website. Then we should perhaps think about how to prevent this effect. A few months ago we had some concerned posts about "Eternal September" and such. But this is more dangerous, because it's less visible; it is a slow, yet predictable, change towards procrastination.
8Vaniver
Yes, which is I think a rather good argument for having physical meetups. Agreed. Note that many of the Eternal September complaints are about this, though indirectly: the fear is that the most awesome members of a discussion are the ones most aggravated by newcomers, because the distance between them and newcomers is larger than the distance between a median member and a newcomer. The most awesome people also generally have better alternatives, and thus are more sensitive to shifts in quality.
4linkhyrule5
Supporting this, I'll note that I don't see many posts from, say, Wei Dai or Salamon in recent history - though as I joined all of a month ago, take that with a dish of salt. I wonder if something on the MIRI/CFAR end would help? Incentives for the actual researchers to make occasional (not too many, they do have more important things to do) posts on LessWrong would probably alleviate the effect.
2Viliam_Bur
Perhaps to some degree, different karma coefficients could be used to support what we consider useful on reflection (not just on impulse voting). For example, if a well-researched article generated more karma than a month of procrastinating while writing comments... There is some support for this: articles in Main get 10× more karma than comments. But 10 is probably not enough, and also it is not obvious what exactly belongs to Main; it's very unclearly defined. Maybe there could be a Research subreddit where only scientific-level articles are allowed, and there the karma coefficient could be pretty high. (Alternatively, to prevent karma inflation, the karma from comments should be divided by 10.)
0ChristianKl
I don't think that "sounding good" is an accurate description of how people in the personal development field succeed. Look at Tony Robbins, who is one of the most successful in the business. When you ask most people whether walking on hot coals is impressive, they will tell you that it is. Tony manages to get thousands of people in a seminar to walk over hot coals. Afterwards they go home and tell their friends about how they walked over hot coals. That impresses people, and more people come to his seminars. It's not only that his talk sounds good, but that he is able to provide impressive experiences. On the other hand, his success is also partly about being very good at building a network marketing structure that works. But in some sense that's not much different from the way universities work. The evidence that universities actually make people successful in life isn't that strong.

I don't think so. If you are a Scientologist and believe in Xenu, that reduces your compatibility with ordinary human life. At the same time, it makes you more committed to the group if you are willing to give something up in order to belong. Opus Dei members wear the cilice to make themselves uncomfortable, to show that they are committed. I think the fact that you don't see many people as members of groups that demand a lot of commitment is a feature of the 20th century, when mainstream society had institutions such as television that were good at presenting a certain culture through which everyone in a country had a shared identity. At the moment, all sorts of groups like the Amish or the LDS that require more commitment from their members seem to be growing. It could be that in a hundred years we have a lot more people as members of groups that require sacrifice than we have now.
1Armok_GoB
I'd like to point out here a reference class that includes, if I understood things right: the original Buddhist movement, the academic community of France around the revolution, and the ancient Greek philosophical tradition. More examples and a name for this reference class would be welcome. I include this mainly to counterbalance the bias towards automatically looking for the kind of cynical reference classes typically associated with and primed by the concept of the outside view. Looking at the reference class examples I came up with, there seems to be a tendency towards having huge geniuses at the start that nobody later could compete with, which led after a few centuries to dissolving into spinoff religions dogmatically accepting or even perverting the original ideals. DISCLAIMER: This is just one reference class and the conclusions reached by that reference class; it was reached/constructed by trying to compensate for a predicted bias, which tends to lead to even worse bias on average.
2Viliam_Bur
Thank you! This is the reference class I was looking for, so it is good to see someone able to overcome the priming done by, uhm, not exactly the most neutral people. I had a feeling that something like this was out there, but specific examples did not come to my mind. The danger of not being able to "replicate" Eliezer seems rather realistic to me. Sure, there are many smart people in CFAR, and they will continue doing their great work... but would they be able to create a website like LW and attract many people if they had to start from zero, or if for whatever reason the LW website disappeared? (I don't know how MIRI works, so I cannot estimate how replaceable Eliezer would be there.) Also, for spreading x-rationality to other countries, we need local leaders there if we want to have meetups and seminars, etc. There are meetups all over the world, but they are probably not the same thing as the communities in the Bay Area and New York (although the fact that two such communities exist is encouraging).
1ChristianKl
I think the cult view is valuable for looking at some issues. When you have someone asking whether he should cut ties with his nonrational family members, it's valuable to keep in mind that that's culty behavior. Normal groups in our society don't encourage their members to cut family ties. Bad cults do those things. That doesn't mean that there's never a time when one should rationally advise someone to cut those ties, but one should be careful. Given the outside view of how cults got to a place where people literally drank Kool-Aid, I think it's important to encourage people to keep ties to people who aren't in the community. Part of why Eliezer banned the basilisk might have been that having it more actively around would push LessWrong in the direction of being a cult. There's already the promise of nearly eternal life in an FAI-moderated heaven. It's always useful to investigate where cult-like behaviors are useful and where they aren't.
[-]gwern140

To maybe help others out and solve the trust bootstrapping involved, I'm offering for sale <=1 bitcoin at the current Bitstamp price (without the usual premium) in exchange for Paypal dollars to any LWer with at least 300 net karma. (I would prefer if you register with #bitcoin-otc, but that's not necessary.) Contact me on Freenode as gwern.

EDIT: as of 9 September 2013, I have sold to 2 LWers.

5linkhyrule5
Pardon me, but - what is the trust bootstrapping involved?
6gwern
Paypal allows clawbacks for months, hence it's difficult to sell for Paypal to anyone who is not already in the -otc web of trust; but by restricting sales to high-karma LWers, I am putting their reputation here at risk if they scam me, which enables me to sell to them. Hence, they can acquire bitcoins & get bootstrapped into the -otc web of trust based on LW.
1wadavis
Thank you. All I need is a hand-held spray thermos to make Australia a viable working vacation. I have a strong irrational aversion to spiders. This is much more acceptable than the home-made flamer.

Abstract

What makes money essential for the functioning of modern society? Through an experiment, we present evidence for the existence of a relevant behavioral dimension in addition to the standard theoretical arguments. Subjects faced repeated opportunities to help an anonymous counterpart who changed over time. Cooperation required trusting that help given to a stranger today would be returned by a stranger in the future. Cooperation levels declined when going from small to large groups of strangers, even if monitoring and payoffs from cooperation were invariant to group size. We then introduced intrinsically worthless tokens. Tokens endogenously became money: subjects took to reward help with a token and to demand a token in exchange for help. Subjects trusted that strangers would return help for a token. Cooperation levels remained stable as the groups grew larger. In all conditions, full cooperation was possible through a social norm of decentralized enforcement, without using tokens. This turned out to be especially demanding in large groups. Lack of trust among strangers thus made money behaviorally essential. To explain these results, we developed an evolutionary model. When behavior in society is heterogeneous, cooperation collapses without tokens. In contrast, the use of tokens makes cooperation evolutionarily stable.

http://www.pnas.org/content/early/2013/08/21/1301888110
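The token mechanism in the abstract can be captured in a toy simulation (an illustrative reconstruction only, not the paper's actual protocol or evolutionary model; all numbers are made up): agents are randomly paired each round, and the helper cooperates only when the stranger can hand over a token.

```python
import random

def simulate(n_agents=30, rounds=2000, tokens_per_agent=1, seed=0):
    """Toy gift-exchange game: each round a random helper meets a random
    stranger and helps only if the stranger pays one intrinsically
    worthless token. Returns the fraction of rounds where help occurred."""
    rng = random.Random(seed)
    tokens = {i: tokens_per_agent for i in range(n_agents)}
    helped = 0
    for _ in range(rounds):
        helper, receiver = rng.sample(range(n_agents), 2)
        if tokens[receiver] > 0:   # the stranger offers a token...
            tokens[receiver] -= 1
            tokens[helper] += 1    # ...the helper accepts it and helps
            helped += 1
    return helped / rounds

cooperation_rate = simulate()
```

Note that in this sketch the cooperation rate depends only on whether the receiver holds a token, not on `n_agents`, which loosely echoes the paper's finding that token-mediated cooperation stays stable as groups grow.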

[-]tut160

Does this also work with macaques, crows or some other animals that can be taught to use money, but didn't grow up in a society where this kind of money use is taken for granted?

1Alsadius
Not strictly the same, but there have been monkey money experiments. And the results are hilarious. www.zmescience.com/research/how-scientists-tught-monkeys-the-concept-of-money-not-long-after-the-first-prostitute-monkey-appeared/

Who is this and what has he done with Robin Hanson?

The central premise is in allowing people to violate patents if the violation is not "intentional". While reading the article, the voice in my head that is my model of Robin Hanson was screaming "Hypocrisy! Perverse incentives!" in unison with the model of Eliezer Yudkowsky, which was also shouting "Lost Purpose!". While the appeal to total invasive surveillance slightly reduced the hypocrisy concerns, at best it pushes the hypocrisy to a higher level in the business hierarchy while undermining the intended purpose of intellectual property rights.

That post seemed out of place on the site.

This may be an odd question, but what (if anything) is known on turning NPCs into PCs? (Insert your own term for this division here, it seems to be a standard thing AFAICT.)

I mean, it's usually easier to just recruit existing PCs, but ...

7mare-of-night
I suspect that finding people on the borderline between the categories and giving them a nudge is part of the solution to this problem. What do you need PCs to do that NPCs cannot do? Zeroing in on the exact quality needed may make the problem easier.
3blashimov
Take the Leadership feat, and hope your GM is lazy enough to let you level them. More practically, is it a skills problem or, as I would guess, an agency problem? Can you impress on them the importance of acting vs. not acting? Lend them The Power of Accountability? The 7 Habits of Highly Effective People? Can you compliment them every time they show initiative? Etc. I think the solution is too specific to individuals for general advice, nor do I know a general advice book beyond those in the same theme as the ones mentioned.
1MugaSofer
Heh. Agency. I've just noticed how many people I interact with are operating almost totally on cached thoughts, and getting caught up in a lot of traps that they could avoid if they were in the correct frame of mind (ie One Of Us.) But you have to be ... motivated correctly, I guess, in order to turn to rationalism or some other brand of originality. Goes my reasoning. Yeah, could be. I figure it's always possible someone already solved this, though, so I'd rather find there's already a best practice than kick myself much later for reinventing the wheel ( or worse, giving up!)
2ChristianKl
Sometimes I even think that I would profit from having some cached thoughts that give me effective habits that I carry out on every occasion without thinking too much. When the alarm bell rings, it would be good if I had a cached thought that made me automatically get up without thinking the decision through. I don't think the state of being paralysed because you killed all cached thoughts is particularly desirable. I think I spent too much time in that state in the last year ;) I think it's more a question of focusing your energy on questioning the cached thoughts that actually matter. When it comes to agency, I think there are some occasions where I show a lot and others where I show little. Especially when you compare me to an average person, the domains in which I show my agency are different. I can remember one occasion where I took more responsibility for a situation after reading about McGonagall's transition from NPC to PC in HPMOR. I think that HPMOR is well written when it comes to installing the frame of mind you are talking about.
0MugaSofer
Oh, we evolved them for a reason. Heck, your brain almost certainly couldn't function without at least some. But when people start throwing type errors whenever something happens and a cached though doesn't kick in, they could probably do with a little more original thought. That said, there's more to agency and PC-ness than cached thoughts. It was just particularly striking to see people around me fishing around for something familiar they knew how to respond to, and that's what prompted me to wonder how much we knew about the problem.

The Travelling Salesman Problem

Powell’s biggest revelation in considering the role of humans in algorithms, though, was that humans can do it better. “I would go down to Yellow, we were trying to solve these big deterministic problems. We weren’t even close. I would sit and look at the dispatch center and think, how are they doing it?” That’s when he noticed: They are not trying to solve the whole week’s schedule at once. They’re doing it in pieces. “We humans have funny ways of solving problems that no one’s been able to articulate,” he says. Operations

... (read more)
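Powell's observation that dispatchers solve the schedule "in pieces" rather than all at once is essentially what greedy construction heuristics do. A minimal sketch (illustrative only; the cities and coordinates are invented) is the nearest-neighbor heuristic for the travelling salesman problem:

```python
import math

def nearest_neighbor_tour(points):
    """Greedy 'solve it in pieces' heuristic for TSP: starting from city 0,
    repeatedly hop to the closest unvisited city. Fast and usually decent,
    but not optimal."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (5, 5), (1, 1), (6, 5)]
print(nearest_neighbor_tour(cities))  # → [0, 1, 3, 2, 4]
```

Each step is locally easy, just like a dispatcher scheduling the next truck, and the hard global optimization is never attempted.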
7Vaniver
In loading trucks for warehouses, some OR guys I know ran into the opposite problem- they encoded all the rules as constraints, found a solution, and it was way worse than what people were actually doing. Turns out it was because the people actually loading the trucks didn't pay attention to whether or not the load was balanced on the truck, or so on (i.e. mathematically feasible was a harder set of constraints than implementable because the policy book was harder than the actuality). (I also don't think it's quite fair to call the OR approach 'punting', since we do quite a bit of optimization using heuristics.)

If anyone wants to teach English in China, my school is hiring. The pay is higher than the market rate and the management is friendly and trustworthy. Must have a Bachelor's degree and a passport from an English-speaking country. If you are at all curious, PM me for details.

I have updated on how important it is for Friendly AI to succeed (more now). I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI; this led me to assign a chance to whether a fast, self-modifying AI, indifferent or Friendly, was possible at all.

Instead of thinking of the risk of UFAI, I started thinking of the risk of ~FAI. The more I think about it the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks of nature, unknowns, hu... (read more)

0Lumifer
So you want a god to watch over humanity -- without it we're doomed?
4niceguyanon
As of right now, yes. However, I could be persuaded otherwise.
-1ChristianKl
I find it unlikely that you are well calibrated when you put your credence at 99% for a 1,000 year forecast. Human culture changes over time. It's very difficult to predict how humans in the future will think about specific problems. We went in less than 100 years from criminalizing homosexual acts to lawful same sex marriage. Could you imagine that everyone would adopt your morality in 200 or 300 years? If so, do you think that would prevent humanity from being doomed? If you don't think so, I would suggest you evaluate your own moral beliefs in detail.

Is there a name for taking someone's being wrong about A as evidence that they are wrong about B? Is this a generally sound heuristic to have? In the case of crank magnetism, should I take someone's crank ideas as evidence against an idea that is new and unfamiliar to me?

It's evidence against them being a person whose opinion is strong evidence of B, which means it is evidence against B, but it's probably weak evidence, unless their endorsement of B is the main thing giving it high probability in your book.

3Salemicus
I don't know if there's a name for this, but I definitely do it. I think it's perfectly legitimate in certain circumstances. For example, the more B is a subject of general dispute within the relevant grouping, and the more closely-linked belief in B is to belief in A, the more sound the heuristic. But it's not a short-cut to truth. For example, suppose that you don't know anything about healing crystals, but are aware that their effectiveness is disputed. You might notice that many of the same people who (dis)believe in homeopathy also (dis)believe in healing crystals, that the beliefs are reasonably well-linked in terms of structure, and you might already know that homeopathy is bunk. Therefore it's legitimate to conclude that healing crystals are probably not a sound medical treatment - although you might revise this belief if you got more evidence. On the other hand, note that reversed stupidity is not truth - healing crystals being bunk doesn't indicate that conventional medicine works well. The place where I find this heuristic most useful is politics, because the sides are well-defined - effectively, you have a binary choice between A and ~A, regardless of whether hypothetical alternative B would be better. If I stopped paying attention to current affairs, and just took the opposite position to Bob Crow on every matter of domestic political dispute, I don't think I'd go far wrong.
3Shmi
I don't know if there is a name for it, but there ought to be one, since this heuristic is so common: the reliability prior of an argument is the reliability of the arguer. For example, one reason I am not a firm believer in the UFAI doomsday scenarios is Eliezer's love affair with MWI.
2A1987dM
Yes, but in many cases it's very weak evidence. Overweighing it leads to the “reversed stupidity” failure mode.
1Adele_L
Bayes' theorem to the rescue! Consider a crank C, who endorses idea A. Then the probability of A being true, given that C endorses it equals the probability of C endorsing A, given that A is true times the probability that A is true over the probability that C endorses A. In equations: P(A being true | C endorsing A) = P(C endorsing A | A being true)*P(A being true)/P(C endorsing A). Since C is known to be a crank, our probability for C endorsing A given that A is true is rather low (cranks have an aversion to truth), while our probability for C endorsing A in general is rather high (i.e. compared to a more sane person). So you are justified in being more skeptical of A, given that C endorses A.
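The update described above can be sketched numerically. This is a minimal illustration with made-up numbers (the 0.5 prior and the two endorsement probabilities are assumptions chosen only to make the point, not anything from the comment):

```python
# Sketch of the crank-endorsement update, with made-up illustrative numbers:
# prior P(A) = 0.5; a crank endorses truths relatively rarely,
# P(endorse | A) = 0.2; but endorses claims in general often, P(endorse) = 0.4.
def posterior(p_a, p_endorse_given_a, p_endorse):
    """P(A | C endorses A) via Bayes' theorem."""
    return p_endorse_given_a * p_a / p_endorse

print(posterior(0.5, 0.2, 0.4))  # 0.25 -- below the 0.5 prior
```

Because the crank endorses A more readily than A's truth alone would explain, the posterior drops below the prior, which is the justified extra skepticism the comment describes.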
0David_Gerard
It's a logical fallacy, but is something humans evolved to do (or didn't evolve not to do), so may in fact be useful when dealing with humans you know in your group.
0satt
Somewhat related: The Correct Contrarian Cluster.
0[anonymous]
Horrifically misnamed.
-1Douglas_Knight
Ad hominem. Not that there's anything wrong with that.

Are old humans better than new humans?

This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: We should do everything we can to prevent people from dying, rather than investing these resources into making more or more productive children.

The usual argument (which I agree with) is that "Death events have a negative utility". Once a human already exists, it's bad for them to stop existing.

[-]twanvl100

So every human has a right to their continued existence. That's a good argument. Thanks.

4diegocaleiro
Complement it with the fact that it costs about 800 thousand dollars to raise a mind, and an adult mind might be able to create value at rates high enough to continue existing. Macaulay Culkin and Haley Joel Osment notwithstanding, that is a good argument against children.
4twanvl
An adult, yes. But what about the elderly? Of course this is an argument for preventing the problems of old age. Is it? It just says that you should value adults over children, not that you should value children over no children. To get one of these valuable adult minds you have to start with something.
2Mestroyer
How does that negative utility vary over time though? Because if it stays the same (or increases) then if we know now it's impossible to live 3^^^3 years, then disutility from death sooner than that is counterbalanced (or more than that) by averted disutility from dying later, meaning decisions made are basically the same as if you didn't disvalue death (or as if you valued it).
8Oscar_Cunningham
I think that part of the badness of death is the destruction of that person's accumulated experience. Thus the negative utility of death does indeed increase over time. However this is counterbalanced by the positive utility of their continued existence. If someone lives to 70 rather than 50 then we're happy because the 20 extra years of life were worth more than the worsening of the death event.
0Armok_GoB
In this case, it seems like the best policy is cryopreserving them, then letting them stay dead but extracting those experiences and inserting them into new minds. Which sounds weird when you say it like that, but it is functionally equivalent to many of the scenarios you would intuitively expect and find good, like radically improving minds and linking them into bigger ones before waking them up, since anything else would leave them unable to meaningfully interact with anything anyway, and human-level minds are unlikely to qualify for informed consent.
0Mestroyer
So if Bob is cryopreserved, and I can res him for N dollars, or create a simulation of a new person and run them quickly enough to catch up a number of years equal to Bob's age at death, for N - 1 dollars, I should spend all available dollars on the latter? Edit: to clarify why I think this is implied by your answer, what this is doing is trading such that you gain a death at Bob's current age, but gain a life of experience up to Bob's current age. If a life ending at Bob's current age is net utility positive, this has to be net utility positive too.
3drethelin
broadly: yes, though all available dollars is actually all available dollars (for making people), and you're ignoring considerations like keeping promises to people unable to enforce them such as the cryopreserved or asleep or unconscious etc.

Assuming Rawls's veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.

I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?

Let us try this framing instead: Assume there are a very large number Z of possible different human "persons" (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a new created human will be "you". Behind the veil of ignorance, do you prefer the world to be one with X people living N years (where your chance of being born is X/Z) or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?

I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.
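A quick calculation makes explicit what this framing does. With arbitrary illustrative numbers for Z, X, and N (assumptions, not anything from the comment), the expected years lived behind the veil come out the same in both worlds, so any preference between them has to come from how the years are packaged into lives:

```python
# Expected years lived behind the veil of ignorance, under the framing above.
# Arbitrary illustrative numbers: Z possible persons, X actual people, N years.
Z = 10**9
X, N = 10**6, 1000

ev_few_long = (X / Z) * N                 # X people living N years each
ev_many_short = (10 * X / Z) * (N / 10)   # 10X people living N/10 years each

# Both equal X*N/Z: total person-years are identical in the two worlds.
print(ev_few_long, ev_many_short)
</antml>```

This is why the two framings diverge: the original veil-of-ignorance comparison holds the birth probability fixed, while this one lets it scale with population, collapsing the question to a pure long-lives-versus-many-lives tradeoff.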

6A1987dM
Rawls's veil of ignorance + self-sampling assumption = average utilitarianism, Rawls's veil of ignorance + self-indication assumption = total utilitarianism (so to speak)? I had already kind-of noticed that, but hadn't given much thought to it.
9Mestroyer
Doesn't Rawls's veil of ignorance prove too much here though? If both worlds would exist anyway, I'd rather be born into a world where a million people lived 101 year lifetimes than a world where 3^^^3 people lived 100 year lifetimes.
2TrE
So then, Rawls's veil has to be modified such that you are randomly chosen to be one of a quadrillion people. In scenario A, you live a million years. In scenario B, one trillion people live for one billion years each, the rest are fertilized eggs which for some reason don't develop. I'd still choose B over A.
0ShardPhoenix
Would you? A million probably isn't enough to sustain a modern economy, for example. (Although in the 3^^^3 case it depends on the assumed density since we can only fit a negligible fraction of that many people into our visible universe).
6Mestroyer
If the economies would be the same, then yes. Don't fight the hypothetical.
2ShardPhoenix
I think "fighting the hypothetical" is justified in cases where the necessary assumptions are misleadingly inaccurate - which I think is the case here.
5Creutzer
But compared to 3^^^3, it doesn't matter whether it's a million people, a billion, or a trillion. You can certainly find a number that is sufficient to sustain an economy and is still vastly smaller than 3^^^3, and you will end up preferring the smaller number for a single additional year of lifespan. Of course, for Rawls, this is a feature, not a bug.
9Izeinwinter
Existing people take priority over theoretical people. Infinitely so. This should be obvious, as the reverse conclusion ends up with utter absurdities of the "Every sperm is sacred" variety. Mad grin Once a child is born, it has as much claim on our consideration as every other person in our light cone, but there is no obligation to have children. Not any specific child, nor any at all. Reject this axiom and you might as well commit suicide over the guilt of the billions of potential children you could have that are never going to be born. Right now. Even if you stay pregnant till you die/never masturbate, this would effectively not help at all - each conception moves one potential from the space of "could be" to the space of "is", but at the same time eliminates at least several hundred million other potential children from the possibility space - that is just how human reproduction works. TL;DR: yes, yes they are. It is a silly question.
8twanvl
Does this mean that I am free to build a doomsday weapon that kills everyone born after September 4th 2013 100 years from now, if that gets me a cookie? Not necessarily. It would merely be your obligation to have as many children as possible, while still ensuring that they are healthy and well cared for. At some point having an extra child will make all your children less well off. Why is there a threshold at birth? I agree that it is a convenient point, but it is arbitrary. Why should I commit suicide? That reduces the number of people. It would be much better to start having children. (Note that I am not saying that this is my utility function).
3Eliezer Yudkowsky
The "infinitely so" part seems wrong, but the idea is that 4D histories which include a sentient being coming into existence, and then dying, are dispreferred to 4D world-histories in which that sentient being continues. Since the latter type of such histories may not be available, we specify that continuing for a billion years and then halting is greatly preferable to continuing for 10 years then halting. Our degree of preference for such is substantially greater than the degree to which we feel morally obligated to create more people, especially people who shall themselves be doomed to short lives.
3Alejandro1
The switch from consequentialist language ("4D histories which include… are dispreferred") to deontological language ("…the degree to which we feel morally obligated to create more people") is confusing. I agree that saving the lives of existing people is a stronger moral imperative than creating new ones, at the level of deontological rules and virtuous conduct which is a large part of everyday human moral reasoning. I am much less clear that, when evaluating 4D histories, I assign higher utility to one with few people living long lives than to one with more people living shorter lives. Actually, I tend towards the opposite intuition, preferring a world with more people who live less (as long as their lives are still well worth living, etc.)
0Armok_GoB
Not sure what part of this comment tree this belongs to, so just posting it here where it's likely to be seen: It struck me with an image that it's not at all necessary that these tradeoffs are actually a thing once you dissolve the "person" abstraction; it's possible that something like the following is optimal: half the universe is dedicated to searching the space of all experiences in order, starting with the highest utility/most meaningful/lowest-hanging fruit. This is then aggregated, metadata added, and sent to the other half, which is tiled with minimal context-experiencing units equivalent to individual people's subjective whatever. In the end, you end up with the equivalent of half the number of individual people as if that was your only priority, each having the utility of a single person with the entire future history of half the universe dedicated to it, including context of history. That's the best-case scenario. It's pretty certain SOME aspect or another of the fragile godshatter will disallow it, obviously. Yeah, this was basically pseudo-tangential musings.
0A1987dM
If by “old humans” you mean healthy adults, yes. If you mean this, no. (IMO -- YMMV.)
0Alsadius
Death isn't just a negative for the dead person - it also causes paperwork and expenses, destruction of relationships, and grief among the living.
2MugaSofer
This is true, but in my experience usually used to massage models that don't consider death a disutility into giving the right answers. I can't think of ever hearing this argument used for any other reason, in fact, in meatspace. (Replying to this comment out of context on the Recent Comments.)
0Alsadius
The context is someone asking whether it's better to stop existing people from dying or just make new people.
0MugaSofer
Hmm. I guess I'm going to cautiously say "called it!"
0drethelin
Yes.
1twanvl
Because?
7drethelin
A level 5 character is more valuable than a level 1 character. A person who is older has more to give the world and has been more invested in than a baby. They're a lot less replaceable. Also, I like 'em more.

Is there a good way to avoid HPMOR spoilers on prediction book?

8gwern
Since PB users' calibrations are not yet good enough to see the future, you can easily avoid MoR spoilers by subscribing to the email or RSS alerts for new chapters & reading them as appropriate.
5Adele_L
This is the obvious solution, but I want to reread what I've currently read, and have some time to think about the story and try creating an accurate causal model of events and such in the story as I read new!Adele material (Eliezer says it's supposed to be a solvable puzzle). I don't have time to do this right now, so in the meantime, I try to avoid spoilers.
2Douglas_Knight
I used feed43 to create an rss feed out of recent predictions. Then I used feedrinse to filter out references to hpmor resulting in a safe feed. (Update: chaining unreliable services makes something even less reliable.) You could do the same for the pages of recently judged or future or users you follow. I think feedrinse offers to merge feeds (into a "channel") before or after doing the filtering. But if you find someone new and just want to click on the username, you'll leave the safe zone. Even if you see someone you have processed, the username will take you to the unsafe page. A better solution would be to write a greasemonkey script that modified each predictionbook page as you look at it. The final feedrinse feed works in a couple of my browsers, but not chrome. Probably sending it through feedburner would fix it. feed43 was finicky. The item search pattern was: {_} {_}{%}{%} The regexp I used in feedrinse was /hp.?mor/ It is case insensitive and manages to eliminate "HP MoR:", "[HPMOR]", etc. It won't work if they spell it out, or just predict "Harry is orange" without indicating which story they're predicting about. In that case, someone will probably leave a hpmor comment, but this doesn't see such comments.
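The filtering rule described above is easy to sanity-check locally. Here is a minimal sketch of the same `/hp.?mor/` case-insensitive pattern applied to some hypothetical prediction titles (the titles are made-up examples, not real PredictionBook entries):

```python
import re

# The filter pattern from the comment above: case-insensitive, with an
# optional single character between "hp" and "mor".
pattern = re.compile(r'hp.?mor', re.IGNORECASE)

# Hypothetical prediction titles for illustration.
titles = ["HP MoR: Harry learns X", "[HPMOR] ch. 95", "Harry is orange"]
matched = [t for t in titles if pattern.search(t)]
print(matched)  # the first two match; the bare "Harry is orange" slips through
```

As the comment notes, the pattern catches "HP MoR:" and "[HPMOR]" variants but misses predictions that never name the story, which is the gap left for human judgment.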
2Jayson_Virissimo
If you are skilled in the art of Ruby, then yes. Otherwise, maybe. People (myself included) have been complaining about the lack of tagging/sorting system on PB for quite some time, but so far, no one has played the hero.

The following query is sexual in nature, and is rot13'ed for the sake of those who would either prefer not to encounter this sort of content on Less Wrong, or would prefer not to recall information of such nature about my private life in future interactions.

V nz pheeragyl va n eryngvbafuvc jvgu n jbzna jub vf fvtavsvpnagyl zber frkhnyyl rkcrevraprq guna V nz. Juvyr fur cerfragyl engrf bhe frk nf "njrfbzr," vg vf abg lrg ng gur yriry bs "orfg rire," juvpu V ubcr gb erpgvsl.

Sbe pynevsvpngvba, V jbhyq fnl gung gur trareny urnygu naq fgnovy... (read more)

7Locaha
To be honest, you sound a bit like a person who made a billion dollars and now tries to crowd-source a way to make ten billions. :-)

Well, I'm flattered that you think my position is so enviable, but I also think this would be a pretty reasonable course of action for someone who made a billion dollars.

2Sabiola
This book pbhyq uryc jvgu gur fgnzvan. Vg jbexrq sbe zl uhfonaq, jura ur gevrq vg n srj lrnef ntb.
0Desrtopa
Are the instructions anything simple enough that I could replicate them without needing to buy the entire book?
0Sabiola
Maybe, but then I'd have to read it to find out, and I have many other books I'd like to read. Maybe you can find it in the library?
0Desrtopa
I'll check; I'm pretty sure my own library doesn't have a Sex section, but it might be in network. Asking to order it would be pretty embarrassing, I have to admit, especially at my own library where a lot of the people who work there know me by name.
0khafra
Dewey Decimal number 613.96, IIRC from my internet-deprived adolescence.
0Douglas_Knight
If you're too cheap to spend $4 at amazon, pirate it.
2NancyLebovitz
Slow Sex seems to help at least some people move from good to great.
1Desrtopa
Does that entail sex literally done slowly? We could try it out, but that doesn't seem to be to her preferences.
0NancyLebovitz
It involves learning to pay more attention as a meditative practice, but not (I think) a recommendation to always go slowly.
1drethelin
Practice makes perfect. I think a lot of good sex is intuitively reading your partner's signals and ramping things up/down with good timing in response to them. I think this is something you might be able to learn via logos but I think it's much more likely to be something you need to experience before you can get good at it. When to pull hair, when to thrust deeper, etc. In general I and whoever I'm with have had more fun when I felt I had a good idea of what they wanted in the moment, which I think I've gotten better at mainly through practice.
1Desrtopa
I suspect that I can continue to improve with practice, but I'd like to be able to set out every option available to me on the table. Even if I can attain the status of "best" without taking such extraordinary measures, this is something I'm genuinely competitive on, which at least to me means that simply taking first place isn't sufficient if I can still see avenues to top myself.