This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

July Part 1


It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)

This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.

Thoughts? (If someone's said this before, I apologize for not remembering it.)

No, I want my definition of "emergent" to say that the ideal gas laws are emergent properties of molecules. Why not just say "We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description"?
The high-level structure shouldn't be the same as the low level structure, because I don't want to say a pile of sand emerges from grains of sand.
ISTM that the present usage of "emergent" is actually pretty well-defined as a cluster, and it doesn't include the ideal gas laws. I'm offering a candidate way to cash out that usage without committing the Mind Projection Fallacy.
The fallacy here is thinking there's a difference between the way the ideal gas laws emerge from particle physics and the way intelligence emerges from neurons and neurotransmitters. I've only heard "emergent" used in the following way: a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, and the high-level description is not easily predictable from the low-level description. For instance, gliders moving across the screen diagonally is emergent in Conway's Life. The "easily predictable" part is what makes emergence in the map, not the territory.
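To make that Life example concrete, here is a minimal sketch (the sparse-set representation and coordinates are my own choices, not from the thread): the low-level description is the two-line update rule, and the high-level description — "a glider translates one cell diagonally every four generations" — is nowhere visible in it.

```python
from itertools import product

def step(live):
    """One generation of Conway's Life on a sparse set of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # a cell is live next step if it has exactly 3 neighbours (birth or
    # survival), or 2 neighbours and is currently live (survival)
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# g is now the same five-cell shape shifted one cell diagonally
```

Nothing in `step` mentions gliders; the diagonal motion is only found by running the rule and looking.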
Er, did you read the grandparent comment?
Yes. My point was that emergence isn't about what we know how to derive from lower-level descriptions, it's about what we can easily see and predict from lower-level descriptions. Like Roko, I want my definition of emergence to include the ideal gas laws (and I haven't heard the word used to exclude them). Also see this comment.
For what it's worth, Cosma Shalizi's notebook page on emergence has a very reasonable discussion of emergence, and he actually mentions macro-level properties of gas as a form of "weak" emergence: To define emergence as it is normally used, he adds the criterion that "the new property could not be predicted from a knowledge of the lower-level properties," which looks to be exactly the definition you've chosen here (sans map/territory terminology).
Let's talk examples. One of my favorite examples to think about is Langton's Ant. If we taboo "emergence" what do we think is going on with Langton's Ant?
We have one description of the ant/grid system in Langton's Ant: namely, the rules which totally govern the behavior of the system. We have another description of the system, however: the recurring "highway" pattern that apparently results from every initial configuration tested. These two descriptions seem to be connected, but we're not entirely sure how. (The only explanation we have is akin to this: Q: Why does every initial configuration eventually result in the highway pattern? A: The rules did it.) That is, we have a gap in our map. Since the rules, which we understand fairly well, seem in some intuitive sense to be at a "lower level" of description than the pattern we observe, and since the pattern seems to depend on the "low-level" rules in some way we can't describe, some people call this gap "emergence."
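The rules fit in a few lines, which is part of what makes the gap striking. Here's a minimal sketch (representation mine; the step counts assume the standard result that the ant enters its period-104 highway after roughly 10,000 steps):

```python
def run_ant(steps):
    """Langton's Ant: white cell -> turn right, black cell -> turn left,
    flip the cell's colour, move forward one cell."""
    black = set()            # cells currently black; the grid starts all white
    x, y = 0, 0
    dx, dy = 0, -1           # facing "up" (y grows downward)
    for _ in range(steps):
        if (x, y) in black:  # black: turn left, flip cell to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                # white: turn right, flip cell to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return black, (x, y)

# Well past the ~10,000-step chaotic phase, sample the ant's position one
# highway period (104 steps) apart: it advances by a fixed diagonal offset.
p1 = run_ant(11000)[1]
p2 = run_ant(11104)[1]
p3 = run_ant(11208)[1]
```

That the same five lines of rules produce first chaos and then this rigid diagonal highway is exactly the low-level/high-level gap the comment describes.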
I recall hearing, although I can't find a link, that the Langton Ant problem has been solved recently. That is, someone has given a formal proof that every ant results in the highway pattern.
It's worth checking on the Stanford Encyclopedia of Philosophy when this kind of issue comes up. It looks like this view - emergent=hard to predict from low-level model - is pretty mainstream. The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower." There's more there, some of which seems sensible and some of which I don't understand.
Many thanks!
It seems problematic that as soon as you work out how to derive high-level behavior from low-level behavior, you have to stop calling it emergent. It seems even more problematic that two people can look at the same phenomenon and disagree on whether it's "emergent" or not, because Bob knows the relevant derivation of high-level behavior from low-level behavior but Alice doesn't, even if Alice knows that Bob knows. Perhaps we could refine this a little, and make emergence less subjective, but still avoid the mind projection fallacy. We say that a system X has emergent behavior if there exists an exact and simple low-level description and an inexact but easy-to-compute high-level description, and the derivation of the high-level laws from the low-level ones is much more complex than either. [In the technical sense of Kolmogorov complexity] (Like "has chaotic dynamics", it is a property of a system.)
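Written out in the Kolmogorov-complexity notation the comment invokes (the symbols are mine, not from the thread), the proposed criterion is roughly:

```latex
\operatorname{K}\bigl(\text{derivation of high-level laws from low-level laws}\bigr)
\;\gg\;
\max\bigl\{\operatorname{K}(\text{low-level description}),\;
           \operatorname{K}(\text{high-level description})\bigr\}
```

This makes emergence an objective (if uncomputable) property of the system rather than of any particular observer's knowledge.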
I dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign to a connotation of unsolved problem.
By your definition, is the empirical fact that one tenth of the digits of pi are 1s emergent behavior of pi? I may not understand the work that "low-level" and "high-level" are doing in this discussion. On the length of derivations, here are some relevant Gödel clichés: system X (for instance, arithmetic) often obeys laws that are underivable. And it often obeys derivable laws of length n whose shortest derivation has length busy-beaver-of-n. (Über die Länge von Beweisen is the title of a famous short Gödel paper. He revisits the topic in a famous letter to von Neumann, available here:
Just a pedantic note: pi has not been proven normal. Maybe one fifth of the digits are 1s.
I'll stick to it. It's easier to perform experiments than it is to give mathematical proofs. If experiments can give strong evidence for anything (I hope they can!), then this data can give strong evidence that pi is normal: Maybe past ten-to-the-one-trillion digits, the statistics of pi are radically different. Maybe past ten-to-the-one-trillion meters, the laws of physics are radically different.
The latter case seems more likely to me.
I was just thinking about the latter case, actually. If g equalled G (m1 ^ (1 + (10 ^ -30))) (m2 ^ (1 + (10 ^ -30))) / (r ^ 2), would we know about it?
Well, the force of gravity isn't exactly what you get from Newton's laws anyway (although most of the easily detectable differences, like that in the orbit of Mercury, are better thought of as due to relativity's effect on time than a change in g). I'm not actually sure how gravitational force could be non-additive with respect to mass. One would then have the problem of deciding what constitutes a single object. A macroscopic object isn't a single object in any sense useful to physics. Would one, for example, calculate the gravity of Earth as a large collection of particles, or as all of them together? But the basic point, that there could be weird small errors in our understanding of the laws of physics, is always an issue. To use a slightly more plausible example, if, say, the force of gravity on baryons were slightly stronger than that on leptons (slightly different values of G), we'd be unlikely to notice. I don't think we'd notice even if it were in the 2nd or 3rd decimal of G (partially because G is such a very hard constant to measure).
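For scale, a quick back-of-the-envelope estimate (my numbers, not from the thread) of how large the proposed exponent tweak would be, treating the Earth as a single object:

```python
import math

m_earth = 5.97e24   # kg; only the order of magnitude matters here
eps = 1e-30         # the hypothetical extra exponent on each mass

# m**eps = exp(eps * ln m) ~ 1 + eps * ln(m) for tiny eps,
# so each mass factor changes the force by about eps * ln(m);
# with both masses modified the total fractional change is twice that.
frac_change = 2 * eps * math.log(m_earth)   # roughly 1e-28
```

A fractional change of order 10^-28 is far below any conceivable measurement of G, which supports the point that such a tweak would go unnoticed.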
IMO, that would be emergent behaviour of mathematics, rather than of pi. Pi isn't a system in itself as far as I can see.
I have in mind a system, for instance a computer program, that computes pi digit-by-digit. There are features of such a computer program that you can notice from its output, but not (so far as anyone knows) from its code, like the frequency of 1s.
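Such a program is easy to sketch. Here is a minimal version (Machin's formula is my choice; the comment doesn't specify an algorithm) that computes the first 1000 digits and tallies how often each appears — and, as the comment says, nothing in the code hints at the answer:

```python
from collections import Counter

def pi_digits(n):
    """First n decimal digits of pi (including the leading 3), via Machin's
    formula pi/4 = 4*arctan(1/5) - arctan(1/239) in integer arithmetic."""
    def arctan_inv(x, one):
        # arctan(1/x) in fixed point, scaled by `one`, summing pairs of terms
        total, term, k = 0, one // x, 0
        xsq = x * x
        while term:
            total += term // (4 * k + 1) - (term // xsq) // (4 * k + 3)
            term //= xsq * xsq
            k += 1
        return total
    one = 10 ** (n + 10)   # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[:n]

freq = Counter(pi_digits(1000))   # e.g. freq['1'] counts the 1s
```

The digit frequencies come out close to uniform, but that fact is only visible in the output, not in the code.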
If you had some physical system that computed digit frequencies of Pi, I'd definitely want to call the fact that the fractions were very close to 1 emergent behavior. Does anyone disagree?
I can't disagree about what you want, but I myself don't really see the point in using the word "emergent" for a straightforward property of irrational numbers. I wouldn't go so far as to say the term is useless, but whatever use it could have would need to describe more complex properties that are caused by simpler rules.
This isn't a general property of irrational numbers, although with probability 1 any irrational number will have this property. In fact, any random real number will have this property with probability 1 (rational numbers have measure 0 since they form a countable set). This is pretty easy to prove if one is familiar with Lebesgue measure. There are irrational numbers which do not share this property. For example, .101001000100001000001... is irrational and does not share this property.
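To see concretely how far that example is from having one tenth of its digits be 1s, here is a quick sketch (construction per the comment; variable names mine) tallying the share of 1s among its first 10,000 digits:

```python
def digits_of_example(n):
    """First n digits of 0.101001000100001...: a '1' followed by k zeros,
    for k = 1, 2, 3, ..."""
    out = []
    k = 1
    while len(out) < n:
        out.append('1')
        out.extend('0' * k)
        k += 1
    return out[:n]

d = digits_of_example(10_000)
share_of_ones = d.count('1') / len(d)   # about sqrt(2n)/n, which tends to 0
```

The share of 1s in the first n digits shrinks like sqrt(2n)/n, so the number is irrational (the block lengths never repeat) yet nowhere near simply normal.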
True enough. It would seem that "irrational number" is not the correct term for the set I refer to.
The property you are looking for is normality to base 10. See normal number. ETA: Actually, you want simple normality to base 10, which is slightly weaker.
Any irrational number drawn from what distribution? There are plenty of distributions that you could draw irrational numbers from which do not have this property, and which contain the same number of numbers in them. For example, the set of all irrational numbers in which every other digit is zero has the same cardinality as the set of all irrational numbers.
I'm presuming he's talking about measure, using the standard Lebesgue measure on R
Yes, although generally when asking these sorts of questions one looks at the standard Lebesgue measure on [0,1] or [0,1) since that's easier to normalize. I've been told that this result also holds for any bell-curve distribution centered at 0, but I haven't seen a proof of that and it isn't at all obvious to me how to construct one.
Well, the quick way is to note that the bell-curve measure is absolutely continuous with respect to Lebesgue measure, as is any other measure given by an integrable distribution function on the real line. (If you want, you can do this by hand as well, comparing the probability of a small bounded open set in the bell curve distribution with its Lebesgue measure, taking limits, and then removing the condition of boundedness.)
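Spelled out, the absolute-continuity step is just one line (the notation is mine, not from the thread):

```latex
\mu(A) \;=\; \int_A f \,\mathrm{d}\lambda
\qquad\Longrightarrow\qquad
\bigl(\,\lambda(A) = 0 \;\implies\; \mu(A) = 0\,\bigr),
```

so any property holding for Lebesgue-almost-every real (such as simple normality of the digits) also holds for almost every real drawn from any distribution with a density, bell curves included.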
Excellent, yes that does work. Thanks very much!
The only problem with that seems to be that when people talk about emergent behavior they seem to be more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior"- that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one but one would need to be careful that the common connotations don't cause confusion.
I agree. But given that outsiders use the term all the time, and given that they can point to a reasonably large cluster of things (which are adequately contained in the definition I offered), it might be more helpful to say that emergence is a statement of a known unknown (in particular, a missing reduction between levels) than to refuse to use the term entirely, which can appear to be ignoring phenomena.

Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?

Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.


I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.

So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.

For example, consider these posts, and comments on them, that you deleted:

I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.

Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).

And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.

I don't blame him for removing all of his contributions after his post was treated like that.

It's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.
I didn't insult Roko. The decision, and justification given, seem wholly irrational to me (which is separate from claiming a right to demand that decision altered).
It's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result. (This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)
What "treatment" did you have in mind? At best, Roko made an honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko's MindWipe was within his rights, but he can't help having this very public action judged by others. What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has "failed" to do that he's just picking up his toys and going home.
I just noticed this. A brilliant disclaimer!
Parent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).

Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.

Could the people who have such links post them here?
I don't really see what the fuss is. His articles and comments were mediocre at best.

I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.


I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

Allow me to provide a little context by quoting from a comment, now deleted, Eliezer made this weekend in reply to Roko and clearly addressed to Roko:

I don't usually talk like this, but I'm going to make an exception for this case.

Listen to me very closely, you idiot.

[paragraph entirely in bolded caps.]

[four paragraphs of technical explanation.]

I am disheartened that people can be . . . not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Although it does not IMHO make it praiseworthy, the above quote probably makes Roko's decision to mass delete his comments more understandable on an emotional level.

In defense of Eliezer, the occasion of Eliezer's comment was one in which IMHO strong emotion and strong language might reasonably be seen as appropriate.

If either Roko or Eliezer wants me to delete (part of all of) this comment, I will.

EDIT: added the "I don't usually talk like this" paragraph to my quote in response to criticism by Aleksei.

I'm not them, but I'd very much like your comment to stay here and never be deleted.

Your up-votes didn't help, it seems.
Woah. Thanks for alerting me to this fact, Tim.
Out of curiosity, what's the purpose of the banning? Is it really assumed that banning the post will mean it can't be found in the future via other means or is it effectively a punishment to discourage other people from taking similar actions in the future?
Does not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.
I'm still having trouble seeing how so much global utility could be lost because of a short blog comment. If your plans are that brittle, with that much downside, I'm not sure security by obscurity is such a wise strategy either...
The major issue as I understand it wasn't the global utility problem but the issue that when Roko posted the comment he knew that some people were having nightmares about the scenario in question. Presumably increasing the set of people who are nervous wrecks is not good.
I was told it was something that, if thought about too much, would cause post-epic level problems. The nightmare aspect wasn't part of my concept of whatever it is until now. I also get the feeling Eliezer wouldn't react as dramatically as the above synopsis implies unless it was a big deal (or hilarious to do so). He seems pretty ... rational, I think is the word. Despite his denial of being Quirrell in a parent post, a non-deliberate explosive rant and topic banning seems unlikely. He also mentions that only a certain inappropriate post was banned, and Roko said he deleted his own posts himself. And yet the implication going around is that it was all deleted as administrative action. A rumor started by Eliezer himself so he could deny being "evil," knowing some wouldn't believe him? Quirrell wouldn't do that, right? ;)

I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.

Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.

EDIT: No, it wasn't a side effect, Roko did it on purpose.

Notice: I am not Professor Quirrell in real life.

Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.

Notice: I am not Professor Quirrell in real life.

And that is exactly what Professor Quirrell would say!

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Of course as you know very well. :)

A side effect of banning one post, I think;

In a certain sense, it is.

Of course, we already established that you're Light Yagami.
I'm not sure we should believe you.
JamesAndrix: Note to reader: This thread is curiosity-inducing, and this is affecting your judgement. You might think you can compensate for this bias, but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]. I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough. Don't assume it's OK because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.) Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway. Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.
Technically, you didn't say "for now".

Cryo-wives: A promising comment from the NYT Article:

As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests!

My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing.

Part of the strength of my negative response, I realized, had to do with the fact that my relationship with my own mortality was on shaky ground. I don't want to die. But I'm fairly certain I'm going to. Like many people, I've struggled to come to a place where I can accept the specter of my own death with some grace. Humbleness and acceptance in the face of death are valued very highly (albeit not always explicitly) in our culture. The companion, I think, to this humble acceptance of death is a humble (and painful) acce

...

That is really a beautiful comment.

It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.

One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.

Eliezer Yudkowsky:
I always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.
That's more comparable to being in a long coma with some uncertain possibility of waking up from it, so perhaps it could be reframed along those lines; some people probably do specify that they should be taken off of life support if they are found comatose, but to choose to be kept alive is not socially disapproved of, as far as I know.
Hopefully this provides incentive for people to kick Eliezer's ass at FAI theory. You don't want to look cultish, do you?
To me, the most appealing aspect of #lesswrong is that my comments will not be archived for posterity. This is also an interesting quote. Edit: I obviously missed the "only" in your note there.

I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.

The link to the video is Here:

Pen-and-paper interviews would almost certainly be more accurate. The problem being that images of people writing on paper are especially un-cinematic. The participants were encouraged to take as much time as they needed; many of them took several minutes before responding to some questions. However, the majority of them were concerned with how much time the interview would take up, and their quick responses were self-imposed. As for whether the evidence is too messy to draw firm conclusions from, I agree that it is. This is an inherent problem with documentaries. Omissions of fact are easily justified. Also, just like in fiction films, a higher degree of manipulation over the audience is more sought after than accuracy.
I just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment.

Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.

This is really an excellent, down-to-earth, one-minute teaser, to go that route. Excellent writing. I wish I had a follow-up move for those who get interested after that point but raise doubts, be they philosophical, religious, moral, or scientific (the last one probably the easiest). I know those issues have been discussed already, but how could one react in a five-minute coffee-break, when the co-worker responds (standard phrases to go): "But death gives meaning to life. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die, you know science, don't ya?" I know the sequence answers, but I utterly fail to give any non-embarrassing answer to such questions. It does not help to not be signed up for cryonics oneself.
If they think that we'll all eventually die even with cryonics, and they think that death gives meaning to life, then they don't need to worry about cryonics removing meaning, since it just pushes back the time until death. (I wouldn't bother addressing the death-gives-meaning-to-life claim except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.) As to the problem of too many people, overpopulation is a massive problem whether or not a few people get cryonically preserved. As to the problem of just the rich getting the benefits, patiently explain that there's no reason to think that the rich now will be treated substantially differently from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension, then the likely standard of living will be so high compared to now that even if the rich have more, it won't matter.
I talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation. "Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."
... "and like any medical treatment, initially only the rich will benefit, but they'll help bring down the price for everyone else. In fact, for just a small weekly payment..."
This is off-topic but I'm curious: How did you stumble on my blog?
Google alert on "Eliezer Yudkowsky". (Usually brings up articles about Friendly AI, SIAI and Less Wrong.)

Are any LWer's familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.

Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).

  1. When tempted to write a critique or to run an experimental refutation of a recent publication, consider the possibility of proposing joint research under an agreed protocol. We call the scholars engaged in such an effort participants. If theoretical differences are deep or if there are large differences in experimental routines between the laboratories, consider the possibility of asking a trusted colleague to coordinate the effort, referee disagreements, and collect the data. We call that person an arbiter.
  2. Agree on the details of an initial study, designed to subject the opposing claims to an informative empirical test. The participants should seek to identify results that would change their mind, at least to some extent, and should explicitly anticipate their interpretations of outcomes that would be inconsistent with their theoret
...

Since I assume he doesn't want to have existential risk increase, a credible threat is all that's necessary.

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail. See this comment.

(EDIT: I deleted the rest of this comment since it's redundant given what you've written elsewhere in this thread.)

This is true, and yes wfg did imply the threat.

(Now, analyzing not advocating and after upvoting the parent...)

I'll note that wfg was speculating about going ahead and doing it. After he did it (and given that EY doesn't respond to threats, the speculative wfg should act now based on the Roko incident), it isn't a threat. It is then just a historical sequence of events. It wouldn't even be a particularly unique sequence of events.

Wfg is far from the only person who responded by punishing SIAI in a way EY would expect to increase existential risk, i.e. not donating to SIAI when they otherwise would have, or updating their p(EY (SIAI) is a (are) crackpot(s)) and sharing that knowledge. The RationalWiki description would be an example.

I don't think he was talking about human beings there. Obviously you don't want a reputation for being susceptible to being successfully blackmailed, but IMHO, maximising expected utility results in a strategy which is not as simple as never responding to blackmail threats.
I think this is correct. Eliezer's spoken of The Strategy of Conflict before, which goes into mathematical detail about the tradeoffs of precommitments against inconsistently rational players. The "no blackmail" thing was in regard to a rational UFAI.
These are really interesting points. Just in case you haven't seen the developments on the thread, check out the whole thing here.

I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature. I'm also pretty sure it's irrational to ignore such things when making decisions. Perhaps not in a game theory sense, but absolutely in the practical life-theory sense. As an example, our entire legal system is based on these sorts of credible threats. If EY feels differently I'm not sure what to say except that I think he's being foolish. I see the game theory he thinks exempts him from considering others' reactions to his actions; I just don't think it's rational to completely ignore new causal information.

But like I said earlier, I'm not saying he has to do anything. I'm just making sure we all know that an existential risk reduction of 0.0001% via LW censorship won't actually be a reduction of 0.0001%. (And though you deleted the relevant part, I'd also be down to discuss what a sane moderation system should be like.)

Suppose I were to threaten to increase existential risk by 0.0001% unless SIAI agrees to program its FAI to give me twice the post-Singularity resource allocation (or whatever the unit of caring will be) that I would otherwise receive. Can you see why it might have a policy against responding to threats? If Eliezer does not agree with you that censorship increases existential risk, he might censor some future post just to prove the credibility of his precommitment.

If you really think censorship is bad even by Eliezer's values, I suggest withdrawing your threat and just try to convince him of that using rational arguments. I rather doubt that Eliezer has some sort of unfixable bug regarding censorship that has to be patched using such extreme measures. It's probably just that he got used to exercising strong moderation powers on SL4 (which never blew up like this, at least to my knowledge), and I'd guess that he has already updated on the new evidence and will be much more careful next time.

I do not expect that (non-costly signalling by someone who does not have significant status) to work any more than threats would. A better suggestion would be to forget raw threats and consider what other alternatives wfg has available by which he could deploy an equivalent amount of power that would have the desired influence. Eliezer moved the game from one of persuasion (you should not talk about this) to one about power and enforcement (public humiliation, censorship and threats). You don't take a pen to a gun fight.
Wei Dai:
I don't understand why, just because Eliezer chose to move the game from one of persuasion to one about power and enforcement, you have to keep playing it that way. If Eliezer is really so irrational that once he has exercised power on some issue, he is no longer open to any rational arguments on that topic, then what are we all doing here? Shouldn't we be trying to hinder his efforts (to "not take over the world") instead of (however indirectly) helping him?
Good questions, these were really fun to think about / write up :)

First off let's kill a background assumption that's been messing up this discussion: that EY/SIAI/anyone needs a known policy toward credible threats. It seems to me that stated policies toward credible threats are irrational unless a large number of the people you encounter will change their behavior based on those policies. To put it simply: policies are posturing.

If an AI credibly threatened to destroy the world unless EY became a vegetarian for the rest of the day, and he was already driving to a BBQ, is eating meat the only rational thing for him to do? (It sure would prevent future credible threats!) If EY planned on parking in what looked like an empty space near the entrance to his local supermarket, only to discover that on closer inspection it was a handicapped-only parking space (with a tow truck only 20 feet away), is getting his car towed the only rational thing to do? (If he didn't an AI might find out his policy isn't iron clad!) This is ridiculous. It's posturing. It's clearly not optimal.

In answer to your question: Do the thing that's actually best. The answer might be to give you 2x the resources. It depends on the situation: what SIAI/EY knows about you, about the likely effect of cooperating with you or not, and about the cost vs benefits of cooperating with you. Maybe there's a good chance that knowing you'll get more resources makes you impatient for SIAI to make a FAI, causing you to donate more. Who knows. Depends on the situation. (If the above doesn't work when an AI is involved, how about EY makes a policy that only applies to AIs?)

In answer to your second paragraph: I could withdraw my threat, but that would lessen my posturing power for future credible threats. (har har...) The real reason is I'm worried about what happens while I'm trying to convince him. I'd love to discuss what sort of moderation is correct for a community like less wrong -- it sounds amazing

I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.

More importantly, you aren't threatening to publicize something embarrassing to Eliezer if he doesn't comply, so it's technically extortion.

Wei Dai:
I think by "blackmail" Eliezer meant to include extortion since the scenario that triggered that comment was also technically extortion.
That one also has negative connotation, but it's your thinking to bias as you please :p
Technical analysis does not imply bias either way. Just curiosity. ;)
To be precise, not respond when whether or not one is 'blackmailed' is counterfactually dependent on whether one would respond, which isn't the case with the law. (Of course, there are unresolved problems with who 'moves first', etc.)
Fair enough, so you're saying he only responds to credible threats from people who don't consider if he'll respond to credible threats?
Yes, again modulo not knowing how to analyze questions of who moves first (e.g. others who consider this and then make themselves not consider if he'll respond).
To put that bit about the legal system more forcefully: If EY really doesn't include these sorts of things in his thinking (he disregards US laws for reasons of game theory?), we have much bigger things to worry about right now than 0.0001% censorship.

There's a course "Street Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.

Might be relevant to the simple math of everything stuff.

For a teaser, the part about singing logarithms looks cool.

From a recent newspaper story:

The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that's 18 and 24 zeros.

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?

It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24. Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also 1 in 10^24, and this happens every four weeks.
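As a sanity check of the arithmetic, the multiplication can be done exactly (the 1-in-10^6 per-ticket odds are the comment's illustrative assumption, not the actual odds for any specific Texas Lottery game):

```python
from fractions import Fraction

# Assumed per-ticket odds of winning: 1 in 10^6 (illustrative).
p_single = Fraction(1, 10**6)

# Four independent wins multiply the probabilities together.
p_four = p_single ** 4

print(p_four)  # 1/1000000000000000000000000, i.e. 1 in 10^24
```

Note this is the probability for four *specified* tickets; with millions of players buying tickets over decades, the event "somebody, somewhere, wins four times" is far less surprising.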
From the article (there is a near invisible more text button) And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)... I did see an analysis of this somewhere, I'll try and dig it up. Here it is. There is hackernews commentary here. I find this, from the original msnbc article, depressing
Is it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must've have figured out something we don't know, given that she's won four times?
The former. It is also depressing because it can be used in articles on the lottery in the following way, "See look at this person good at maths, playing the lottery, that must mean it is a smart thing to play the lottery".
Depressing because someone with a Ph.D. in math is playing the lottery. I don't see any reason to think she figured out some way of beating the lottery.
It's also far more likely that she cheated. Or that there is a conspiracy in the Lottery to make her win four times.
The most eyebrow-raising part of that article:
Hm. Have you looked at the multiverse lately? It's pretty apparent that something has gone horribly weird somewhere along the way. Your confidence should be limited by that dissonance. It's the same with MWI, and cryonics, and moral cognitivism, and any other belief where your structural uncertainty hasn't been explicitly conditioned on your anthropic surprise. I'm not sure to what extent your implied confidence in these matters is pedagogical rather than indicative of your true beliefs. I expect mostly pedagogical? That's probably fine and good, but I doubt such subtle epistemic manipulation for the public good is much better than the Dark Arts. (Added: In this particular case, something less metaphysical is probably amiss, like a math error.)
So let me try to rewrite that (and don't be afraid to call this word salad):

(Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read at your own caution and remember the distinction between impressions and beliefs. These are my impressions.)

You're Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision theoretically and fun theoretically speaking. Isn't it odd that you find yourself at this time and place given all the people you could have found yourself as in your reference class? Isn't that unsettling?

Now, if you look out at the stars and galaxies and seemingly infinite space (though you can't see that far), it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive. And yet you find yourself as Eliezer Yudkowsky (staring at a personal computer, no less) in a close-to-Singularity world: surely some extra parameters must have been thrown into the description of this universe; surely your experience is not best described with a universal prior alone, instead of a universal prior plus some mixture of agents computing things according to their preference. In other words, this universe looks conspicuously like it has been optimized around Eliezer-does-something-multiversally-important. (I suppose this should also up your probability that you're a delusional narcissist, but there's not much to do about that.)

Now, if such optimization pressures exist, then one has to question some reductionist assumptions: if this universe gets at least some of its measure from the preferences of simulator-agents, then what features of the universe would be affected by those preferences? Computational cost is one. MWI implies a really big universe, and what are the chances that you would
It all adds up to normality, damn it!
What whats to what? More seriously, that aphorism begs the question. Yes, your hypothesis and your evidence have to be in perfectly balanced alignment. That is, from a Bayesian perspective, tautological. However, it doesn't help you figure out how it is exactly that the adding gets done. It doesn't help distinguish between hypotheses. For that we need Solomonoff's lightsaber. I don't see how saying "it (whatever 'it' is) adds up to (whatever 'adding up to' means) normality (which I think should be 'reality')" is at all helpful. Reality is reality? Evidence shouldn't contradict itself? Cool story bro, but how does that help me?
This is rather tangential to your point, but the universe looks very computationally cheap to me. In terms of the whole ensemble, quantum mechanics is quite cheap. It only looks expensive to us because we measure by a classical slice, which is much smaller. But even if we call it exponential, that is very quick by the standards of the Solomonoff prior.
Hm, I'm not sure I follow: both a classical and quantum universe are cheap, yes, but if you're using a speed prior or any prior that takes into account computational expense, then it's the cost of the universes relative to each other that helps us distinguish between which universe we expect to find ourselves in, not their cost relative to all possible universes. I could very, very well just be confused. Added: Ah, sorry, I think I missed your point. You're saying that even infinitely large universes seem computationally cheap in the scheme of things? I mean, compared to all possible programs in which you would expect life to evolve, the universe looks hugeeeeeee to me. It looks infinite, and there are tons of finite computations... when you compare anything to the multiverse of all things, that computation looks cheap. I guess we're just using different scales of comparison: I'm comparing to finite computations, you're comparing to a multiverse.
No, that's not what I meant; I probably meant something silly in the details, but I think the main point still applies. I think you're saying that the size of the universe is large compared to the laws of physics. To which I still reply: not large by the standards of computable functions.
Eliezer Yudkowsky:
Er, sorry, I'm guessing my comment came across as word salad? Added: Rephrased and expanded and polemicized my original comment in a reply to my original comment.
Yeah I didn't get it either.
Hm. It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption. I'll try to rewrite that whole comment when I'm less tired.
Illusion of transparency: they can probably generate sense in response to anything, but it's not necessarily faithful translation of what you say.
Consider that one of my two posts, Abnormal Cryonics, was simply a narrower version of what I wrote above (structural uncertainty is highly underestimated) and that Nick Tarleton wrote about a third of that post. He understood what I meant and was able to convey it better than I could. Also, Nick Tarleton is quick to call bullshit if something I'm saying doesn't seem to be meaningful, which is a wonderful trait.
Well, that was me calling bullshit.
Thanks! But it seems you're being needlessly abrasive about it. Perhaps it's a cultural thing? Anyway, did you read the expanded version of my comment? I tried to be clearer in my explanation there, but it's hard to convey philosophical intuitions.
I find myself unable to clearly articulate what's wrong with your idea, but in my own words, it reads as follows: "One should believe certain things to be probable because those are the kinds of things that people believe through magical thinking."
The problem with that idea is that there is no default level of belief. You are not allowed to say What is the difference between hesitating to assign negligible probability vs. to assign non-negligible probability? Which way is the certainty, which way is doubt? If you don't have good understanding of why you should believe one way or the other, you can't appoint a direction where safe level of credence lies and stay there pending the enlightenment. Your argument is not strong enough to shift the belief of one in septillion up to something believable, but your argument must be that strong to do it. You can't appeal to being hesitant to believe otherwise, it's not a strong argument, but a statement about not having one.
Was your point that Eliezer's Everett Branch is weird enough already that it shouldn't be that surprising if universally improbable things have occurred?
Erm, uh, kinda, in a more general sense. See my reply to my own comment where I try to be more expository.
I'm afraid it is word salad.

Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.


Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)?

I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".

How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.

He is not high status as such, although he possibly could be if he didn't waste time being drunk. Okay, here goes the broad brush description of characters. Feel free to ask more questions to fill in details that you want.

Dramatis Personae

Mr G: Me. Tall scruffy geek. Takes little care of appearance. Tidy in social areas. Chats to everyone, remembers details of people's lives, although forgets people's names. Not particularly close (not Facebook friends with any of the others). Doesn't bring girls/friends home. Can tell a joke or make a humorous observation but not a master, can hold his own in banter though. Little evidence of social circle apart from occasional visits to friends far away. Accommodating to people's niggles and competent at fixing stuff that needs fixing. Does a fair amount of house work, because it needs doing. Has never suggested going out with the others, but has gone out by himself to random things. Is often out doing work at uni when others are at home. Shares some food with others, occasionally.

Miss C: Assertive, short, fairly plump Canadian supply teacher. Is mocked by Mr S for Canadianisms, especially when teaching the children that the British idiom is wrong. For example saying that learnt is not a word. Young, not very knowledgeable about current affairs/world. Boyfriend back home. Has smoked pot. Drinks and parties on the weekend, generally going out with friends from home. Facebook friends with the other 2 (I think). Fairly liberal. Came into the house a week before Mr G. Watches a lot of TV in the shared area. Has family and friends visit occasionally.

Miss B: Works in digital marketing (does stuff on managing virals). Dry sense of humour. Boyfriend occasionally comes to visit, boyfriend is a teacher who wants to be a stand up comedian. Is away most weekends, visiting family or boyfriend. Gets on with everyone on a surface level. Fairly pretty although not a stunner. Can banter a bit, but not much. Plays up to the "ditzy" personae some
This description seems very British and I'm not quite clear on some of it. For instance, I had no idea what a strop is. Urban Dictionary defines it as sulking, being angry, or being in a bad mood. Some of the other things seem like they would only make sense with more cultural context, specifically the emphasis on bantering and making witty remarks. I wouldn't say that this guy has great social skills, given his getting drunk and stealing food, slamming doors and walking around naked, and so forth. Pretty much the opposite, in fact. As to why he got kicked out, I guess people finally got tired of the way he acted, or this group of people was less tolerant of it.
By social skills I meant what people with Aspergers lack naturally. Magnetism/charisma, etc. It is hard to get that across in a textual description. People with poor social skills here know not to get drunk and wander around naked, but can't charm the pants off a pretty girl. The point of the story is that having charisma is in itself not a get out of jail free card that is sometimes described here. Sorry for the british-ness. It is hard to talk about social situations without thinking in my native idiom. I'll try and translate it tomorrow.
You're conflating a few different things here. There's seduction ability, which is its own unique set of skills (it's very possible to be good at seduction but poor at social skills; some of the PUA gurus fall in this category). There's the ability to pick up social nuances in real-time, which is what people with Aspergers tend to learn slower than others (though no one has this "naturally"; it has to be learned through experience). There's knowledge of specific rules, like "don't wander around naked". And charisma or magnetism is essentially confidence and acting ability. These skillsets are all independent: you can be good at some and poor at others. Well, of course not. For instance, if you punch someone in the face, they'll get upset regardless of your social skills in other situations. What this guy did was similar (though perhaps less extreme). Understood, and thanks for writing that story; it was really interesting. The whole British way of thinking is foreign to this clueless American, and I'm curious about it. (I'm also confused by the suggestion that being Facebook friends is a measure of intimacy.)
Interesting, I wouldn't have said that they were as independent as you make out. I'd say it is unusual to be confident with good acting ability and not be able to read social nuances (how do you know how you should act?). And confidence is definitely part of the PUA skillset.

Apart from that I'd agree, there are different levels of skill. When sober he was fairly good at everything. He would steer the conversations where he wanted, generally organise the flat to his liking and not do anything stupid like going around naked. If you looked at our interactions as a group, he would have appeared the Alpha.

His excuse for wandering around naked was that he thought he was alone and that he should have the right to go into the kitchen naked if he wanted to. I.e. he tried to brazen it out. That might give you some idea of his attitude, what he expected to get away with and that he had probably gotten away with it in the past.

Apart from the lack of common sense (when very drunk), I think his main problem was underestimating people or at least not being able to read them. He was too reliant on his feeling of being the Alpha to realise his position was tenuous. No one was relying upon the flat as their main social group, so no one cared about him being Alpha of that group. You might get upset but still not be able to do anything against the Guy. See Highschool.

People use Facebook in a myriad of different ways. Some people friend everyone they come across, which means their friends lists give little information. Mine is to keep an eye on the doings of people I care about. People I don't care about just add noise. So mine is more informative than most. Mr S. is very promiscuous with over 700 friends, I'm not sure about the other two.
I just assumed that for the sake of brevity he covered the other aspects under "etc". I would add in "intuitive aptitude for Machiavellian social politics".
Do I correctly interpret this to say that both Miss C and Miss B go out (drinking?) on the weekends, but not together?
Yup. Sorry, that wasn't clear.
Yes. And do not hesitate to use many many words.

Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)

But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.


Daniel Dennett and Linda LaScola on Preachers who are not believers:

There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.

Paul Graham on guarding your creative productivity:

I'd noticed startups got way less done when they started raising money, but it was not till we ourselves raised money that I understood why. The problem is not the actual time it takes to meet with investors. The problem is that once you start raising money, raising money becomes the top idea in your mind. That becomes what you think about when you take a shower in the morning. And that means other questions aren't. [...]

You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want think about.

So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.

Now, being a professional exorcist does not give a high prior for rationality.

But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.

I wonder if this uncriticality has anything to do with, well, not expecting to be criticized...

Probably not very, because we can't actually imagine what that hypothetical person would say to us. It'd probably end up used as a way to affirm your positions by only testing strong points.
While I have difficulty imagining what someone far smarter than myself would say, what I can do is imagine explaining myself to a smart person who doesn't have my particular set of biases and hangups; and I find that does sometimes help.
I do it sometimes, and I think it helps.
I do it too - using some of the smarter and more critical posters on LW, actually - and I also think it helps. I think this defuses some of LucasSloan's criticisms below - if it's a real person, you can to a reasonable extent imagine how they might reply. I think it works because placing yourself in a conflict (even an imaginary one) narrows and sharpens your focus as the subconscious processes get activated that try to 'win' it. The risk is, though, that like any opinion formed or argued under the presence of an emotion, you become unreasonably certain of it.
I don't get the 'conflict' feeling when I do it. It feels more like 'betting mode', but with more specific counterarguments. Since it's all imaginary anyway, I don't feel committed enough to one side to activate conflict mode.

An akrasia fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.

This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist".
It might also be a useful tool for attaining self-knowledge outside of goal-directed behavior. Consider this passage from The Aleph:

What's the deal with programming, as a career? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

E.g., Fizzbuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties) but here's what I'd tell t...


I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).

This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.

Joel Spolsky gave a similar explanation. Makes sense. I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).
Do you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer. Is this reassuring?
Yes. Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky. Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D
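For a sense of scale, a bare-bones interpreter of the kind being discussed -- a stack machine that pushes numbers and executes words -- fits in a few lines of Python. (The word set and names here are illustrative, not standard Forth, and quote/comment parsing is exactly the part this sketch skips.)

```python
def forth(program):
    """Interpret a whitespace-separated Forth-like program; return the stack."""
    stack = []
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
    }
    for token in program.split():
        if token in words:
            words[token](stack)       # execute a known word on the stack
        else:
            stack.append(int(token))  # anything else is a number literal
    return stack

print(forth("2 3 + dup *"))  # [25]
```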
I'm probably not horribly incompetent (evidence: this and this), but there exist people who are miles above me, e.g. John Carmack (read his .plan files for a quick fix of inferiority) or Inigo Quilez who wrote the 4kb Elevated demo. Thinking you're "good enough" is dangerous.
From what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.
I don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (appending fizz to numbers associated only with fizz or buzz.) This is not a particularly compelling demonstration of your point that it should be easy. (I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)
The bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.) Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.
This seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)
Your general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated. Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:

    for x in range(1, 101):
        fizz = (x%3 == 0)
        buzz = (x%5 == 0)
        if fizz and buzz:
            print "FizzBuzz"
        elif fizz:
            print "Fizz"
        elif buzz:
            print "Buzz"
        else:
            print x

This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter and use it to solve some Project Euler problems for finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
How did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?
The least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publicly, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration. (I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to FizzBuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)
I think there are failure modes from the curiosity-stopping anti-epistemology cluster that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn it. With the right approach, anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so it's often not worth it).
Do recruiting firms require that you have formal programming credentials?
Formal credentials certainly help, but I wouldn't say they're required, as long as you have something (such as a completed project) to prove you have skills.
My first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and never stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I've had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview. Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-)

ETA: I just noticed that I didn't answer your last question. I'm not sure what to recommend to competent programmers because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.
I was very surprised by this until I read the word "Moscow."
Is it different in the US? I imagined it was even easier to find a job in the Valley than in Moscow.
I was unsurprised by this until I read the word "Moscow". (Russian programmers & mathematicians seem to always be heading west for jobs.)
I took an internship after college. Professors can always use (exploit) programming labor. That gives you semi-real experience (might be very real if the professor is good) and allows you to build credibility and confidence.
Python tip: Using "range" creates a big list in memory, which is a waste of space. If you use xrange, you get an iterable object that only uses a single counter variable.
Hah. I first wrote the example using xrange, then changed it to range to make it less confusing to someone who doesn't know Python :-)
Not in Python 3! range in Python 3 works like xrange in the previous versions (and xrange doesn't exist any more). (But print would use a different syntax.)
In fact, range in Python 2.5ish and above works the same, which is why they removed xrange in 3.0.
There was a discussion of transitioning to Python 3 on HN a week or two ago; apparently there are going to be a lot of programmers, and even more shops, holding off on transitioning, because it will break too many existing programs. (I haven't tried Python since version 1, so I don't know anything about it myself.)
A big problem with transitioning to Python 3 is that there are quite a few third-party libraries that don't support it (including two I use regularly - SciPy and Pygame). Some bits of the syntax are different, but that shouldn't be a huge issue except for big codebases, since there's a script to convert Python 2.6 to 3.0. I've used Python 3 but had to switch back to 2.6 so I could keep using those libraries :P
--"Epigrams in Programming", by Alan J. Perlis; ACM's SIGPLAN publication, September, 1982
Programming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.
Do we have any refs for this? I know there's "The Camel Has Two Humps" (Alan Kay on it, the PDF), but anything else?
And going by his other papers, though, it looks like the effect isn't nearly so strong as was originally claimed. (Though that's wrt whether his "consistency test" works, didn't check about whether bimodalness still holds.)
No, just personal experience and observation backed up by stories and blog posts from other people. See also Joel Spolsky on Hitting the High Notes. Spolsky's line is that some people are just never going to be that good at programming. I'd rephrase it as: some people are just never going to be motivated to spend long hours programming for the sheer fun and challenge of it, and so they're never going to be that good at programming.
This is a good null hypothesis for skill variation in many cases, but not one supported by the research in the paper gwern linked.
Fixed that for you. :) (I'm a current law student.)
In addition to this, if you're a good bricklayer, you might do, at most, twice the work of a bad bricklayer. It's quite common for an excellent programmer (a hacker) to do more work than ten average programmers--and that's conservative. The difference is more apparent. My guess might be that you hear this complaint from good programmers, Barry? Although, I can guarantee that everyone I've met can do at least FizzBuzz. We have average programmers, not downright bad ones.
I'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you. The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace. Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.
I think you overestimate human curiosity. For one, not everyone implements prime searching or Conway's game of life for fun. For two, even those that implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by incompetent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.
Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used. Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?
There really are people who would not take that few hours.
I don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial. So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.
There are really people who don't get pointers.
One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.) Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
Cute. Sad, but that's already more powerful than straight OO. Python and Ruby support adding/rebinding methods at runtime (one reason duck typing is more popular these days). You might want to look at functional programming if you haven't yet, since you've no doubt progressed since your epiphany. I've heard nice things about statically typed languages such as Haskell and O'Caml, and my personal favorite is Scheme.
Oddly enough, I think Morendil would get a real kick out of JavaScript. So much in JS involves passing functions around, usually carrying around some variables from their enclosing scope. That's how the OO works; it's how you make callbacks seem natural; it even lets you define new control-flow structures like jQuery's each() function, which lets you pass in a function which iterates over every element in a collection. The clearest, most concise book on this is Doug Crockford's Javascript: The Good Parts. Highly recommended.
The technical term for this is a closure. A closure is a first-class* function with some associated state. For example, in Scheme, here is a function which returns counters, each with its own internal ticker: To create a counter, you'd do something like `(define my-counter (make-counter))`. Then, to get values from the counter, you could call something like `(my-counter)`. Here is the same example in Python, since that's what most people seem to be posting in: *That is, a function which you can pass around like a value.
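A sketch of what that Python version presumably looks like (using Python 3's `nonlocal`; in Python 2 you'd smuggle the state in a mutable object instead):

```python
def make_counter():
    # each call to make_counter creates a fresh closure with its own state
    internal_variable = 0
    def counter():
        nonlocal internal_variable  # Python 3; Python 2 needs a workaround
        internal_variable += 1
        return internal_variable
    return counter

c1 = make_counter()
c2 = make_counter()
print(c1())  # 1
print(c1())  # 2
print(c2())  # 1 -- c2 has its own internal variable
```

Note that the two counters don't interfere with each other: each closure captured its own `internal_variable`.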
While we're sharing fun information, I'd like to point out a little-used feature of Markdown syntax: if you put four spaces before a line, it's treated as code. Behold:

    (define (make-counter)
      (let ([internal-variable 0])
        (lambda ()
          (begin
            (set! internal-variable (+ internal-variable 1))
            internal-variable))))

Also, the emacs rectangle editing functions are good for this. C-x r t is a godsend.
I suspect it's like how my brain reacts to negative numbers, or decimals; I have no idea how anyone could fail to understand them. But some people do. And, due to my tendency to analyse mistakes I make (especially factual errors) I remember the times when I got each one of those wrong. I even remember the logic I used. But they've become so ingrained in my brain now that failure to understand them is nigh inconceivable.
There is a difference in aptitude, but part of the problem is that pointers are almost never explained correctly. Many texts try to explain in abstract terms, which doesn't work; a few try to explain graphically, which doesn't work terribly well. I've met professional C programmers who therefore never understood pointers, but who did understand them after I gave them the right explanation. The right explanation is in terms of numbers: the key is that char *x actually means the same thing as int x (on a 32-bit machine, and modulo some superficial convenience). A pointer is just an integer that gets used to store a memory address. Then you write out a series of numbered boxes starting at e.g. 1000, to represent memory locations. People get pointers when you put it like that.
Yeah, pretty much anyone who isn't appallingly stupid can become a reasonably good programmer in about a year. Be warned though, the kinds of people who make good programmers are also the kind of people who spontaneously find themselves recompiling their Linux kernel in order to get their patched wifi drivers to work...
xkcd reference!
Dammit! That'll be shouted at my funeral!

This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence...

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?

Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.

However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.

Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?

Yoreth: That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.
The value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology. For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut. (What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)
A bad way to draw conclusions. A good way to make significant updates based on inference.
Would you be so kind as to spell out the exact sort of "update based on inference" that applies here?
??? "People who say stupid things are, all else being equal, more likely to say other stupid things in related areas".
That's a very vague statement, however. How exactly should one identify those expressions of stupid opinions that are relevant enough to imply that the rest of the author's work is not worth one's time?
Nobody knows (obviously), but you can try to train your intuition to do that well. You'd expect this correlation to be there.
In the context of LessWrong it should be considered trivial to the point of being outright patronising if not explicitly prompted. Bayesian inference is quite possibly the core premise of the community. In the process of redacting my reply I coined the term "Freudian Double-Entendre". Given my love of irony I hope the reader appreciates my restraint! <-- Example of a very vague statement. In fact if anyone correctly follows that I expect I would thoroughly enjoy reading their other comments.
Yep, and note that Hegel's philosophy is related to states more than Newton's physics is related to the occult.
Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow:

Lakatos: "Proofs and Refutations"
Quine: "Two Dogmas of Empiricism"
Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions"

All of these have things which are wrong, but they make arguments that need to be grappled with and understood. (The Copernican Revolution is more of a history book than a philosophy book, but it helps present a case of Kuhn's approach to the history and philosophy of science in great detail.) Kuhn is a particularly interesting case: I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case that I find weaker versions of his claims highly plausible. Kuhn is also just an excellent writer, full of interesting factual tidbits.

This seems like in general not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff, not just philosophy. And some of his philosophy is worth understanding simply because later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion: his dualism is much more well-defined than that of people before him. Hegel, however, is a complete muddle. I'd label a lot of Hegel as not even wrong.

ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?
Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist. I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile. I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in. PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.
I've always been told that Hegel basically affixed the section about Prussia due to political pressures, and that modern philosophers totally ignore it. Having said that, I wouldn’t read Hegel. I recommend avoiding reading original texts, and instead reading modern commentaries and compilations. 'Contemporary Readings in Epistemology' was the favoured first-year text at Oxford. Bertrand Russell's "History of Western Philosophy" is quite a good read too. The Stanford Encyclopaedia of Philosophy is also very good.
I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.
None that actively affiliate themselves with the label 'philosophy'.
This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Descartes is studied, there is usually a good deal of critique and rejection of his ideas. But there's still a lot of good stuff there, in the end. I have found Foucault to be a very interesting modern philosopher/historian. His book, I believe entitled "Madness and civilization", (translated from French), strikes me as a highly impressive analysis on many different levels. His writing style is striking, and his concentration on motivation and purpose goes very, very deep.
Maybe LW should have resident intellectual historians who read philosophy. They could distill any actual insights from dubious, old or badly written philosophy, and tell if a work is worthy reading for rationalists.

More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.

In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!

Plus, he gives a lot of information from his personal experience.

Be ...

I've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.

I have a question about prediction markets. I expect that it has a standard answer.

It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.

Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?

One way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager). Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.
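The "errors cancel out" mechanism can be illustrated with a toy simulation (entirely hypothetical numbers, not from the comment above): each bettor's estimate is the true probability plus zero-mean personal noise, and aggregating many such estimates recovers something close to the truth.

```python
import random

random.seed(0)
true_p = 0.7        # hypothetical true probability of the event
n_traders = 10000

# each trader's estimate = truth + idiosyncratic zero-mean error
estimates = [true_p + random.gauss(0, 0.15) for _ in range(n_traders)]

# a crude stand-in for the market price: the average of all estimates
market_price = sum(estimates) / n_traders
print(abs(market_price - true_p) < 0.02)  # the noise has mostly averaged out
```

In a casino, by contrast, there is nothing to average: the house fixes the price unilaterally, so individual errors never meet on opposite sides of a trade.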
I have also heard of at least one professional gambler who makes his living by identifying and confronting other peoples' superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong. Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.
The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.
Right. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players. But that's not what I was trying to get at with "predictably inefficient". What I meant was this:

Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot. Won't you be consistently wrong if you try that strategy? If so, how is this consistent wrongness accounted for in futarchy theory?

I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem?

I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: the market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.
Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38. This table shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
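Those numbers are easy to check (a quick sketch using the standard American-wheel figures cited above):

```python
# $1 straight-up bet: pays $35 profit, true chance 1 in 38
p_true = 1.0 / 38
profit = 35.0

ev = p_true * profit - (1 - p_true) * 1.0
print("EV per $1: %.4f" % ev)   # -0.0526: the house edge

# reading the payout naively as a probability overestimates it
implied_p = 1.0 / (profit + 1)  # 1/36
print("implied p %.4f vs true p %.4f" % (implied_p, p_true))
```

So a direct payout-to-probability translation overstates the chance by about 0.15 percentage points per number, which is exactly where the house edge lives.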
It's interesting that the market drives the odds so close to reality, but doesn't quite close the gap. Do you know if there are regulations that keep some rogue casino from selling roulette bets as though the odds were 1/37, instead of 1/36? I'm thinking now that the entire answer to my question is contained in Dagon's reply. Perhaps the gambling market is distorted by regulation, and its failure as a prediction market is entirely due to these regulations. Without such regulations, maybe the gambling business would function much more like an accurate prediction market, which I suppose would make it seem like a much less enticing business to go into. This would imply that, if you don't like casinos, you should want regulation on gambling to focus entirely on making sure that casinos don't use violence to keep other casinos from operating. Then maybe we'd see the casinos compete by bringing their odds closer to reality, which would, of course, make the casinos less profitable, so that they might close down of their own accord. (Of course, I'm ignoring games that aren't entirely games of chance.)
This really doesn't have much to do with the market. While I don't know the details of gambling laws in all the US states and Indian nations, I would be very surprised if there were regulations on roulette odds. Many casinos have roulette wheels with only one 0 (paid as if 1/36, actual odds 1/37), and with other casino games, such as blackjack, casinos frequently change the rules as part of a promotion or to try to get better odds. There is no "gambling market": casinos are places where people pay for entertainment, not to make money. While casinos do offer promotions and advertise favorable rules and odds, most people go for the entertainment, and no one who's serious about math and probability goes to make money (with exceptions for card-counting and poker tournaments, as orthonormal notes). Also see Unnamed's comment. Essentially, the answer is that a casino is not a market.
A single casino is not a market, but don't all casinos and gamblers together form a market for something? Maybe it's a market for entertainment instead of prediction ability, but it's a market for something, isn't it? Moreover, it seems, at least naïvely, to be a market in which a casino would attract more customers by offering more realistic odds.
Some casinos in Vegas have European roulette with a smaller house edge. I know this from a Vegas guidebook which listed where you could find the best odds at various games suggesting that at least some gamblers seek out the best odds. The Wikipedia link also states:
In the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).
The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house. If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).
But no casino has a monopoly on roulette. Yet the market doesn't seem to drive the odds to their correct values. Dagon notes above that regulations make it hard to enter the market as a casino. Maybe that explains why my naive expectations don't happen. Actually this raises another question for me. If I start a casino in Vegas, am I required to sell roulette bets as though the odds were p = 1/36, instead of, say, p = 1/37? [Edited for lack of clarity.]
Casinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.
Normally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.

I think you're overestimating your ability to see what exactly is wrong and how to fix it. Humans (westerners?) are biased towards thinking that improvements they propose would indeed make things better. This tendency is particularly visible in politics, where it causes the most damage.

More generally, humans are probably biased towards thinking their own ideas are particularly good, hence the "not invented here" syndrome, etc. Outside of politics, the level of confidence rarely reaches the level of threatening death and destruction if one's ideas are not accepted.


Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?

If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariab... (read more)

It may have to do with the manner you bring it up - it's not hard to see how saying something like "X is amazing" could be interpreted as "X is amazing... and you're not" (after all, how often do you tell your friends how amazing they are?), in which case the bias is some combination of status jockeying, cognitive dissonance and ego protection.
Wow, that seems like a very likely hypothesis that I completely missed. Is there some piece of knowledge you came in with or heuristic you used that I could have used to think up your hypothesis?
I've spent some time thinking about this, and the best answer I can give is that I spend enough time thinking about the origins and motivations of my own behavior that, if it's something I might conceivably do right now, or (more importantly) at some point in the past, I can offer up a possible motivation behind it. Apparently this is becoming more and more subconscious, as it took quite a bit of thinking before I realized that that's what I had done.
Could it be a matter of being excessively influenced by fiction? It's more convenient for stories if a character has some flaws and suffering.
Is this actually incorrect, though? As far as I know, people have problems and inadequacies. When they solve them, they move on to worrying about other things. It's probably a safe bet that the awesome people you're describing do as well. What probably is wrong is the assumption that general awesomeness makes hidden bad stuff more likely.
Possibly a form of the just-world fallacy.
Given that there's the halo effect (that you mention) plus the affect heuristic, it seems that if there's a bias, it goes the other way - people tend to think all positive attributes clump together. If both effects exist, that would cast doubt on whether it counts as a bias at all, as the direction of the error is not consistently one way. (Right?)
Will's remark suggests that the biases exist in different circumstances. If I'm following Will, then the halo effect occurs when people have already interacted with impressive individuals, whereas Will's reported effect occurs only when people are hearing about an impressive individual in a second-hand or third-hand way.

Day-to-day question:

I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.

  1. Action: Capture the frog and put it in the woods o
... (read more)
Eliezer Yudkowsky · 14y
I don't consider frogs to be objects of moral worth.

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.

-- David Pearce via Facebook


Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?

(Edited for clarity.)

Eliezer Yudkowsky · 14y
"Frogs have subjective experience" is the biggy, there's a number of other things I already know myself to be confused about which impact on that, and so I don't know exactly what I should be looking for in the frog that would make me think it had a sense of its own existence. Certainly there are any number of news items I could receive about the frog's mental abilities, brain complexity, type of algorithmic processing, ability to reflect on its own thought processes, etcetera, which would make me think it was more likely that the frog was what a non-confused person of myself would regard as fulfilling the predicate I currently call "capable of experiencing pain", as opposed to being a more complicated version of neural network reinforcement-learning algorithms that I have no qualms about running on a computer. A simple example would be if frogs could recognize dots painted on them when seeing themselves in mirrors, or if frogs showed signs of being able to learn very simple grammar like "jump blue box". (If all human beings were being cryonically suspended I would start agitating for the chimpanzees.)
I am very surprised that you suggest that "having subjective experience" is a yes/no thing. I thought it is consensus opinion here that it is not. I am not sure about others on LW, but I would even go three steps further: it is not even a strict ordering of things. It is not even a partial ordering of things. I believe it can be only defined in the context of an Observer and an Object, where Observer gives some amount of weight to the theory that Object's subjective experience is similar to Observer's own.
Links? I'd be interested in seeing what people on LW thought about this, if it's been discussed before. I can understand the yes/no position, or the idea that there's a blurry line somewhere between thermostats and humans, but I don't understand what you mean about the Observer and Object. The Observer in your example has subjective experience?
I like the way you phrased your concern for "subjective experience" -- those are the types of characteristics I care about as well. But I'm curious: What does ability to learn simple grammar have to do with subjective experience?
We're not looking for objective experience, thus we're simply looking for experience. If we now define 'a sense of one's own existence' as the experience of self-awareness, i.e. consciousness, and if we also regard unconscious experience as unworthy, we're left with consciousness. Now since we cannot define consciousness, we need a working definition. What are some important aspects of consciousness? 'Thinking', which requires 'knowledge' (data), is not the operative point between being an iPhone and being human. It's information processing after all. So what do we mean by unconscious, as opposed to conscious, decision making? It's about deliberate, purposeful (goal-oriented) adaption. Thus to be conscious is to be able to shape your environment in a way that suits your volition.

* The ability to define a system, within the environment in which it is embedded, to be yourself.
* To be goal-oriented.
* The specific effectiveness and order of transformation by which the self-defined system (you) shapes the outside environment, in which it is embedded, trumps the environmental influence on the defined system. (more)

How could this help with the frog-dilemma? Are frogs conscious?

* Are there signs of active environmental adaption by frog society, as indicated by behavioral variability?
* To what extent is frog behavior predictable? That is, we have to fathom the extent of active adaption of the environment by frogs as opposed to passive adaption of frogs by the environment.

Further, one needs to test the frog's ability for deliberate, spontaneous behavior given environmental (experimental) stimuli and see if frogs can evade, i.e. action vs reaction.

P.S. No attempt at a solution, just some quick thoughts I wanted to write down for clarity and possible feedback.

I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?

You'd need to change that to 10^6 specks and 10^15 frogs or something, because emotional reaction to choosing to kill the frogs is also part of the consequences of the decision, and this particular consequence might have moral value that outweighs one speck. Your emotional reaction to a decision about human lives is irrelevant, the lives in question hold most of the moral worth, while with a decision to kill billions of cockroaches (to be safe from the question of moral worth of frogs), the lives of the cockroaches are irrelevant, while your emotional reaction holds most of moral worth.
I'm not so sure. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
Hopefully he still thinks there's a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.
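The expected-value argument here is plain arithmetic: even a small credence in frog sentience, multiplied across enough frogs, can dominate a tiny harm. A sketch with placeholder numbers (every figure below is made up for illustration, not a claim about actual moral weights):

```python
p_sentient = 0.01          # hypothetical credence that frogs can suffer
harm_per_frog = 1_000.0    # made-up disutility of torturing one frog, if sentient
n_frogs = 10**9
dust_speck = 1.0           # made-up disutility of one dust speck

expected_frog_harm = p_sentient * harm_per_frog * n_frogs
print(expected_frog_harm > dust_speck)  # True under these assumptions
```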
Depends. Would that make it harder to get frog legs?
Same questions to you, but with "rocks" for "frogs". Eliezer didn't say he was 100% sure frogs weren't objects of moral worth, nor is it a priori unreasonable to believe there exists a sharp cutoff without knowing where it is.
Seconded, and how do you (Eliezer) rate other creatures on the Great Chain of Being?
Would you save a stranded frog, though?
What about dogs?
Yeah, trying to save the world does that to you. ETA (May 2012): wow, I can't understand what prompted me to write a comment like this. Sorry.
Axiom: The world is worth saving. Fact: Frogs are part of the world. Inference: Frogs are worth saving in proportion to their measure and effect on the world. Query: Is life worth living if all you do is save more of it?
I don't know. I'm not Eliezer. I'd save the frogs because it's fun, not because of some theory.
As a matter of practical human psychology, no. People cannot just give and give and get nothing back from it but self-generated warm fuzzies, a score kept in your head by rules of your own that no-one else knows or cares about. You can do some of that, but if that's all you do, you just get drained and burned out.
Three does not follow from 1. It doesn't follow that the world is more likely to be saved if I save frogs. It also doesn't follow that saving frogs is the most efficient use of my time if I'm going to spend time saving the world. I could for example use that time to help reduce existential risk factors for everyone, which would happen to incidentally reduce the risk to frogs.
I find it difficult to explain, but know that I disagree with you. The world is worth saving precisely because of the components that make it up, including frogs. Three does follow from 1, unless you have a (fairly large) list of properties or objects in the world that you've deemed out of scope (not worth saving independently of the entire world). Do you have such a list, even implicitly? I might agree that frogs are out of scope, as that was one component of my motivation for posting this thread. And stating that there are "more efficient" ways of saving frogs than directly saving frogs does not refute the initial inference that frogs are worth saving in proportion to their measure and effect on the world. Perhaps you are really saying "their proportion and measure is low enough as to make it not worth the time to stoop and pick them up"? Which I might also agree with. But in my latest query, I was trying to point out that "a safe Singularity is a more efficient means of achieving goal X" or "a well thought out existential risk reduction project is a more efficient means of saving Y" can be used as a fully general counterargument, and I was wondering if people really believe they trump all other actions one might take.

I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.

Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one's routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It's easy to say, "Oh, that's not the most cost-effective use of my time," but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. ("If saving worms is good, then working toward technology to help all kinds ... (read more)

Maybe so, but the question is why we should care. If only for the cheap signaling value.
My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)
2: I would put the frog in the grass. Warm fuzzies are a great way to start the day, and it only costs 30 seconds. If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra respons