This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

July Part 1


It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)

This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.
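For concreteness, here is the standard kinetic-theory calculation being alluded to, written out at the same "physics level of rigor" (my summary, not part of the original comment):

```latex
% N non-interacting molecules of mass m in a box of volume V.
% Pressure is momentum delivered to the walls per unit time and area:
%   P = N m <v_x^2> / V.
% Equipartition gives (1/2) m <v^2> = (3/2) k_B T, and <v^2> = 3 <v_x^2>,
% so m <v_x^2> = k_B T, hence the ideal gas law:
\[
  P \;=\; \frac{N\, m\, \langle v_x^2 \rangle}{V},
  \qquad
  m \langle v_x^2 \rangle = k_B T
  \;\;\Longrightarrow\;\;
  P V = N k_B T .
\]
```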

Thoughts? (If someone's said this before, I apologize for not remembering it.)

No, I want my definition of "emergent" to say that the ideal gas laws are emergent properties of molecules. Why not just say: "We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description"?
The high-level structure shouldn't be the same as the low level structure, because I don't want to say a pile of sand emerges from grains of sand.
ISTM that the present usage of "emergent" is actually pretty well-defined as a cluster, and it doesn't include the ideal gas laws. I'm offering a candidate way to cash out that usage without committing the Mind Projection Fallacy.
The fallacy here is thinking there's a difference between the way the ideal gas laws emerge from particle physics, and the way intelligence emerges from neurons and neurotransmitters. I've only heard "emergent" used in the following way: "A system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, and the high-level description is not easily predictable from the low-level description." For instance, gliders moving across the screen diagonally is emergent in Conway's Life. The "easily predictable" part is what makes emergence a feature of the map, not the territory.
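To make the Life example concrete, here is a minimal sketch in Python (mine, not the commenter's; the glider coordinates and step count are arbitrary choices). The `step` function is the entire low-level description; the diagonal drift of the pattern is the high-level regularity that isn't obvious from staring at the rule.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbors,
    # or 2 live neighbors and is currently alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(20):  # the glider has period 4, drifting one cell diagonally per period
    cells = step(cells)

# After 20 steps the live cells are the original glider translated by (5, 5).
assert cells == {(x + 5, y + 5) for (x, y) in glider}
```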
Er, did you read the grandparent comment?
Yes. My point was that emergence isn't about what we know how to derive from lower-level descriptions, it's about what we can easily see and predict from lower-level descriptions. Like Roko, I want my definition of emergence to include the ideal gas laws (and I haven't heard the word used to exclude them). Also see this comment.
For what it's worth, Cosma Shalizi's notebook page on emergence has a very reasonable discussion of emergence, and he actually mentions macro-level properties of gas as a form of "weak" emergence. To define emergence as it is normally used, he adds the criterion that "the new property could not be predicted from a knowledge of the lower-level properties," which looks to be exactly the definition you've chosen here (sans map/territory terminology).
Let's talk examples. One of my favorite examples to think about is Langton's Ant. If we taboo "emergence" what do we think is going on with Langton's Ant?
We have one description of the ant/grid system in Langton's Ant: namely, the rules which totally govern the behavior of the system. We have another description of the system, however: the recurring "highway" pattern that apparently results from every initial configuration tested. These two descriptions seem to be connected, but we're not entirely sure how. (The only explanation we have is akin to this: Q: Why does every initial configuration eventually result in the highway pattern? A: The rules did it.) That is, we have a gap in our map. Since the rules, which we understand fairly well, seem in some intuitive sense to be at a "lower level" of description than the pattern we observe, and since the pattern seems to depend on the "low-level" rules in some way we can't describe, some people call this gap "emergence."
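For readers who haven't run it, here is a minimal sketch of the system under discussion (my own illustration; the turn convention is the standard one up to mirror symmetry, and the step count is arbitrary):

```python
def run_ant(steps):
    """Langton's Ant on an initially all-white grid; returns (black cells, ant position)."""
    black = set()        # cells currently black; everything else is white
    x = y = 0            # ant position
    dx, dy = 0, -1       # ant heading
    for _ in range(steps):
        if (x, y) in black:            # on black: turn one way, flip cell to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                          # on white: turn the other way, flip cell to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy          # move forward one cell
    return black, (x, y)

# Starting from an empty grid, the ant behaves "chaotically" for roughly
# 10,000 steps and then settles into the highway (a period-104 pattern
# that drifts diagonally forever).
cells, pos = run_ant(12_000)
print(len(cells), pos)
```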
I recall hearing, although I can't find a link, that the Langton's Ant problem has been solved recently. That is, someone has given a formal proof that every ant results in the highway pattern.
It's worth checking on the Stanford Encyclopedia of Philosophy when this kind of issue comes up. It looks like this view - emergent = hard to predict from low-level model - is pretty mainstream. The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower." There's more there, some of which seems sensible and some of which I don't understand.
Many thanks!
It seems problematic that as soon as you work out how to derive high-level behavior from low-level behavior, you have to stop calling it emergent. It seems even more problematic that two people can look at the same phenomenon and disagree on whether it's "emergent" or not, because Bob knows the relevant derivation of high-level behavior from low-level behavior, but Alice doesn't, even if Alice knows that Bob knows. Perhaps we could refine this a little, and make emergence less subjective, but still avoid the Mind Projection Fallacy: we say that a system X has emergent behavior if there exists an exact and simple low-level description and an inexact but easy-to-compute high-level description, and the derivation of the high-level laws from the low-level ones is much more complex than either (in the technical sense of Kolmogorov complexity). Like "has chaotic dynamics", it is then a property of the system itself.
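One hedged way to write that proposal down (my notation, not the commenter's), with K(.) denoting Kolmogorov complexity:

```latex
% L = exact, simple low-level description; H = inexact but easy-to-compute
% high-level description; D_{L->H} = shortest derivation of H's laws from L.
% The proposal: X is emergent iff the bridge between levels is much more
% complex than either level on its own.
\[
  \text{Emergent}(X)
  \;\iff\;
  K\bigl(D_{L \to H}\bigr) \;\gg\; K(L) + K(H).
\]
```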
I dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign to a connotation of unsolved problem.
By your definition, is the empirical fact that one tenth of the digits of pi are 1s emergent behavior of pi? I may not understand the work that "low-level" and "high-level" are doing in this discussion. On the length of derivations, here are some relevant Gödel clichés: a system X (for instance, arithmetic) often obeys laws that are underivable, and it often obeys derivable laws of length n whose shortest derivation has length busy-beaver-of-n. ("Über die Länge von Beweisen" is the title of a famous short Gödel paper; he revisits the topic in a famous letter to von Neumann.)
Just a pedantic note: pi has not been proven normal. Maybe one fifth of the digits are 1s.
I'll stick to it. It's easier to perform experiments than it is to give mathematical proofs. If experiments can give strong evidence for anything (I hope they can!), then this data can give strong evidence that pi is normal. Maybe past ten-to-the-one-trillion digits, the statistics of pi are radically different. Maybe past ten-to-the-one-trillion meters, the laws of physics are radically different.
The latter case seems more likely to me.
I was just thinking about the latter case, actually. If g equalled G · m1^(1 + 10^-30) · m2^(1 + 10^-30) / r^2, would we know about it?
Well, the force of gravity isn't exactly what you get from Newton's laws anyways (although most of the easily detectable differences, like that in the orbit of Mercury, are better thought of as due to relativity's effect on time than a change in g). I'm not actually sure how gravitational force could be non-additive with respect to mass. One would have the problem of then deciding what constitutes a single object. A macroscopic object isn't a single object in any sense useful to physics. Would this, for example, calculate the gravity of Earth as a large collection of particles or as all of them together? But the basic point, that there could be weird small errors in our understanding of the laws of physics, is always an issue. To use a slightly more plausible example, if say the force of gravity on baryons is slightly stronger than that on leptons (slightly different values of G) we'd be unlikely to notice. I don't think we'd notice even if it were in the 2nd or 3rd decimal of G (partially because G is such a very hard constant to measure).
IMO, that would be emergent behaviour of mathematics, rather than of pi. Pi isn't a system in itself as far as I can see.
I have in mind a system, for instance a computer program, that computes pi digit-by-digit. There are features of such a computer program that you can notice from its output, but not (so far as anyone knows) from its code, like the frequency of 1s.
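A minimal sketch of the kind of program being described (my own; the mpmath dependency and the 10,000-digit cutoff are arbitrary choices, not anything from the thread):

```python
import mpmath

mpmath.mp.dps = 10_050                        # working precision, with a little slack
pi_str = mpmath.nstr(+mpmath.pi, 10_000)      # "3.14159..." with ~10,000 significant digits
digits = pi_str.split(".", 1)[1]              # keep only the digits after the decimal point

freq = digits.count("1") / len(digits)
print(f"fraction of 1s in {len(digits)} digits of pi: {freq:.4f}")
# Empirically this comes out near 0.1, but nothing in the code above makes
# that obvious -- which is the point of the comment this follows.
```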
If you had some physical system that computed digit frequencies of Pi, I'd definitely want to call the fact that the fractions were very close to 1/10 emergent behavior. Does anyone disagree?
I can't disagree about what you want, but I myself don't really see the point in using the word emergent for a straightforward property of irrational numbers. I wouldn't go so far as to say the term is useless, but whatever use it could have would need to describe something more like complex properties that are caused by simpler rules.
This isn't a general property of irrational numbers, although with probability 1 any irrational number will have this property. In fact, any random real number will have this property with probability 1 (rational numbers have measure 0 since they form a countable set). This is pretty easy to prove if one is familiar with Lebesgue measure. There are irrational numbers which do not share this property. For example, .101001000100001000001... is irrational and does not share this property.
True enough. It would seem that "irrational number" is not the correct term for the set I refer to.
The property you are looking for is normalness to base 10. See normal number. ETA: Actually, you want simple normalness to base 10, which is slightly weaker.
Any irrational number drawn from what distribution? There are plenty of distributions that you could draw irrational numbers from which do not have this property, and which contain the same number of numbers in them. For example, the set of all irrational numbers in which every other digit is zero has the same cardinality as the set of all irrational numbers.
I'm presuming he's talking about measure, using the standard Lebesgue measure on R
Yes, although generally when asking these sorts of questions one looks at the standard Lebesgue measure on [0,1] or [0,1) since that's easier to normalize. I've been told that this result also holds for any bell-curve distribution centered at 0, but I haven't seen a proof of that and it isn't at all obvious to me how to construct one.
Well, the quick way is to note that the bell-curve measure is absolutely continuous with respect to Lebesgue measure, as is any other measure given by an integrable distribution function on the real line. (If you want, you can do this by hand as well, comparing the probability of a small bounded open set in the bell curve distribution with its Lebesgue measure, taking limits, and then removing the condition of boundedness.)
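Spelled out slightly (my rendering of the argument above):

```latex
% Let \lambda be Lebesgue measure and \mu the Gaussian measure with density
% f(x) = (2\pi\sigma^2)^{-1/2} e^{-x^2/(2\sigma^2)}. For any Borel set A,
%   \mu(A) = \int_A f \, d\lambda,
% so \lambda(A) = 0 implies \mu(A) = 0  (absolute continuity, \mu \ll \lambda).
% The set of reals that are not simply normal in base 10 has Lebesgue measure
% zero, hence Gaussian measure zero as well:
\[
  \lambda(A) = 0 \;\Longrightarrow\; \mu(A) = \int_A f \, d\lambda = 0 .
\]
```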
Excellent, yes that does work. Thanks very much!
The only problem with that seems to be that when people talk about emergent behavior they seem to be more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior"; that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one, but one would need to be careful that the common connotations don't cause confusion.
I agree. But given that outsiders use the term all the time, and given that they can point to a reasonably large cluster of things (which are adequately contained in the definition I offered), it might be more helpful to say that emergence is a statement of a known unknown (in particular, a missing reduction between levels) than to refuse to use the term entirely, which can appear to be ignoring phenomena.

Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?

Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.

I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.

So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.

For example, consider these posts, and comments on them, that you deleted:

I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.

Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).

And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.

I don't blame him for removing all of his contributions after his post was treated like that.

It's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.
I didn't insult Roko. The decision, and justification given, seem wholly irrational to me (which is separate from claiming a right to demand that decision altered).
It's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result. (This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)
What "treatment" did you have in mind? At best, Roko made a honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko's MindWipe was within his rights, but he can't help having this very public action judged by others. What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has "failed" to do that he's just picking up his toys and going home.
I just noticed this. A brilliant disclaimer!
Parent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).

Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.

Could the people who have such links post them here?
I don't really see what the fuss is. His articles and comments were mediocre at best.

I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.


I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting and consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

Allow me to provide a little context by quoting from a comment, now deleted, Eliezer made this weekend in reply to Roko and clearly addressed to Roko:

I don't usually talk like this, but I'm going to make an exception for this case.

Listen to me very closely, you idiot.

[paragraph entirely in bolded caps.]

[four paragraphs of technical explanation.]

I am disheartened that people can be . . . not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Although it does not IMHO make it praiseworthy, the above quote probably makes Roko's decision to mass delete his comments more understandable on an emotional level.

In defense of Eliezer, the occasion of Eliezer's comment was one in which IMHO strong emotion and strong language might reasonably be seen as appropriate.

If either Roko or Eliezer wants me to delete (part or all of) this comment, I will.

EDIT: added the "I don't usually talk like this" paragraph to my quote in response to criticism by Aleksei.

I'm not them, but I'd very much like your comment to stay here and never be deleted.

Your up-votes didn't help, it seems.
Woah. Thanks for alerting me to this fact, Tim.
Out of curiosity, what's the purpose of the banning? Is it really assumed that banning the post will mean it can't be found in the future via other means or is it effectively a punishment to discourage other people from taking similar actions in the future?
Does not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.
I'm still having trouble seeing how so much global utility could be lost because of a short blog comment. If your plans are that brittle, with that much downside, I'm not sure security by obscurity is such a wise strategy either...
The major issue as I understand it wasn't the global utility problem but the issue that when Roko posted the comment he knew that some people were having nightmares about the scenario in question. Presumably increasing the set of people who are nervous wrecks is not good.
I was told it was something that, if thought about too much, would cause post-epic level problems. The nightmare aspect wasn't part of my concept of whatever it is until now. I also get the feeling Eliezer wouldn't react as dramatically as an above synopsis implies unless it was a big deal (or hilarious to do so). He seems pretty ... rational, I think is the word. Despite his denial of being Quirrell in a parent post, a non-deliberate explosive rant and topic banning seems unlikely. He also mentions that only a certain inappropriate post was banned, and Roko said he deleted his own posts himself. And yet the implication going around is that it was all deleted as administrative action. A rumor started by Eliezer himself so he could deny being "evil," knowing some wouldn't believe him? Quirrell wouldn't do that, right? ;)

I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.

Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.

EDIT: No, it wasn't a side effect, Roko did it on purpose.

Notice: I am not Professor Quirrell in real life.

Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.

Notice: I am not Professor Quirrell in real life.

And that is exactly what Professor Quirrell would say!

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Of course as you know very well. :)

A side effect of banning one post, I think;

In a certain sense, it is.

Of course, we already established that you're Light Yagami.
I'm not sure we should believe you.
JamesAndrix: Note to reader: This thread is curiosity-inducing, and this is affecting your judgement. You might think you can compensate for this bias, but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]. I say "for now" because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough. Don't assume it's OK because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.) Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway. Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.
Technically, you didn't say "for now".

Cryo-wives: A promising comment from the NYT Article:

As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests!

My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing.

Part of the strength of my negative response, I realized, had to do with the fact that my relationship with my own mortality was on shaky ground. I don't want to die. But I'm fairly certain I'm going to. Like many people, I've struggled to come to a place where I can accept the specter of my own death with some grace. Humbleness and acceptance in the face of death are valued very highly (albeit not always explicitly) in our culture. The companion, I think, to this humble acceptance of death is a humble (and painful) acce


That is really a beautiful comment.

It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.

One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.

Eliezer Yudkowsky:
I always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.
That's more comparable to being in a long coma with some uncertain possibility of waking up from it, so perhaps it could be reframed along those lines; some people probably do specify that they should be taken off of life support if they are found comatose, but to choose to be kept alive is not socially disapproved of, as far as I know.
Hopefully this provides incentive for people to kick Eliezer's ass at FAI theory. You don't want to look cultish, do you?
To me, the most appealing aspect of #lesswrong is that my comments will not be archived for posterity. This is also an interesting quote. Edit: I obviously missed the "only" in your note there.

I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.

The link to the video is Here:

Pen-and-paper interviews would almost certainly be more accurate. The problem is that images of people writing on paper are especially un-cinematic. The participants were encouraged to take as much time as they needed, and many took several minutes before responding to some questions. However, the majority of them were concerned with how much time the interview would take up, and their quick responses were self-imposed. As to whether the evidence is too messy to draw firm conclusions from, I agree that it is. This is an inherent problem with documentaries. Omissions of fact are easily justified. Also, just like in fiction films, a higher degree of manipulation over the audience is more sought after than accuracy.
I just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment.

Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.

This is really an excellent, down-to-earth, one-minute teaser for going that route. Excellent writing. I wish I had a follow-up move for those who get interested at that point but raise doubts, be they philosophical, religious, moral, or scientific (the last one probably the easiest). I know those issues have been discussed already, but how could one respond in a five-minute coffee break when the co-worker replies (standard phrases to go): "But death gives meaning to life. And if nobody died, there would be too many people around here. Only the rich would get the benefits. And ultimately, whatever end the universe takes, we will all die; you know science, don't you?" I know the sequence answers, but I utterly fail to give any non-embarrassing answer to such questions. It doesn't help that I'm not signed up for cryonics myself.
If they think that we'll all eventually die even with cryonics and they think that death gives meaning to life, then they don't need to worry about cryonics removing meaning, since it just pushes the date of death further out. (I wouldn't bother addressing the claim that death gives meaning to life except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.) As to the problem of too many people, overpopulation is a massive problem whether or not a few people get cryonically preserved. As to the problem of just the rich getting the benefits, patiently explain that there's no reason to think that the rich now will be treated substantially differently from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension, then the likely standard of living will be so high compared to now that even if the rich have more it won't matter.
I talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation. "Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."
... "and like any medical treatment, initially only the rich will benefit, but they'll help bring down the price for everyone else. Infact, for just a small weakly payment..."
This is off-topic but I'm curious: How did you stumble on my blog?
Google alert on "Eliezer Yudkowsky". (Usually brings up articles about Friendly AI, SIAI and Less Wrong.)

Are any LWers familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.

Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).

  1. When tempted to write a critique or to run an experimental refutation of a recent publication, consider the possibility of proposing joint research under an agreed protocol. We call the scholars engaged in such an effort participants. If theoretical differences are deep or if there are large differences in experimental routines between the laboratories, consider the possibility of asking a trusted colleague to coordinate the effort, referee disagreements, and collect the data. We call that person an arbiter.
  2. Agree on the details of an initial study, designed to subject the opposing claims to an informative empirical test. The participants should seek to identify results that would change their mind, at least to some extent, and should explicitly anticipate their interpretations of outcomes that would be inconsistent with their theoret

Since I assume he doesn't want to have existential risk increase, a credible threat is all that's necessary.

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail. See this comment.

(EDIT: I deleted the rest of this comment since it's redundant given what you've written elsewhere in this thread.)

This is true, and yes wfg did imply the threat.

(Now, analyzing not advocating and after upvoting the parent...)

I'll note that wfg was speculating about going ahead and doing it. After he did it (and given that EY doesn't respond to threats, speculative-wfg should act now based on the Roko incident), it isn't a threat. It is then just a historical sequence of events. It wouldn't even be a particularly unique sequence of events.

Wfg is far from the only person who responded by punishing SIAI in a way EY would expect to increase existential risk, i.e. not donating to SIAI when they otherwise would have, or by updating their p(EY/SIAI is a crackpot/are crackpots) and sharing that knowledge. The description on RationalWiki would be an example.

I don't think he was talking about human beings there. Obviously you don't want a reputation for being susceptible to being successfully blackmailed, but IMHO, maximising expected utility results in a strategy which is not as simple as never responding to blackmail threats.
I think this is correct. Eliezer's spoken of The Strategy of Conflict before, which goes into mathematical detail about the tradeoffs of precommitments against inconsistently rational players. The "no blackmail" thing was in regard to a rational UFAI.
These are really interesting points. Just in case you haven't seen the developments on the thread, check out the whole thing here. I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature. I'm also pretty sure it's irrational to ignore such things when making decisions. Perhaps not in a game theory sense, but absolutely in the practical life-theory sense. As an example, our entire legal system is based on these sorts of credible threats. If EY feels differently I'm not sure what to say except that I think he's being foolish. I see the game theory he's pretending exempts him from considering others' reactions to his actions; I just don't think it's rational to completely ignore new causal information. But like I said earlier, I'm not saying he has to do anything, I'm just making sure we all know that an existential risk reduction of 0.0001% via LW censorship won't actually be a reduction of 0.0001%. (And though you deleted the relevant part, I'd also be down to discuss what a sane moderation system should be like.)

Suppose I were to threaten to increase existential risk by 0.0001% unless SIAI agrees to program its FAI to give me twice the post-Singularity resource allocation (or whatever the unit of caring will be) that I would otherwise receive. Can you see why it might have a policy against responding to threats? If Eliezer does not agree with you that censorship increases existential risk, he might censor some future post just to prove the credibility of his precommitment.

If you really think censorship is bad even by Eliezer's values, I suggest withdrawing your threat and just try to convince him of that using rational arguments. I rather doubt that Eliezer has some sort of unfixable bug regarding censorship that has to be patched using such extreme measures. It's probably just that he got used to exercising strong moderation powers on SL4 (which never blew up like this, at least to my knowledge), and I'd guess that he has already updated on the new evidence and will be much more careful next time.

I do not expect that (non-costly signalling by someone who does not have significant status) to work any more than threats would. A better suggestion would be to forget raw threats and consider what other alternatives wfg has available by which he could deploy an equivalent amount of power that would have the desired influence. Eliezer moved the game from one of persuasion (you should not talk about this) to one about power and enforcement (public humiliation, censorship and threats). You don't take a pen to a gun fight.
Wei Dai:
I don't understand why, just because Eliezer chose to move the game from one of persuasion to one about power and enforcement, you have to keep playing it that way. If Eliezer is really so irrational that once he has exercised power on some issue, he is no longer open to any rational arguments on that topic, then what are we all doing here? Shouldn't we be trying to hinder his efforts (to "not take over the world") instead of (however indirectly) helping him?
Good questions, these were really fun to think about / write up :) First off let's kill a background assumption that's been messing up this discussion: that EY/SIAI/anyone needs a known policy toward credible threats. It seems to me that stated policies to credible threats are irrational unless a large number of the people you encounter will change their behavior based on those policies. To put it simply: policies are posturing. If an AI credibly threatened to destroy the world unless EY became a vegetarian for the rest of the day, and he was already driving to a BBQ, is eating meat the only rational thing for him to do? (It sure would prevent future credible threats!) If EY planned on parking in what looked like an empty space near the entrance to his local supermarket, only to discover that on closer inspection it was a handicapped-only parking space (with a tow truck only 20 feet away), is getting his car towed the only rational thing to do? (If he didn't an AI might find out his policy isn't iron clad!) This is ridiculous. It's posturing. It's clearly not optimal. In answer to your question: Do the thing that's actually best. The answer might be to give you 2x the resources. It depends on the situation: what SIAI/EY knows about you, about the likely effect of cooperating with you or not, and about the cost vs benefits of cooperating with you. Maybe there's a good chance that knowing you'll get more resources makes you impatient for SIAI to make a FAI, causing you to donate more. Who knows. Depends on the situation. (If the above doesn't work when an AI is involved, how about EY makes a policy that only applies to AIs?) In answer to your second paragraph I could withdraw my threat, but that would lessen my posturing power for future credible threats. (har har...) The real reason is I'm worried about what happens while I'm trying to convince him. I'd love to discuss what sort of moderation is correct for a community like less wrong -- it sounds amazing

I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.

More importantly, you aren't threatening to publicize something embarrassing to Eliezer if he doesn't comply, so it's technically extortion.

Wei Dai:
I think by "blackmail" Eliezer meant to include extortion since the scenario that triggered that comment was also technically extortion.
That one also has negative connotation, but it's your thinking to bias as you please :p
Technical analysis does not imply bias either way. Just curiosity. ;)
To be precise, not respond when whether or not one is 'blackmailed' is counterfactually dependent on whether one would respond, which isn't the case with the law. (Of course, there are unresolved problems with who 'moves first', etc.)
Fair enough, so you're saying he only responds to credible threats from people who don't consider if he'll respond to credible threats?
Yes, again modulo not knowing how to analyze questions of who moves first (e.g. others who consider this and then make themselves not consider if he'll respond).
To put that bit about the legal system more forcefully: If EY really doesn't include these sorts of things in his thinking (he disregards US laws for reasons of game theory?), we have much bigger things to worry about right now than 0.0001% censorship.

There's a course "Street Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.

Might be relevant to the simple math of everything stuff.

For a teaser, the part about singing logarithms looks cool.
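For flavor, here is one estimation trick in that spirit; this particular reconstruction is mine, not a quotation from the book:

```latex
% In equal temperament a perfect fifth is a frequency ratio of 2^{7/12}, and
% musically a fifth is very nearly the just ratio 3/2. Equating the two gives
% a mental-math estimate of a logarithm:
\[
  2^{7/12} \approx \tfrac{3}{2}
  \;\Longrightarrow\;
  \log_2 3 \approx 1 + \tfrac{7}{12} = \tfrac{19}{12} \approx 1.583
  \qquad (\text{true value } 1.585).
\]
```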

From a recent newspaper story:

The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that's 18 and 24 zeros.

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?

It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24. Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also 10^24, and this happens every four weeks.
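Writing that arithmetic out explicitly (the 1-in-10^6 per-ticket odds are the parent comment's illustrative figure, not the actual Texas Lottery odds):

```latex
\[
  P(\text{four specified tickets all win})
  = \left(10^{-6}\right)^{4}
  = 10^{-24}.
\]
% For comparison, the quoted "1 in 18 septillion" is 1.8 x 10^{25}, which works
% out to per-drawing odds of roughly 1 in 2 million, since
% (2.06 x 10^{6})^{4} is approximately 1.8 x 10^{25}.
```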
From the article (there is a near-invisible "more text" button): And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)... I did see an analysis of this somewhere, I'll try and dig it up. Here it is. There is Hacker News commentary here. I find this, from the original msnbc article, depressing
Is it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must've figured out something we don't know, given that she's won four times?
The former. It is also depressing because it can be used in articles on the lottery in the following way, "See look at this person good at maths, playing the lottery, that must mean it is a smart thing to play the lottery".
Depressing because someone with a Ph.D. in math is playing the lottery. I don't see any reason to think she figured out some way of beating the lottery.
It's also far more likely that she cheated. Or that there is a conspiracy in the Lottery to make her win four times.
The most eyebrow-raising part of that article:
Hm. Have you looked at the multiverse lately? It's pretty apparent that something has gone horribly weird somewhere along the way. Your confidence should be limited by that dissonance. It's the same with MWI, and cryonics, and moral cognitivism, and any other belief where your structural uncertainty hasn't been explicitly conditioned on your anthropic surprise. I'm not sure to what extent your implied confidence in these matters is pedagogical rather than indicative of your true beliefs. I expect mostly pedagogical? That's probably fine and good, but I doubt such subtle epistemic manipulation for the public good is much better than the Dark Arts. (Added: In this particular case, something less metaphysical is probably amiss, like a math error.)
So let me try to rewrite that (and don't be afraid to call this word salad): (Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read at your own caution and remember the distinction between impressions and beliefs. These are my impressions.) You're Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision theoretically and fun theoretically speaking. Isn't it odd that you find yourself at this time and place given all the people you could have found yourself as in your reference class? Isn't that unsettling? Now, if you look out at the stars and galaxies and seemingly infinite space (though you can't see that far), it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive. And yet you find yourself as Eliezer Yudkowsky (staring at a personal computer, no less) in a close-to-Singularity world: surely some extra parameters must have been thrown into the description of this universe; surely your experience is not best described with a universal prior alone, instead of a universal prior plus some mixture of agents computing things according to their preference. In other words, this universe looks conspicuously like it has been optimized around Eliezer-does-something-multiversally-important. (I suppose this should also up your probability that you're a delusional narcissist, but there's not much to do about that.) Now, if such optimization pressures exist, then one has to question some reductionist assumptions: if this universe gets at least some of its measure from the preferences of simulator-agents, then what features of the universe would be affected by those preferences? Computational cost is one. MWI implies a really big universe, and what are the chances that you would
It all adds up to normality, damn it!
What whats to what? More seriously, that aphorism begs the question. Yes, your hypothesis and your evidence have to be in perfectly balanced alignment. That is, from a Bayesian perspective, tautological. However, it doesn't help you figure out how it is exactly that the adding gets done. It doesn't help distinguish between hypotheses. For that we need Solomonoff's lightsaber. I don't see how saying "it (whatever 'it' is) adds up to (whatever 'adding up to' means) normality (which I think should be 'reality')" is at all helpful. Reality is reality? Evidence shouldn't contradict itself? Cool story bro, but how does that help me?
Kevin:
This is rather tangential to your point, but the universe looks very computationally cheap to me. In terms of the whole ensemble, quantum mechanics is quite cheap. It only looks expensive to us because we measure by a classical slice, which is much smaller. But even if we call it exponential, that is very quick by the standards of the Solomonoff prior.
Hm, I'm not sure I follow: both a classical and quantum universe are cheap, yes, but if you're using a speed prior or any prior that takes into account computational expense, then it's the cost of the universes relative to each other that helps us distinguish between which universe we expect to find ourselves in, not their cost relative to all possible universes. I could very, very well just be confused. Added: Ah, sorry, I think I missed your point. You're saying that even infinitely large universes seem computationally cheap in the scheme of things? I mean, compared to all possible programs in which you would expect life to evolve, the universe looks hugeeeeeee to me. It looks infinite, and there are tons of finite computations... when you compare anything to the multiverse of all things, that computation looks cheap. I guess we're just using different scales of comparison: I'm comparing to finite computations, you're comparing to a multiverse.
No, that's not what I meant; I probably meant something silly in the details, but I think the main point still applies. I think you're saying that the size of the universe is large compared to the laws of physics. To which I still reply: not large by the standards of computable functions.
Eliezer Yudkowsky:
Er, sorry, I'm guessing my comment came across as word salad? Added: Rephrased and expanded and polemicized my original comment in a reply to my original comment.
Yeah I didn't get it either.
Hm. It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption. I'll try to rewrite that whole comment when I'm less tired.
Illusion of transparency: they can probably generate sense in response to anything, but it's not necessarily faithful translation of what you say.
Consider that one of my two posts, Abnormal Cryonics, was simply a narrower version of what I wrote above (structural uncertainty is highly underestimated) and that Nick Tarleton wrote about a third of that post. He understood what I meant and was able to convey it better than I could. Also, Nick Tarleton is quick to call bullshit if something I'm saying doesn't seem to be meaningful, which is a wonderful trait.
Well, that was me calling bullshit.
Thanks! But it seems you're being needlessly abrasive about it. Perhaps it's a cultural thing? Anyway, did you read the expanded version of my comment? I tried to be clearer in my explanation there, but it's hard to convey philosophical intuitions.
I find myself unable to clearly articulate what's wrong with your idea, but in my own words, it reads as follows: "One should believe certain things to be probable because those are the kinds of things that people believe through magical thinking."
The problem with that idea is that there is no default level of belief. You are not allowed to say What is the difference between hesitating to assign negligible probability vs. to assign non-negligible probability? Which way is the certainty, which way is doubt? If you don't have good understanding of why you should believe one way or the other, you can't appoint a direction where safe level of credence lies and stay there pending the enlightenment. Your argument is not strong enough to shift the belief of one in septillion up to something believable, but your argument must be that strong to do it. You can't appeal to being hesitant to believe otherwise, it's not a strong argument, but a statement about not having one.
Was your point that Eliezer's Everett Branch is weird enough already that it shouldn't be that surprising if universally improbable things have occurred?
Erm, uh, kinda, in a more general sense. See my reply to my own comment where I try to be more expository.
I'm afraid it is word salad.

Would people be interested in a description of someone with high social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.

Would people be interested in a description of someone with high social skills failing in a social situation (getting kicked out of a house)?

I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".

How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.

He is not high status as such, although he possibly could be if he didn't waste time being drunk. Okay, here goes the broad-brush description of characters. Feel free to ask more questions to fill in details that you want.

Dramatis Personae

Mr G: Me. Tall scruffy geek. Takes little care of appearance. Tidy in social areas. Chats to everyone, remembers details of people's lives, although forgets people's names. Not particularly close (not facebook friends with any of the others). Doesn't bring girls/friends home. Can tell a joke or make a humorous observation but not a master, can hold his own in banter though. Little evidence of social circle apart from occasional visits to friends far away. Accommodating to people's niggles and competent at fixing stuff that needs fixing. Does a fair amount of housework, because it needs doing. Has never suggested going out with the others, but has gone out by himself to random things. Is often out doing work at uni when others are at home. Shares some food with others, occasionally.

Miss C: Assertive, short, fairly plump Canadian supply teacher. Is mocked by Mr S for Canadianisms, especially when teaching the children that the British idiom is wrong, for example saying that "learnt" is not a word. Young, not very knowledgeable about current affairs/world. Boyfriend back home. Has smoked pot. Drinks and parties on the weekend, generally going out with friends from home. Facebook friends with the other 2 (I think). Fairly liberal. Came into the house a week before Mr G. Watches a lot of TV in the shared area. Has family and friends visit occasionally.

Miss B: Works in digital marketing (does stuff on managing virals). Dry sense of humour. Boyfriend occasionally comes to visit; boyfriend is a teacher who wants to be a stand-up comedian. Is away most weekends, visiting family or boyfriend. Gets on with everyone on a surface level. Fairly pretty although not a stunner. Can banter a bit, but not much. Plays up to the "ditzy" personae some
This description seems very British and I'm not quite clear on some of it. For instance, I had no idea what a strop is. Urban Dictionary defines it as sulking, being angry, or being in a bad mood. Some of the other things seem like they would only make sense with more cultural context, specifically the emphasis on bantering and making witty remarks. I wouldn't say that this guy has great social skills, given his getting drunk and stealing food, slamming doors and walking around naked, and so forth. Pretty much the opposite, in fact. As to why he got kicked out, I guess people finally got tired of the way he acted, or this group of people was less tolerant of it.
By social skills I meant what people with Aspergers lack naturally: magnetism/charisma, etc. It is hard to get that across in a textual description. People with poor social skills here know not to get drunk and wander around naked, but can't charm the pants off a pretty girl. The point of the story is that having charisma is not in itself the get-out-of-jail-free card it is sometimes described as here. Sorry for the British-ness. It is hard to talk about social situations without thinking in my native idiom. I'll try and translate it tomorrow.
You're conflating a few different things here. There's seduction ability, which is its own unique set of skills (it's very possible to be good at seduction but poor at social skills; some of the PUA gurus fall in this category). There's the ability to pick up social nuances in real-time, which is what people with Aspergers tend to learn slower than others (though no one has this "naturally"; it has to be learned through experience). There's knowledge of specific rules, like "don't wander around naked". And charisma or magnetism is essentially confidence and acting ability. These skillsets are all independent: you can be good at some and poor at others. Well, of course not. For instance, if you punch someone in the face, they'll get upset regardless of your social skills in other situations. What this guy did was similar (though perhaps less extreme). Understood, and thanks for writing that story; it was really interesting. The whole British way of thinking is foreign to this clueless American, and I'm curious about it. (I'm also confused by the suggestion that being Facebook friends is a measure of intimacy.)
Interesting, I wouldn't have said that they were as independent as you make out. I'd say it is unusual to be confident with good acting ability and not be able to read social nuances (how do you know how you should act?). And confidence is definitely part of the PUA skillset. Apart from that I'd agree, there are different levels of skill.

When sober he was fairly good at everything. He would steer the conversations where he wanted, generally organise the flat to his liking and not do anything stupid like going around naked. If you looked at our interactions as a group, he would have appeared the Alpha. His excuse for wandering around naked was that he thought he was alone and that he should have the right to go into the kitchen naked if he wanted to. I.e. he tried to brazen it out. That might give you some idea of his attitude, what he expected to get away with and that he had probably gotten away with it in the past. Apart from the lack of common sense (when very drunk), I think his main problem was underestimating people or at least not being able to read them. He was too reliant on his feeling of being the Alpha to realise his position was tenuous. No one was relying upon the flat as their main social group, so no one cared about him being Alpha of that group. You might get upset but still not be able to do anything against the guy. See high school.

People use Facebook in a myriad of different ways. Some people friend everyone they come across, which means their friends lists give little information. Mine is to keep an eye on the doings of people I care about. People I don't care about just add noise. So mine is more informative than most. Mr S. is very promiscuous with over 700 friends; I'm not sure about the other two.
I just assumed that for the sake of brevity he covered the other aspects under "etc". I would add in "intuitive aptitude for Machiavellian social politics".
Do I correctly interpret this to say that both Miss C and Miss B go out (drinking?) on the weekends, but not together?
Yup. Sorry, that wasn't clear.
Yes. And do not hesitate to use many many words.

Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)

But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.

Daniel Dennett and Linda LaScola on Preachers who are not believers:

There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.

Paul Graham on guarding your creative productivity:

I'd noticed startups got way less done when they started raising money, but it was not till we ourselves raised money that I understood why. The problem is not the actual time it takes to meet with investors. The problem is that once you start raising money, raising money becomes the top idea in your mind. That becomes what you think about when you take a shower in the morning. And that means other questions aren't. [...]

You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want to think about.

So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.

Now, being a professional exorcist does not give a high prior for rationality.

But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.

I wonder if this uncriticality has anything to do with, well, not expecting to be criticized... (read more)

Probably not very, because we can't actually imagine what that hypothetical person would say to us. It'd probably end up used as a way to affirm your positions by only testing strong points.
While I have difficulty imagining what someone far smarter than myself would say, what I can do is imagine explaining myself to a smart person who doesn't have my particular set of biases and hangups; and I find that does sometimes help.
I do it sometimes, and I think it helps.
I do it too - using some of the smarter and more critical posters on LW, actually - and I also think it helps. I think this defuses some of LucasSloan's criticisms below - if it's a real person, you can to a reasonable extent imagine how they might reply. I think it works because placing yourself in a conflict (even an imaginary one) narrows and sharpens your focus as the subconscious processes that try to 'win' it get activated. The risk, though, is that, like any opinion formed or argued under the influence of an emotion, you become unreasonably certain of it.
I don't get the 'conflict' feeling when I do it. It feels more like 'betting mode', but with more specific counterarguments. Since it's all imaginary anyway, I don't feel committed enough to one side to activate conflict mode.

An akrasia fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.

This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist" [].
It might also be a useful tool for attaining self-knowledge outside of goal-directed behavior. Consider this passage from The Aleph []:

What's the deal with programming as a career? It seems like the lower levels, at least, should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

E.g., Fizzbuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties) but here's what I'd tell t... (read more)

I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).

This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.

Joel Spolsky gave a similar explanation []. Makes sense. I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).
Do you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer. Is this reassuring?
Yes. Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky. Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D
I'm probably not horribly incompetent (evidence: this [] and this []), but there exist people who are miles above me, e.g. John Carmack (read his .plan files [] for a quick fix of inferiority) or Inigo Quilez who wrote the 4kb Elevated demo []. Thinking you're "good enough" is dangerous.
From what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.
I don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (appending fizz to numbers associated only with fizz or buzz.) This is not a particularly compelling demonstration of your point that it should be easy. (I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)
The bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.) Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.
This seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)
Your general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated. Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:

    for x in range(1, 101):
        fizz = (x%3 == 0)
        buzz = (x%5 == 0)
        if fizz and buzz:
            print "FizzBuzz"
        elif fizz:
            print "Fizz"
        elif buzz:
            print "Buzz"
        else:
            print x

This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter [] and use it to solve some Project Euler problems [] for finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
How did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?
The least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publicly, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration. (I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to fizzbuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)
I think there are failure modes from the curiosity-stopping anti-epistemology cluster that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn it. With the right approach, anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so it's often not worth it).
Do recruiting firms require that you have formal programming credentials?
Formal credentials certainly help, but I wouldn't say they're required, as long as you have something (such as a completed project) to prove you have skills.
My first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and I haven't stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview. Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-) ETA: I just noticed that I didn't answer your last question. Not sure what to recommend to competent programmers because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.
I was very surprised by this until I read the word "Moscow."
Is it different in the US? I imagined it was even easier to find a job in the Valley than in Moscow.
I was unsurprised by this until I read the word "Moscow". (Russian programmers & mathematicians seem to always be heading west for jobs.)
I took an internship after college. Professors can always use (exploit) programming labor. That gives you semi-real experience (might be very real if the professor is good) and allows you to build credibility and confidence.
Python tip: Using "range" creates a big list in memory, which is a waste of space. If you use xrange, you get an iterable object that only uses a single counter variable.
Hah. I first wrote the example using xrange, then changed it to range to make it less confusing to someone who doesn't know Python :-)
Not in python 3 ! range in Python 3 works like xrange in the previous versions (and xrange doesn't exist any more). (but the print functions would use a different syntax)
In fact, range in Python 2.5ish and above works the same, which is why they removed xrange in 3.0.
There was a discussion of transitioning to Python 3 on HN a week or two ago; apparently there are going to be a lot of programmers, and even more shops, holding off on transitioning, because it will break too many existing programs. (I haven't tried Python since version 1, so I don't know anything about it myself.)
A big problem with transitioning to Python 3 is that there are quite a few third-party libraries that don't support it (including two I use regularly - SciPy and Pygame). Some bits of the syntax are different, but that shouldn't be a huge issue except for big codebases, since there's a script to convert Python 2.6 to 3.0. I've used Python 3 but had to switch back to 2.6 so I could keep using those libraries :P
--"Epigrams in Programming" [], by Alan J. Perlis; ACM's SIGPLAN publication, September, 1982
Programming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.
Do we have any refs for this? I know there's "The Camel Has Two Humps" [] (Alan Kay on it [], the PDF []), but anything else?
Going by his other papers [], though, it looks like the effect isn't nearly as strong as was originally claimed. (Though that's with regard to whether his "consistency test" works; I didn't check whether the bimodality still holds.)
No, just personal experience and observation backed up by stories and blog posts from other people. See also Joel Spolsky on Hitting the High Notes []. Spolsky's line is that some people are just never going to be that good at programming. I'd rephrase it as: some people are just never going to be motivated to spend long hours programming for the sheer fun and challenge of it, and so they're never going to be that good at programming.
This is a good null hypothesis for skill variation in many cases, but not one supported by the research in the paper gwern linked.
Fixed that for you. :) (I'm a current law student.)
In addition to this, if you're a good bricklayer, you might do, at most, twice the work of a bad bricklayer. It's quite common for an excellent programmer (a hacker) to do more work than ten average programmers--and that's conservative. The difference is more apparent. My guess might be that you hear this complaint from good programmers, Barry? Although, I can guarantee that everyone I've met can do at least FizzBuzz. We have average programmers, not downright bad ones.
I'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you. The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace. Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.
For one, I think you overestimate human curiosity. Not everyone implements prime searching or Conway's Game of Life for fun. For two, even those that implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by incompetent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway. []
Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used. Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?
There really are people who would not take that few hours.
I don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial. So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.
There are really people who don't get pointers.
One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.) Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
Cute. Sad, but that's already more powerful than straight OO. Python and Ruby support adding/rebinding methods at runtime (one reason duck typing is more popular these days). You might want to look at functional programming if you haven't yet, since you've no doubt progressed since your epiphany. I've heard nice things about statically typed languages such as Haskell and O'Caml, and my personal favorite is Scheme.
Oddly enough, I think Morendil would get a real kick out of JavaScript. So much in JS involves passing functions around, usually carrying around some variables from their enclosing scope. That's how the OO works; it's how you make callbacks seem natural; it even lets you define new control-flow structures like jQuery's each() function, which lets you pass in a function which iterates over every element in a collection. The clearest, most concise book on this is Doug Crockford's Javascript: The Good Parts []. Highly recommended.
The technical term for this is a closure. A closure is a first-class* function with some associated state. For example, in Scheme, here is a function which returns counters, each with its own internal ticker: To create a counter, you'd do something like Then, to get values from the counter, you could call something like Here is the same example in Python, since that's what most people seem to be posting in: *That is, a function which you can pass around like a value.
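The Scheme version of that code survives in the sibling comment below; since the examples here didn't come through the formatting, a rough Python sketch of the same counter closure follows (the name make_counter and the dict-based state are my own choices, not the original commenter's code):

    # A closure: the inner function keeps a reference to state defined in
    # the enclosing function, so each counter carries its own tally.
    def make_counter():
        state = {'count': 0}        # Python 2 has no 'nonlocal', so mutate a dict
        def counter():
            state['count'] += 1
            return state['count']
        return counter

    c = make_counter()
    print c()   # 1
    print c()   # 2
    d = make_counter()
    print d()   # 1 -- independent of c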
While we're sharing fun information, I'd like to point out a little-used feature of Markdown syntax: if you put four spaces before a line, it's treated as code. Behold:

    (define (make-counter)
      (let ([internal-variable 0])
        (lambda ()
          (begin
            (set! internal-variable (+ internal-variable 1))
            internal-variable))))

Also, the emacs rectangle editing functions are good for this. C-x r t is a godsend.
I suspect it's like how my brain reacts to negative numbers, or decimals; I have no idea how anyone could fail to understand them. But some people do. And, due to my tendency to analyse mistakes I make (especially factual errors) I remember the times when I got each one of those wrong. I even remember the logic I used. But they've become so ingrained in my brain now that failure to understand them is nigh inconceivable.
There is a difference in aptitude, but part of the problem is that pointers are almost never explained correctly. Many texts try to explain in abstract terms, which doesn't work; a few try to explain graphically, which doesn't work terribly well. I've met professional C programmers who therefore never understood pointers, but who did understand them after I gave them the right explanation. The right explanation is in terms of numbers: the key is that char *x actually means the same thing as int x (on a 32-bit machine, and modulo some superficial convenience). A pointer is just an integer that gets used to store a memory address. Then you write out a series of numbered boxes starting at e.g. 1000, to represent memory locations. People get pointers when you put it like that.
Yeah, pretty much anyone who isn't appallingly stupid can become a reasonably good programmer in about a year. Be warned though, the kinds of people who make good programmers are also the kind of people who spontaneously find themselves recompiling their Linux kernel in order to get their patched wifi drivers to work...
xkcd reference! []
Dammit! That'll be shouted at my funeral!

This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence... (read more)

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?

Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.

However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.

Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?

Yoreth: That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects ['s_occult_studies] -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.
The value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology. For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut. (What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)
A bad way to draw conclusions. A good way to make significant updates based on inference.
Would you be so kind as to spell out the exact sort of "update based on inference" that applies here?
??? "People who say stupid things are, all else being equal, more likely to say other stupid things in related areas".
That's a very vague statement, however. How exactly should one identify those expressions of stupid opinions that are relevant enough to imply that the rest of the author's work is not worth one's time?
Nobody knows (obviously), but you can try to train your intuition to do that well. You'd expect this correlation to be there.
In the context of LessWrong it should be considered trivial to the point of outright patronising if not explicitly prompted. Bayesian inference is quite possibly the core premise of the community. In the process of redacting my reply I coined the term "Freudian Double-Entendre". Given my love of irony I hope the reader appreciates my restraint! <-- Example of a very vague statement. In fact if anyone correctly follows that I expect I would thoroughly enjoy reading their other comments.
Yep, and note that Hegel's philosophy is related to states more than Newton's physics is related to the occult.
Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow: Lakatos: "Proofs and Refutations"; Quine: "Two Dogmas of Empiricism"; Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions". All of these have things which are wrong, but they make arguments that need to be grappled with and understood (The Copernican Revolution is more of a history book than a philosophy book, but it helps present a case of Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case - I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case that I find weaker versions of his claims to be highly plausible. Kuhn also is just an excellent writer full of interesting factual tidbits. This seems like in general not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff, not just philosophy. And some of his philosophy is worth understanding simply due to the fact that later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion. His dualism is much more well-defined than that of people before him. Hegel however is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?
Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist. I've developed quite a respect for Hilary Putnam [] and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile. I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in. PS - There was this thread [] over at Hacker News that just came up yesterday if you're looking to cast a wider net.
I've always been told that Hegel basically affixed the section about Prussia due to political pressures, and that modern philosophers totally ignore it. Having said that, I wouldn’t read Hegel. I recommend avoiding reading original texts, and instead reading modern commentaries and compilations. 'Contemporary Readings in Epistemology' was the favoured first-year text at Oxford. Bertrand Russell's "History of Western Philosophy" is quite a good read too. The Stanford Encyclopaedia of Philosophy [] is also very good.
I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.
None that actively affiliate themselves with the label 'philosophy'.
This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Descartes is studied, there is usually a good deal of critique and rejection of his ideas. But there's still a lot of good stuff there, in the end. I have found Foucault to be a very interesting modern philosopher/historian. His book, I believe entitled "Madness and civilization", (translated from French), strikes me as a highly impressive analysis on many different levels. His writing style is striking, and his concentration on motivation and purpose goes very, very deep.
Maybe LW should have resident intellectual historians who read philosophy. They could distill any actual insights from dubious, old or badly written philosophy, and tell if a work is worthy reading for rationalists.

More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.

In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!

Plus, he gives a lot of information from his personal experience.

Be ... (read more)

I've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.

I have a question about prediction markets. I expect that it has a standard answer.

It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.

Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?

One way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager). Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.
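A toy simulation of that error-cancellation point (the model and numbers here are invented purely for illustration, not drawn from the comment above): each bettor holds a noisy estimate of the true probability, and a price set where those estimates balance lands close to the truth, while a house-set line simply bakes in a fixed margin.

    import random

    true_p = 0.60
    # 10,000 bettors, each with an unbiased but noisy estimate of true_p
    estimates = [true_p + random.gauss(0, 0.1) for _ in range(10000)]

    market_price = sum(estimates) / len(estimates)   # errors roughly cancel out
    house_price = true_p + 0.05                      # fixed margin, no cancellation

    print "true probability: %.3f" % true_p
    print "market price:     %.3f" % market_price    # close to 0.600
    print "house price:      %.3f" % house_price     # 0.650 regardless of bettors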
I have also heard of at least one professional gambler who makes his living by identifying and confronting other peoples' superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong. Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.
The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.
Right. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players. But that's not what I was trying to get at with "predictably inefficient". What I meant was this: Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot. Won't you be consistently wrong if you try that strategy? If so, how is this consistent wrongness accounted for in futarchy theory? I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem? I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: The market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.
Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38. This table [] shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
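A quick worked check of those numbers (only the $35 payout and the 38 pockets come from the comment above; the rest is arithmetic):

    payout = 35.0                      # profit on a winning $1 single-number bet
    implied_p = 1.0 / (payout + 1.0)   # probability implied by the payout: 1/36
    true_p = 1.0 / 38.0                # American wheel: 1-36 plus 0 and 00

    ev = true_p * payout - (1.0 - true_p) * 1.0
    print "implied p: %.4f" % implied_p   # 0.0278
    print "true p:    %.4f" % true_p      # 0.0263
    print "EV per $1: %.4f" % ev          # about -0.0526, the house edge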
It's interesting that the market drives the odds so close to reality, but doesn't quite close the gap. Do you know if there are regulations that keep some rogue casino from selling roulette bets as though the odds were 1/37, instead of 1/36? I'm thinking now that the entire answer to my question is contained in Dagon's reply []. Perhaps the gambling market is distorted by regulation, and its failure as a prediction market is entirely due to these regulations. Without such regulations, maybe the gambling business would function much more like an accurate prediction market, which I suppose would make it seem like a much less enticing business to go into. This would imply that, if you don't like casinos, you should want regulation on gambling to focus entirely on making sure that casinos don't use violence to keep other casinos from operating. Then maybe we'd see the casinos compete by bringing their odds closer to reality, which would, of course, make the casinos less profitable, so that they might close down of their own accord. (Of course, I'm ignoring games that aren't entirely games of chance.)
This really doesn't have much to do with the market. While I don't know the details of gambling laws in all the US states and Indian nations, I would be very surprised if there were regulations on roulette odds. Many casinos have roulette wheels with only one 0 (paid as if 1/36, actual odds 1/37), and with other casino games, such as blackjack, casinos frequently change the rules as part of a promotion or to try to get better odds. There is no "gambling market": casinos are places where people pay for entertainment, not to make money. While casinos do offer promotions and advertise favorable rules and odds, most people go for the entertainment, and no one who's serious about math and probability goes to make money (with exceptions for card-counting and poker tournaments, as orthonormal [] notes). Also see Unnamed's [] comment. Essentially, the answer is that a casino is not a market.
A single casino is not a market, but don't all casinos and gamblers together form a market for something? Maybe it's a market for entertainment instead of prediction ability, but it's a market for something, isn't it? Moreover, it seems, at least naïvely, to be a market in which a casino would attract more customers by offering more realistic odds.
Some casinos in Vegas have European roulette with a smaller house edge []. I know this from a Vegas guidebook which listed where you could find the best odds at various games suggesting that at least some gamblers seek out the best odds. The Wikipedia link also states:
In the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).
The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house. If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).
But no casino has a monopoly on roulette. Yet the market doesn't seem to drive the odds to their correct values. Dagon notes above that regulations make it hard to enter the market as a casino. Maybe that explains why my naive expectations don't happen. Actually this raises another question for me. If I start a casino in Vegas, am I required to sell roulette bets as though the odds were p = 1/36 [], instead of, say, p = 1/37 ? [Edited for lack of clarity.]
Casinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.
Normally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.

I think you're overestimating your ability to see what exactly is wrong and how to fix it. Humans (westerners?) are biased towards thinking that improvements they propose would indeed make things better. This tendency is particularly visible in politics, where it causes the most damage.

More generally, humans are probably biased towards thinking their own ideas are particularly good, hence the "not invented here" syndrome, etc. Outside of politics, the level of confidence rarely reaches the level of threatening death and destruction if one's ideas are not accepted.


Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?

If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariab... (read more)

It may have to do with the manner you bring it up - it's not hard to see how saying something like "X is amazing" could be interpreted "X is amazing...and you're not" (after all, how often do you tell your friends how amazing they are?), in which case the bias is some combination of status jockeying, cognitive dissonance and ego protection.
Wow, that's seems like a very likely hypothesis that I completely missed. Is there some piece of knowledge you came in with or heuristic you used that I could have used to think up your hypothesis?
I've spent some time thinking about this, and the best answer I can give is that I spend enough time thinking about the origins and motivations of my own behavior that, if it's something I might conceivably do right now, or (more importantly) at some point in the past, I can offer up a possible motivation behind it. Apparently this is becoming more and more subconscious, as it took quite a bit of thinking before I realized that that's what I had done.
Could it be a matter of being excessively influenced by fiction? It's more convenient for stories if a character has some flaws and suffering.
Is this actually incorrect, though? As far as I know, people have problems and inadequacies. When they solve them, they move on to worrying about other things. It's probably a safe bet that the awesome people you're describing do as well. What probably is wrong is that general awesomeness makes hidden bad stuff more likely.
Possibly a form of the just-world fallacy [].
Given that there's the halo effect (that you mention) plus the affect heuristic, it seems that if there's a bias, it goes the other way - people tend to think all positive attributes clump together. If both effects exist, that would cast doubt on whether it counts as a bias at all, as the direction of the error is not consistently one way. (Right?)
Will's remark suggests that the biases exist in different circumstances. If I'm following Will, then the halo effect occurs when people have already interacted with impressive individuals, whereas Will's reported effect occurs only when people are hearing about an impressive individual in a second-hand or third-hand way.

Day-to-day question:

I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.

  1. Action: Capture the frog and put it in the woods o
... (read more)
Eliezer Yudkowsky:
I don't consider frogs to be objects of moral worth.

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.

-- David Pearce via Facebook

Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?

(Edited for clarity.)

Eliezer Yudkowsky:
"Frogs have subjective experience" is the biggy, there's a number of other things I already know myself to be confused about which impact on that, and so I don't know exactly what I should be looking for in the frog that would make me think it had a sense of its own existence. Certainly there are any number of news items I could receive about the frog's mental abilities, brain complexity, type of algorithmic processing, ability to reflect on its own thought processes, etcetera, which would make me think it was more likely that the frog was what a non-confused person of myself would regard as fulfilling the predicate I currently call "capable of experiencing pain", as opposed to being a more complicated version of neural network reinforcement-learning algorithms that I have no qualms about running on a computer. A simple example would be if frogs could recognize dots painted on them when seeing themselves in mirrors, or if frogs showed signs of being able to learn very simple grammar like "jump blue box". (If all human beings were being cryonically suspended I would start agitating for the chimpanzees.)
I am very surprised that you suggest that "having subjective experience" is a yes/no thing. I thought it is consensus opinion here that it is not. I am not sure about others on LW, but I would even go three steps further: it is not even a strict ordering of things. It is not even a partial ordering of things. I believe it can be only defined in the context of an Observer and an Object, where Observer gives some amount of weight to the theory that Object's subjective experience is similar to Observer's own.
Links? I'd be interested in seeing what people on LW thought about this, if it's been discussed before. I can understand the yes/no position, or the idea that there's a blurry line somewhere between thermostats and humans, but I don't understand what you mean about the Observer and Object. The Observer in your example has subjective experience?
I like the way you phrased your concern for "subjective experience" -- those are the types of characteristics I care about as well. But I'm curious: What does ability to learn simple grammar have to do with subjective experience?
We're not looking for objective experience, thus we're simply looking for experience. If we now define 'a sense of one's own existence' as the experience of self-awareness, i.e. consciousness, and if we also regard unconscious experience as unworthy, we're left with consciousness. Now since we cannot define consciousness, we need a working definition. What are some important aspects of consciousness? 'Thinking', which requires 'knowledge' (data), is not the operative point between being an iPhone and being human. It's information processing after all. So what do we mean by unconscious, as opposed to conscious, decision making? It's about deliberate, purposeful (goal-oriented) adaptation. Thus to be conscious is to be able to shape your environment in a way that suits your volition.

* The ability to define a system, within the environment in which it is embedded, to be yourself.
* To be goal-oriented.
* The specific effectiveness and order of transformation by which the self-defined system (you) shapes the outside environment, in which it is embedded, trumps the environmental influence on the defined system. (more [])

How could this help with the frog dilemma? Are frogs conscious?

* Are there signs of active environmental adaptation by frog society, as indicated by behavioral variability?
* To what extent is frog behavior predictable? That is, we have to fathom the extent of active adaptation of the environment by frogs as opposed to passive adaptation of frogs by the environment. Further, one needs to test the frog's ability for deliberate, spontaneous behavior given environmental (experimental) stimuli and see if frogs can evade, i.e. action vs reaction.

P.S. No attempt at a solution, just some quick thoughts I wanted to write down for clarity and possible feedback.

I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?

You'd need to change that to 10^6 specks and 10^15 frogs or something, because emotional reaction to choosing to kill the frogs is also part of the consequences of the decision, and this particular consequence might have moral value that outweighs one speck. Your emotional reaction to a decision about human lives is irrelevant, the lives in question hold most of the moral worth, while with a decision to kill billions of cockroaches (to be safe from the question of moral worth of frogs), the lives of the cockroaches are irrelevant, while your emotional reaction holds most of moral worth.
I'm not so sure []. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
Hopefully he still thinks there's a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.
Depends. Would that make it harder to get frog legs?
Same questions to you, but with "rocks" for "frogs". Eliezer didn't say he was 100% sure frogs weren't objects of moral worth, nor is it a priori unreasonable to believe there exists a sharp cutoff without knowing where it is.
Seconded, and how do you (Eliezer) rate other creatures on the Great Chain of Being?
Would you save a stranded frog, though?
What about dogs?
Yeah, trying to save the world does that to you. ETA (May 2012): wow, I can't understand what prompted me to write a comment like this. Sorry.
Axiom: The world is worth saving. Fact: Frogs are part of the world. Inference: Frogs are worth saving in proportion to their measure and effect on the world. Query: Is life worth living if all you do is save more of it?
I don't know. I'm not Eliezer. I'd save the frogs because it's fun, not because of some theory.
As a matter of practical human psychology, no. People cannot just give and give and get nothing back from it but self-generated warm fuzzies, a score kept in your head by rules of your own that no-one else knows or cares about. You can do some of that, but if that's all you do, you just get drained and burned out.
Three does not follow from 1. It doesn't follow that the world is more likely to be saved if I save frogs. It also doesn't follow that saving frogs is the most efficient use of my time if I'm going to spend time saving the world. I could for example use that time to help reduce existential risk factors for everyone, which would happen to incidentally reduce the risk to frogs.
I find it difficult to explain, but know that I disagree with you. The world is worth saving precisely because of the components that make it up, including frogs. Three does follow from 1, unless you have a (fairly large) list of properties or objects in the world that you've deemed out of scope (not worth saving independently of the entire world). Do you have such a list, even implicitly? I might agree that frogs are out of scope, as that was one component of my motivation for posting this thread. And stating that there are "more efficient" ways of saving frogs than directly saving frogs does not refute the initial inference that frogs are worth saving in proportion to their measure and effect on the world. Perhaps you are really saying "their proportion and measure is low enough as to make it not worth the time to stoop and pick them up"? Which I might also agree with. But in my latest query, I was trying to point out that "a safe Singularity is a more efficient means of achieving goal X" or "a well thought out existential risk reduction project is a more efficient means of saving Y" can be used as a fully general counterargument [], and I was wondering if people really believe they trump all other actions one might take.

I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.

Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one's routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It's easy to say, "Oh, that's not the most cost-effective use of my time," but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. ("If saving worms is good, then working toward technology to help all kinds ... (read more)

Maybe so, but the question is why we should care. If only for the cheap signaling value.
My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)
2: I would put the frog in the grass. Warm fuzzies are a great way to start the day, and it only costs 30 seconds. If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help. Also, wash your hands before eating.
The goal of helping frogs is to gain fuzzies, not utilons. Thinking about all the frogs that you don't have the opportunity to help would mean losing those fuzzies.
There's no utility in saving (animal) life? Or is that only for this particular instance? Edit 20-Jun-2014: Frogs saved since my original post: 21.5. Frogs I've failed to save: 23.5.
How often do you find frogs in the stairwell? Could it make sense to carry something (a plastic bag?) to pick up the frog with so that you don't get slime on your hands? If it were me, I think I'd go with plastic bag or other hand cover, possibly have room temperature water with me (probably good enough for frogs, and I'm willing to drink the stuff), and put the frog on the lawn unless I'm in the mood for a bit of a walk and seeing the woods. I have no doubt that I would habitually wonder whether there are weird events in people's lives which are the result of interventions by incomprehensibly powerful beings.
Once per two weeks? I would go with 1+3 for maximum fuzzies. If the frog is alive, that is.
Have you collected any data on how often the frog would find its own way out if left alone? Setting up an experiment that could reliably distinguish this from it being eaten by a bird or moved by another passing human might be tricky.
It is impossible for the frogs to escape from the stairwell without human intervention. The stairs are fairly high and only slabs of concrete with air below them. The most I've seen a frog succeed at is making it halfway underneath the door to the maintenance area also located in the stairwell. I have never observed another human helping a frog. From my memory (not the best experimental apparatus, but it is what I have), the ratio of frog corpses after work to live, unrescued frogs in the morning has thus far been 1:1.
Is there any possibility of constructing some kind of frog barrier at the top of the stairwell or amphibian escape ramp (PVC pipe?) or does the layout of the public space make that impractical? My preference would be for an engineering solution if I hypothetically valued frog survival highly. A web-cam activated frog elevator would be entertaining but probably overkill. Of course this may not be optimal if the warm-fuzzies from individual frog-assisting episodes are of greater expected utility than automated out-of-sight out-of-mind frog moving machinery.
I think the warm-fuzzies from an engineering solution could be quite significant. Throughout the day, if I had encountered a live or dead frog in my stairwell, my mind might return to the subject of frogs caught in stairwells several times. If I had saved the frog by hand, I would feel some satisfaction (tinged with some cynicism, see here []) but also anxiety that I could not always be there for every frog. With the engineering solution, I would feel proud about human ingenuity and happy about all the potential frogs I could be saving any moment. Lots and lots of warm fuzzies.
I would not be able to alter the stairwell in such a fashion. This is a commercial apartment complex with many other people living in the building. The mailboxes are also in this bottom-level stairwell, meaning it gets quite a bit of traffic aside from myself. I reiterate I have never seen another person help a frog, and the ratio has always been 1:1. Other buildings in the apartment complex apparently also have this problem, as I asked someone who lived in another building if they'd ever seen a dead frog, and they said yes, on occasion, they see them when getting the mail. I did not ask if they saw any live frogs. There are at least 4 identical stairwells per building, and at least 6 buildings adjacent to a pond. This adds to my feelings of "this problem is too big for me."
When I was a young child my dad was building an extension on our house. He dug a deep trench for the foundations and one morning I came down to find what my hazy memory suggests were thousands of baby frogs from a nearby lake. I spent some time ferrying buckets of tiny frogs from the trench to the top of our driveway in a bid to save them from their fate. The following morning on the way to school I passed thousands of flat baby frogs on the road. I believe this early lesson may have inured me to the starkly beautiful viciousness of nature.
How wide is the entrance to the stairwell? Could you add a ramp at the sides so the frogs have a chance of getting up without inconveniencing the other people? It would also let people with bikes wheel them in more easily (if any of the residents store bikes in their flats). Is there any group/person who represents the views of the residents of the building(s)? Perhaps write a letter to them with the best solution to the problem you can think of and let them sort it out? Otherwise I would probably devise some form of frog capture device, so I could quickly and cleanly remove frogs (alive or dead). Some form of hinged box controlled by a line on a pole. Edit: Err, read the frequency. Once every 2 weeks doesn't justify taking a tool with you. Probably just take some gloves and rescue the frogs.

How Facts Backfire

Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of info

... (read more)
Interesting tidbit from the article: I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.
I'd guess aggression would have a polarising effect, depending upon ingroup or outgroup affiliation. Aggression from a member of your own group is directed at something important that you ought to take note of. Aggression from an outsider is possibly directed at you, so something to be ignored (if not credible) or countered. We really need some students to run tests on, or a better way of searching psych research than Google.
Data point: After years of having the correct arguments in my hand, having indeed generated many of them myself, and simply refusing to update, Eliezer, Cectic, and Dan Meissler ganged up on me and got the job done. I think Jesus and Mo helped too, now I think of it. That period's already getting murky in my head =/ Anyhow, point is, none of the above are what you'd call gentle. ETA: I really do think humor is incredibly corrosive to religion. Years before this, the closest I ever came to deconversion was right after I read "Kissing Hank's Ass"
Presumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.
I believe aggressive debates are not about convincing the people you are debating with; that is likely to be impossible. Instead it is about convincing third parties who have not yet made up their minds. For that purpose it might be better to take an overly extreme position and to attack your opponents as much as possible.
I think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important. On the other hand, if you e.g. independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)
The primary study in question is here []. I haven't been able to locate online a copy of the study about self-esteem and corrections.

The more recent analysis I've read says that people pretty much become suicide bombers for nationalist reasons, not religious reasons.

I suppose that "There should not be American bases on the sacred soil of Saudi Arabia" is a hybrid of the two, and so might be "I wanted to kill because Muslims were being hurt"-- it's a matter of group identity more than "Allah wants it".

I don't have specifics for the 9/11 bombers.


Thanks, I often commit that mistake. I just write without thinking much, not recognizing the potential importance the given issues might bear. I guess the reason is that I mainly write out of an urge for feedback and to alleviate mental load.

It's not really meant as an excuse but rather the exposure of how one can use the same arguments to support a different purpose while criticizing others for these arguments and not their differing purpose. And the bottom line is that there will have to be a tradeoff between protecting values and the violation of the same to g... (read more)


Reading Michael Vassar's comments on WrongBot's article made me feel that the current technique of learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).

I was wondering whether there might be a more focused way of doing this. I.e., a short-term workshop, a few writers who hav... (read more)

We could use a more structured system, perhaps. At this point, there's nothing to stop you from writing a post before you're ready, except your own modesty. Raise the threshold, and nobody will have to yell at people for writing posts that don't quite work. Possibilities:
1. Significantly raise the minimum karma level.
2. An editorial system: a more "advanced" member has to read your post before it becomes top-level.
3. A wiki page with instructions for posting. It should include: a description of appropriate subject matter, formatting instructions, and common errors in reasoning or etiquette.
4. A social norm that encourages editing (including totally reworking an essay). The convention for blog posts on the internet in general mandates against editing -- a post is supposed to be an honest record of one's thoughts at the time. But LessWrong is different, and we're supposed to be updating as we learn from each other. We could make "Please edit this" more explicit.
A related thought on karma -- I have the suspicion that we upvote more than we downvote. It would be possible to adjust the site to keep track of each person's upvote/downvote stats. That is, some people are generous with karma, and some people give more negative feedback. We could calibrate ourselves better if we had a running tally.
Kuro5hin had an editorial system, where all posts started out in a special section where they were separate and only visible to logged in users. Commenters would label their comments as either "topical" or "editorial", and all editorial comments would be deleted when the post left editing; and votes cast during editing would determine where the post went (front page, less prominent section, or deleted). Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though. We upvote much more than we downvote - just look at the mean comment and post scores. Also, the number of downvotes a user can make is capped at their karma.
Enthusiastically seconded. The only change I'd make is to hide editorial comments when the post leaves editing (instead of deleting them), with a toggle option for logged-in users to carry on viewing them. I think it is. There are several tricks we could use to give busy-smart people more of a chance to edit posts. On Kuro5hin, if I remember right, posts left the editing queue automatically after 24 hours, either getting posted or kicked into the bit bucket. Also, users could vote to push the story out of the queue early. If Less Wrong reimplemented this system, we could raise the threshold for voting a story out of editing early, or remove the option entirely. We could even lengthen the period it spends in the editing stage. (This would also have the advantage of filtering out impatient people who couldn't wait 3 days or whatever for their story to post.) LW's also just got a much smaller troll ratio than Kuro5hin did, which would help a lot.
It seems like there's at least some interest in doing something to deal with helping people to develop posting skills through a means other than simply writing lots of articles and bombarding the community with them. The editorial system seems like it has a lot of promising aspects. The main thing is, it seems more valuable to implement a weak system than to simply talk about implementing a stronger system, so whether the editorial system is the best that can be done depends on whether the people in charge of the community are interested in implementing it. If they turn out not to be, I still wonder whether there are a few people out there who could volunteer to help make posts better, and a few people who could volunteer not to bombard LW but instead to develop their skills in a quieter way (nb: that doesn't refer to anyone in particular except, potentially, myself). Personally, I still think that would be useful, even if suboptimal. Does the lack of a response from EY imply that he's not interested in that sort of change and, if so, is it EY who would be the one to make the decision?
EY has stated in the past that the reason most suggestions do not result in a change in the web site is that no programmer (or no programmer that EY and EY's agents trust) is available to make the change. Also, I think he reads only a fraction of LW these months.
Meanwhile, it would probably be worthwhile if people would write about any improvement they've made in their ability to think and to convey their ideas, whether it's deliberate or the result of being in useful communities. I'm not sure that I've made improvements myself-- I think my strategy (which it took a while to make conscious) of writing for my past self who didn't have the current insight has served me pretty well-- that and General Semantics (a basic understanding that the map isn't the territory). If I were writing for a general audience, I think I'd need to learn about appropriate levels of redundancy.
I wouldn't read anything into the lack of response, EY often doesn't comment on meta-discussion. In fact I'd guess there's a good chance he hasn't even seen this thread! I guess it might be worth raising this in the Spring 2010 meta-thread []? Come to think of it, it's been 4+ months since that meta thread was started - it may even be worth someone posting a Summer 2010 meta-thread with this as a topic starter.
Okay then. Well I don't have the karma to start a thread so I'll leave it to someone who has if they think it's worthwhile. If nothing else, I wondered about the possibility of doing a top level post expressly for this purpose. So people could post an article with the idea being that comments in response would be aimed at improving it, rather than just general comments. And the further understanding that the original article would then be edited and people could comment on this new one. If the post got a good enough response after a few drafts, it could then be posted at the top level. Otherwise, it would be a good lesson anyway. It would also be less cluttered because it would all be within that purpose-made, top level post.
Sounds like a good idea. The Open Thread could be (and has been) used for this, but it may be worthwhile to set up a thread specifically for constructive criticism on draft articles.
Another technical solution. Not trivial to implement, but it also contains significant side benefits.
* Find some subset of sequences and other highly ranked posts that are "super-core" and have large consensus not just in karma, but also in agreement by high-karma members (say top ten).
* Create a multiple choice test and implement it online; external technologies for that already exist, I am sure. Some karma + passing the test gets top posting privileges.
I have to confess I abused my newly acquired posting privileges and probably diluted the site's value with a couple of posts. Thank goodness they were rather short :). I took the hint though and took to participating in the comment discussion and reading sequences until I am ready to contribute at a higher level.
Is there any consensus about the "right" way to write a LW post? I see a lot of diversity in style, topic, and level of rigor in highly-voted posts. I certainly have no good way to tell if I'm doing it right; Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was. (Voting is not solely determined by post quality; this is a big part of the problem.) I would certainly love to have a better way to get feedback than the current mechanisms; it's indisputable that my writing could be better. Being able to workshop posts would be great, but I think it would be hard to find the right people to do the workshopping; off the top of my head I can really only think of a handful of posters I'd want to have doing that, and I get the impression that they're all too busy. Maybe not, though. (I think this is a great idea.)
I didn't think there was anything particularly wrong with your post, but newer posts get a much higher level of karma than old ones, which must be taken into account. Some of the core sequence posts have only 2 karma, for example.
Agreed, and that is exactly the sort of factor I was alluding to in my parenthetical.
I suppose there's a few options including: See who's willing to run workshops and then once that's known, people can choose whether to join or not. If none of the top contributors could be convinced to run them then they may still be useful for people of a lower level of post writing ability (which I suspect is where I am, at the moment). The other thing is, even regardless of who ran the workshops, the ability to get faster feedback and to redraft gives a chance to develop an article more thoroughly before posting it properly and may give a sense of where improvements can be made and where the gaps in thinking and writing are. But I guess that questions like that are secondary to the question of whether enough people think it's a good enough idea and whether anyone would be willing to run workshops at all.
Upvoted for raising the topic, but the approach I'd prefer is jimrandomh's suggestion [] of having all posts pass through an editorial stage before being posted 'for real.'

Rationality applied to swimming

The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, and the good swimmers looked very different, so he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.

Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.

I'm posting this partly because it's alway... (read more)

Thought without Language -- discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.

Edited because I labeled the link as "Language without Thought" -- this counts as an example of itself.

That is amazingly interesting.

Two things of interest to Less Wrong:

First, there's an article about intelligence and religiosity. I don't have access to the papers in question right now, but the upshot is apparently that the correlation between intelligence (as measured by IQ and other tests) and irreligiosity can be explained with minimal emphasis on intelligence itself, and more on the ability to process information and estimate one's own knowledge base. They found for example that people who were overconfident about their knowledge level were much more likely to be religious. There may... (read more)

The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.

Now try the sequel (another YouTube video).

Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.

More here.

Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling... (read more)

Sure, I've been thinking about it, I need $10MM to produce it though.

Nobel Laureate Jean-Marie Lehn is a transhumanist.

We are still apes and are fighting all around the world. We are in the prisons of dogmatism, fundamentalism and religion. Let me say that clearly. We must learn to be rational ... The pace at which science has progressed has been too fast for human behaviour to adapt to it. As I said we are still apes. A part of our brain is still a paleo-brain and many of our reactions come from our fight or flight instinct. As long as this part of the brain can take over control of the rational part of the brain (we will face

... (read more)

"Therefore, “Hostile Wife Phenomenon” is actually “Distant, socially dysfunctional Husband Syndrome” which manifests frequently among cryonics proponents. As a coping mechanism, they project (!) their disorder onto their wives and women in general to justify their continued obsessions and emotional failings."

Assorted hilarious anti-cryonics comments on the accelerating future thread

If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter [] recently had a post [] on the NY Times article. The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam" [], a pseudoscience [], science fiction [] (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists [] (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish [], more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist []. All in all, a delightful cornucopia of irrationality. ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post []. There was also a comment from someone registered with Alcor [] that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received. Also, check out this hilarious description of Robin Hanson [] from a commenter there: I guess that t
The responses are interesting. I think this is the most helpful to my understanding: I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.
um... why isn't it? There's a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you're blowing money on mere possibility. We don't normally make bets that depend on the future development of currently unknown technologies. We aren't all investing in cold fusion just because it would be really awesome if it panned out. Sorry, I know this is a cryonics-friendly site, but somebody's got to say it.
There are a lot of alternatives to fusion energy and since energy production is a widely recognized societal issue, making individual bets on that is not an immediate matter of life and death on a personal level. I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it. However, if you give 1% chance for both fusion and cryonics to work, it could still make sense to bet on the latter but not on the first.
Don't read too much into my fusion analogy; you're right that cryonics is different than fusion.
May I suggest also that we be careful to distinguish cold fusion from fusion in general? Cold fusion is extremely unlikely. Hot fusion reactors whether laser confinement or magnetic confinement already exist, the only issue is getting them to produce more useful energy than you put in. This is very different than cold fusion where the scientific consensus is that there's nothing fusing.
... and different to almost any other unproven technology (for the exact same reason).
That's ok, it's a skepticism friendly site as well. I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.
Well, if you think there's a decent probability for cryonics to turn out then investing in pretty much anything long-term becomes much more likely to be personally beneficial. Indeed, research in general increases the probability that cryonics will end up working (since it reduces the chance of catastrophic events or social problems and the like occurring before the revival technology is reached). The problem with cold fusion is that it is extremely unlikely to work given the data we have. I'd estimate that it is orders of magnitude more likely that say Etale cohomology [] turns out to have a practical application than it is that cold fusion will turn out to function. (I'm picking Etale cohomology as an example because it is pretty but very abstract math that as far as I am aware has no applications and seems very unlikely to have any applications for the foreseeable future).
You don't think it likely that etale cohomology will be applied to cryptography? I'm sure there are papers already claiming to apply it, but I wouldn't want to evaluate them. Some people describe it as part of Schoof's algorithm, but I'm not sure that's fair. (or maybe you count elliptic curve cryptography as whimsy - it won't survive quantum computers any longer than rsa)
Yeah, ok. That may have been a bad example, or it may be an indication that everything gets some application. I don't know how it relates to Schoof's algorithm. It isn't as far as I'm aware used in the algorithm or in the correctness proof but this is stretching my knowledge base. I don't have enough expertise to evaluate any claims about applying Etale cohomology to cryptography. I'm not sure what to replace that example with. Stupid cryptographers going and making my field actually useful to people.
There's always a way to estimate how likely something is, even if it's not a very accurate prediction. And "mere" used like that seems kinda like a dark side word, if you'll excuse me. Cryonics is theoretically possible, in that it isn't inconsistent with science/physics as we know it so far. I can't really delve into this part much, as I don't know anything about cold fusion and thus can't understand the comparison properly, but it sounds as if it might be inconsistent with physics? Possibly relevant: Is Molecular Nanotechnology Scientific? [] Also, the benefits of cryonics working if you invested in it would be greater than those of investing in cold fusion. And this is just the impression I get, but it sounds like you're being a contrarian contrarian. I think it's your last sentence: it made me think of Lonely Dissent [].
The unfair thing is, the more a community (like LW) values critical thinking, the more we feel free to criticize it. You get a much nicer reception criticizing a cryonicist's reasoning than criticizing a religious person's. It's easy to criticize people who tell you they don't mind. The result is that it's those who need constructive criticism the most who get the least. I'll admit I fall into this trap sometimes.
(belated reply:) You're right about the openness to criticism part, but there's another thing that goes with it: the communities that value critical thinking will respond to criticism by thinking more, and on occasion this will literally lead to the consensus reversing on the specific question. Without a strong commitment to rationality, however, frequently criticism is met by intransigence instead, even when it concerns the idea rather than the person. Yes, people caught in anti-epistemological [] binds get less criticism - but they usually don't listen to criticism, either. Dealing with these is an unsolved problem.
Well, right off the bat, there's a difference [] between "cryonics is a scam" and "cryonics is a dud investment". I think there's sufficient evidence to establish the presence of good intentions - the more difficult question is whether there's good evidence that resuscitation will become feasible.
You seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It's just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5. Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can't be too close to 0 or 1. But not having enough evidence doesn't excuse you from having to estimate the probability of something before accepting or rejecting it.

I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don't you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.

Example: If you hear someone whom you don't specifically suspect to have a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24 that statement probably should be assigned a probability quite a bit higher than 0.5 even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random you probably should assign a probability very close to 0.

Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.
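A small numerical sketch of the wedding-announcement example above may help; every number here is made up purely for illustration, since the comment doesn't specify any. The point is that the base rate for an arbitrary specific claim is low, but conditioning on someone actually asserting it lifts the estimate well above 0.5.

```python
# Hypothetical numbers only -- a sketch of why hearing a specific claim
# asserted should move your estimate well above the raw base rate.
prior_true = 0.01             # base rate for an arbitrary specific claim of this kind
p_assert_given_true = 0.9     # people usually report real wedding plans
p_assert_given_false = 0.002  # few people assert a false, highly specific claim like this

posterior = (p_assert_given_true * prior_true) / (
    p_assert_given_true * prior_true + p_assert_given_false * (1 - prior_true)
)
print(round(posterior, 2))  # ~0.82, versus ~0.01 for the same sentence generated at random
```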

I wonder what the average probability of truth is for every spoken statement made by the human populace on your average day, for various message lengths. Anybody wanna try some Fermi calculations? I'm guessing it's rather high, as most statements are trivial observations about sensory data, performative utterances, or first-glance approximations of one's preferences. I would also predict sentence accuracy drops off extremely quickly the more words the sentence has, and especially so the more syllables there are per word in that sentence.
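Here is one way such a Fermi calculation could go; every share and truth rate below is an assumption pulled out of the air rather than data, so the output only illustrates the method.

```python
# Back-of-the-envelope Fermi sketch; all figures are assumptions.
categories = {
    # category: (assumed share of daily utterances, assumed probability of being true)
    "trivial sensory/observational remarks": (0.5, 0.95),
    "preferences, plans, performatives":     (0.2, 0.85),
    "recalled factual claims":               (0.2, 0.75),
    "social lies and exaggerations":         (0.1, 0.20),
}
average_truth = sum(share * p for share, p in categories.values())
print(round(average_truth, 2))  # roughly 0.8 under these made-up assumptions
```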
Once you are beyond the most elementary of statements I really don't think so, rather the opposite, at least for unique rather than for repeated statements. Most untrue statements are probably either ad hoc lies ("You look great." "That's a great gift." "I don't have any money with me.") or misremembered information. In the case of ad hoc lies there is not enough time to invent plausible details, and inventing details without time to think it through increases the risk of being caught; in the case of misremembered information you are less likely to know or remember additional information you could include in the statement than someone who really knows the subject and wouldn't make that error. Of course more information simply means including more things even the best experts on the subject are simply wrong about, as well as more room for misrememberings, but I think the first effect dominates because there are many subjects the second effect doesn't really apply to, e.g. the content of a work of fiction or the constitution of a state (to an extent even legal matters in general). Complex untrue statements would be things like rehearsed lies and anecdotes/myths/urban legends. Consider the so-called conjunction fallacy: if it were maladaptive for evaluating the truth of statements encountered normally, it probably wouldn't exist. So in everyday conversation (or at least the sort of situations that are relevant for the propagation of the memes and/or genes involved) complex statements, at least of those kinds that can be observed to be evaluated "fallaciously", are probably more likely to be true.
There isn't no way to estimate it. We can make reasonable estimations of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.). Moreover, it is always possible to estimate something's likelihood, and one cannot simply say "oh, this is difficult to estimate accurately, so I'll assign it a low probability." For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A. Obviously, A and ~A can't both have low probabilities.
That's true; uncertainty about A doesn't make A less likely. It does, however, make me less likely to spend money on A, because I'm risk-averse.
Have you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?
Maybe $50. That's around the most I'd be willing to accept losing completely.
Nice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount. Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?
I wonder how much money it would cost to commission the required science and marketing to get 10^5 cryopreserved people? I welcome your guesses. My guess ROT13'd V guvax gung vg jbhyq pbfg nebhaq bar uhaqerq zvyyvba qbyynef bire n crevbq bs guvegl lrnef

Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background.

Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning/specialized AI are becoming increasingly common place to the point where they are being used for very mundane tasks.

Somebody said that the reason there is no progress in AI is that once a problem domain is understood well enough that there are working applications in it, nobody calls it AI any longer.
I think philosophy is a similar case. Physics used to be squarely in philosophy, until it was no longer a confused mess, but actually useful. Linguistics too used to be considered a branch of philosophy.
As did economics.

They could talk about it elsewhere.

My understanding is that waitingforgodel doesn't particularly want to discuss that topic, but thinks that it's important that LW's moderation policy be changed in the future for other reasons. In that case it appears to me the best way to go about it is to try to convince Eliezer using rational arguments.

A public commitment has been made.

Commitment to a particular moderation policy?

Eliezer has a bias toward secrecy.

I'm inclined to agree, but do you have an argument that he is biased (instead of us)?

In my obse

... (read more)
See waitingforgodel's actual words on the subject []. We could speculate that these "aren't his real reasons" but they certainly are sane reasons and it isn't usually useful to presume we know what people want despite what they say. At least for the purpose of good faith discussion if not for our personal judgement. Waitingforgodel's general goals could be achieved without relying on LW itself but in a way that essentially nullifies the censorship influence (at least in an 'opt in' manner), even ensuring a negligible ongoing trivial inconvenience []. This wouldn't be easy or likely for him to achieve but see below for a possible option. Assuming an outcome was achieved that ensured overt censorship created more discussion rather than less (Streisand Effect []) it may actually become in Eliezer's interest to allow such discussions on LW. That would remove attention from the other location and put it back to a place where he can express a greater but still sub-censorship form of influence. More so on this specific topic than the general case. You are right that it wouldn't be violating a public commitment to not censor something unrelated. Now, there is a distinction to be made that I consider important. Let's not pretend this is about moderation. Moderation in any remotely conventional sense would be something that applied to Eliezer's reply and not Roko's post. There hasn't been an instance of more dramatic personal abuse. The response was anything but 'moderate'. Without for the purpose of this point labelling it good or bad, this is about censoring an idea. I don't think those who are most in support of the banning would put this in the same category as moderation. Not right now but it is true that 'him or us' is something to consider if I was focusing on this issue. I actually typed some examples
Wei Dai:
I don't think it was the main reason for my suggestion. I thought that threatening Eliezer with existential risk was obviously a suboptimal strategy for wfg, and looked for a better alternative to suggest to him. Rational argument was the first thing that came to mind since that's always been how I got what I wanted from Eliezer in the past. You might be right that there are other even more effective approaches wfg could take to get what he wants, but to be honest I'm more interested in talking about Eliezer's possible biases than the details of those approaches. :) Your larger point about not limiting ourselves to actions that are ineffective does seem like a good one. I'll have to think a bit about whether I'm personally biased in that regard.
I am trying to remember the reference to Eliezer's discussion of keeping science safe by limiting it to people who are able to discover it themselves, i.e. Security by FOFY. I know he has created a post somewhere but don't have the link (or keyword cache). If I recall he also had Harry preach on the subject and referenced an explicit name. I wouldn't go so far as to say the idea is useless but I also don't quite have Eliezer's faith. I also wouldn't want to reply to a straw man from my hazy recollections.

Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.

The reaction seems to be basically, "but they're wrong, why should they get to use that term?"


I haven't regularly read Slashdot in several years, but I seem to recall that it was like that pretty much all the time.
There's a legitimate reason to not want ID proponents and creationists to use the term "evolutionist" although it isn't getting stated well in that thread. In particular, the term is used to portray evolution as an ideology with ideological adherents. Thus, the use of the term "evolutionism" as well. It seems like the commentators in question have heard some garbled bit about that concern and aren't quite reproducing it accurately.
Thanks for the reply. Wouldn't your argument apply just the same to any inflection of a term to have "ism"? If you and I are arguing about whether wumpuses are red, and you think they are, is it a poor portrayal to refer to you as a "reddist"? Does that imply it's an ideology, etc? What would you suggest would be a better term for ID proponents to use?
I presume someone who took this argument seriously would say either a) that it's ok to use the term if they stop making ridiculous claims about ideology or b) suggest "mainstream biologists" or "evolution proponents", both of which are wordy but accurate (I don't think that even ID proponents would generally disagree with the point that they aren't the mainstream opinion among biologists.)
Do you expect that, in general, people should never use the form "X-ist", but rather, use "X proponent"? Should evolution proponents use "Intelligent Design advocate" and "creation advocate"?
If a belief doesn't fit an ideological or religious framework, I think that "X-ist" and "X-ism" are often bad. I actually use the phrase "ID proponent" fairly often partially for this reason. I'm not sure however that this case is completely symmetric given that ID proponents self-identify as part of the "intelligent design movement" (a term used repeatedly by William Dembski, for example, and occasionally by Michael Behe).

A second post has been banned. Strange: it was on a totally different topic from Roko's.

Eliezer Yudkowsky:
Still the sort of thing that will send people close to the OCD side of the personality spectrum into a spiral of nightmares, which, please note, has apparently already happened in at least two cases. I'm surprised by this, but accept reality. It's possible we may have more than the usual number of OCD-side-of-the-spectrum people among us.
So, this is the problem that didn't occur to me. I assumed implicitly that because such things were easy for me to brush off, the same logic would apply to others. Which is kind of silly, because I knew about one of the previous worriers from Benton House. I think that the bottom line here is that I need to update in favor of greater general caution surrounding anything to do with the singularity, AGI, etc.
Was the discussion in question epistemologically interesting (vs. intellectual masturbation)? If so, how many OCD personalities joining the site would call for closing the thread? I am curious about decision criteria. Thanks. As an aside, I've had some SL-related psychological effects, particularly related to the material notion of self: a bit of trouble going to sleep, realizing that logically there is little distinction from a death-state. This lasted a short while, but then you just learn to "stop worrying and love the bomb". Besides "time heals all wounds", certain ideas helped, too. (I actually think this is an important SL, though it does not sit well within the SciFi hierarchy.) This worked for me, but I am generally very low on the OCD scale, and I am still mentally not quite ready for some of the discussions going on here.
It is impossible to have rules without Mr. Potter exploiting them. []
There is an upside to this, though. Timelessly speaking, there is nothing special about the moment of your death, since there are always going to be other yous elsewhere that are alive, and there will always be some continuations of any given experience moment that survive. It is very Zen.
Is it OCD or depression? Depression can include (is defined by?) obsessively thinking about things that make one feel worse.
Depressive thinking generally focuses on short term issues or general failure. I'm not sure this reflects that. Frankly, it seems to come across superficially at least more like paranoia, especially of the form that one historically saw (and still sees) in some Christians worrying about hell and whether or not they are saved. The reaction to these threads is making me substantially update my estimates both for LW as a rational community and for our ability to discuss issues in a productive fashion.
(comment edited) I wonder why PlaidX's post isn't getting deleted - the discussion there is way closer to the forbidden topic.
Yep. But not unexpectedly this time; homung posted in the open thread that he was looking for 20 karma so he could post on the subject, and I sent him a private message saying he shouldn't, which he either didn't see or ignored.
What was the second topic? I am most interested in knowing just what things are forbidden.
It was about the possibility of torturing someone by creating copies of the person and torturing them.
If I'm thinking of the right post, it's another one that involved AI and torture, though from a very different angle than Roko's post. It was a dialogue between a human and a uFAI; I don't quite remember what points it was trying to make, but if we're talking about what could affect people with OCD/anxiety conditions, then it's probably just the "uFAI talking about torturing people" aspect that was deemed problematic anyway.
Ahh, I remember the one. It was titled "What do you choose? 3^^^3 people being tortured for 50 years or some E. coli in the eye?" uFAIs in counterfactuals do evil things in contrived scenarios []. It's what they do.
Oh, wow, I don't think I saw that one. I guess that makes three banned AI-torture posts, then? Edit: The one I was thinking of was "Acausal torture? But a scratch, in the multiverse. (A dialogue for human and UFAI)".

Have any LWers traveled the US without a car/house/lot-of-money for a year or more? Is there anything an aspiring rationalist in particular should know on top of the more traditional advice? Did you learn much? Was there something else you wish you'd done instead? Any unexpected setbacks (e.g. ended up costing more than expected; no access to books; hard to meet worthwhile people; etc.)? Any unexpected benefits? Was it harder or easier than you had expected? Is it possible to be happy without a lot of social initiative? Did it help you develop social initi... (read more)

Ironically, your comment series is evidence that censorship partially succeeded in this case. Although existential risk could increase, that was not the primary reason for suppressing the idea in the post.

Succeeded - in promoting what end?
Streisand Effect []

I've actually speculated as to whether Eliezer was going MoR:Quirrel on us. Given that aggressive censorship was obviously going to backfire, a shrewd agent would not use such an approach if they wanted to actually achieve the superficially apparent goal. Whenever I see an intelligent, rational player do something that seems to be contrary to their interests I take a second look to see if I am understanding what their real motivations are. This is an absolutely vital skill when dealing with people in a corporate environment.

Could it be the case that Eliezer is passionate about wanting people to consider torture:AIs and so did whatever he could to make it seem important to people, even though it meant taking a PR hit in the process? I actually thought this question through for several minutes before feeling it was safe to dismiss the possibility.

So I actually haven't read MoR - could you summarize the reference for me? I mean, I can basically see what you're saying from context, but is there anything beyond that it would be useful to know? My instinct is that it just doesn't feel like something Eliezer would do. But what do I know?
There isn't much more to it than can be inferred from the context. MoR:Quirrel is just a clever, devious and rational manipulator. I don't either... but then that's the assumption MoR:Harry made about MoR:Dumbledore. At Quirrel's prompting Harry decided "it was time and past time to ask Draco Malfoy what the other side of that war had to say about the character of Albus Percival Wulfric Brian Dumbledore." :) (Of course EY hasn't been in a war and I don't think there are any people who accuse him of being an especially devious political manipulator.)
I had thought about it and reached no conclusion.

Note to reader: This thread is curiosity inducing, this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a dece... (read more)

I heard it and I don't think it's "wise" to defer to Eliezer's judgment on the matter. I stopped discussing this stuff on LW for a different reason: I feel LW is Eliezer's site and I'd better follow his requests here.
EY might have a disproportionately high influence on us and our future. In this case I believe it is appropriate not to grant him certain rights, i.e. constrain his protection from criticism. He still has the option to ban people, further explain himself or just ignore it. But just censoring something that is by definition highly important and not stating any reasons for it makes me highly suspicious. Even more so if I'm told not to pursue this issue any further in the manner of a sacred truth you are not supposed to know.
Sorry, I didn't mean to misrepresent consensus. Will edit shortly. To be clear: I do have a lot of complaints with how this was and is being handled. My intent is for other to not keep digging for the reasons I did, because I now think my reasons were not sufficient, given the reasons I had not to.

I don't post things like this because I think they're right, I post them because I think they are interesting. The geometry of TV signals and box springs causing cancer on the left sides of people's bodies in Western countries...that's a clever bit of hypothesizing, right or wrong.

In this case, an organization I know nothing about (Vetenskap och Folkbildning from Sweden) says that Olle Johansson, one of the researchers who came up with the box spring hypothesis, is a quack. In fact, he was "Misleader of the year" in 2004. What does this mean in

... (read more)

Who's right? Who knows. It's a fine opportunity to remain skeptical.

Bullshit. The 'skeptical' thing to do would be to take 30 seconds to think about the theory's physical plausibility before posting it on one's blog, not regurgitate the theory and cover one's ass with an I'm-so-balanced-look-there's-two-sides-to-the-issue fallacy.

TV-frequency EM radiation is non-ionizing, so how's it going to transfer enough energy to your cells to cause cancer? It could heat you up, or it could induce currents within your body. But however much heating it causes, the temperature increase caused by heat insulation from your mattress and cover is surely much greater, and I reckon you'd get stronger induced currents from your alarm clock/computer/ceiling light/bedside lamp or whatever other circuitry's switched on in your bedroom. (And wouldn't you get a weird arrhythmia kicking off before cancer anyway?)

(As long as I'm venting, it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right,' because surely it's only interesting because it might be right? Bleh.)

it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right'

Yup, that's the bit I thought made it appropriate for LW.

It reminded me of my speculations on "asymmetric intellectual warfare" - we are bombarded all day long with things that are "interesting" in one sense or another but should still be dismissed outright, if only because paying attention to all of them would leave us with nothing left over for worthwhile items.

But we can also note regularities in the patterns of which claims of this kind get raised to the level of serious consideration. I'm still perplexed by how seriously mainstream media takes claims of "electrosensitivity", but not totally surprised: there is something that seems "culturally appropriate" to the claims. The rate at which cell phones have spread through our culture has made "radio waves" more available as a potential source of worry, and has tended to legitimize a particular subset of all possible absurd claims.

Neither that nor the rest of your 'graf is a decisive argument against a causal connection. And unless you can increase my probability that there are no subsystems of the human body that can resonate at TV frequencies, I will continue in my tentative belief that TV-frequency EM might still cause a problem even if sub-kilohertz EM does not. Very good point about, "It's interesting," though. "It's interesting," should be a good reason only to teenagers still learning to find pleasure in learning new science.

If breast cancer and melanomas are more likely on the left side of the body at a level that's statistically significant, that's interesting even if the proposed explanation is nonsense.

Even so, ISTM that picking through the linked article for its many flaws in reasoning would have been more interesting even than not-quite-endorsing its conclusions. What I find interesting is the question, what motivates an influential blogger with a large audience to pass on this particular kind of factoid? The ICCI blog has an explanation based on relevance theory and "the joy of superstition" [], but unfortunately (?) it involves Paul the Octopus: (ETA: note the parallel between the above and "I post these things because they are interesting, not because they're right". And to be lucid, my own expectations of relevance get aroused for the same reasons as most everyone else's; I just happen to be lucky enough to know a blog where I can raise the discussion to the meta level.)

(So this is just about the first real post I made here and I kinda have stage fright posting here, so if it's horribly bad and uninteresting please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")

1) My analysis of

Let's say 1 second of tort... (read more)

Given some heavy utilitarian assumptions. This isn't an argument, it's more plausible to just postulate disutility of torture without explanation.
It's arbitrarily chosen from the dust speck being -1; I find it easier to imagine one second of torture than years, for comparing to something that happens in less than a second. It's just an example.
The importance of an argument doesn't matter for the severity of an error in reasoning present in that argument. The error might be unimportant in itself, but that it was made in an unimportant argument doesn't argue for the unimportance of the error.
Oh. I misinterpreted what error you were referencing. Yeah, you're right, I guess. Sorry.
And from this I can't infer whether communication succeeded or you are just making a social sound (not that it's very polite of me to remark this).
I first thought you had a problem with me making the number -1 000 000 from nowhere. Later I realized you meant that to some people it might not be obvious that the utility of 50 years of torture is the average utility per second times the number of seconds.
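For what it's worth, here is that multiplication spelled out. The -1,000,000 figure mentioned above appears to be the (truncated) post's arbitrary per-second value on a scale where a dust speck is -1; I'm assuming that reading here, so treat this as an illustrative sketch rather than the post's exact numbers.

```python
# Sketch of the arithmetic: disutility of 50 years of torture on the post's
# assumed scale (dust speck = -1, one second of torture = -1,000,000).
seconds_per_year = 365.25 * 24 * 3600        # about 3.16e7 seconds
total = -1_000_000 * 50 * seconds_per_year
print(f"{total:.3g}")                        # about -1.58e+15
```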
I assign ants exactly zero utility, but the wild surge objection [] still applies - you can't affect the universe in 3^^^3 ways without some risk of dramatic unintended results.
My argument is that you ALMOST certainly don't care about ants at all, but that there is some extremely small uncertainty about what your values are. The disutility of getting a dust speck in your eye also has that argument.
Wei Dai:
You might be interested in my post Value Uncertainty and the Singleton Scenario [] where I suggested (based on an idea of Nick Bostrom and Toby Ord) another way of handling uncertainty about your utility function, which perhaps gives more intuitive results in these cases.
I consider these results perfectly intuitive, why shouldn't they be? 3^^^3 is a really big number, it makes sense you have to be really careful around it.

Sparked by my recent interest in, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down. And it doesn't seem to have been active recently.

I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?

I got a reply from Maciej Ceglowski today; apparently WT was taken down to free resources for another site. It's back up, for now. (I have to say, seriously going through prediction sites is kind of discouraging. The free ones all seem to be marginal and very unpopular, while the commercial ones aren't usable in the long run and are too fragmented.)
In relation to these sorts of sites, what's a normal level of success on this sort of thing for LW readers? If people chose ten things now that they thought were fifty percent likely to occur by the end of next week, would exactly five of them end up happening?
I don't know of any LWers who have used PB enough to really have a solid level of normal. My own [] PB stats are badly distorted by all my janitorial work. I suspect not many LWers have put in the work for calibration; at least, I see very few scores posted at [] So, I couldn't say. It would be nice if we were all calibrated. (But incidentally you can be perfectly calibrated and not have 5/10 of 50% items happen; it could just be a bad week for you.)
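To illustrate that parenthetical concretely, here is a minimal sketch (assuming ten independent predictions, each with a true 50% chance; the numbers are illustrative, not real PredictionBook data):

```python
from math import comb

n, p = 10, 0.5  # ten independent predictions, each genuinely 50% likely

# Distribution of how many of the ten events happen for a perfectly calibrated predictor
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"{k:2d} events: {prob:.3f}")

# Exactly 5 of 10 happens only about 24.6% of the time;
# anywhere from 3 to 7 covers roughly 89% of weeks.
```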

UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)

Eliezer Yudkowsky:
So TDT fails on counterfactual mugging, as far as you understand it to work, and the reasoning I gave here [] is in error?

Something I wonder about is just how many people on LW might have difficulties with the metaphors used.

An example: in one post, I still haven't quite figured out what a waterline is supposed to mean in that context, or what kind of associations the word has, and neither had someone else I asked about it.

I think "waterline" here should be taken in the same context as "A rising tide floats all boats".

Are there any Less Wrongers in the Grand Rapids area that might be interested in meeting up at some point?

Grand Rapids, MI, you mean? I'm in Michigan, but West Bloomfield, so a couple hours away, but still, if we found some more MI LWers, maybe.

This is my PGP public key. In the future, anything I write which seems especially important will be signed. This is more for signaling purposes than any fear of impersonation -- signing a post is a way to strongly signal its seriousness.

Version: GnuPG v1.4.7 (Cygwin)

... (read more)
You may want to copy this key block to a user page on the LW wiki, where it can be easily referenced in the future.
That would also have the advantage of hopefully requiring different credentials to access, so it would be marginally harder to change the recorded public key while signing a forged post with it.
Not just harder; it would be all but impossible since the wiki keeps a history of all changes (unlike LW posts) and jimrandomh is not a wiki sysop.
Telling people what you're trying to signal is a way to make them take your signaling less seriously.
It still works as a signal, because (1) signing a comment requires some extra effort, and (2) it is harder to retract a comment that has been signed (since the signature remains valid proof of authorship even if the original comment is edited or deleted). A little bit of real cost and utility goes a long way.
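For anyone curious about the mechanics, a minimal sketch of how the signing and verification would work (this assumes GnuPG is installed with an existing keypair; the message text and filename are made up for illustration):

```python
import subprocess

# Hypothetical comment text to be signed
with open("comment.txt", "w") as f:
    f.write("I precommit to posting my prediction results by 2011-01-01.\n")

# Produce a clearsigned copy (comment.txt.asc): the original text plus an
# ASCII-armored signature block, suitable for pasting into a post.
subprocess.run(["gpg", "--clearsign", "comment.txt"], check=True)

# Anyone holding the author's public key can later confirm both authorship
# and that the text was not edited after signing.
subprocess.run(["gpg", "--verify", "comment.txt.asc"], check=True)
```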
But PGP's security and quality pretty much make up for that loss in signaling seriousness, don't you think?
Unless you're signalling that you know about signalling including acknowledging your own signalling. :)

Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.

Yes. I previously believed that thinking a true statement could only be harmful by either leading to a false statement, stealing cognitive resources, or lowering confidence. I also believed that general rationality plus a few meditative tricks would be a sufficient and fully general defense against all such harms. I know better now.
Yep - I'm having some fun there right now, my nick is want_to_want. Anyone knowledgeable in psych research, join in!
It is really quite sad. The issues people like Baumeister (and, say, Carol Dweck) are working on are highly instrumentally important, but even after all the defensive arguments made by gagaoolala (in the reddit thread) it's clear that the studies were just not done right - a decent study should not require this much defence. Each argument, even if it's excellent by itself, is an additional assumption, and the conjunction of assumptions takes away from the validity of the conclusion. I wonder why this seems common in psychology - is it simply lack of training, or other, more pernicious reasons?

Do you like the LW wiki page (actually, pages) on Free Will? I just wrote a post to Scott Aaronson's blog, and the post assumed an understanding of the compatibilist notion of free will. I hoped to link to the LW wiki, but when I looked at it, I decided not to, because the page is unsuitable as a quick introduction.

EDIT: Come over, it is an interesting discussion of highly LW-relevant topics. I even managed to drop the "don't confuse the map with the territory"-bomb. As a bonus, you can watch the original topic of Scott's post: His diavlog with A... (read more)

The "Free Will" pages are fairly weak right now - I think it would be useful to rewrite at least the "Solution" page, and probably the question page as well.

John Hari - My Experiment With Smart Drugs (2008)

How does everyone here feel about these 'Smart Drugs'? They seem quite tempting to me, but are there candidates that have been in use long enough to be considered safe?

It surprised me that he didn't consider taking provigil one or two days a week. It also should have surprised me (but didn't-- it just occurred to me) that he didn't consider testing the drugs' effects on his creativity.
There's some discussion here [] and here [].

I figure the open thread is as good as any for a personal advice request. It might be a rationality issue as well.

I have incredible difficulty believing that anybody likes me. Ever since I was old enough to be aware of my own awkwardness, I have had the constant suspicion that all my "friends" secretly think poorly of me, and only tolerate me to be nice.

It occurred to me that this is a problem when a close friend actually said, outright, that he liked me -- and I happen to know that he never tells even white lies, as a personal scruple -- and I ... (read more)

Update for the curious: did talk to a friend (the same one mentioned above, who, I think, is a better "shrink" than some real shrinks) and am now resolved to kick this thing, because sooner or later, excessive approval-seeking will get me in trouble. I'm starting with what I think of as homebrew CBT: I will not gratuitously apologize or verbally belittle myself. I will try to replace "I suck, everyone hates me" thoughts with saner alternatives. I will keep doing this even when it seems stupid and self-deluding. Hopefully the concrete behavioral stuff will affect the higher-level stuff. After all. A mathematician I really admire gave me career advice -- and it was "Believe in yourself." Yeah, in those words, and he's a logical guy, not very soft and fuzzy.
Paul Crowley:
Here's my rationalist CBT: the things that depression tells you are way too extreme to be accurate - self-deluding is believing them, not examining them rationally.
sounds good.
I have exactly the same problem. I think I understand where mine comes from, from being abused by my older siblings. I have Asperger's, so I was an easy target. I think they would sucker me in by being nice to me, then when I was more vulnerable whack me psychologically (or otherwise). It is very difficult for me to accept praise of any sort because it reflexively puts me on guard and I become hypersensitive. You can't get psychotherapy from a friend, it doesn't work and can't work because the friendship dynamic gets in the way (from both directions). A good therapist can help a great deal, but that therapist needs to be not connected to your social network.
The issues that are dealt with in psychotherapy are fundamentally non-rational issues. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings and not thoughts. I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous and takes a long time and considerable neuronal remodeling. That is why psychotherapy takes a long time: the neuronal remodeling necessary to affect feelings is much slower than the neuronal remodeling that affects thoughts. A common response to trauma is to dissociate and suppress the coupling between feelings and thoughts. The easiest and most reliable way to do this is to not have feelings, because feelings that are not felt cannot be expressed and so cannot be observed and so cannot be used by opponents as a basis of attack. I think this is the basis of the constricted affect of PTSD.
Alicorn's Living Luminously [] series covers some methods of systematic mental introspection and tweaking like this. The comments on alief [] are especially applicable.
For what it's worth, this is often known as Imposter Syndrome [], though it's not any sort of real psychiatric diagnosis. Unfortunately, I'm not aware of any reliable strategies for defeating it; I have a friend who has had similar issues in a more academic context and she seems to have largely overcome the problem, but I'm not sure as to how.
You might want to check out Learning Methods []-- they've got techniques for tracking down the thoughts behind your emotions, and then looking at whether the thoughts make sense.
I was like this from ages 12-18, perhaps? It started because quite a few people actually were mean to me, but my brain incorrectly extrapolated and assumed everyone was. The beginning of the end was when I started to do something that I had defined as the province of the liked-people (in this case, dating), though it took about two years to purge the habit. Perhaps there is something you are similarly defining to imply likedness, and you can do that thing.
Perhaps it would help to think about how you treat people you like vs. people you dislike and how you react to their flaws and faults. If you have a trusted friend you can talk to about this perhaps ask them about things they've done similar to your own self-perceived flaws (weird or embarrassing things you've said) - the friend you mention sounds like a good candidate. You might find that you didn't even notice these things, don't remember them or noticed them but didn't change your opinion significantly. If you can see the symmetry with genuinely liking certain other people despite their imperfections perhaps it will be easier to appreciate how others can genuinely like you.
Paul Crowley:
If you don't mind my asking, have you tried any kind of "talking therapy"?
Oh god no. I'm very old-fashioned; still think of that as a recourse for the genuinely troubled or ill, not fortunate people like me.
Paul Crowley:
  • Who made that rule? What potential bad consequence of someone you wouldn't call "genuinely troubled or ill" trying a talking therapy do you foresee?
  • This article by Yvain on the difficulties with that distinction [] may interest you.
For what it's worth, the stigma of seeing a mental health professional has basically vanished over the last ten years. Sometimes being in therapy is even a status symbol... A therapist isn't necessarily better than honest conversation with a good friend, but it sounds like you have trouble having that kind of conversation with your friends. Out of the different types of therapy, most of them have little evidence as to their efficacy, but there is a fair amount of evidence that cognitive behavior therapy works. So I'll ask again -- why not try it? Being old fashioned isn't a very good reason.
Does your sense of being unlikeable have an impact on your self-esteem or lifestyle? To paraphrase something I heard about these things, it's only a problem if you think that it's a problem. Anyway, I second the recommendation of the Luminosity sequence, also this workbook []; it covers a lot of the same material as talk therapy would, but you can work through it independently, without the need to impose on anyone else.
yeah, that's why I brought it up, it is a problem. Because I'll spend time being very unhappy that nobody "really" likes me, and sometimes do stupid things to seek approval. Thanks for the link.

An object lesson in how not to think about the future:

(from Pharyngula)

Could be funny, if it was a joke... :(
Can you elaborate on what specifically you think they're doing wrong?
Paul Crowley:
I haven't looked in detail, but two things struck me at a glance: first, it's tremendously specific; second, the impact of technologies like brain emulation seems hugely understated.
The silly thing is that they present it as a timeline, but it is in fact an incoherent list of technological breakthroughs without really considering the interaction between them. It's like they had a nano writer, a climate writer and so on, all of them wrote a timeline, and the editors merged them in the end.
He he, poor WW2 veterans miss the deadline by just one year:
When I'm (physically) driving down the street, I'd like to be able to right-click on a tree I see and find out what kind of tree it is. And who planted it (e.g., federal or state funds) and when, if I want to know. I can't wait till then.

I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?

Definitely wait. My personal favorite timing is one day for each new (substantial) post.

Either I misunderstand CEV, or the above statement re: the Abrahamic god following CEV is false.

Coherent Extrapolated Volition [] This is exactly the argument religious people use to excuse any shortcomings of their personal FAI. Namely, their personal FAI knows better than you what's best for you AND everyone else. What average people do is follow what is being taught here on LW: they decide based on their prior. Their probability estimates tell them that their FAI is likely to exist, and they make up excuses for extraordinary decisions based on its possible existence. That is, they support their FAI while trying to inhibit other uFAI, all in the best interest of the world at large.
The link and quotation you posted do not seem to back up your argument that the Abrahamic god follows CEV. Could you clarify?
It's not about it following CEV, but about people believing that it acts in their best interest. Reasons are subordinate. It is the similar system of positive and negative incentives that I wanted to highlight. I grew up in a family of Jehovah's Witnesses. I can assure you that all believed this to be the case. Faith is considered the way to happiness. Positive incentive: Negative incentive: I could find heaps of arguments for Christianity that highlight the same belief that God knows what's best for you and the world. This is what most people on this planet believe, and this is also the underpinning of the rapture of the nerds.
Ah, I understand-- except that I think the "negative incentive" element we're discussing is absurd, would obviously trigger failsafes with CEV as described, etc.
There'll always be elements that suffer, that is, that subjectively perceive the FAI as uFAI.
The whole buzz about the removed content is basically about a kind of incompleteness theorem of FAI. Those who seek to represent every element will reach a certain critical point. You can never represent all elements, never make everyone happy. You can maximize friendliness and happiness and continue to do so, but this journey will always be incomplete. It's more artful than this and only affects a tiny minority, though. Nothing to worry about anyway; it's very unlikely in my opinion. So unlikely that I'd deliberately spit in its face without losing a night's sleep over it.
Yahweh and the associated moral system are far from incomprehensible if you know the cultural context of the Israelites. It's a recognizably human morality, just a brutal one obsessed with purity of various sorts.
It is not about the moral system being incomprehensible but about the acts of the FAI. Whenever something bad happens, religious people excuse it with an argument based on "higher intention". This is the gist of what I wanted to highlight: the similarity between religious people and the true believers in the technological singularity and AI. This is not to say it is the same; I'm not arguing that. I'm saying that this might draw the same kind of people committing the same kind of atrocities. This is very dangerous. If people don't like anything happening, i.e. don't understand it, it's claimed to be a means to an end that will ultimately benefit their extrapolated volition. People are not going to claim this in public. But I know that there are people here on LW who are disposed to extensive violence if necessary. To be clear, I do not doubt the possibilities talked about on LW. I'm not saying they are nonsense like the old religions. What I'm on about is that the ideas the SIAI is based on, while not being nonsense, are poised to draw the same fanatic fellowship and cause the same extreme decisions. Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes it even more dangerous. This crowd is actually highly intelligent, and their incentive is based on more than fairy tales told by goatherders. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? More so as in this case the very same people who believe it are the ones who think they must act themselves, because their God doesn't even exist yet.

Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet?

Yes. I would also drop a nuke on New York if it were the only way to prevent global nuclear war. These are both extremely unlikely scenarios.

It's very correct to be suspicious of claims that the stakes are that high, given that irrational memes have a habit of postulating such high stakes. However, assuming thereby that the stakes never could actually be that high, regardless of the evidence, is another way of shooting yourself in the foot.

I do not assume that. I've always been a vegetarian who's in favor of animal experiments. I'd drop a nuke to prevent less than what you described above.

Is it my imagination, or is "social construct" the sociologist version of "emergent phenomenon"?

Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.

Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on a thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)

I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting peoples' time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.

In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.

That, and you repeat yourself. A lot.

This reminds me of something I mentioned as an improvement for LW a while ago, though for other reasons-- the ability to track all changes in karma for one's posts.
I see this as a feature request - would be great to have a view of your recent posts/comments that had action (karma or descendant comments). (rhetorically) If karma is meant as feedback, this would be a great way to get it.

So I was pondering doing a post on the etiology of sexual orientation (as a lead-in to how political/moral beliefs lead to factual ones, not vice versa).

I came across this article, which I found myself nodding along with, until I noticed the source...

Oops! Although they stress the voluntary nature of their interventions, NARTH is an organization devoted to zapping the fabulous out of gay people, using such brilliant methodology as slapping a rubber band against one's wrist every time one sees an attractive person with the wrong set of chromosomes. From the... (read more)

For what it's worth, rubber band snapping is a pretty popular thought-stopping technique in CBT for dealing with obsessive-type behaviors, though I believe there's some debate over how effective it is. I know it's been used to address morbid jealousy [], though I don't know to what extent or if more scientific studies have been conducted.

There is something that bothers me and I would like to know if it bothers anyone else. I call it "Argument by Silliness".

Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."

I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes/deduces how you... (read more)

Yeah-- argument by silliness (I think I'd describe it as finding something about the argument which can be made to sound silly) is one of the things I don't like about normal people.
That's why it can be such an effective tactic when persuading normal people. You can get them to commit to your side, and then they rationalize themselves into believing it's true (which it is) because they don't want to admit they were conned.

Luke Muehlhauser just posted about Friendly AI and Desirism at his blog. It tends to have a more general audience than LW, comments posted there could help spread the word. Desirism and the Singularity

Desirism and the Singularity, in which one of my favourite atheist communities is inching towards singularitarian ideas.

Looks like Emotiv's BCI is making noticeable progress (from the Minsky demo)

but still using bald guys :)

Do the various versions of the Efficient Market Hypothesis only apply to investment in existing businesses?

The discussions of possible market blind spots in clothing make me wonder how close the markets are to efficient for new businesses.

I'm curious what people's opinions are of Jeff Hawkins' book 'On Intelligence', and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further support for this or something, cheers

With regards to further reading, you can look at Hawkins' most recent (that I'm aware of) paper, "Towards a Mathematical Theory of Cortical Micro-Circuits" []. It's fairly technical, however, so I hope your math/neuroscience background is strong (I'm not knowledgeable enough to get much out of it). You can also take a look at Hawkins' company Numenta [], particularly the Technology Overview []. Hierarchical Temporal Memory is the name of Hawkins' model of the neocortex, which IIRC he believes is responsible for some of the core prediction mechanisms in the human brain. Edit: I almost forgot, this video [] of a talk he presented earlier this year may be the best introduction to HTM.
Intelligence-as-prediction/compression is a pretty familiar idea to LWers; there are a number of posts on them which you can find by searching, or you can try looking into the bibliographies and links in: * [] * [] * [] (I have no comments anent On Intelligence specifically. I remember it as being pretty vague as to specifics, and not very dense at all - unobjectionable.)

I was examining some of the arguments for the existence of god that separate beings into contingent (exist in some worlds but not all) and necessary (exist in all worlds). And it occurred to me that if the multiverse is indeed true, and its branches are all possible worlds, then we are all necessary beings, along with the multiverse, a part of whose structure we are.

Am I retreating into madness? :D

Taking this too seriously, eh? But note it's impossible for us to exist in all branches.
I'm not saying we exist in all branches, just that all branches are necessary, and therefore we are necessary also. Essentially I'm saying that everything that is actual is necessary.

When thinking about my own rationality I have to identify problems. This means that I write statements like "I wait too long before making decisions, see X, Y". Now I worry that by stating this as a fact I somehow anchor it more deeply in my mind, and make myself act more in accordance with that statement. Is there actually any evidence for that? And if so, how do I avoid this problem?

I don't have any references on hand but cognitive behaviour therapy definitely frowns on people describing themselves using absolute statements like that. I would advise reframing it in a way that makes it clear that your undesirable behaviour is something that you do some of the time or that you did in the past but try not to do now, to avoid reinforcing any underlying beliefs, for example, that you are the kind of person who is bad at making timely decisions. Even better would be reframing to include some kind of resolution about how you will go about making more timely decisions in the future, even if it's just a resolution to try to be more aware of when you're putting off a decision.

If an AI does what Roko suggested, it's not friendly. We don't know what, if anything, CEV will output, but I don't see any reason to think CEV would enact Roko's scenario.

Until about a month ago, I would have agreed, but some posts I have since read on LW made me update the probability of CEV wanting that upwards.
Really, please explain (or PM me if it would require breaking the gag rule on Roko's scenario). Why would CEV want that?
Because 'CEV' must be instantiated on a group of agents (usually humans). Some humans are assholes. So for some value of aGroup, CEV does assholish things. Hopefully the group of all humans doesn't create a CEV that makes the resulting FAI an outright uFAI from our perspective, but we certainly shouldn't count on it.
That's not necessarily true. CEV isn't precisely defined but it's intended to represent the idealized version of our desires and meta-desires. So even if we take a group of assholes, they don't necessarily want to be assholes, or want to want to be assholes, or maybe they wouldn't want to if they knew more and were smarter.
I refer, of course, to people whose preferences really are different to our own. Coherent Extrapolated Assholes. I don't refer to people who would really have preferences that I would consider acceptable if they just knew a bit more. You asked for an explanation of how a correctly implemented 'CEV' could want something abhorrent. That's how. There is an unfortunate tendency to glorify the extrapolation process and pretend that it makes any given individual or group have acceptable values. It need not.
Upvoted for the phrase “Coherent Extrapolated Assholes”. Best. Insult. Ever. Seriously, though, I don't think there are many CEAs around [], anyway. (This doesn't mean there are none, either. (I was going to link to this [] as an example of one, but I'm not sure Hitler would have done what he did had he known about late-20th-century results about heterosis [], Ashkenazi Jew intelligence [], etc.)) This means that I think it's very, very unlikely for CEV to be evil (and even less likely to be evil from our own perspective), unless the membership criteria for aGroup are gerrymandered to make it so.
It seemed odd to me that so few people were bothered by the claims that CEV shouldn't care much about the inputs. If you expect it to give similar results if you put in a chimpanzee and a murderer and Archimedes, then why put in anything at all instead of just printing out the only results it gives?
If you believe in moral progress (and CEV seems to rely on that position), then there's every reason to think that future-society would want to make changes to how we live, if future-society had the capacity to make that type of intervention. In short, wouldn't you change the past to prevent the occurrence of chattel slavery if you could? (If you don't like that example, substitute preventing the October revolution or whatever example fits your preferences).
It's more agnostic on the issue. It works just as well for the ultimate conservative.
I wouldn't torture innocent people to prevent it, no.
Punishment from the future is spooky enough. Imagine what an anti-Guns of the South [] would be like for the temporal locals. Not pleasant, that's for sure.
It's more agnostic on the issue. It works just as well for the ultimate conservative.
Doesn't CEV implicitly assert that there exists a set of moral assertions M that is more reliably moral than anything humans assert today, and that it's possible for a sufficiently intelligent system to derive M? That sure sounds like a belief in moral progress to me. Granted, it doesn't imply that humans left to their own devices will achieve moral progress. But the same is true of technological progress.
The implicit assertion is "Greater or Equal", not "Greater". Run on a True Conservative it will return the morals that the conservative currently has.
Mm. I'll certainly agree that anyone for whom that's true deserves the title "True Conservative." I don't think I've ever met anyone who meets that description, though I've met people who would probably describe themselves that way. Presumably, someone who believes this is true of themselves would consider the whole notion of extrapolating the target definition for a superhumanly powerful optimization process to be silly, though, and consider the label CEV to be technically accurate, in the same sense that I'm currently extrapolating the presence of my laptop, but to imply falsehoods.
Roko thinks (or thought) it would. I do too. Can't argue it in detail here, sorry.

What's current thought about how you'd tell that AI is becoming more imminent?

I'm inclined to think that AI can't happen before the natural language problem is solved.

I'm trying to think of conflicts between subsystems of the brain to see if there's anything more than a simple gerontocratic system of veto power (i.e. evolutionarily older parts of the brain override younger parts). Help?

I've got things like:

  • Wanting to eat but not wanting to spend money on food but wanting to signal wealth.
  • Wanting to breathe when underwater but wanting to surface for breath first but wanting to signal willpower to watching friends.
  • Wanting to survive but wanting to die for one's country/ideals/beliefs. (This is a counterexample to the
... (read more)
I think you should: Eat. Refrain from breathing while underwater. Survive.
Er, right, but what decision making algorithm or heuristics do you think the brain typically uses when solving problems similar to those listed?
Hmmm... I wish I could help, but I don't seem to have conflicts in this reference class. I don't care about signaling wealth, especially not when that actually involves parting with money; I'd only care about how long I could hold my breath if I had a bet going and I'd never make such a bet unless I was sure I could win it comfortably; I have absolutely no desire whatsoever to die for any cause; and I want to be honest more than I want to appear confident or honest to the point where if I have inclinations towards either of the latter, they might as well not exist.

I think in dialogue. (More precisely, I think in dialogue about half the time, in more bland verbal thoughts a quarter of the time, and visually a quarter of the time, with lots of overlap. This also includes think-talking to myself when it seems internally that there are 2 people involved.)

Does anyone else find themselves thinking in dialogue often?

I think it probably has something to do with my narcissistic and often counterproductive obsession with other people's perceptions of me, but this hypothesis is the result of generalizing from one example. If ... (read more)

Monologues or disjointed verbal fragments. When I am mad at someone (hasn't really happened for a few years :) ) I get into dialogues with them, usually going in circles.
Dialogue and blog posts/essay format when I want to think about a particular topic but have no explicit goal in mind, regular verbal thoughts when I'm doing something cognitively challenging, fuzzy non-verbal and non-explicit concepts when I'm not thinking about anything in particular. Visual thought is something that I am capable of but only with conscious effort (eg. I can do those cube rotation tests just fine but I will convert everything to words if the problem allows it).
Almost entirely verbal and auditory (two different things; auditory includes music and meter). Not very visual.
I posted this [] just now, and then immediately saw this comment. So, yes, you're not alone.
Hahaha, I very nearly made a symmetric comment on your comment. It seems the way you think in dialogue is a lot less self-aggrandizing and thus more useful than my way of thinking (though I've been able to change it somewhat recently).
For a moment I was saying "no way I'm like that", but then I actually thought about it and it describes me quite well, big-headedness and all :/
~95% - monologue, ~4.999% - nonverbal nonvisual processing, ~0.001% - dialogue. Personality is best described as ICD-10 F60.1.

I am tentatively interpreting your remark about "not wanting to leave out those I have *plonked" as an indication that you might read comments by such individuals. Therefore, I am going to reply to this remark. I estimate a small probability (< 5%) that you will actually consider what I have to say in this comment, but I also estimate that explicitly stating that estimate increases the probability, rendering the estimate possibly as high as 10%. I estimate a much higher chance that this remark will be of some benefit to readers here, especial... (read more)

The problem with religious beliefs is not that they are false (they don't have to be), but that they are believed for the purpose of signaling belonging to a group, rather than because they are true. This does cause them to often be wrong or not even wrong, but the wrongness is not the problem; the epistemic practices that lead to them are. Correspondingly, the reasons for a given religious belief turning out to be wrong are a different kind of story from the reasons for a given factual belief turning out to be wrong. The comparison of factual mistakes in religious beliefs and factual mistakes made by people who try to figure things out is a shallow analogy that glosses over the substance of the processes.

If you take this incident to its extreme, the important question is what people are willing to do in future based on the argument "it could increase the chance of an AI going wrong..."?

That is not the argument that caused stuff to be deleted from Less Wrong! Nor is it true that leaving it visible would increase the chance of an AI going wrong. The only plausible scenario where information might be deleted on that basis is if someone posted designs or source code for an actual working AI, and in that case much more drastic action would be required.

What was the argument then? This thread [] suggests my point of view. Here is one of many comments from the thread above and elsewhere indicating that the deletion was due to the risk I mentioned: I've just read EY's comment. It's indeed mainly about protecting people from themselves causing an unfriendly AI to blackmail them. This conclusion is hard to come by since it was deleted without explanation. Still, it's basically the same argument, and quite a few people on LW seem to follow the argument I described; I described it to start a discussion about how far we want to go.
Agree in as much as I suggest Xi should revise to "decrease the chance of AI going right".
I noticed there is another deleted comment by EY where he explicitly writes:
I stand corrected.

A general question about decision theory:

Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?

Apologies if this has been answered elsewhere.

The first two questions aren't about decisions. The third is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."
No, it's not meaningless, because if it's true, the matrix's implementers could decide to intervene (or for that matter create an afterlife simulation for all of us). If it's true, there's also the possibility of the simulation ending prematurely.
Of course we have to assign non-zero probabilities to them, but I'm not quite sure how we'd figure out the right priors. Assuming that the hypotheses that your memory has been altered or you're delusional do not actually cause you to anticipate anything differently (see the bit about the blue tentacle in Technical Explanation []), you may as well live in whatever reality appears to you to be the outermost one accessible to your mind. (As for the last one, Nick Bostrom argues that we can actually assign a very high probability to a statement somewhat similar to "I live in a perfectly simulated matrix" — see the Simulation Argument []. I have doubts about the meaningfulness of that on the basis of modal realism, but I'm not too confident one way or the other.)
I disagree with the idea that modal realism, whether right or not, changes the chances of any particular hypothesis like that being true. I am not saying that we can never have a rational belief about whether or not modal realism is true: There may or may not be a philosophical justification for modal realism. However, I do think that whether modal realism applies has no bearing on the probability of you being in some situation, such as in a computer simulation. I think this issue needs debating, so for that purpose I have asserted this is a rule, which I call "The Principle of Modal Realism Equivalence", and that gives us something well-defined to argue for or against. I define and assert the rule, and give a (short) justification of it here: [].
But what if you should anticipate things very differently, if your memory has been altered? If I assigned a high probability to my memory having been altered, then I should expect that the technology exists to alter memories, and all manner of even stranger things that that would imply. Figuring out what prior to assign to a case like that, or whether it can be done at all, is what I'm struggling with.
It's not actually all that hard to mess with memories. []
Why not?
"Where'd you get your universal prior, Neo?" [] Eliezer seems to think (or, at least he did at the time) that this isn't a solvable problem. To phrase the question in a way more relevant to recent discussions, are those statements in any way similar to "a halting oracle exists"?
Solomonoff's prior can't predict something uncomputable, but I don't see anything obviously uncomputable about any of the 3 statements you asked about.
Right. But can it predict computable scenarios in which it is wrong?
Yes. Anything that can be represented by a Turing machine gets a nonzero prior. And its model of itself goes in the same Turing machine with the rest of the world.
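For reference, one standard way of writing the prior being appealed to here (the Solomonoff prior over a prefix universal Turing machine $U$):

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-|p|},$$

where the sum ranges over programs $p$ whose output begins with the string $x$. Any computable world-model corresponds to at least one such program, so it gets nonzero weight, including programs that encode an agent whose own predictions turn out to be wrong.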

I am pretty new to LW, and have been looking for something and have been unable to find it.

What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?

The context for this is continuity of identity over time. Obviously an entity that has extra memories added is not identical to an entity without those memories, but if there is a transform that can be applied to the first entity (the transform of experience over time), then in one sense the second entity can be considered to be an olde... (read more)


Eliezer's sequence on quantum mechanics and personal identity [] is almost exactly what you're looking for, I think.
That's the kind of question that a traditional philosopher would try to answer by coming up with the Ultimate Perfect True Definition of Identity, while an LWer would probably try to dissolve it. This is actually a fairly easy problem and should make good practice — "Dissolving the Question", "Righting a Wrong Question", and "How An Algorithm Feels From the Inside" should be good places to start. The "Quantum Mechanics and Personal Identity" subsequence may also be useful if you're considering any concept of identity that involves continuity of constituent matter.
Hold on -- those are important articles to read, and they do move you toward a resolution of that problem. But I don't think they fully dissolve/answer the exact question daedalus2u is asking. For example, EY has written this article [], grappling with but ultimately not resolving the question of whether you should care about "other copies" of you, why you are not indifferent between yourself vs. someone else jumping off a cliff, etc. I don't deny that the existing articles do resolve some of the problems daedulus2u is posing, but they don't cover everything he asked. Unless I've missed something?
SilasBarta, yes, I was thinking about purely classical entities, the kind of computers that we would make now out of classical components. You can make an identical copy of a classical object. If you accept substrate independence for entities, then you can't “dissolve” the question. If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I appreciate this is not possible with biological entities because memories are not stored as discrete files). Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?
It's no more wrong than saying that all books are identical except for the differing number and arrangement of letters. It's also no more useful.
Except human entities are a dynamic object, unlike a static object like a book. Books are not considered to be “alive”, or “self-aware”. If two humans can both be represented by TM with different tapes, then one human can be turned into another human by feeding one tape in backwards then feeding in the other tape frontwards. If one human can be turned into another by a purely mechanical process, how does the “life”, or “entity identity”, or “consciousness change” as that transformation is occurring? I don't have an answer, I suspect that the problem is tied up in our conceptualization of what consciousness and identity actually is. My own feeling is that consciousness is an illusion, and that illusion is what produces the illusion of identity continuity over a person's lifetime. Presumably there is an “identity module”, and that “identity module” is what self-identifies an individual as “the same” individual over time (not complete one-to-one correspondence between entities which we know does not happen), even as the individual changes. If that is correct, then change the “identity module” and you change the self-perception of identity.
I don't see why the TM issue is essential to your confusion. If you are not a dualist then the fact that two human brains differ only in the precise arrangement of the same types of atoms present in very similar numbers and proportions raises the same questions.
I am not a dualist. I used the TM to avoid issues of quantum mechanics. TM equivalent is not compatible with a dualist view either. Only a part of what the brain does is conscious. The visual cortex isn't conscious. The processing of signals from the retina is not under conscious control. That is why optical illusions work, the signal processing happens a certain way, and that certain way cannot be changed even when consciously it is known that what is seen is counterfactual. There are many aspects of brain information processing that are like this. Sound processing is like this; where sounds are decoded and pattern matched to communication symbols. Since we know that the entity instantiating itself in our brain is not identical with the entity that was there a day ago, a week ago, a year ago, and will not be identical to the entity that will be there next year, why do we perceive there to be continuity of consciousness? Is that an illusion of continuity the same as the way the visual cortex fills in the blind spot on the retina? Is that an illusion of continuity the same as pareidolia? I suspect that the question of consciousness isn't so much why we experience consciousness, but why we experience a continuity of consciousness when we know there is no continuity.
You may be interested that I probed a similar question regarding how "qualia" come into play with this post [] about when two (classical) beings trade experiences.

My prior for the probability of winning the lottery by fraud is high enough to settle the question: the woman discussed in the article is cheating.

Does anyone disagree with this?

The appropriate question to ask is: Given the number of people who play all the different kinds of lotteries, what are the odds of there being some person who wins four (modest) jackpots? Incidentally, three wins came from scratch-off tickets, which seem inherently less secure than the ones with a central drawing. (And you can also do something akin to card-counting with them: the odds change depending on how many tickets have already been sold and how many prizes have been claimed. Some states make this information public, so you can sometimes find tickets with a positive expected value in dollars.)
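A rough back-of-the-envelope version of that first question, with every input made up purely for illustration (none of these are real lottery statistics):

```python
from math import comb

# Illustrative assumptions only -- not real lottery data:
players = 10_000_000         # people who play lotteries heavily over decades
tickets_per_player = 20_000  # lifetime tickets per heavy player
p_jackpot = 1e-6             # chance that one ticket wins a large prize

# Probability that one given heavy player wins at least 4 jackpots.
# Terms above k = 7 are negligible at these parameter values.
n, p = tickets_per_player, p_jackpot
p_player = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(4, 8))

# Probability that at least one of the many players manages it by luck alone.
p_anyone = 1 - (1 - p_player) ** players
print(f"one player: {p_player:.2e}, anyone at all: {p_anyone:.3f}")
```

With these made-up numbers the chance that some player somewhere racks up four wins comes out to a few percent, so the answer really does hinge on the inputs rather than on the intuition that four wins is impossible.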
I admit I don't know the odds of one person winning four jackpots of over a million dollars each by pure chance. However, my guess is that they are fairly low. But maybe I'm wrong. Regardless, one can just as easily ask "What are the odds that someone who knows how to cheat at lotteries by this time would have won four of them while cheating on at least one of them?" Surely the answer to this is: better odds than the answer to the previous question. There is something else involved as well. We can consider the two hypotheses: 1) she won four lotteries by pure luck; 2) she won four lotteries by cheating. The first hypothesis would predict that she will never win another lottery (like ordinary people.) The second hypothesis would predict that there is a good chance she will win another in her lifetime. Agreeing with the second hypothesis, I predict with significant probability that she will win another. If she does, your credence in the proposition that it happened by chance must take a huge blow. In fact, would you agree that in this event, you would admit it to be more likely that she cheated? If so, then consider what would have happened if I had raised the same issue after she had won three of them...
What's your secret? ;)
See my reply to CronoDAS regarding the possibility of a fifth lottery win.
My prior that the universe is not sufficiently uniformly described by typical reductionist reasoning like the kind found in Eliezer's reductionism sequence is high enough that in order to make distinctions between such low probability hypotheses as the ones described I would need to be more sure that my model was meant to deal with the relationship between hypotheses and observed evidence on the extreme ends of a log odds probability scale. (I would also have to be less aware of emotionally available and biased-reasoning-causing fun-theoretic-like anthropic-like not-explicitly-reasoned-through alternative hypotheses.)
What are the alternative hypotheses? Magic? A simulation with interference from the simulator? I'm not denying the possibility of alternatives, it's just that they all seem less likely the two low probability hypotheses originally considered (chance and cheating).

This is a brief excerpt of a conversation I had (edited for brevity) where I laid out the basics of a generalized anti-supernaturalism principle. I had to share this because of a comment at the end that I found absolutely beautiful. It tickles all the logic circuits just right that it still makes me smile. It’s fractally brilliant, IMHO.

(italics are not-me)

So you believe there is a universe where 2 + 2 = 4 or the law of noncontradiction does not obtain? Ok, you are free to believe that. But if you are wrong, I am sure that you can see that there is an or... (read more)

SiteMeter gives some statistics about number of visitors that LessWrong has, per hour/per day/per month, etc.

According to the SiteMeter FAQ, multiple views from the same IP address are considered to be the same "visit" only if they're spaced by 30 minutes or less. It would be nice to know how many visitors LessWrong has over a given time interval, where two visits are counted to be the same if they come from the same IP address. Does anyone know how to collect this information?


Said on #lesswrong:

00:18 BTW, I figured out why Eliezer looks like a cult leader to some people. It's because he has both social authority (he's a leader figure, solicits donations) and an epistemological authority (he's the top expert, and wrote the sequences which are considered canonical).
00:18 If, for example, Wei Dai kicked Eliezer's ass at FAI theory, LW would not appear cultish
00:18 This suggests that we should try to make someone else a social authority so that he doesn't have to be.

(I hope posting only a log is okay)

Yes, that is what you said you'd do. An 0.0001% existential risk is equal to 6700 murders, and that's what you said you'd do if you didn't get your way. The fact that you didn't understand what it meant doesn't make it acceptable, and when it was explained to you, you should've given an unambiguous retraction but you didn't. You are obviously bluffing, but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.
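The arithmetic behind that equivalence, taking the world population at the time as roughly 6.7 billion:

$$0.0001\% \times 6.7\times 10^{9} \;=\; 10^{-6} \times 6.7\times 10^{9} \;=\; 6{,}700 \text{ expected deaths.}$$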

I would call the police, who would track you down and verify that you were bluffing.

And you'd probably be cited for wasting police time. This is the most ridiculous statement I've seen on here in a while.

It was a hypothetical, in the (10^-12 probability) event that waitingforgodel provided credible evidence that he was willing and able to carry through with the threat he made. I figured that following through the logical consequences would make him realize just how ridiculous and bad what he'd said was.

but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.

You appear to be confused. Wfg didn't propose to murder 6700 people. You did mathematics by which you judge wfg to be doing something as morally bad as 6700 murders. That doesn't mean he is breaking the law or doing anything that would give you the power to use the police to exercise your will upon him.

I disapprove of the parent vehemently.

Hey Jim, It sounds like my post rubbed you the wrong way; that wasn't my intention. I do understand your math (world pop / a mil), did you understand mine? Providing a credible threat reduces existential risk and saves lives... significantly more than the 6700 you cite. Check out this article [] and the wikipedia article on MAD [], then reread the post you're replying to and see if it makes more sense. The Wei Dai exchange might also help shed some light. If you ask questions here I'll do my best to walk you through anything you get stuck on. I don't feel comfortable talking in too much detail here about my list. If anyone knows a good way for me to reveal one or two methods safely, I'm willing... but it's not like they're rocket science or anything. -wfg (edit: fixed awkward wording in last paragraph)
I am answering this by private message.
Aha! This has happened several times now, and waitingforgodel mentioned something in his reply which clarified what happened. He started from the link to Roko's name on the top contributors link, which produces only a vague comment about something having been deleted, without the reason, details, or link. Anyone with access to Google can track down the link, but it'll take them some time, during which they get to fume without an explanation; and it's pretty much random which part of the story they'll start out at. I don't really object to people who really want to see it tracking down the post and comments, and I realize they certainly can't be gotten rid of, having been public on the internet for awhile and recognized as controversial. But having people encounter a vague hint at first, and having to track it down - that generates negative emotion, and puts them in an irrational state of mind that makes them want to go start a flamewar about it. It would be much better if the first thing they encountered was a truthful but nonspecific overview of what happened, rather than a tantalizing hint. Therefore, the solution is for four people to pass 8082 karma. I am going through the archives and voting up worthy posts by contributors 8-10 (cousin_it, AnnaSalamon and Vladimir_Nesov). I will also try to pass that karma mark myself, by finishing up the collection of half-written article ideas I have lying around. (It's quite a stretch, but it's also a usable motivator for an otherwise worthy goal). (Edited to add: I don't normally support doing funky things with the karma system, but this is important.)

This seems like a highly suboptimal solution. It's an explicit attempt to remove Roko from the top contributors list... if you/we/EY feels that's a legitimate thing to do, well then we should just do it directly. And if it isn't a legitimate thing to do, then we shouldn't do it via gaming the karma system.

Wei Dai:
Wouldn't the easiest solution be just to have Eliezer agree to have Roko's posts and comments restored (the ones that he voluntarily deleted)? My understanding is that Roko already agreed [], and we're now just waiting [] on Eliezer's word. I don't see any reason why he wouldn't agree. Has anyone actually asked him directly?
Just to be clear, I didn't learn about this via the Roko link (nor did I say in PM that I did), I used the Roko link after finding out about it on messages higher up in this thread (July 2010 open thread pt 2). Without the link I would have used the LW search bar. No biggie, I wouldn't even mention it except that it seems to be your justification for voting weirdness.
Thank you. Finding out about the issue via a link from the top posts sounded improbable, so I was surprised. This confirmation makes jimrandomh's voting scheme even more outrageous. "People don't approve of what Eliezer did to Roko... let's hide all evidence that Roko ever existed!"