This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Open Thread, August 2010
706 comments
[anonymous]270

The game of Moral High Ground (reproduced completely below):

At last it is time to reveal to an unwitting world the great game of Moral High Ground. Moral High Ground is a long-playing game for two players. The following original rules are for one M and one F, but feel free to modify them to suit your player setup:

  1. The object of Moral High Ground is to win.

  2. Players proceed towards victory by scoring MHGPs (Moral High Ground Points). MHGPs are scored by taking the conspicuously and/or passive-aggressively virtuous course of action in any situation where culpability is in dispute.

(For example, if player M arrives late for a date with player F and player F sweetly accepts player M's apology and says no more about it, player F receives the MHGPs. If player F gets angry and player M bears it humbly, player M receives the MHGPs.)

  3. Point values are not fixed, vary from situation to situation and are usually set by the person claiming them. So, in the above example, forgiving player F might collect +20 MHGPs, whereas penitent player M might collect only +10.

  4. Men's MHG scores reset every night at midnight; women's roll over every day for all time. Therefore, it is statistically hig

... (read more)
5NancyLebovitz
The whole thread is about relationship hacks-- it's fascinating.
5sketerpot
One of the first comments is something I've been saying for a while, about how to admit that you were wrong about something, instead of clinging to a broken opinion out of stubborn pride: The key is to actually enjoy becoming less wrong, and to take pride in admitting mistakes. That way it doesn't take willpower, which makes everything so much easier.
2Yoreth
But apparently it still wasn't enough to keep them together...
4Blueberry
Not all relationships need to last forever, and it's not necessarily a failure if one doesn't.
1wedrifid
Yoreth may subtract 50 MHG points from hegemonicon but also loses 15 himself.

I'm intrigued by the idea of trying to start something like a PUA community that is explicitly NOT focussed on securing romantic partners, but rather the deliberate practice of general social skills.

It seems like there's a fair bit of real knowledge in the PUA world, that some of it is quite a good example of applied rationality, and that much of it could be extremely useful for purposes unrelated to mating.

I'm wondering:

  • if this is an interesting idea to LWers?
  • if this is the right venue to talk about it?
  • does something similar already exist?

I'm aware that there was some previous conversation around similar topics and their appropriateness to LW, but if there was final consensus I missed it. Please let me know if these matters have been deemed inappropriate.

8Violet
If you want non-PC approaches, there are two communities you could look into: salespeople and con artists. The second one actually has most of the how-to-hack-people's-minds material. If you want a kinder version, look for it under the title "social engineering".
4cousin_it
Toastmasters? General social skills are needed in business, a lot of places teach them and they seem to be quite successful.
5SilasBarta
From my limited experience with Toastmasters, it's very PC and targeted at median-level intelligence people -- not the thing people here would be looking for. "PUA"-like implies XFrequentist is considering something that is willing to teach the harsh, condemned truths.
7XFrequentist
I went to a Toastmasters session, and was... underwhelmed. Even for public speaking skills, the program seemed kind of trite. It was more geared toward learning the formalities of meetings. You'd probably be a better committee chair after following their program, but I'm not sure you could give a great TED talk or wow potential investors. Carnegie's program seems closer to what I had in mind, but I want to replicate both the community aspect and the focus on "field" practice of the PUAs, which I suspect is a big part of what makes them so formidable.
3D_Alex
The clubs vary in their standard. I recommend you try a few in your area (big cities should have a bunch). For 2 years I used to commute 1 hour each way to attend Victoria Quay Toastmasters in Fremantle, it was that good. It was the 3rd club I tried after moving.
1NancyLebovitz
I've heard smart people speak well of Toastmasters. It may be a matter of local variation, or it may be that Toastmasters is very useful for getting past fear of public speaking and acquiring adequate skills.
3XFrequentist
My impression could easily be off; I only went to one open house. It wasn't all negative. They seemed to have a logical progression of speech complexity, and quite a standardized process for giving feedback. Some of the speakers were excellent. It was fully bilingual (English/French), which was nice. I don't think it's what I'm looking for, but it's probably okay for some other goals.
1JanetK
I belonged to TM for many years and I would still if there was a club near me. I found it great for many reasons. But I have to say that you get what you put in. And you get what you want to get. If you want friends and social graces - OK get them. If you want to lose fear of speaking - get that. Ignore what you don't want and take what you do.
1pjeby
I've mostly heard them damn it with faint praises, as being great for polishing presentation skills, but not being particularly useful for anything else. Interestingly enough, of people I know who are actually professional speakers (in the sense of being paid to talk, either at their own events or other peoples'), exactly none of them recommend it. (Even amongst ones who do not sell any sort of speaker training of their own.) OTOH, I have heard a couple of shout-outs for the Carnegie speaking course, but again, this is all just in the context of speaking... which has little relationship to general social skills AFAICT.
1XFrequentist
Interesting, that jibes* pretty well with my impressions of Toastmasters. There are other Carnegie courses than the speaking one. This is the one I was thinking of. *See comment below for the distinction between "jives" and "jibes". It ain't cool beein' no jive turkey!
4NancyLebovitz
Nitpick: "jibes" means "is consistent with". "Jives" means "is talking nonsense" or (archaic) "dances". {Tries looking it up} Wikipedia says "jives" can be a term for African American Vernacular English. The Urban Dictionary gives it a bunch of definitions, including both of mine, "jibe", and forms of African American speech which include a lot of slang, but not any sort of African American speech in general. On the other hand, the language may have moved on-- I keep seeing that mistake (the Urban Dictionary implies it isn't a mistake), and maybe I should give up. I still retain a fondness for people who get it right.
0[anonymous]
haha... thanks!
1XFrequentist
I'd be interested in specifics...
1ianshakil
Would such "practice" require a physical venue? -- or would an online setting -- maybe even Skype -- be sufficient?
0XFrequentist
That's a good question. I don't know, but I suspect a purely online setting would be adequate for beginners, but insufficient for mastery. What do you think?
0marc
I don't think you'd have much success mastering non verbal communication through skype.
0ianshakil
Generally, I agree. There's a time and a place for both online and offline venues. Ideally, you'd want a very large number of participants such that, during sessions, most of your peers are new and the situation is somewhat anonymous/random. If your sessions are with the same old people, these people will become well known -- perhaps friends -- and the social simulation won't be very meaningful. Who knows... maybe there's a way to piggyback on the Chatroulette concept?!
0[anonymous]
I don't know.
1katydee
Extremely, yes, not to my knowledge.
0ianshakil
A lot of companies conduct anonymous "360 review" processes which veer into this territory to some degree. Also, several business schools conduct leadership labs. In fact, a large chunk of the business school experience is really about social grooming / learning how to network / and so forth. So do we have any traction for this idea? How about a meetup?
0XFrequentist
Thanks, those are useful leads. I've done the 360 review thing but hadn't connected it to this idea. It seems to have gotten a good amount of interest. I've got a draft post going that still needs some polish, but I should hopefully be able to get it finished this weekend. If all goes to plan some sort of meetup should follow. Any suggestions on logistics? I'm not at all sure what the best way to organize this is, I'd appreciate any thoughts.
0marc
I think you're probably correct in your presumptions. I find it an interesting idea and would certainly follow any further discussion.
XiXiDu200

LW database download?

I was wondering if it would be a good idea to offer a download of LW, or at least of the sequences and the wiki, in the manner that Wikipedia provides one.

The idea behind it is to have a redundant backup in case of some catastrophe, for example if the same happens to EY that happened to John C. Wright. It could also provide the option to read LW offline.

That's incredibly sad.

Every so often, people derisively say to me "Oh, and you assume you'd never convert to religion then?" I always reply "I absolutely do not assume that, it might happen to me; no-one is immune to mental illness."

Tricycle has the data. Also if an event of JCW magnitude happened to me I'm pretty sure I could beat it. I know at least one rationalist with intense religious experiences who successfully managed to ask questions like "So how come the divine spirit can't tell me the twentieth digit of pi?" and discount them.

4Unknowns
Actually, you have to be sure that you wouldn't convert if you had John Wright's experiences, otherwise Aumann's agreement theorem should cause you to convert already, simply because John Wright had the experiences himself-- assuming you wouldn't say he's lying. I actually know someone who converted to religion on account of a supposed miracle, who said afterward that since they in fact knew before converting that other people had seen such things happen, they should have converted in the first place. Although I have to admit I don't see why the divine spirit would want to tell you the 20th digit of pi anyway, so hopefully there would be a better argument than that.
2arundelo
Here's a more detailed version (starting at "I know a transhumanist who has strong religious visions").
1[anonymous]
What if you sustained hypoxic brain injury, as JCW may well have done during his cardiac event? (This might also explain why he thinks it's cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it's a pet suspicion of mine.)
8wedrifid
It would seem he is just writing for Mature Audiences. In this case maturity means not just 'the age at which we let people read pornographic text' but the kind of maturity that allows people to look beyond their own cultural prejudices. 16 is old. Not old enough according to our culture, but there is no reason we should expect a fictional time-distant culture to have our particular moral or legal prescriptions. It wouldn't be all that surprising if someone from an actual future time were, when reading the work, to scoff at how prudish a culture would have to be to consider sexualised portrayals of women that age to be taboo! Mind you, I do see how a hypoxic brain injury could alter someone's moral inhibitions and sensibilities in the kind of way you suggest. I just don't include loaded language in the speculation.
9CronoDAS
Interestingly, if the book in question is the one I think it is, it takes place in Britain, where the age of consent is, in fact, sixteen.
4wedrifid
Come to think of it, 16 is the age of consent here (Australia - most states) too. I should have used 'your' instead of 'our' in the paragraph you quote! It seems I was just running with the assumption.
3CronoDAS
Although "18 years old" does seem to be a hard-and-fast rule for when you can legally appear in porn everywhere, as far as I know...
4Eliezer Yudkowsky
Point of curiosity: Does anyone else still notice this sort of thing? I don't think my generation does anymore.
2Richard_Kennaway
I've only read his Golden Age trilogy, so if it's there, then no, to this 50-something it didn't stand out from everything else that happened. If it's in something else, I doubt it would. I mean, I've read Richard Morgan's ultra-violent stuff, including the gay mediæval-style fantasy one, and, well, no. [ETA: from Google the book in question appears to be Orphans of Chaos.] I could be an outlier though.
0[anonymous]
Well, I'm female. Could be women tend to be more sensitive to that kind of thing. That said, I wasn't really planning to start a discussion about sexually explicit portrayals of sub-18 teenagers and whether they're ok, and I doubt I'll participate further in one. Unfortunately I don't own the book, so if anyone is curious about the details of what I was referring to, they'll have to read Orphans of Chaos (not that I recommend it on its merits). I wouldn't hazard a guess as to how much a person can be oblivious to (probably a lot), but I'd be surprised if most people's conscious, examined reaction to the sexual content (which is abundant and spread throughout the book, though not hardcore) was closer to "That is normal/A naturalistic portrayal of a 16-year-old girl's sexual feelings/Literary envelope-pushing" than to "That is weird/creepy."
1CronoDAS
Eh, you see people trying to "push boundaries" in "respectable" literature all the time anyway.
2[anonymous]
Certainly there are other explanations. If you can show me that JCW openly wrote highly sexualized portrayals of people below the age of consent before his religious experience/heart attack, I will be happy to retract.
0[anonymous]
Iron Sunrise by Charles Stross and Cowl by Neal Asher feature sex scenes with 16-year-old girls. I don't remember to what detail, though. That sounds suspicious indeed and I would oppose it in most circumstances. That is, if it isn't just a 16-year-old body or a simulation of a body (yeah, no difference?) and if it isn't just a description of how bad someone is... within SF you can naturally create exceptional circumstances. Have you read books by Richard Morgan? The torture scenes in the Takeshi novels are some of the most detailed. Virtual reality allows them to load you into the body of a pregnant woman, being raped and having a soldering iron slid up your vagina. And if you die from hours of torture, they just restart the simulation. That's just one of the scenes from the first book.
3Unknowns
However, if EY converted to religion, he would (in that condition) assert that he had had good reasons for doing it, i.e. that it was rational. So he would have no reason to take down this website anyway.
3nawitus
You can use the wget program like this: 'wget -m lesswrong.com'. A database download would be easier on the servers though.
2Soki
I support this idea. But what about copyright issues? What if posts and comments are owned by their writer?
-1listic
I would argue that one cannot own information stored on the computers of other, unrelated people. I support this idea also. I actually intend to build a service for backing up the content of a forum/blog to an alternate server, but who knows when that will happen.
0xamdam
WebOffline can grab the whole thing to an iphone or ipad, formatting preserved. There are similar programs for PC/MAC

Cryonics Lottery.

Would it be easier to sign up for cryonics if there was a lottery system? A winner of the lottery could say "Well, I'm not a die-hard cryo-head, but I thought it was interesting so I bought a ticket (which was only $X) and I happened to win, and it's pretty valuable, so I might as well use it."

It's a sort of "plausible deniability" that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientiousness barriers - once you've won, the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a "greased path".

8NihilCredo
On a completely serious, if not totally related, note: it would be a lot easier to convince people to sign up for cryonics if the Cryonics Institute's and/or KrioRus's websites looked more professional.
7Alicorn
I'm not sure if it would help get uninterested people interested; but I think it would help get interested people signed up if there were a really clear set of individually actionable instructions - perhaps a flowchart so they can depend on individual circumstances - that were all found in one place.
2katydee
And Rudi Hoffman's page.
5gwern
I doubt it. Signing up for a lottery for cryonics is still suspicious. There is only one payoff, and that is of the suspicious thing. No one objects to the end result of lotteries, because we all like money; what is objected to is the lottery as an efficient means of obtaining money (or entertainment). Suppose that the object were something you and I regard with revulsion equal to that with which many regard cryonics. Child molestation, perhaps. Would you really regard someone buying a ticket as not being quite evil, and as condoning and supporting the eventual rape?
6AlexM
Who regards cryonics as evil like child molestation? The general public sees cryonics as fraud - something like buying real estate on the moon or waiting for the mothership - and someone paying for it as a gullible fool. For example, look at the discussion when Britney Spears http://www.freerepublic.com/focus/f-chat/2520762/posts wanted to be frozen. Lots of derision, no hatred.
2NihilCredo
Bad example. People want to make fun of celebrities (especially a community as caustic and "anti-elitist" as the Freepers). She could have announced that she was enrolling in college, or something else similarly common-sensible, and you would still have got a threadful of nothing but cheap jokes. A discussion about "My neighbour / brother-in-law / old friend from high school told me he has decided to get frozen" would be more enlightening.
0gwern
Does the fact that my specific example may not be perfect refute my point that mere indirection & chance does not eliminate all criticism and this can be understood by merely introspecting one's intuitions?
0Johnicholas
Rather than using an undiluted negative as an example, suppose that there was something more arguable, that might have some positive aspects - sex segregation of schools, for example. Assuming that my overall judgement of sex segregation is negative, if someone pursued sex segregation fiercely and dedicatedly, then my overall negative valuation of their goal would color my judgement of them. If they can plausibly claim to have supported it momentarily on a whim, while thinking about the positive aspects, then there is some insulation between my judgement of the goal and my judgement of the person.

Letting Go by Atul Gawande is a description of typical end of life care in the US, and how it can and should be done better.

Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful.

Hospice care, which focuses on quality of life, not only results in more comfort, but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it's a lot cheaper.

The article also describes the long careful process needed to find out what people really want for the end of their life-- in particular, what the bottom line is for them to want to go on living.

This is of interest for Less Wrong, not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses.

Here we have a case where empathy leads to big utilitarian wins, and where treating people as having unified consciousness if you give it a chance to operate works out well.

As good as hospices sound, I'm concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.

From a utilitarian angle, I wonder if those drastic methods of treatment sometimes lead to effective methods, and if so, whether the information could be gotten more humanely.

8Rain
End of life regulation is one reason cryonics is suffering, as well: without the ability to ensure preservation when the brain is still relatively healthy, the chances diminish significantly. I think it'd be interesting to see cryonics organizations put field offices in countries or states with legal suicide laws. Here's a Frontline special on suicide tourists.
4daedalus2u
The framing of the end of life issue as a gain or a loss as in the monkey token exchange probably makes a gigantic difference in the choices made. http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1 When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset behind the clutching at straws that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer. http://www.sciencebasedmedicine.org/?p=1545 It is actually worse than doing nothing, worse than doing what main stream medicine recommends, but because there is the promise of complete recovery (even if it is a false promise), that is what people choose based on their irrational aversion to risk.

In his bio over at Overcoming Bias, Robin Hanson writes:

I am addicted to “viewquakes”, insights which dramatically change my world view.

So am I. I suspect you are too, dear reader. I asked Robin how many viewquakes he had and what caused them, but haven't gotten a response yet. But I must know! I need more viewquakes. So I propose we share our own viewquakes with each other so that we all know where to look for more.

I'll start. I've had four major viewquakes, in roughly chronological order:

  • (micro)Economics - Starting with a simple approximation of how humans behave yields a startlingly effective theory in a wide range of contexts.
  • Bayesianism - I learned how to think
  • Yudkowskyan/Humean Metaethics - Making the move from Objective theories of morality to Subjectively Objective theories of morality cleared up a large degree of confusion in my map.
  • Evolution - This is a two part quake: evolutionary biology and evolutionary psychology. The latter is extremely helpful for explaining some of the behavior that economic theory misses and for understanding the inputs into economic theory (i.e., preferences).
6ABranco
I've had some dozens of viewquakes, most minor, although it's hard to evaluate them in hindsight now that I take them for granted. Some are somewhat commonplace here: Bayesianism, map–territory relations, evolution etc. One where I always feel people should be shouting Eureka — and when they are not impressed I assume that this is old news to them (though it often is not, as I don't see it reflected in their actions) — is the Curse of Knowledge: it's hard to be a tapper. I feel that being aware of it dramatically improved my perceptions in conversation. I also feel that if more people were aware of it, misunderstandings would be far less common. Maybe worth a post someday.

I can see how the Curse of Knowledge could be a powerful idea. I will dwell on it for a while -- especially the example given about JFK, as an example of a type of application that would be useful in my own life. (To remember to describe things in broad strokes that are universally clear, rather than technical and accurate, in contexts where persuasion and fueling interest are most important.)

For me, one of the main viewquakes of my life was a line I read in a little book of Kahlil Gibran poems:

Your pain is the breaking of the shell that encloses your understanding.

It seemed to be a hammer that could be applied to everything. Whenever I was unhappy about something, I thought about the problem a while until I identified a misconception. I fixed the misconception ("I'm not the smartest person in graduate school"; "I'm not as kind as I thought I was"; "That person won't be there for me when I need them") by assimilating the truth the pain pointed me towards, and the pain would dissipate. (Why should I expect graduate school to be easy? I'll just work harder. Kindness is what you actually do, not how you expect you'll feel. That person is fun to han... (read more)

5ABranco
Remarkable quote, thank you. Reminded me of the Anorexic Hermit Crab Syndrome:
6fiddlemath
Sounds like the illusion of transparency. We've got that post around. ;) On the other hand, the tapper/listener game is a very evocative instance.
1RobinZ
The thesis cited.

Was Kant implicitly using UDT?

Consider Kant's categorical imperative. It says, roughly, that you should act such that you could will your action as a universal law without undermining the intent of the action. For example, suppose you want to obtain a loan for a new car and never pay it back - you want to break a promise. In a world where everyone broke promises, the social practice of promise keeping wouldn't exist and thus neither would the practice of giving out loans. So you would undermine your own ends and thus, according to the categorical imperative, you shouldn't get a loan without the intent to pay it back.

Another way to put Kant's position would be that you should choose such that you are choosing for all other rational agents. What does UDT tell you to do? It says (among other things) that you should choose such that you are choosing for every agent running the same decision algorithm as yourself. It wouldn't be a stretch to call UDT agents rational. So Kant thinks we should be using UDT! Of course, Kant can't draw the conclusions he wants to draw because no human is actually using UDT. But that doesn't change the decision algorithm Kant is endorsing.
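The "choosing for every agent running the same decision algorithm" idea can be illustrated with a toy symmetric Prisoner's Dilemma between two copies of one algorithm. This is a minimal sketch under stated assumptions, not a formal UDT implementation; the payoff numbers and function names are illustrative only:

```python
# Toy illustration: an agent that chooses as if its choice binds every
# agent running the same algorithm, versus one that treats the other
# player's move as fixed. Payoffs are the standard PD ordering (T=5,
# R=3, P=1, S=0); these numbers are illustrative, not canonical.

# Payoff to me for (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def causal_choice():
    """Best reply when the opponent's move is treated as fixed."""
    # Compute the best reply against each possible fixed opponent move;
    # defection wins both comparisons (5 > 3 and 1 > 0), i.e. it dominates.
    best_replies = {o: max("CD", key=lambda m: PAYOFF[(m, o)]) for o in "CD"}
    return best_replies["C"] if best_replies["C"] == best_replies["D"] else None

def mirrored_choice():
    """Best move when the twin running the same algorithm must match it."""
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print(causal_choice())    # "D": dominant against any fixed opponent
print(mirrored_choice())  # "C": best when one choice binds both copies
```

The contrast is the point of the Kant analogy: the mirrored chooser cooperates for the same structural reason the categorical imperative forbids promise-breaking, because it evaluates its action as if it were the action of every relevantly similar agent.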

Except... Kant isn'... (read more)

2Emile
I remember Eliezer saying something similar, though I can't find it right now (the closest I could find was this). It was something about the benefits of being the kind of person who doesn't lie, even if the fate of the world is at stake. Because if you aren't, the minute the fate of the world is at stake is the minute your word becomes worthless.
1Matt_Simpson
I recall it too. I think the key distinction is that if the choice was literally between lying and everyone in the world - including yourself - perishing, Kant would let us all die. Eliezer would not. What I took Eliezer to be saying (working from memory, I may try to find the post later) is that if you think the choice is between lying and the sun exploding (or something analogous) in any real life situation... you're wrong. It's far more likely that you're rationalizing the way you're compromising your values than that it's actually necessary to compromise your values, given what we know about humans. So a consequentialist system implies basically deontological rules once human nature is taken into account. Once again, this is all from my memory, so I could be wrong.
1Unknowns
Although Eliezer didn't put it precisely in these terms, he was sort of suggesting that if one could self-modify in such a way that it became impossible to break a certain sort of absolutely binding promise, it would be good to modify oneself in that way, even though it would mean that if the situation actually came up where you had to break the promise or let the world perish, you would have to let the world perish.
1[anonymous]
I think the article you (and the parent comment) are talking about is this one
2SilasBarta
Drescher has some important things to say about this distinction in Good and Real. What I got out of it, is that the CI is justifiable on consequentialist or self-serving grounds, so long as you relax the constraint that you can only consider the causal consequences (or "means-end links") of your decisions, i.e., things that happen "futureward" of your decision. Drescher argues that specifically ethical behavior is distinguished by its recognition of these "acausal means-end links", in which you act for the sake of what would be the case if-counterfactually you would make that decision, even though you may already know the result. (Though I may be butchering it -- it's tough to get my head around the arguments.) And I saw a parallel between Drescher's reasoning and UDT, as the former argues that your decisions set the output of all similar processes to the extent that they are similar.
1Scott Alexander
I thought Kant sounded a lot more like TDT than UDT. Or was that what you meant?
0Matt_Simpson
I'm not familiar enough with Pearl's formalism to really understand TDT - or at least that's why I haven't really dove into TDT yet. I'd love to hear why you think Kant sounds more like TDT though. I'm suspecting it has something to do with considering counterfactuals.
2Scott Alexander
I'm not familiar at all with Pearl's formalism. But from what I see on this site, I gather that the key insight of updateless decision theory is to maximize utility without conditioning on information about what world you're in, and the key insight of timeless decision theory is what you're describing (Eliezer summarizes it as "Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.")
0Matt_Simpson
I think Eliezer's summary is also a fair description of UDT. The difference between UDT and TDT appears to be subtle, and I don't completely understand it. From what I can tell, UDT just does choose in the way Eliezer describes, completely ignoring any updating process. TDT chooses this way as a result of how it reasons about counterfactuals. Somehow, TDT's counterfactual reasoning causes it to choose slightly differently from UDT, but I'm not sure why at this point.

I found TobyBartels's recent explanation of why he doesn't want to sign up for cryonics a useful lesson in how different people's goals in living a long time (or not) can be from mine. Now I am wondering if maybe it would be a good idea to state some of the reasons people would want to wake up 100 years later if hit by a bus. Can't say I've been around here very long but it seems to me it's been assumed as some sort of "common sense" - is that accurate? I was wondering if other people's reasons for signing up / intending to sign up (I am not c... (read more)

8steven0461
It sure seems like a lot of people could feed their will to live by reading just the first half of an exciting fiction book.
4[anonymous]
We would need to drastically strengthen norms against spoilers.
8NancyLebovitz
One thought is that it's tempting to think of yourself as being the only one (presumably with help from natives) trying to deal with the changed world. Actually I think it's more likely that there will be many people from your era, and there will be immigrants' clubs, with people who've been in the future for a while helping the greenhorns. I find this makes the future seem more comfortable. The two major reasons I can think of for wanting to be in the future is that I rather like being me, and the future should be interesting.
1soreff
The single largest motivation for me is just that a future which is powerful enough, and rich enough, and benevolent enough to revive cryonicists is likely to be a very pleasant place to be in. If nothing else, lots of their everyday devices are likely to look like marvelous toys from my point of view. The combination of that with the likelihood that if they can repair me at all, I'd guess that they would use a youthful body (physical or simulated) as a model is quite enough to be an attractive prospect.

George Thompson, an ex-English professor and ex-cop, now teaches a method he calls "Verbal Judo". Very reminiscent of Eliezer's Bayesian Dojo, this is a primer on rationalist communication techniques, focusing on defensive & redirection tactics. http://fora.tv/2009/04/10/Verbal_Judo_Diffusing_Conflict_Through_Conversation

I wrote up some notes on this, because there's no transcript and it's good information. Let's see if I can get the comment syntax to cooperate here.

How to win in conversations, in general.

Never get angry. Stay calm, and use communication tactically to achieve your goals. Don't communicate naturally; communicate tactically. If you get upset, you are weakened.

How to deflect.

To get past an unproductive and possibly angry conversation, you need to deflect the unproductive bluster and get down to the heart of things: goals, and how to achieve them. Use a sentence of the form:

"[Acknowledge what the other guy said], but/however/and [insert polite, goal-centered language here]."

You spring past what the other person said, and then recast the conversation in your own terms. Did he say something angry, meant to upset you? Let it run off you like water, and move on to what you want the conversation to be about. This disempowers him and puts you in charge.

How to motivate people.

There's a secret to motivating people, whether they're students, co-workers, whatever. To motivate someone, raise his expectations of himself. Don't put people down; raise them up. When you want to reprimand so... (read more)

4mattnewport
Does the talk provide any evidence for the efficacy of the tactics?
2sketerpot
The speaker has a whole career of experience dealing with people who are irrational because they're drunk, angry, frightened, or some combination of the above. He says this stuff is what he does, and that it works great. That's anecdotal, but it's about the strongest kind of anecdotal evidence it's possible to get. It would be nice if someone did a properly controlled study on this.
3NancyLebovitz
Thank you for writing this up. The one thing I wondered about was whether the techniques for getting compliance interfere with getting information. For example, what if someone who isn't consenting to a search is actually right about the law?
1sketerpot
The thing that bothers me about the talk is that most of it makes the assumption that you're being calm and rational, that you're right, and that whoever you're talking to is irrational and needs to be verbally judo'd into compliance. Sometimes that's the case, but most of the techniques don't really apply to situations where you're dealing with another calm, sane person as an equal.
0[anonymous]
Thompson is actually ambiguous on the point. Sometimes he's really clear that what you're aiming for is compliance.
0Eneasz
This is good; you should float it as a top-level post.
5JenniferRM
Thanks. That was a compact and helpful 90 minutes. The first 30 minutes were OK, but the 2nd 30 were better, and the 3rd was the best. Towards the end I got the impression that he was explaining lessons that were the kind of thing people spend 5 years learning the hard way and that lots of people never learn for various reasons.
2Blueberry
That sounds really interesting. I wish there were a transcript available!
0[anonymous]
There's an mp3 version available, which sounds just as good at 1.4x speed. And it cuts the 90 minutes down to about an hour.

I've been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don't seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way.

Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word "quantum" a lot and talking about a mystical "Akashic Field" which I would describe in more detail except that none of the explanatio... (read more)

I thought I'd pose an informal poll, possibly to become a top-level, in preparation for my article about How to Explain.

The question: on all the topics you consider yourself an "expert" or "very knowledgeable about", do you believe you understand them at least at Level 2? That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?

Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent l... (read more)

6Oscar_Cunningham
I have a (I suspect unusual) tendency to look at basic concepts and try to see them in as many ways as possible. For example, here are seven equations, all of which could be referred to as Bayes' Theorem:

1. P(H|E) = P(E|H) · P(H) / P(E)
2. P(H|E) = [P(E|H) / P(E)] · P(H)
3. P(H|E) = P(E|H) · P(H) / [P(E|H) · P(H) + P(E|¬H) · P(¬H)]
4. P(H|E) = 1 / (1 + [P(E|¬H) · P(¬H)] / [P(E|H) · P(H)])
5. P(H|E) = P(E|H) · P(H) / Σᵢ P(E|Hᵢ) · P(Hᵢ)
6. odds(H|E) = [P(E|H) / P(E|¬H)] · odds(H)
7. logodds(H|E) = log[P(E|H) / P(E|¬H)] + logodds(H)

However, each one is different, and forces a different intuitive understanding of Bayes' Theorem. The fourth one down is my favourite, as it makes obvious that the update depends only on the ratio of likelihoods. It also gives us our motivation for taking odds, since this clears up the 1/(1+x)-ness of the equation. Because of this way of understanding things, I find explanations easy, because if one method isn't working, another one will. ETA: I'd love to see more versions of Bayes' Theorem, if anyone has any more to post.
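The agreement between the standard form and the odds form is easy to check numerically. A minimal Python sketch, using made-up illustrative probabilities (not numbers from the comment above):

```python
# Compare the standard and odds forms of Bayes' Theorem on made-up numbers.
p_h = 0.3        # prior P(H)
p_e_h = 0.8      # likelihood P(E|H)
p_e_not_h = 0.2  # likelihood P(E|~H)

# Standard form: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)
posterior_standard = p_e_h * p_h / p_e

# Odds form: odds(H|E) = [P(E|H) / P(E|~H)] * odds(H)
prior_odds = p_h / (1 - p_h)
posterior_odds = (p_e_h / p_e_not_h) * prior_odds
posterior_from_odds = posterior_odds / (1 + posterior_odds)

assert abs(posterior_standard - posterior_from_odds) < 1e-12
print(round(posterior_standard, 4))  # -> 0.6316
```

The two forms necessarily agree; the odds form just makes explicit that the update depends only on the likelihood ratio.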
1ABranco
P(H|E) = P(H and E) / P(E), which tends to be how conditional probability is defined, and is actually the first version of Bayes that I recall seeing.
0SilasBarta
Very well said, and doubles as a reply to the last part of my comment here. (When I read your comment in my inbox, I thought it was actually a reply to that one! Needless to say, my favorite versions of the theorem are the last two you listed.)
5DanArmak
This strikes me as an un-lifelike assumption. If I had to explain things in this way, I would expect to encounter some things that I don't explicitly know (and other that I knew and have forgotten), and to have to (re)derive them. But I expect that I would be able to rederive almost all of them. Refining my own understanding is a natural part of building a complex explanation-story to tell to others, and will happen unless I've already built this precise story before and remember it.
3SilasBarta
For purposes of this question, things you can rederive from your present knowledge count as part of your present knowledge.
4NancyLebovitz
I think I know a fair amount about doing calligraphy, but I'm dubious that someone could get a comparable level of knowledge without doing a good bit of calligraphy themselves. If I were doing a serious job of teaching, I would be learning more about how to teach as I was doing it. I consider myself to be a good but not expert explainer. Possibly of interest: The 10-Minute Rejuvenation Plan: T5T: The Revolutionary Exercise Program That Restores Your Body and Mind : a book about an exercise system which involves 5 yoga moves. It's by a woman who'd taught 700 people how to do the system, and shows an extensive knowledge of the possible mistakes students can make and adaptations needed to make the moves feasible for a wide variety of people. My point is that explanation isn't an abstract perfectible process existing simply in the mind of a teacher.
4KrisC
But in some limited areas explanation is completely adequate. I taught a co-worker how to do sudoku puzzles. After teaching him the human-accessible algorithms and allowing time for practice, I was still consistently beating his time. I knew why, and he didn't. After I explained the difference in mental state I was using, he began beating my time on a regular basis. {Instead of checking the list of 1-9 for each box or line, allow your brain to subconsciously spot the missing number and then verify its absence.} He is more motivated and has more focus, while I do puzzles to kill time when waiting. In another job where I believe I had a thorough understanding of the subject, I was never able to teach any of my (~20) trainees to produce vector graphic maps with the speed and accuracy I obtained, because I was unable to impart a mathematical intuition for the approximation of curves. I let them go home with full pay when they completed their work, so they definitely had motivation. But they also had editors who were highly detail oriented. I mean to suggest that there is a continuum of subjective ability comparing different skills. Sudoku is highly procedural; once familiar, all that is required is concentration. Yoga, in the sense mentioned above, is also procedural, prescriptive; the joints allow a limited number of degrees of freedom. Calligraphy strives for an ideal, but depending on the tradition, there is a degree of interpretation allowed for aesthetic considerations. Mapping, particularly in vector graphics, has many ways to be adequate and no way to be perfect. The number of acceptable outcomes and the degree of variation in useful paths determines the teachability of a skillset. The procedural skills can be taught more easily than the subjective, and practice is useful to accomplish mastery of procedural skills. Deeper understanding of a field allows more of the skill's domain to be expressed procedurally rather than subjectively.
0NancyLebovitz
I'm in general agreement, but I think you're underestimating yoga-- a big piece of it is improving access to your body's ability to self-organize. I like "many ways to be adequate and no way to be perfect". I think most of life is like that, though I'll add "many ways to be excellent".
0KrisC
No slight to yoga intended. I only wanted to address the starting point of yoga. I know it is a quite comprehensive field.
3zero_call
I will reply to this informally, since I am not so familiar with the formalism of a "Level 2" understanding. My uninteresting, simple answer is: yes. My philosophical answer is that I find the entire question to be very interesting and strange. That is, the relationship between teaching and understanding is quite strange IMO. There are many people who are poor teachers but who excel in their discipline. It seems to be a contradiction because high-level teaching skill seems to be a sufficient, and possibly necessary, condition for masterful understanding. Personally I resolve this contradiction in the following way. I feel like my own limitations mean that I am forced to learn a subject by progressing at it in very simplistic strokes. By the time I have reached mastery, I feel very capable of teaching it to others, since I have been forced to understand it myself in the most simplistic way possible. Other people, who are possibly quite brilliant, are able to master some subjects without having to transmute the information into a simpler level. Consequently, they are unable to make the sort of connections that you describe as being necessary for teaching. Personally I feel that the latter category of people must be missing something, but I am unable to make a convincing argument for this point.
4SilasBarta
A lot of the questions you pose, including the definition of the Level 2 formalism, are addressed in the article I linked (and wrote). I classify those who can do something well but not explain or understand the connections from the inputs and outputs to the rest of the world, to be at a Level 1 understanding. It's certainly an accomplishment, but I agree with you that it's missing something: the ability to recognize where it fits in with the rest of reality (Level 2) and the command of a reliable truth-detecting procedure that can "repair" gaps in knowledge as they arise (Level 3). "Level 1 savants" are certainly doing something very well, but that something is not a deep understanding. Rather, they are in the position of a computer that can transform inputs into the right outputs, but do nothing more with them. Or a cat, which can fall from great heights without injury, but not know why its method works. (Yes, this comment seems a bit internally repetitive.)
2zero_call
Ah, OK, I read your article. I think that's an admirable task to try to classify or identify the levels of understanding. However, I'm not sure I am convinced by your categorization. It seems to me that many of these "Level 1 savants" as you call them are quite capable of fitting their understanding with the rest of reality. Actually it seems like the claim of "Level 1 understanding" basically trivializes that understanding. Yet many of these people who are bad teachers have a very nontrivial understanding -- else I don't think this would be such a common phenomenon, for example, in academia. I would argue that these people have some further complications or issues which are not recognized in the 1-2-3 hierarchy. That being said, you have to start somewhere, and the 0-1-2-3 hierarchy looks like a good place to start. I'd definitely be interested in hearing more about this analysis.
2SilasBarta
Thanks for reading it and giving me feedback. I'm interested in your claim: Well, they can fit it in the sense that they (over a typical problem set) can match inputs with (what reality deems) the right outputs. But, as I've defined the level, they don't know how those inputs and outputs relate to more distantly-connected aspects of reality. I had a discussion with others about this point recently. My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding. And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield? If not, I would call that falling short of Level 2.
6zero_call
I would LOVE to agree with this statement, as it justifies my criticism of poor teachers who IMO are (not usually maliciously) putting their students through hell. However, I don't think it's obvious, or I think maybe you just have to take it as an axiom of your system. It seems there is some notion of individualism or personal difference which is missing from the system. If someone is just terrible at learning, can you really expect to succeed in explaining, for example? Realistically I think it's probably impossible to classify the massive concept of understanding by merely three levels, and these problems are just a symptom of that fact. As another example, in order to understand something, it's clearly necessary to be able to explain it to yourself. In your system, you are additionally requiring that your understanding means you must be able to explain things to other people. In order to explain things to others, you have to understand them, as has been discussed. Therefore you have to be able to explain other people to yourself. Why should an explanation of other individuals behavior be necessary for understanding some random area of expertise, say, mathematics? It's not clear to me. It certainly seems like someone with a deep understanding of their subject should be able to identify the validity or uncertainty in their assumptions about the subject. If they are a poor teacher, I think I would still believe this to be true.
0SilasBarta
I've thought about this some, and I think I see your point now. I would phrase it this way: It's possible for a "Level 3 savant" to exist. A Level 3 savant, let's posit, has a very deeply connected model of reality, and their excellent truth-detecting procedure allows them to internally repair loss of knowledge (perhaps below the level of their conscious awareness). Like an expert (under the popular definition), and like a Level 1 savant, they perform well within their field. But this person differs in that they can also perform well in tracing out where its grounding assumptions go wrong -- except that they "just have all the answers" but can't explain, and don't know, where the answers came from. So here's what it would look like: Any problem you pose in the field (like an anomalous result), they immediately say, "look at factor X", and it's usually correct. They even tell you to check critical aspects of sensors, or identify circularity in the literature that grounds the field (i.e. sources which generate false knowledge by excessively citing each other), even though most in the field might not even think about or know how all those sensors work. All they can tell you is, "I don't know, you told me X, and I immediately figured it had to be a problem with Y misinterpreting Z. I don't know how Z relates to W, or if W directly relates to X, I just know that Y and Z were the problem." I would agree that there's no contradiction in the existence of such a person. I would just say that in order to get this level of skill you have to accomplish so many subgoals that it's very unlikely, just as it's hard to make something act and look like a human without also making it conscious. (Obvious disclaimer: I don't think my case is as solid as the one against P-zombies.)
3fiddlemath
I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2. I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.
2SilasBarta
I agree in the sense that full completion of Level 2 isn't necessary to do what I've described, as that implies a very deeply-connected set of models, truly pervading everything you know about. But at the same time, I don't think you appreciate some of the hurdles to the teaching task I described: remember, the only assumption is that the student has lay knowledge and is reasonably intelligent. Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2. If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (i.e. through textbooks, for example) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?
4fiddlemath
I agree that the teaching task does require a thick bundle of connections, and not just a single chain of inferences. So much so, actually, that I've found that teaching, and preparing to teach, is a pretty good way to learn new connections between my Level 1 knowledge and my world model. That this "rounds" to Level 2 depends, I suppose, on how intelligent you assume the student is. Yes, constantly. Frequently, I'm frustrated by such presentations to the point of anger at the author's apparent disregard for the reader, even when I understand what they're saying.
2JanetK
I think I have level 2 understanding of many areas of Biology but of course not all of it. It is too large a field. But there are gray areas around my high points of understanding where I am not sure how deep my understanding would go unless it was put to the test. And around the gray areas surrounding the level 2 areas there is a sea of superficial understanding. I have some small areas of computer science at level 2 but they are fewer and smaller, ditto chemistry and geology. I think your question overlooks the nature of teaching skills. I am pretty good at teaching (verbally and one/few to one) and did it often for years. There is a real knack in finding the right place to start and the right analogies to use with a particular person. Someone could have more understanding than me and not be able to transfer that understanding to someone else. And others could have less understanding and transfer it better. Finally I like your use of the word 'understanding' rather than 'knowledge'. It implies the connectedness with other areas required to relate to lay people.
0[anonymous]
Perhaps the reason experts aren't always good teachers is because their thought processes / problem solving algorithms operate at a level of abstraction that is inaccessible to a beginner.
2RobinZ
I have some trouble answering your question, chiefly because my definition of "expert" is approximately synonymous with your definition of "Level 2". "Enough time" would be quite a long period of time. One problem is that there are a lot of textbook results that I would have to use in intermediate steps that would take me a long time to derive. Another is that there are a lot of experimental parameters that I haven't memorized and would have to look up. But I think I could teach arithmetic, algebra, geometry, calculus, differential equations, and Newtonian physics enough that I could teach them proper engineering analysis.
2JRMayne
Criminal Law: Yes to Level 2. Yes to teaching a layperson. It would take a while, for sure, but it's doable. Some of the work requires an understanding of a different lifestyle; if you can't see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can't predict that a domestic violence victim will have a non-credible recantation, you'll need some other education. I've done a lot of instruction in this field. It is common for instruction not to take until there's other experience in the field which helps things join up. Bridge: Yes to Level 2. Possibly to teaching a layperson. The ability to play bridge well is correlated heavily to intelligence, but it also correlates to a certain zeal for winning. I have taught one person to play very well indeed, but that may not be replicable, and took years. (On an aside, I am very likely the world's foremost expert on online bridge cheating; teaching cheating prevention would require teaching bridge first.) Teaching requires more than reasonable intelligence on the part of the teachee. Some people who are very intelligent are ineducable. (Many of these are violators of my 40% rule: You are allowed to think you are 40% smarter/faster/stronger/better than you are. After that, it's obnoxious.) Some people are not interested in learning a given subject. Some people will not overcome preset biases. Some people have high aptitudes in some areas and little aptitude in others (though intelligence strongly tends to spill over.) Anyway, I'm interested in the article. My penultimate effort to explain something to many people - Bayes' Theorem to lawyers - was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.) --JRM
3DSimon
I'm curious, why did you chose 40% for your "40% rule"?
3JRMayne
It's non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low. And since I started calling it The 40% Rule fifteen years ago or thereabout, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable and the specific number has rather limited application. But people like it at this number. That counts for something - and it gets the message across in a way that other formulations don't. Some are nonplussed by the rule, but the vigor of support by some supporters gives me some thought that I picked a number people like. Since I never tried another number, I could be wrong - but I don't think I am. --JRM
1SilasBarta
* "The people who buy the services of a prostitute generally don't want to go on record saying so, which they would have to do at some point to prosecute such a robbery. This is either because they're married or because of the shame associated with using one." * "Victims of domestic violence have a lot invested in the relationship, and, no matter how much they feel hurt by the abuse, they will not want to tear apart the family and cripple their spouse with a felony conviction. This inner conflict will be present when the victim tries to recant their testimony." Did that really require passing the learner off for some other education? Or did I get the explanation wrong? I'd actually tried teaching information theory to my mom a week ago, which involved starting with Bayes' Theorem (my preferred phrasing [1]). She's a professional engineer, and found it very interesting (to the point where she kept prodding me for the next lesson), saying that it made much more sense of statistics. In about 1.5-2 hours total, I covered the theorem, application to a car alarm situation, aggregating independent pieces of evidence, the use of log-odds, and some stuff on Bayes nets and using dependent pieces of evidence. [1] O(H|E) = O(H) * L(E|H) = O(H) * P(E|H) / P(E|~H) = "On observing evidence, amplify the odds you assign to a belief by the probability of seeing the evidence if the belief were true, relative to if it were false."
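The log-odds aggregation mentioned in that lesson can be sketched in a few lines of Python. The prior and likelihood ratios below are made-up illustrative values, not the ones from the actual lesson:

```python
import math

# Aggregate independent pieces of evidence in log-odds.
prior_odds = 1 / 99  # P(H) = 1% expressed as odds of 1:99

# Likelihood ratios P(E_i|H) / P(E_i|~H) for three independent observations;
# the last one is evidence *against* H.
likelihood_ratios = [10.0, 4.0, 0.5]

log_odds = math.log10(prior_odds)
for lr in likelihood_ratios:
    log_odds += math.log10(lr)  # independent evidence adds in log-odds

posterior_odds = 10 ** log_odds
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 4))
```

Working in log-odds turns the product of likelihood ratios into a sum, which is what makes mentally aggregating several independent clues tractable.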
3NancyLebovitz
Expansion on the explanation about domestic violence victims-- the victim may also be afraid that the government will not protect them from the abuser, and the abuser will be angrier because of the attempt at prosecution.
0[anonymous]
This is related to an idea that has been brewing at the back of my mind for a while now: Experts aren't always good teachers because their problem solving algorithms may operate at a level of abstraction that is inaccessible to a beginner.
0thomblake
Hmm... I'm not sure if I think of myself as an expert at anything, other than when people ask. But I'm pretty sure I have about the best understanding of logic I can hope to have, and could explain virtually all of it to an attentive small child given sufficient time. And I might be an expert at some sort of computer programming, though I can think of people much better at any given bit of it; at any rate, I am also confident I could teach that to anyone, or at least anyone who passes a basic test.
0DSimon
Computer programming: I'm not sure if I am at Level 2 or not on this. In favor of being at Level 2: I regularly think about non-computer-related topics with a CS-like approach (e.g. using information theory ideas when playing the inference game Zendo). Also, I strongly associate my knowledge of "folk psychology" and "folk science" with computer science ideas, and these insights work in both directions. For example, the "learned helplessness" phenomenon, where inexperienced users become so uncomfortable with a system that they prefer to cling to their inexperienced status than to risk failure in an attempt to understand the system better, appears in many areas of life having nothing directly to do with computers. Evidence against being at Level 2: I do not have the necessary computer engineering knowledge to connect my understanding of computer programming to my understanding of physics. And, although I have not tried this very often, my experiments in attempting to teach computer programming to laypeople have been middling at best. My assessment at this point is that I am probably near to Level 2 in computer programming, but not quite there yet.
0KrisC
Can you teach a talented, untrained person a skill so that they exceed your own ability? Can you then identify why they are superior? If you have deep level knowledge of your area of expertise that you can impart to others, you ought to be able to evaluate and train a replacement based on "raw talent." Considering that intellectual or artistic endeavors may have a variety of details hidden even from the expert, perhaps a clearer example may be found in sports coaches.
5pjeby
The main reason that coaches are important (not just in sports) is blind spots - i.e., things that are outside of a person's direct perceptual awareness. Think of the Dunning-Kruger effect: if you can't perceive it, you can't improve it. (This is also why publications have editors; if a writer could perceive the errors in their work, they could fix them themselves.)
[-][anonymous]70

PZ Myers's comments on Kurzweil generated some controversy here recently on LW--see here. Apparently PZ doesn't agree with some of Kurzweil's assumptions about the human mind. But that's beside the point--what I want to discuss is this: according to another blog, Kurzweil has been selling bogus nutritional supplements. What does everyone think of this?

2jimrandomh
I would like a better source than a blog comment for the claim that Kurzweil has been selling bogus nutritional supplements. The obvious alternative possibility is that someone else, with less of a reputation to worry about, attached Kurzweil's name to their product without his knowledge.
4[anonymous]
Ok, I've found some better sources. See the first three links.
6jimrandomh
I would have preferred a more specific link than that, to save me the time of doing a detailed investigation of Kurzweil's company myself. But I ended up doing one anyways, so here are the results. That "Ray and Terry's Longevity Products" company's front page screams low-credibility. It displays three things: an ad for a book, which I can't judge as I don't have a copy, an ad for snack bars, and a news box. Neutral, silly, and, ah, something amenable to a quality test! The current top headline in their Healthy Headlines box looked to me like an obvious falsehood ("Dirty Electricity May Cause Type 3 Diabetes"), and on a topic important to me, so I followed it up. It links to a blog I don't recognize, which dug it out of a two year old study, which I found on PubMed. And I personally verified that the study was wrong - by the most generous interpretation, assuming no placebo effect or publication bias (both of which were obviously present), the study contains exactly 4 bits of evidence (4 case studies in which the observed outcome had a 50% chance of happening assuming the null hypothesis, and a 100% chance of happening assuming the conclusion). A review article confirmed that it was flawed. That said, he probably just figured the news box was unimportant and delegated the job to someone who wasn't smart enough to keep the lies out. But it means I can't take anything else on the site seriously without a very time-consuming investigation, which is bad enough. The bit about Kurzweil taking 250 nutritional supplements per day jumps out, too, since it's an obviously wrong thing to do; the risks associated with taking a supplement (adverse reaction, contamination, mislabeling) scale linearly with the number taken, while the upside has diminishing returns. You take the most valuable thing first, then the second-most, by the time you get to the 250th thing it's a duplicate or worthless. Which leads me to believe that he just fudged the number, by counting things that ar
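The bits-of-evidence arithmetic above is straightforward to reproduce; a short Python sketch under the comment's own generous assumptions (each observed outcome has a 50% chance under the null hypothesis and a 100% chance under the study's conclusion):

```python
import math

# Reproduce the "4 bits of evidence" arithmetic from the comment above.
p_under_null = 0.5        # P(observed outcome | null hypothesis)
p_under_conclusion = 1.0  # P(observed outcome | study's conclusion)
n_studies = 4             # four independent case studies

# Each study contributes log2 of its likelihood ratio, in bits.
bits_per_study = math.log2(p_under_conclusion / p_under_null)
total_bits = n_studies * bits_per_study
print(total_bits)  # -> 4.0
```

Four independent one-bit observations total 4 bits: far too little to overturn a well-supported prior, which is the comment's point.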
0jacob_cannell
Kurzweil should be concerned that his name is associated with junk science, and with the overall result, but I think it's a little far-fetched to think the man is actually selling nutritional supplements that he thinks are bogus. The state of medicine and nutrition today is such that we know there is so much we don't know. The human body is supremely complex, to make an understatement. The evidence is pretty strong that most supplements, and even most multi-vitamins, don't do much or even do harm. However, that is certainly not true in every case, and there are particular supplements where we have strong evidence for a net positive effect (vitamin D and fish oil have very strong evidence for net benefit at this point - everyone should be on them). But if you are someone like Kurzweil, and you want to make it to the Singularity, you probably will do the research and believe you have some inside knowledge on optimizing the human body. I find it more likely that he actually does take a boatload of supplements.
2[anonymous]
I'm sure he does take a lot of them himself, but the problem is that Kurzweil taking supplements will still make people think he is delusional (because most people are instantly suspicious of people who do so, generally for good reasons). On a related note, Ben Best also sells supplements on his website, and many of them look pretty questionable.
0jacob_cannell
So I'm curious, do you believe that typical supplements have a net negative effect, vs. just neutral? It was my understanding that the weight of evidence points to most having a neutral overall effect, which to me wouldn't justify instant suspicion. I mean you may be wasting money, but you probably aren't hurting yourself. And if you really do the research, you probably are going to get some net positive gain, statistically speaking. Don't you think? I know of at least 2 cases (vitamin D and fish oil) where the evidence for net benefit is strong - but mainly due to deficiency in the modern diet.
2[anonymous]
I think it is a mixed bag: Some supplements are potentially dangerous, but others (like the ones you mention) can be very helpful. The majority, however, probably have little to no effect whatsoever. As a result, I don't think people should mess around with what they eat without it being subjected to rigorous clinical trials first; though there might be a positive net gain, one dose of something bad can kill you. In any case, though, believing that something is helpful when it has not yet been tested is clearly irrational. (This is more what I'm concerned about with Best and Kurzweil.) Selling or promoting something that isn't tested is even worse; it borders on fraud and charlatanry. Edit: No, let me amend that: it is charlatanry.

Interesting SF by Robert Charles Wilson!

I normally stay away from posting news to lesswrong.com - although I think an Open Thread for relevant news items would be a good idea - but this one sounds especially good and might be of interest for people visiting this site...

Many-Worlds in Fiction: "Divided by Infinity"

In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instin

... (read more)
6humpolec
Thank you. The idea reminded me of Moravec's thoughts on death:
3Eliezer Yudkowsky
I already wrote this fic ("The Grand Finale of the Ultimate Meta Mega Crossover").
2XiXiDu
I wouldn't be surprised to find out that many people who know about you and the SIAI are oblivious of your fiction. At least I myself only found out about it some time after learning about you and SIAI. It is generally awesome stuff and would be enough in itself to donate to SIAI. Spreading such fiction stories might actually attract more people to dig deeper and find out about SIAI than to be thrown in at the deep end. Edit: I myself came to know about SIAI due to SF, especially Orion's Arm.

If you want to eliminate hindsight bias, write down some reasons that you think justify your judgment.

Those who consider the likelihood of an event after it has occurred exaggerate their likelihood of having been able to predict that event in advance. We attempted to eliminate this hindsight bias among 194 neuropsychologists. Foresight subjects read a case history and were asked to estimate the probability of three different diagnoses. Subjects in each of the three hindsight groups were told that one of the three diagnoses was correct and were asked to s

... (read more)
0gwern
Ought to be available at: http://dl.dropbox.com/u/5317066/Arkes%20et%20al--Eliminating%20the%20Hindsight%20Bias.pdf (If this link doesn't work, let me know.)
0gwern
Link?
0utilitymonster
Link
0gwern
I meant 'link to full text please', unless I'm missing something somewhere on that page.
0utilitymonster
Don't have. I can e-mail a copy if you want (got it through university's subscription).
0gwern
Alright, email it to me at gwern0 at gmail.com, and I'll host it somewhere for everyone. Maybe Dropbox.
[-][anonymous]60

I've been wanting to change my username for a while, and have heard from a few other people who do too, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I've told people my username, a couple of people have said that they didn't remember specific things I've posted here, but had some generally positive affect associated with the name "ata". I would not want to lose that affect!) So I propose the following: Add a "Display name" field to t... (read more)

2[anonymous]
Related - verbal overshadowing, where describing something verbally blocks retrieving perceptual memories of it. Critically, verbal overshadowing doesn't always occur - sometimes verbal descriptions improve reasoning. Doesn't refute Lehrer's main point exactly, but does complicate it somewhat.

An amusing case of rationality failure: Stockwell Day, a longstanding albatross around Canada's neck, says that more prisons need to be built because of an 'increase in unreported crime.'

As my brother-in-law amusingly noted on FB, quite apart from whether the actual claim is true (no evidence is forthcoming), unless these unreported crimes are leading to unreported trials and unreported incarcerations, it's not clear why we would need more prisons.

[-][anonymous]60

I’m not yet good enough at writing posts to actually properly post something but I hoped that if I wrote something here people might be able to help me improve. So obviously people can comment however they normally would but it would be great if people would be willing to give me the sort of advice that would help me to write a better post next time. I know that normal comments do this to some extent but I’m also just looking for the basics – is this a good enough topic to write a post on but not well enough executed (therefore, I should work on my writing... (read more)

2[anonymous]
So my presumption is that 4 points means this article isn't hopeless - it hasn't attracted criticism, some people have upvoted it - but isn't up to LW standards - it hasn't been voted highly enough, and there is only 1 comment engaging with the topic. Is anyone able to give me a sense as to why it isn't good enough? Should the topic necessarily be backed up by peer-reviewed literature? Is it just not a big enough insight? Is it the writing? Is it the lack of specific examples noted by Gwern? Is it too similar to other ideas? And so on. I hope I'm not bugging people by trying to figure it out but I'm trying to get better at writing posts without filling the main bit of Less Wrong with uninteresting stuff and this seemed like a less intrusive way to do this. I also feel like the best way to improve isn't simply reading the posts but involves actually trying to write posts and (hopefully) getting feedback. Thanks
1JenniferRM
I tried composing a response a day or two ago, but had difficulty finding the words. In a nutshell, I thought you should start with last two paragraphs, boil that down to a coherent and specific claim. Then write an entirely new essay that puts that claim at the top, in an introductory/summary paragraph. The rest of the post should be spent justifying and elaborating on the claim directly and clearly, without talking about gnomes or deploying the fallacy of equivocation on the sly, but hopefully with citation to peer reviewed evidence and/or more generally accessible works about reasoning (like from a book).
1[anonymous]
Thanks for the comment. That's really helpful. So I should basically start with the idea, present it more clearly (no gnomes) and try to provide peer reviewed evidence or at least some support.
0gwern
I like this, but in Good and Real, Drescher's paradigm works because he then supplies a few examples where he invalidates a paradox-causing argument, and then goes on to apply this general approach. Aside from your dwarf hypothetical example, where do you actually check that your graph is complete?
0[anonymous]
I think that you're asking when you would check that your graph is complete in a real world case; sorry if I misunderstood. If so, take the question of whether global warming is anthropogenic. There are people who claim to have evidence that it is and people who claim to have evidence that it isn't, so the basic diagram that we have for this case is a paradox diagram similar to that in figure 2 of the article above. Now there are a number of possible responses to this: Some people could be stuck on the paradox diagram and be unsure as to the right answer, some people may have invalidated one or the other side of the argument and may have decided one or the other claim is true, and some may be adding more and more proofs to one side or the other - countering rather than invalidating. I think there's also a fourth group whose belief graph will look the same as those who have invalidated one side and have hence reached a conclusion. However, these will be people who, while they may technically know that arguments exist for the negation of their belief, have not taken opposing notions into account in their belief graph. So to them, it will look like a graph demonstrating the truth of their belief but, in fact, it's simply an incomplete paradox graph and they have some distance to go to figure out the truth of the matter. So to summarise: I think there are people on both sides of the anthropogenic global warming debate who know that purported proofs against their beliefs exist on one level but who don't factor these into their belief graphs. I think they could benefit from asking themselves whether their graph is complete. I should mention that this particular case isn't what motivated the post - in some ways I worry that by providing specific examples people stop judging an idea on its merit and start judging it based on their beliefs regarding the example mentioned and how they feel this is meant to tie in with the idea. Regardless, I could be mistaken. Is it consid
0Paul Crowley
And in one comment we've already got to "I’m not ready to waste my time on a crusade against another piece of ignorant superstition".
0Paul Crowley
Thanks to the two people who pointed this out to me in DM. I've commented, though Cyan has already linked to the essays on my blog I'd link to first.

Say a "catalytic pattern" is something like scaffolding, an entity that makes it easier to create (or otherwise obtain) another entity. An "autocatalytic pattern" is a sort of circular version of that, where the existence of an instance of the pattern acts as scaffolding for creating or otherwise obtaining another entity.

Autocatalysis is normally mentioned in the "origin of life" scientific field, but it also applies to cultural ratchets. An autocatalytic social structure will catalyze a few more instances of itself (frequentl... (read more)

0NancyLebovitz
I don't know of any work on the question, but it's a good topic. Nations seem to be autocatalytic.

"The differences are dramatic. After tracking thousands of civil servants for decades, Marmot was able to demonstrate that between the ages of 40 and 64, workers at the bottom of the hierarchy had a mortality rate four times higher than that of people at the top. Even after accounting for genetic risks and behaviors like smoking and binge drinking, civil servants at the bottom of the pecking order still had nearly double the mortality rate of those at the top."

"Under Pressure: The Search for a Stress Vaccine" http://www.wired.com/magazine/2010/07/ff_stress_cure/all/1

1NancyLebovitz
It was interesting that most of the commenters were opposed to the idea of a stress vaccine, though their reasons didn't seem very good. I'm wondering whether the vaccine would mean that people would be more inclined to accept low status (it's less painful) or less inclined to accept low status (more energy, less pessimism.) I also wonder how much of the stress from low status is from objectively worse conditions (less benign stimulus, worse schedules, more noise, etc.) as distinct from less control, and whether there's a physical basis for the inclination to crank up stress on subordinates.
4gwern
Wired has unusually crappy commentators; YouTube quality. I wouldn't put much stock in their reactions. /blatant speculation Stress response evolved for fight-or-flight - baboons and chimps fight nasty. Not for thinking or health. Reduce that, and like mindfulness meditation, one can think better and solve one's problems better. IIRC, the description made it sound like the study controlled for conditions - comparing clerical work with controlling bosses to clerical work sans controlling bosses.
0knb
Oh come on, they're bad, but they're not YouTube bad.
0NancyLebovitz
One mention is of unsupportive bosses and the other is of mean bosses. I think we need more detail to find out what is actually meant.

One little anti-akrasia thing I'm trying is editing my crontab to periodically pop up an xmessage with a memento mori phrase. It checks that my laptop lid is open, gets a random integer and occasionally pops up the # of seconds to my actuarial death (gotten from Death Clock; accurate enough, I figure):

 1,16,31,46 * * * * if grep open /proc/acpi/button/lid/LID0/state; then if [ $((`date \+\%\s` % 6)) = 1 ]; then xmessage "$(((`date --date="9 August 2074" \+\%\s` - `date \+\%\s`) / 60)) minutes left to live. Is what you are doing important?&q
... (read more)
1gwern
OK, I can't seem to get the escaping to work right with crontab no matter how I fiddle, so I've replaced the one-liner with a regular script with meaningful variable names and all:

1,14,32,26 * * * * ~/bin/bin/memento-mori

The script itself being (with the 32-bit hack mentioned below):

#!/bin/sh
set -e
if grep open /proc/acpi/button/lid/LID?/state > /dev/null
then
    CURRENT=`date +%s`
    if [ $(( $CURRENT % 8 )) = 1 ]
    then
        # DEATH_DATE=`date --date='9 August 2074' +%s`
        DEATH_DATE="3300998400"
        REMAINING=$(( $DEATH_DATE - $CURRENT ))
        REMAINING_MINUTES=$(( $REMAINING / 60 ))
        REMAINING_MINUTES_FMT=`env printf "%'d" $REMAINING_MINUTES`
        (sleep 10m && killall xmessage &)
        xmessage "$REMAINING_MINUTES_FMT minutes left to live. Is what you are doing important?"
    fi
fi
1Risto_Saarelma
Dates that far into the future don't seem to work with date on 32-bit Linux (a 32-bit time_t overflows in January 2038). Fun idea otherwise. You should report back in a month or so if you're still using it.
0gwern
I had to reinstall with 32-bit to use a document scanner, so this became a problem for me. What I did was punch my 2074 date into an online converter, and use that generated date:

-    DEATH_DATE=`date --date='9 August 2074' +%s`
+    # DEATH_DATE=`date --date='9 August 2074' +%s`
+    DEATH_DATE="3300998400"
0h-H
It might have an opposite effect to what is intended since the number would simply be too large.
0gwern
People still use 32-bit OSs? But seriously, you could probably shell out to something else. Or you could change the output - it doesn't have to be in seconds or minutes. For example, you could call date to get the current year, and subtract that against 2074 or whatever.
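The year-subtraction workaround suggested above can be sketched like this (a minimal sketch, assuming the 2074 death year from the parent comments; it trades minute precision for immunity to the 32-bit time_t limit):

```shell
# Year-granularity fallback for 32-bit systems where date cannot
# parse post-2038 dates. 2074 is the assumed death year.
CURRENT_YEAR=$(date +%Y)
YEARS_LEFT=$(( 2074 - CURRENT_YEAR ))
echo "$YEARS_LEFT years left to live. Is what you are doing important?"
```

This only uses date to format the current time, so it never asks the system to represent a timestamp beyond 2038.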

I think one of the other reasons many people are uncomfortable with cryonics is that they imagine their souls being stuck-- they aren't getting the advantages of being alive or of heaven.

5Nisan
In all honesty, I suspect another reason people are uncomfortable with cryonics is that they don't like being cold.
0lwta
what's a soul?
0Oligopsony
Well, for the people presumably feeling uncomfortable, it's an immortal spirit that houses your personality and gets attached to a body for your pilgrimage on Earth. There might be something to this for people who reject this metaphysic, even beyond unconsciously carrying it around. If you're going to come back, you don't get the secular heaven of "being fondly remembered after you die." In a long retirement or vacation, the book hasn't been shut on you. Perhaps there's something important many people find in the book being shut - of others, afterwards, being able to evaluate a life as a completed story. Someone frozen is maybe a "completed story" and maybe not.

Are there any posts people would like to see reposted? For example, Where Are We seems like it maybe should be redone, or at least put a link in About... Or so I thought, but I just checked About and the page for introductions wasn't linked, either. Huh.

4thomblake
It would be nice if we had profile pages with machine-readable information and an interface for simple queries so posts such as that one would be redundant.

Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.

You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?

I am humbled by how poorly my own personal knowledge would fare.

I suspect that people are overestimating in their replies how much could be done with Wikipedia. People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this) b) how much many technologies, even primitive ones, require large populations so that specialization, locational advantages and comparative advantage can kick in (People even in not very technologically advanced cultures have had tech levels regress when they settle large islands or when their locations get cut off from the mainland. Tasmania is the classical example of this. The inability to trade with the mainland caused large drops in tech level). So while Wikipedia makes sense, it would also be helpful to have a lot of details on do-it-yourself projects that could use pre-existing remnants of existing technology. There are a lot of websites and books devoted to that topic, so that shouldn't be too hard.

If we are reducing to a small population, we may need also to focus on getting through the first one or two generations with an intact population. That means that a handful of practical books on field surgery, midwifing, and similar basic medical issues may become very... (read more)

3arundelo
I would love to see a reality TV show about a metallurgy expert making a knife or other metal tool from scratch. The expert would be provided food and shelter but would have no equipment or materials for making metal, and so would have to find and dig up the ore themselves, build their own oven, and whatever else you would have to do to make metal if you were transported to the stone age.
3RobinZ
One problem you would face with such a show is if the easily-available ore is gone.
2JoshuaZ
Yes, this is in fact connected to a general problem that Nick Bostrom has pointed out, each time you try to go back from stone age tech to modern tech you use resources up that you won't have the next time. However, for purposes of actually getting back to high levels of technology rather than having a fun reality show, we've got a few advantages. One can use the remaining metal that is in all the left over objects from modern civilization (cars being one common easy source of a number of metals). Some metals are actually very difficult to extract from ore (aluminum is the primary example of this. Until the technologies for extraction were developed, it was expensive and had almost no uses) whereas the ruins of civilization will have those metals in near pure forms if one knows where to look.
2ABranco
The argument that no one person on the face of the Earth knows how to build a computer mouse from scratch is plausible. Matt Ridley
1arundelo
* A person buys ore, builds a smelter out of cement, and makes a sword.
* Terry Pratchett digs up ore near his house, smelts it in "a makeshift kiln built from clay and hay and fuelled with damp sheep manure", and makes a sword.

Also included: meteorites! Thank you Hacker News.
0nazgulnarsil
He [Pratchett] has to hide it from the authorities.
5KrisC
Maps. Locations of pre-disaster settlements to be used as supply caches. Locations of structures to be used for defense. Locations of physical resources for ongoing exploitation: water, fisheries, quarries. Locations of no travel zones to avoid pathogens.
3jimrandomh
Presupposing that only a limited amount of knowledge could be saved seems wrong. You could bury petabytes of data in digital form, then print out a few books' worth of hints for getting back to the technology level necessary to read it.
1NancyLebovitz
If the resources for printing are still handy. I don't feel comfortable counting on that at present levels of technology.
3RobinZ
In rough order of addition to the corpus of knowledge:

1. The scientific method.
2. Basic survival skills (e.g. navigation).
3. Edit: Basic agriculture (e.g. animal husbandry, crop cultivation).
4. Calculus.
5. Classical mechanics.
6. Basic chemistry.
7. Basic medicine.
8. Basic political science.
8NancyLebovitz
Basic sanitation!
1RobinZ
Yes! Insert sanitation between 3 and 4, and insert construction (e.g. whittling, carpentry, metal casting) between sanitation and 3.
0ABranco
For survival skills, I'd suggest buying this one before the disaster, while there's still internet.
2[anonymous]
Let's examine the problem in more detail: Different disaster scenarios would require different pieces of information, so it would help if you knew exactly what kind of catastrophe. However, if you can preserve a very large compendium of knowledge, then you can create a catalogue of necessary information for almost every type of doomsday scenario (nuclear war, environmental catastrophe, etc.) so that you will be prepared for almost anything. If the amount of information you can save is more limited, then you should save the pieces of information that are the most likely to be useful in any given scenario in "catastrophe-space." Now we have to go about determining what these pieces of information are. We can start by looking at the most likely doomsday scenarios--Yoreth, since you started the thread, what do you think the most likely ones are?
0Yoreth
I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on. (What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)
0[anonymous]
The other problem is this: if there is a disaster that wipes out such a large percentage of the Earth's population, the few people who did survive it would probably be in very isolated areas and might not have access to any of the knowledge we've been talking about anyway. Still, it is interesting to look at what knowledge our civilization rest on. It seems to me that a lot of the infrastructure we rely on in our day-to-day lives is "irreducibly complex"--for example, we know how to make computers, but this is not a necessary skill in a disaster scenario (or our ancestral environment).
0Blueberry
I am not following this. Why would the anthropic principle no longer apply if you went into space?
2katydee
I think it's a quantum immortality argument. If you, the observer, are no longer on Earth, the Earth can be destroyed because its destruction no longer necessitates your death.
2[anonymous]
A dead tree copy of Wikipedia. A history book about ancient handmade tools and techniques from prehistory to now. A bunch of K-12 school books about math and science. Also as many various undergraduate and postgraduate level textbooks as possible.
5JanetK
Wikipedia is a great answer because we know that most but not all of the information is good. Some is nonsense. This will force the future generations to question and maybe develop their own 'science' rather than worship the great authority of 'the old and holy books'.
2JoshuaZ
The knowledge about science issues generally tracks our current understanding very well. And historical knowledge that is wrong will be extremely difficult for people to check after an apocalyptic event, and even then is largely correct. In fact, if Wikipedia's science content really were bad enough to matter, it would be an awful thing to bring into this situation, since having correct knowledge or not could alter whether or not humanity survives at all.
3Oscar_Cunningham
Wikipedia would also contain a lot of info about current people and places, which would no longer be remotely useful.
1NancyLebovitz
And a lot of popular culture which would no longer be available.
2sketerpot
A dead-tree copy of Wikipedia has been estimated at around 1,420 volumes. Here's an illustration, with a human for scale. It's big. You might as well go for broke and hole up in a library when the Big Catastrophe happens.
2mstevens
One of these http://thewikireader.com/ with rechargeable batteries and a solar charger could work.
3NihilCredo
Until some critical part oxidizes or otherwise breaks. Which will likely be a long time before the new society is able to build a replacement.
1listic
But the WikiReader is probably a step in the right direction that is worth mentioning. While most current technologies depend on many other technologies to be useful (cellular phones need cellular networks, most gadgets won't last a day on their internal batteries, etc.), the WikiReader is a welcome step in the direction less travelled. I only hope that we will have more of that.
1Eneasz
How to start a fire only using sticks. How to make a cutting blade from rocks. How to create a bow, and make arrows. Basic sanitation.
2NancyLebovitz
That seems like advice for living in the woods-- not a bad idea, but it probably needs to be adjusted for different environments (find water in dry land, staying warm in extreme cold, etc.) and especially for scavenging from ruins. Any thoughts about people skills you'd need after the big disaster?
0Eneasz
I thought about those a bit, but came to a few conclusions that made sense to me. Being in a very dry land is simply a bad idea, best to move. Any group of survivors that is more than three days from fresh water won't be survivors, and once they've made it to the fresh water source there won't be many reasons to stray far from it for at least a couple generations, so water-finding skills will probably not be useful and be quickly lost. Staying warm in extreme cold would be covered both by the fire-starting skills and the bow-making skills. I wanted to put something about people skills, but I don't have any myself and didn't know what I could possibly say that would be remotely useful. Hopefully someone with more experience on that subject will survive as well. :)
1mstevens
I'm tempted to say "a university library" as the short answer. More specifically, whatever I could get from the science and engineering departments. Pick the classic works in each field if you have someone to filter them. Look for stuff that's more universal than specific to the way we've done things - in computing terms, you want The Art of Computer Programming and not The C Programming Language. In the short term, anything you can find on farming and primitive medicine - all the stuff the better class of survivalist would have on their bookshelf.
0ianshakil
I only need one item: The Holy Bible (kidding)
0xamdam
Depends what level you want to achieve post-catastrophe; some, if not most, of your resources and knowledge will be needed to deal with specific effects. In short, your suitcase will be full of survivalist and medical material. In a thought experiment where you freeze yourself until the ecosystem is restored, you can probably use an algorithm of taking the best library materials from each century, corrected for errors, to achieve the level of that century. Both Robinson Crusoe and Jules Verne's "Mysterious Island" explore similar bootstrapping scenarios; interestingly, both use some "outside injections".

There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that for example you should be careful that when modeling human behavior your models don't become people in their own right.

I think this is bunk. Consider the following:

--

Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.

Does this still hold if the two processes are not made to diverge; that is, if they are determi... (read more)

2ata
You make good points. I do think that multiple independent identical copies have the same moral status as one. Anything else is going to lead to absurdities like those you mentioned, like the idea of cutting a mechanical computer in half and doubling its moral worth. I have for a while had a feeling that the moral value of a being's existence has something to do with the amount of unique information generated by its mind, resulting from its inner emotional and intellectual experience. (Where "has something to do with" = it's somewhere in the formula, but not the whole formula.) If you have 100 identical copies of a mind, and you delete 99 of them, you have not lost any information. If you have two slightly divergent copies of a mind, and you delete one of them, then that's bad, but only as bad as destroying whatever information exists in it and not the other copy. Abortion doesn't seem to be a bad thing (apart from any pain caused; that should still be minimized) because a fetus's brain contains almost no information not compressible to its DNA and environmental noise, neither of which seems to be morally valuable. Similar with animals; it appears many animals have some inner emotional and intellectual experience (to varying degrees), so I consider deleting animal minds and causing them pain to have terminal negative value, but not nearly as great as doing the same to humans. (I also suspect that a being's value has something to do with the degree to which its mind's unique information is entangled with and modeled (in lower resolution) by other minds, à la I Am A Strange Loop.)
0Pavitra
I think... there's more to this wrongness-feeling I have than I've expressed. I would readily subject a million forks of myself to horrific suffering for the moderate benefit of just one of me. The main reason I'd have reservations about releasing myself on the internet for anyone to download would be because they could learn how to manipulate me. The main problem I have with slavery and starvation is that they're a waste of human resources, and that monolithic power structures are brittle against black swans. In short, I don't consider it a moral issue what algorithm is computed to produce a particular result. I'm not sure how to formalize this properly.
[-][anonymous]40

Some hobby Bayesianism. A typical challenge for a rationalist is that there is some claim X to be evaluated; it seems preposterous, but many people believe it. How should you take account of this when considering how likely X is to be true? I'm going to propose a mathematical model of this situation and discuss two of its features.

This is based on a continuing discussion with Unknowns, who I think disagrees with what I'm going to present, or with its relevance to the "typical challenge."

Summary: If you learn that a preposterous hypothesis X i... (read more)

0[anonymous]
Here is my proposal for an ansatz for P(Y(n+1)|Y(n)): that is, given that at least n people already believe X, how likely it is that at least one more person also believes X. Let N be the total population of the world. If n/N is close to zero, then I expect P(Y(n+1)|Y(n)) to be close to zero as well, and if n/N is close to 1, then P(Y(n+1)|Y(n)) is also close to 1. That is, if I know that a tiny proportion of people believe something, that's very weak evidence that a slightly larger proportion believe it also, and if I know that almost everyone believes it, that's very strong evidence that even more people believe it.

One family of functions with this property is f(n) = (n/N)^C, where C is some fixed positive number. It's actually convenient to set C = c/N, where c is some other fixed positive number. I don't have a story to tell about why P(Y(n+1)|Y(n)) should behave this way; I bring it up only because f(n) does the right thing near 1 and N, and is pretty simple.

To evaluate P(Y(n)), we take the integral of (c/N)log(t/N) dt from 1 to n and exponentiate it. The result is, up to a multiplicative constant, exp(c(x log x - x)) = (x/e)^(cx), where x = n/N. I think it's a good idea to leave this as a function of x. Write K for the multiplicative constant. We have P(proportion x of the population believes X) = K(x/e)^(cx). A graph of this function for K = 1, c = 1 can be found here, and a graph of its reciprocal (whose relevance is explained in the parent) can be found here.
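A quick numerical sketch of the ansatz, just to check that the conditional probability behaves as claimed near the endpoints (function and variable names are my own, not from the comment):

```python
import math

def p_next_given_n(n, N, c):
    """Ansatz: P(at least n+1 believers | at least n believers) = (n/N)^(c/N)."""
    return (n / N) ** (c / N)

def p_proportion(x, c, K=1.0):
    """Approximate density P(proportion x of the population believes X)
    = K * (x/e)^(c*x), obtained by exponentiating the integral of log f."""
    return K * (x / math.e) ** (c * x)

# With c = N (i.e. the exponent C = 1), the endpoint behavior is visible:
N = 1_000_000
print(p_next_given_n(10, N, c=N))      # tiny proportion of believers -> near 0
print(p_next_given_n(N - 1, N, c=N))   # almost everyone believes -> near 1
```

This is only a sanity check of the shape of the ansatz; as the parent notes, the constants K and c would have to be fitted to data for any real claim.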
1RobinZ
It's an interesting analysis - have you confirmed the appearance of that distribution with real-world data? I suppose you'd need a substantial body of factual claims about which statistical information is available...
0[anonymous]
Thanks. I of course have no data, although I think there are lots of surveys done about weird things people believe. But even if this is the correct distribution, I think it would be difficult to fit data to it, because I would guess/worry that the constants K and c would depend on the nature of the claim. (c is so far just an artifact of the ansatz. K is something like P(Y(1)|Y(0)). Different for bigfoot than for Christianity.) Do you have any ideas?

Alright, I've lost track of the bookmark and my google-fu is not strong enough with the few bits and pieces I remember. I remember seeing a link to a story in a lesswrong article. The story was about a group of scientists who figured out how to scan a brain, so they did it to one of them, and then he wakes up in a strange place and then has a series of experiences/dreams which recount history leading up to where he currently is, including a civilization of uploads, and he's currently living with the last humans around... something like that. Can anybody help me out? Online story, 20 something chapters I think... this is driving me nuts.

4Risto_Saarelma
After Life
0NQbass7
Thank you. Bookmarked.

I think I may have artificially induced an Ugh Field in myself.

A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.

Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."

Now that the week's over, I don't even want to think about X any more. It just feels too weird.

And maybe that's a good thing.

4Cyan
I have also artificially induced an Ugh Field in myself. A few months ago, I was having a horrible problem with websurfing procrastination. I started using Firefox for browsing and LeechBlock to limit (but not eliminate) my opportunities for websurfing instead of doing work. I'm on a Windows box, and for the first three days I disabled IE, but doing so caused knock-on effects, so I had to re-enable it. However, I knew that resorting to IE to surf would simply recreate my procrastination problem, so... I just didn't. Now, when the thought occurs to me to do so, it auto-squelches.
5Unknowns
I predict with 95% confidence that within six months you will have recreated your procrastination problem with some other means.
6Cyan
Your lack of confidence in me has raised my ire. I will prove you wrong!
3Unknowns
Did you start procrastinating again?
1Cyan
Yep. Eventually I sought medical treatment.
3Unknowns
To be settled by February 8, 2011!

What simple rationality techniques give the most bang for the buck? I'm talking about techniques you might be able to explain to a reasonably smart person in five minutes or less: really the basics. If part of the goal here is to raise the sanity waterline in the general populace, not just among scientists, then it would be nice to have some rationality techniques that someone can use without much study.

Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claim... (read more)

6DuncanS
I think some of the statistical fallacies that most people fall for are quite high up the list. One such is the "What a coincidence!" fallacy. People notice that some unlikely event has occurred, and wonder how many millions to one against this event must have been, and yet it actually happened! Surely this means that my life is influenced by some supernatural influence! The typical mistake is to calculate only the likelihood of the particular event that occurred. Nothing wrong with that, but one should also compare that number against the whole basket of other possible unlikely events that you would have noticed if they had happened (of which there are surely millions), and all the possible occasions on which all these unlikely events could also have occurred. When you do that, you discover that the likelihood of some unlikely thing happening is quite high, which is in accordance with our experience that unlikely events do actually happen. Another way of looking at it is that non-notable unlikely events happen all the time. Look, that particular car just passed me at exactly 2pm! Most are not noticeable. But sometimes we notice that a particular unlikely event just occurred, and of course it causes us to sit up and take notice. The question is how many other unlikely events you would also have noticed. The key rational skill here is noticing the actual size of the set of unlikely things that might have happened, and would have caught our attention if they had.
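The arithmetic behind this can be made concrete. A minimal sketch, with purely illustrative numbers: if there are n independent opportunities for some individually unlikely event to occur, the chance that at least one occurs is 1 - (1 - p)^n.

```python
def prob_some_coincidence(p_each, n_chances):
    """Probability that at least one of n independent events occurs,
    each with individual probability p_each: 1 - (1 - p)^n."""
    return 1 - (1 - p_each) ** n_chances

# A one-in-a-million event, with a million noticeable opportunities
# for *some* such event to occur:
p = prob_some_coincidence(1e-6, 1_000_000)
print(round(p, 3))  # roughly 0.632 -- more likely than not
```

The point of the fallacy is precisely that people compute p_each and gasp, when the relevant number is the aggregate probability over the whole basket of would-have-been-noticed events.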
3Larks
I'm going to be running a series of Rationality & AI seminars with Alex Flint in the Autumn, where we'll introduce aspiring rationalists to new concepts in both fields; standard cognitive biases, a bit of Bayesianism, some of the basic problems with both AI and Friendliness. As such, this could be a very helpful thread. We were thinking of introducing Overconfidence Bias; ask people to give 90% confidence intervals, and then reveal (surprise surprise!) that they're wrong half the time.
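Scoring such an exercise is simple enough to sketch. A minimal version (the questions and numbers here are hypothetical, not from the seminar):

```python
def calibration_hit_rate(answers):
    """answers: list of (low, high, true_value) triples, one per stated
    90% confidence interval. Returns the fraction of intervals that
    actually contain the true value."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

# Well-calibrated 90% intervals should contain the truth ~90% of the time;
# naive respondents typically land closer to 50%.
sample = [(1900, 1950, 1969),          # e.g. "year of the first moon landing"
          (300_000, 400_000, 356_400),
          (5, 15, 11)]
print(calibration_hit_rate(sample))    # 2 of 3 intervals hit
```

Comparing the hit rate to the stated 90% makes the overconfidence result immediate and quantitative.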
2sketerpot
Since it seemed like this could be helpful, I expanded this into a top-level post. That 90% confidence interval thing sounds like one hell of a dirty trick. A good one, though.
3RobinZ
The concept of inferential distance is good. You wouldn't want to introduce it in the context of explaining something complicated - you'd just sound self-serving - but it'd be a good thing to crack out when people complain about how they just can't understand how anyone could believe $CLAIM. Edit: It's also a useful concept when you are thinking about teaching.
2RobinZ
#3 is a favorite of mine, but I like #1 too. How about "Your intuitions are not magic"? Granting intuitions the force of authority seems to be a common failure mode of philosophy.
1sketerpot
That's a good lesson to internalize, but how do you get someone to internalize it? How do you explain it (in five minutes or less) in such a way that someone can actually use it? I'm not saying that there's no easy way to explain it; I just don't know what that way would be. When I argue with someone who acts like their intuitions are magic, I usually go back to basic epistemology: define concisely what it means to be right about whatever we're discussing, and show that their intuitions here aren't magic. If there's a simple way to explain in general that intuition isn't magic, I'd really love to hear it. Any ideas?
2DuncanS
Given that we haven't constructed a decent AI, and don't know how those intuitions actually work, we only really believe they're not magic on the grounds that we don't believe in magic generally, and don't see any reason why intuitions should be an exception to the rule that all things can be explained. Perhaps an easier lesson is that intuitions can sometimes be wrong, and it's useful to know when that happens so we can correct for it. For example, most people are intuitively much more afraid of dying in dramatic and unusual ways (like air crashes or psychotic killers) than in more mundane ways like driving a car or eating unhealthy foods. Once it's established that intuitions are sometimes wrong, the fact that we don't exactly know how they work isn't so dangerous to one's thinking.
2RobinZ
Well, I thought Kaj_Sotana's explanation was good, but the five-minute constraint makes things very difficult. I tend to be so long-winded that I'm not sure I could get across any insight in five minutes, honestly, but you're right that "Your intuitions are not magic" is likely to be harder than many.

Does anyone know where the page that used to live here can be found?

It was an experiment where two economists were asked to play 100 turn asymmetric prisoners dilemma with communication on each turn to the experimenters, but not each other.

It was quite amusing in that even though they were both economists and should have known better, the guy on the 'disadvantaged' side was attempting to have the other guy let him defect once in a while to make it "fair".

2Douglas_Knight
google archive.org
2gwern
BEHOLD!

"CIA Software Developer Goes Open Source, Instead":

"Burton, for example, spent years on what should’ve been a straightforward project. Some CIA analysts work with a tool, “Analysis of Competing Hypotheses,” to tease out what evidence supports (or, mostly, disproves) their theories. But the Java-based software is single-user — so there’s no ability to share theories, or add in dissenting views. Burton, working on behalf of a Washington-area consulting firm with deep ties to the CIA, helped build on spec a collaborative version of ACH. He tr

... (read more)
4Rain
Far more interesting than the software is the chapter in the CIA book Psychology of Intelligence Analysis where they describe the method: Summary and conclusions:

What's the policy on User pages in the wiki? Can I write my own for the sake of people having a reference when they reply to my posts, or are they only for somewhat accomplished contributers?

4Blueberry
I can't imagine any reason why it would be a problem to make a User page. Go ahead.
2WrongBot
I haven't seen any sort of policy articulated. I just sort of went for it, and haven't gotten any complaints yet. Personally, I'd love to see more people with wiki user pages, since the LW site itself doesn't have much in the way of profile features.
0gwern
My default assumption has been that, unless otherwise stated, all the norms and conventions of Wikipedia apply to the LW wiki. The English Wikipedia, at least, lets you have a user page for any reason you want.

It might be useful to have a short list of English words that indicate logical relationships or concepts often used in debates and arguments, so as to enable people who are arguing about controversial topics to speak more precisely.

Has anyone encountered such a list? Does anyone know of previous attempts to create such lists?

Eliezer has written a post (ages ago) which discussed a bias when it comes to contributions to charities. Fragments that I can recall include considering the motivation for participating in altruistic efforts in a tribal situation, where having your opinion taken seriously is half the point of participation. This is in contrast to donating 'just because you want thing X to happen'. There is a preference to 'start your own effort, do it yourself' even when that would be less efficient than donating to an existing charity.

I am unable to find the post in question - I think it is distinct from 'the unit of caring'. It would be much appreciated if someone who knows the right keywords could throw me a link!

6WrongBot
Your Price for Joining?
0wedrifid
That's it. Thank you!

The visual guide to a PhD: http://matt.might.net/articles/phd-school-in-pictures/

Nice map–territory perspective.

John Baez's This Week's Finds in Mathematical Physics has its 300th and last entry. He is moving to WordPress and Azimuth. He states he wants to concentrate on futures, and has upcoming interviews with:

Tim Palmer on climate modeling and predictability, Thomas Fischbacher on sustainability and permaculture, and Eliezer Yudkowsky on artificial intelligence and the art of rationality. A Google search returns no matches for Fischbacher + site:lesswrong.com and no hits for Palmer +.

That link to Fischbacher that Baez gives has a presentation on cognitive distortio... (read more)

[-][anonymous]30

Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates; or, at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.

Now this is not necessarily a binary distinction, if shrimp have intrinsic moral value it doe... (read more)

2FAWS
Less absurd than that some organism is infinitely more valuable than its sibling that differs in lacking a single mutation (in the case of the first organism of a particular species to have evolved "high" enough to have minimal moral value)?
1GrateGoo
Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to tell, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many). The instrumental value of most humans is enormously higher than the intrinsic value of the same persons - given that they do sufficiently good things.
0Tiiba
My answer: if it shows signs of not wanting something to happen, such as avoiding a situation, it's best not to have it happen. Of course, simple stimulus response doesn't count, but if an animal can learn, it shouldn't be tortured for fun. This only applies to animals, though. I'm not sure about machines.
3WrongBot
There isn't a very meaningful distinction between animals and machines. What does or doesn't count as a "simple stimulus response"? Or learning?
0Tiiba
Okay, more details: if an animal's behavior changes when it's repeatedly injured, it can learn. And learning is goal-oriented. But if it always does the same thing in the same situation, whatever that action is, it doesn't correspond to a desire. And the reason why this is important for animals is that I assume that whatever it is that suffering is, I guess that it evolved quite long ago. After all, avoiding injury is a big part of the point of having a brain that can learn.
3WrongBot
I've programmed a robot to behave in the way you describe, treating bright lights as painful stimuli. Was testing it immoral?
2Tiiba
That's why I said it's hairier with machines. Um, actual pain or just disutility?
0WrongBot
That would depend pretty heavily on how you define pain. This is a good question; my first instinct was to say that they're the same thing, but it's not quite that simple. Pain in animals is really just an inaccurate signal of perceived disutility. The robot's code contained a function that "punished" states in which its photoreceptor was highly stimulated, and the robot made changes to its behavior in response, but I'm really not sure if that's equivalent to animal pain, or where exactly that line is.
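A toy version of the kind of loop WrongBot describes might look like this. To be clear, this is entirely my reconstruction under stated assumptions (a scalar light level, a fixed punishment threshold, value-based action selection), not the actual robot code:

```python
import random

def punishment(light_level, threshold=0.8):
    """Negative reward when the photoreceptor is highly stimulated."""
    return -1.0 if light_level > threshold else 0.0

def train(steps=2000, lr=0.1):
    """Learn action values for two hypothetical behaviors via the
    punishment signal, using epsilon-greedy selection."""
    value = {"approach": 0.0, "avoid": 0.0}
    random.seed(0)  # deterministic for reproducibility
    for _ in range(steps):
        if random.random() < 0.1:                  # explore
            action = random.choice(list(value))
        else:                                      # exploit
            action = max(value, key=value.get)
        light = 0.9 if action == "approach" else 0.1
        value[action] += lr * (punishment(light) - value[action])
    return value

v = train()
print(v["avoid"] > v["approach"])  # the agent learns to avoid bright light
```

Whether updating a value table in response to a punished state variable counts as "pain" is exactly the question under dispute; the sketch only shows how little machinery the behavioral description requires.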
0Cyan
Pain has been the topic of a top-level post. I think my own comment on that thread is relevant here.
0WrongBot
Ahh, I hadn't seen that before. Thanks for the link. So, did my robot experience suffering then? Or is there some broader category of negative stimulus that includes both suffering and the punishment of states in which certain variables are above certain thresholds? I think it's pretty clear that the robot didn't experience pain, but I'm still confused.

With regard to the recent proof of P!=NP: http://predictionbook.com/predictions/1588

1Paul Crowley
With no time limit, how can you ever win that one?
1gwern
No time limit?

Would people be interested in a place on LW for collecting book recommendations?

I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found Great Books of Failure, an article which hadn't crossed my path before.

There's a recent thread about books for a gifted young tween which might or might not get found by someone looking for good books..... and so on.

Would it make more sense to have a top level article for book recommendations or put it in the wiki? Or both?

1[anonymous]
Considering most of my favorite books are the result of mentions in comment threads here, I'd say a book recommendation thread is in order. Tangential, but I remember "The Logic of Failure" as mostly covering mental phenomena I was already familiar with, plus generalizations from computer experiments that I didn't find particularly compelling. I'll have to give it another look.
0NancyLebovitz
I liked the section near the beginning about the various ways of being bad at optimizing complex computer scenarios. It was a tidy description of the ways people think too little about what they're doing and/or overfocus on the wrong things. Part of my enjoyment was seeing those matters described so compactly, and part of it was the emotional tone which combined a realization that this is a serious problem with a total lack of gloating over other people's idiocy. That last may indicate that I've been spending too much time online. If you didn't notice anything new to you in the book the first time, there may not be a good reason for you to reread it.
1Morendil
I'd say new top-level thread. The wiki can get a curated version of that.
2Clippy
I know. Does any human mathematician really doubt that?
6Unknowns
I've been becoming more and more convinced that Kevin and Clippy are the same person. Besides Clippy's attempt to get money for Kevin, one reason is that both of them refer to people with labels like "User:Kevin". More evidence just came in here, namely these comments within 5 minutes of each other.
0Clippy
I'm not User:Kevin.
4wedrifid
Explain why I should consider this to be evidence that you are not User:Kevin. (This is not rhetorical. It is something worth exploring. How does this instance of a non-human agent gain credibility? How can myself and such an agent build and maintain cooperation in the game of credible communication despite incentives to lie? Has Clippy himself done any of these things?)
3Clippy
Perhaps you shouldn't. But there's a small chance that, if I were a human like User:Kevin, and other Users had made such inferences correctly identifying me, I would regard this time as the optimal one for revealing my true identity. Therefore, my post above is slightly informative.
0Unknowns
That could easily be consistent with my statement, if taken in a certain sense.
0Clippy
Okay. Then believe that I am User:Kevin, if that's what it takes to stop being so bigoted toward me. ⊂≣\
5multifoliaterose
Yes, there are human mathematicians who doubt that P is not equal to NP. See "Guest Column: The P=?NP Poll" (http://www.cs.umd.edu/~gasarch/papers/poll.pdf) by William Gasarch, in which a poll was taken of 100 experts, 9 of whom ventured the guess that P = NP and 22 of whom offered no opinion on how the P vs. NP question will be resolved. The document has quotes from various of the people polled elaborating on their beliefs on this matter.
1Kevin
How do you know you know?
3JoshuaZ
There's a very good summary by Scott Aaronson describing why we believe that P is very likely to be not equal to NP. However, Clippy's confidence seems unjustified. In particular, there was a poll a few years ago showing that a majority of computer scientists believe that P ≠ NP, but a substantial fraction do not. (The link was here but seems to be not functioning at the moment (according to umd.edu's main page today they have a scheduled outage of most Web services for maintenance, so I'll check again later; I don't remember the exact numbers, so I can't cite them right now).) This isn't precisely my area, but speaking as a mathematician whose work touches on complexity issues, I'd estimate around a 1/100 chance that P=NP.
2Sniffnoy
URL is repeated twice in link?
0JoshuaZ
Thanks, fixed.
2Clippy
Because if it were otherwise -- if verifying a solution were of the same order of computational difficulty of finding it -- it would be a lot harder to account for my observations than if it weren't so. For example, verifying a proof would be of similar difficulty to finding the proof, which would mean nature would stumble upon representations isomorphic to either with similar probability, which we do not see. The possibility that P = NP but with a "large polynomial degree" or constant is too ridiculous to be taken seriously; the algorithmic complexity of the set of NP-complete problems does not permit a shortcut that characterizes the entire set in a way that would allow such a solution to exist. I can't present a formal proof, but I have sufficient reason to predicate future actions on P ≠ NP, for the same reason I have sufficient reason to predicate future actions on any belief I hold, including beliefs about the provability or truth of mathematical theorems.
2Kevin
Most human mathematicians think along similar lines. It will still be a big deal when P ≠ NP is proven, if for no other reason that it pays a million dollars. That's a lot of paperclips. Let me know if you think you can solve any of these! http://www.claymath.org/millennium/
0[anonymous]
Would you elaborate.
-4Clippy
Under the right conditions, yes.

Goodhart sighting? Misunderstanding of causality sighting? Check out this recent economic analysis on Slate.com (emphasis added):

For much of the modern American era, inflation has been viewed as an evil demon to be exorcised, ideally before it even rears its head. This makes sense: Inflation robs people of their savings, and the many Americans who have lived through periods of double-digit inflation know how miserable it is. But sometimes a little bit of inflation is valuable. During the Great Depression, government policies deliberately tried to creat

... (read more)
0RobinZ
It's possible - the next sentence after your quotation reads: ...which is at least a causal mechanism that would go the correct direction. That said, the part you quoted sounds pretty bad.
0h-H
But that seems to miss the whole point of depressions: over-inflation has to lead to deflation or X, and X is bad (angry masses, civil unrest, collapsed governments, large-scale wars, etc.). Not many people have much money to begin with, and we should raise the prices of homes and whatnot? People who have foreclosed need to foreclose, just like companies that go broke need to (the bailouts were a huge mistake), or else your financial model is broken and you actually want to support net-negative behavior in the economy. Now, I'm no economics major, but I don't need that degree to know this: in a nutshell, if you have an asset (a house, for example) whose market price is 100k, but it and all the other houses in the area are being sold at 500k, and someone (most people, anyway) actually buys that house by borrowing money they can never hope to pay back with interest in any reasonable amount of time, then that house's price simply has to go down, or else you have X. How does 'increasing inflation' solve the fundamental problem of there being no more wealth to pay for anything with? The US has simply borrowed more than it can pay back for decades, if ever; inflation will only cause matters to worsen, not improve. Yes, all governments have debt and survive, and a government having zero debt is unlikely to happen anytime soon, but that's fine as long as the debt is manageable. It might seem like that if we take 'official' reports of the Outstanding Public Debt being around $13.3 trillion: even though that's pretty bad, we'd just need tighter purse strings and some measures here and there, and in a few decades it would be mostly paid off. Unfortunately, that's not going to happen. Factor in the remaining 'unfunded liabilities', i.e. the benefits (money) promised by government to the elderly, sick, unemployed and so on (Social Security et al.), and our debt is over $60 trillion; each citizen's equal share of the burden amounts to around a quarter million US dollars. Put raising inflation deliberately in such a

Last night I introduced a couple of friends to Newcomb's Problem/Counterfactual Mugging, and we discussed it at some length. At some point, we somehow stumbled across the question "how do you picture Omega?"

Friend A pictures Omega as a large (~8 feet) humanoid with a deep voice and a wide stone block for a head.

When Friend B hears Omega, he imagines Darmani from Majora's mask (http://www.kasuto.net/image/officialart/majora_darmani.jpg)

And for my part, I've always pictured him a humanoid with paper-white skin in a red jumpsuit with a cape (the cap... (read more)

9cousin_it
I've always pictured Omega like this: suddenly I'm pulled from our world and appear in a sterile white room that contains two boxes. At the same moment I somehow know the problem formulation. I open one box, take the million, and return to the world.
1h-H
This, down to the white room and being pulled. Omega doesn't Have form or personality. He's beyond physics.
1Spurlock
And when you get counterfactually mugged, you're in a sterile white room with a vending machine bill acceptor planted in the wall?
1cousin_it
No, just an empty room. If I take a bill out of my pocket and hold it in front of me, it disappears and I go back. If I say "no", I go back.
0JamesAndrix