

Rationality Boot Camp

"Walking on the moon is power! Being a great wizard is power! There are kinds of power that don't require me to spend the rest of my life pandering to morons!"

Rationality Boot Camp

If you are solving an equation, debugging a software system, designing an algorithm, or doing any number of other cognitive tasks, understanding the methods of rationality involved in interacting with other people will be of no use to you (unless some of the material happens to apply across domains). These are things that have to be done within and by yourself.

Rationality Boot Camp

It appears that the majority of the activities, and the primary focus, of this boot camp are on rationality when interacting with others... social rationality training. While some of this may apply across domains, my interest is strictly in "selfish" rationality... the kind of rationality that one uses internally and entirely on one's own. So I don't really know whether this would be worth the considerable expense of 10 "all-consuming" weeks. Maybe it would help if I had more information on the exact curriculum you are proposing.

Normal Cryonics

where the hell would you find a group like that!?

That Magical Click

well in that case, can you explain that emoticon (:3)? I have yet to hear any explanation that makes sense :)

That Magical Click

Is this really relevant ...

That Magical Click

Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?


Amazon.com Review

Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military maneuvers, he persuades readers to think small and focus on the meaning of "thin slices" of behavior. The key is to rely on our "adaptive unconscious"--a 24/7 mental valet--that provides us with instant and sophisticated information to warn of danger, read a stranger, or react to a new idea.

Gladwell includes caveats about leaping to conclusions: marketers can manipulate our first impressions, high arousal moments make us "mind blind," focusing on the wrong cue leaves us vulnerable to "the Warren Harding Effect" (i.e., voting for a handsome but hapless president). In a provocative chapter that exposes the "dark side of blink," he illuminates the failure of rapid cognition in the tragic stakeout and murder of Amadou Diallo in the Bronx. He underlines studies about autism, facial reading and cardio uptick to urge training that enhances high-stakes decision-making. In this brilliant, cage-rattling book, one can only wish for a thicker slice of Gladwell's ideas about what Blink Camp might look like. --Barbara Mackoff

Reference class of the unclassreferenceable

If you actually look a little deeper into cryonics, you can find more useful reference classes than "things promising eternal (or very long) life":


  1. Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest near 0°C (32°F) (Cryobiology 23, 483-494 (1986)). There is no basic reason why such states of "suspended animation" could not be extended indefinitely at even lower temperatures (although the technical obstacles are enormous).

  2. Existing cryopreservation techniques, while not yet reversible, can preserve the fine structure of the brain with remarkable fidelity. This is especially true for cryopreservation by vitrification. The observations of point 1 make clear that survival of structure, not function, determines survival of the organism.

  3. It is now possible to foresee specific future technologies (molecular nanotechnology and nanomedicine) that will one day be able to diagnose and treat injuries right down to the molecular level. Such technology could repair and/or regenerate every cell and tissue in the body if necessary. For such a technology, any patient retaining basic brain structure (the physical basis of their mind) will be viable and recoverable.

I up-voted the post because you talked about two good, basic thinking skills. Paying attention to the weight of priors is a good thinking technique in general, and your examples of cryonics and AI are good points, but your conclusion fails: the argument you made does not mean they have zero chance of happening. A more useful takeaway is that any given person claiming to have created AI has close to zero chance of having actually done it (unless you have some incredibly good evidence:

"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen

). The thinking technique of abstracting and "stepping back from" or "outside of" your current situation, or using "reference class forecasting", also works very generally. Short post, though; I was hoping you would expand on it more.

Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

As a question for everyone (and as a counterargument to CEV):

Is it okay to take away an individual human's rights to life and property by force, as opposed to volitionally through a signed contract?

The use of force here does include imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity, though it could maybe(?) exclude their individual extrapolated volition.

A) Yes B) No

I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could point to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.
