User Profile


Recent Posts


No posts to display.

Recent Comments

One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they're being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:

1. ...

@Wei: "p(n) will approach arbitrarily close to 0 as you increase n."

This doesn't seem right. A sequence that requires knowledge of BB(k) has O(2^-k) probability according to our Solomonoff inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, then BB(k+1) will o...
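The 2^-k weighting this argument relies on can be made a little more explicit. A rough sketch (my gloss, not part of the original comment): under the Solomonoff prior, a hypothesis generated by a program p on the universal machine U receives weight 2^(-|p|),

$$
M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs a sequence beginning with } x} 2^{-|p|}.
$$

A model that hard-codes the value of BB(k) needs roughly k bits to describe, since knowing BB(k) lets one decide halting for all k-bit programs and so the value cannot be compressed much below k bits. Its prior weight is therefore on the order of $2^{-k}$, which is the O(2^-k) figure in the comment.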

"If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?"

Suppose they run a BB evaluator for all of time. They would, indeed, have...

1. One difference between optimization power and the folk notion of "intelligence": Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power is not based on social...

Chip, I don't know what you mean by "The AI Institute", but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.

The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you're correct, compared with meta-reasoning position B, is often a difficult one.

When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the he...

CERN on its LHC:

"Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC..."

"Wilczek was asked to serve on the committee 'to pay the wages of his sin, since he's the one that started all this with his letter.'"

Moral: if you're a practicing scientist, don't admit the possibility of risk, or you will be punished. (No, this isn't something I've drawn from this case stud...

@Vladimir: "We can't bother to investigate every crazy doomsday scenario suggested"

This is a strawman; nobody is suggesting investigating "every crazy doomsday scenario suggested". A strangelet catastrophe is *qualitatively* possible according to accepted physical theories, and was propo...