User Profile


Recent Posts


No posts to display.

Recent Comments

... and stuns Akon (or everyone). He then opens a channel to the Superhappies, and threatens to detonate the star - thus preventing the Superhappies from "fixing" the Babyeaters, their highest priority. He uses this to blackmail them into fixing the Babyeaters while leaving humanity untouched.

No, he says "you're the first person who etc..."

Is this a "failed utopia" because human relationships are too sacred to break up, or is it a "failed utopia" because the AI knows what it should really have done but hasn't been programmed to do it?

that can support the idea that the much greater incidence of men committing acts of violence is "natural male aggression" that we can't ever eliminate.

The whole point of civilisation is to defeat nature and all its evils.

<i>... how isn't atheism a religion? It has to be accepted on faith, because we can't prove there isn't a magical space god that created everything.</i>

I think there's a post somewhere on this site that makes the reasonable point that "is atheism a religion?" is not an interesting question. The in...

<i>My issue with this is that we don't, actually, have a philosophical/rational/scientific vision of capital-T Truth yet, despite all of our efforts. (Descartes, Spinoza, Kant, etc.)</i>

Truth is whatever describes the world the way it is.

<i>Even the capital-T Truth believers will admit that we d...

Paul, that's a good point.

Eliezer: <i>If all I want is money, then I will one-box on Newcomb's Problem.</i>

Mmm. Newcomb's Problem features the rather weird case where the relevant agent can predict your behaviour with 100% accuracy. I'm not sure what lessons can be learned from it for the more n...

<i>If a serial killer comes to a confessional, and confesses that he's killed six people and plans to kill more, should the priest turn him in? I would answer, "No." If not for the seal of the confessional, the serial killer would never have come to the priest in the first place.</i>

It's importa...

Benja: <i>But it doesn't follow that you should conclude that the other people are getting shot, does it?</i>

I'm honestly not sure. It's not obvious to me that you shouldn't draw this conclusion if you already believe in MWI.

<i>(Clearly you learned nothing about that, because wh...

Benja: <i>Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly every...