User Profile


Recent Posts


The Practical Argument for Free Will


Alien Implant: Newcomb's Smoking Lesion


David Allen vs. Mark Forster


[Link] Minimizing Motivated Beliefs


Recent Comments

"But of course the claims are separate, and shouldn't influence each other."

No, they are not separate, and they should influence each other.

Suppose your terminal value is squaring the circle using Euclidean geometry. When you find out that this is impossible, you should stop trying. You should g…

Nope. There is no composition fallacy where there is no composition. I am replying to your position, not to mine.

I do care about tomorrow, which is not the long run.

I don't think we should assume that AIs will have any goals at all, and I rather suspect they will not, in the same way that humans do not, only more so.

Not really. I don't care if that happens in the long run, and many people wouldn't.

I considered submitting an entry basically saying this, but decided that it would be pointless since obviously it would not get any prize. Human beings do not have coherent goals even individually. Much less does humanity.

Right. Utilitarianism is false, but Eliezer was still right about torture and dust specks.

>Can we agree that I am not trying to proselytize anyone?

No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.

>(2) Claiming authority or pointing skyward to an authority is not a road to truth.

This is why you need to stop pointing to "C…

I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.

Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."

There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to…

>Do you not think the TCS parent hasn't also heard this scenario over and over? Do you think you're like the first one ever to have mentioned it?

Do you not think that I am aware that people who believe in extremist ideologies are capable of making excuses for not following the extreme consequenc…