User Profile

Karma: 4 · Posts: 12 · Comments: 504

Recent Posts

Curated Posts
Curated: recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community
(includes curated content and frontpage posts)
All Posts
Includes personal and meta blogposts (as well as curated and frontpage).

Less Wrong on Twitter

6y
1 min read
31

I Stand by the Sequences

6y
2 min read
250

[link] TEDxYale - Keith Chen - The Impact of Language on Economic Behavior

6y
9

The Best Comments Ever

6y
1 min read
20

[Pile of links] Miscommunication

6y
3

On Saying the Obvious

6y
46

[Transcript] Richard Feynman on Why Questions

6y
44

[Transcript] Tyler Cowen on Stories

6y
14

Recent Comments

I suggest a new rule: the source of the quote should be at least three months old. It's too easy to get excited about the latest blog post that made the rounds on Facebook.

>It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.

—Yagyū Muneno…

You may find it felicitous to link directly to the tweet.

This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. My reaction went something like this, if I recall:

Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.

I have become 30% confident that my comments here are a net harm, which is too much to bear and so I am discontinuing my comments here unless someone cares to convince me otherwise.

Edit: Good-bye.

Which is not the same thing as expecting a project to take much less time than it actually will.

Edit: I reveal my ignorance. Mea culpa.

Parts of this I think are brilliant, other parts I think are absolute nonsense. Not sure how I want to vote on this.

>there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

>My own investigations suggest th…

That isn't the planning fallacy.

This is a better explanation than I could have given for my intuition that physicalism (i.e. "the universe is made out of physics") is a category error.

>Whether or not a non-self-modifying planning Oracle is the *best* solution in the end, it's not such an *obvious* privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e…