User Profile


Recent Posts

No posts to display.

Recent Comments

When life gives you lemons, order miracle berries.

ph'nglui mglw'nafh Eliezer Yudkowsky Clinton Township wgah'nagl fhtagn

Doesn't really roll off the tongue, does it?

(http://en.wikipedia.org/wiki/Cryonics_Institute)

Considering the ridiculous context of the rest of the conversation (i.e., Dumbledore either pretending to be insane or actually letting some real insanity slip through), is it too far outside the realm of possibility for that comment to be a joke? It seemed like Dumbledore was going out of his way to...(read more)

I had to look it up, but I definitely agree. Especially considering how quickly the karma changes reversed after I edited in that footnote.

I wish I could upvote this post back into the positive.

(It seems pretty obvious to me that this is a direct satire of the previous post by a similar username. What, no love for sarcasm?)

Such a great game. Seeing this makes me want to play it again, having discovered this site and done some actual reading on transhumanism and AI. It might change the choice I'd make at the end...

Of course, this goes even further than just proving the old saying about Deus Ex, considering you never ...(read more)

Thanks! I hadn't read that article yet, but I became familiar with the concept when reading one of Eliezer Yudkowsky's papers on existential risk assessment. (Either [this one](http://intelligence.org/upload/cognitive-biases.pdf) or [this one](http://intelligence.org/upload/artificial-intelligence-r...(read more)

Now that I think about it, "natural selection" seems more appropriate.

Exactly. I also suspect that logical overconfidence, i.e. knowing a little bit about bias and thinking it no longer affects you, is magnified with higher intelligence.

I can't help but remember that saying about great power and great responsibility.

Hello, Less Wrong.

Like some others, I eventually found this site after being directed by fellow nerds to HPMOR. I've been working haphazardly through the Sequences (getting neck-deep in cognitive science and philosophy before even getting past the preliminaries for quantum physics, and loving ever...(read more)