User Profile


Recent Posts


No posts to display.

Recent Comments

> Yet who prohibits? Who prevents it from happening?

Eliezer seems absurdly optimistic to me. He is relying on some unseen entity to reach in and keep the laws of physics stable in our universe. We already see lots of evidence that they are not truly stable, for example we believe in both the ele...(read more)

> Moreover, we know of examples where natural selection has caused drastic decreases in organismal complexity – for example, canine venereal sarcoma, which today is an infectious cancer, but was once a dog.

Or human selection. Henrietta Lacks (or her cancer) is now many tonnes of cultured cells; she ...(read more)

I love the idea of an intelligence explosion but I think you have hit on a very strong point here:

> In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of way...(read more)

> Even if such worlds do 'exist', whether I believe in magic within them is unimportant, since they are so tiny;

Since there is a good deal of literature indicating that our own world has a surprisingly tiny probability (ref: any introduction to the Anthropic Principle), I try not to dismiss the fat...(read more)

> Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death?

This is certainly an enormous problem for interface design in general for many systems where there is some element of danger. The classic "needle tipping into the...(read more)

> "There is an object one foot across in the asteroid belt composed entirely of chocolate cake."

This is a lovely example, which sounds quite delicious. It reminds me strongly of the famous example of Russell's Teapot (from his 1952 essay "Is There a God?"). Are you familiar with his writing...(read more)

> stay away from this community

I responded to this suggestion but deleted the response as unsuitable because it might embarrass you. I would be happy to email my reply if you are interested.

> we'd probably convince you such perma-death would be the highly probable outcome

Try reading what I said i...(read more)

>> Now, whether that distributed information is 'experiencing' anything is arguable,

> As far as I know, the latter is what people are worrying about when they worry about ceasing to exist.

Ahhh... that never occurred to me. I was thinking entirely in terms of risk of data loss.

> (Which is presu...(read more)

> "People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life" ... "nothing CAN do this, because nothing HAS done it."

The grey goo scenario isn't really very silly. We seem to have had a green goo s...(read more)

Yes, I am sorry for the mistakes; I am not sure whether I can rectify them. I see now about protecting special characters, and I will try to comply.

I am sorry; I have some impairments, and it is hard to make everything come out right.

Thank you for your help.