User Profile


Recent Posts


No posts to display.

Recent Comments

I don't believe in the existence of morals, which is to say there is no "right" or "wrong" in the universe. However, I'll still perform actions that most people would rate "moral". The reasons I do this are found in my brain architecture, and they are not simple. Also, I don't care about utilitarianism. One c…

I believe that "nothing is right or wrong", but that doesn't affect my choices much. There is nothing inconsistent about that.

Roko, morals are, in the end, arbitrary, and there is no "correct" moral code for the AI to choose. The AI can, however, be programmed to generalize a moral code from all humans.

You can have real X-Men; check out a Discovery special about "real superhumans". There was one man who could withstand cold so well that doctors thought it shouldn't be possible. A single mutation sometimes does create significant changes (and, in this case, advantages).

If you believe that p-zombies are logically impossible, you're claiming that when one runs an atom-level simulation, and those atoms happen to form a human brain, it creates a pathway to the consciousness-stuff, and not only that, but that the consciousness-stuff has a precise, causal effect on your atom…

"3. Intuitively, it sure seems like my inward awareness is causing my internal narrative to say certain things."

Intuitively, maybe, but under epiphenomenalism you only have conscious experience of the 'inward awareness', and in reality it is a physical function which creates the experience, so the…

"In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function."

Wrong, since it may be possible to estimate the probability of being in a p-zombie world, or, more generally, the probability that…

"However, this will necessarily mean that they're shown to refer to things that are actually measurable."

Things that cannot be measured can still be very important, especially in regard to ethics. One may claim, for example, that it is OK to torture philosophical zombies, since after all they aren't…

"We've already found the flaw."

What exactly is the logical flaw you've found? The zombie argument implies, among other things, that there can be no test that will tell whether a person is really conscious or just a zombie. You might "know" that you're conscious yourself, but there can be no rational argument…

If a theory does not make you 'less confused', that doesn't mean the theory is wrong or bad. It could simply be the way the world really functions: some things are truly unknowable. Consciousness might be one of those things that will never be solved (yes, I know that a statement…