User Profile


Recent Posts


Errors in the Bostrom/Kulczycki Simulation Arguments


Recent Comments

Lukas, I wish you had a bigger role in this community.

I've kept fairly up to date on progress in neural nets, less so in reinforcement learning, and I certainly agree about how limited things are now.

What if protecting against the threat of ASI requires huge worldwide political/social progress? That could take generations.

Not an example of that (which...(read more)

He might be willing to talk off the record. I'll ask. Have you had Darklight on? See

If my own experience and the experiences of the people I know are indicative of the norm, then thinking about ethics, the horror that is the world at large, etc., tends to encourage depression. And depression, as you've realized yourself, is bad for doing good (but perhaps good for not doing bad?). I'...(read more)

For Bostrom's simulation argument to conclude the disjunction of the two interesting propositions (our doom, or we're sims), you need to assume there are simulation runners who are motivated to do very large numbers of ancestor simulations. The simulation runners would be ultrapowerful, probably ric...(read more)
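For anyone who wants the formal core of the argument, here is a sketch of Bostrom's calculation, with notation as in his 2003 paper (my paraphrase, not a quote):

$$f_{\mathrm{sim}} = \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I + 1}$$

where $f_P$ is the fraction of human-level civilizations that reach a posthuman stage, $f_I$ is the fraction of posthuman civilizations interested in running ancestor simulations, and $\bar{N}_I$ is the average number of ancestor simulations run by an interested civilization. The trilemma follows: unless $f_P \approx 0$ (doom before posthumanity) or $f_I \approx 0$ (no motivated simulation runners), $f_{\mathrm{sim}} \approx 1$ and we are almost certainly sims. The assumption I'm pointing at is the weight-bearing one: that $f_I \, \bar{N}_I$ is very large.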

If my anecdotal evidence is indicative of reality, the attitude in the ML community is that people concerned about superhuman AI should not even be engaged with seriously. Hopefully that, at least, will change soon.

I'm not sure either. I'm reassured that there seems to be some move away from public geekiness, like using the word "singularity", but I suspect that should go further, e.g. replace the paperclip maximizer with something less silly (even though, to me, it's an adequate illustration). I suspect getti...(read more)

heh, I suppose he would agree

A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That's an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci fi. ...(read more)