User Profile


Recent Posts


[Link] Zack Weinersmith's One-Liner Generator

1 min read

Recent Comments

In the interests of identity obfuscation, I rolled a random number between 1 and 100 and waited for some time afterwards.

On a 1-49: I have taken the survey, and this post was made after a uniformly random period of up to 24 hours.

On a 50-98: I will take the survey after a uniformly r...(read more)


* Given both demographics and recent discourse, you are going to want vegetarian and vegan options for food.
* HPMOR has a large hatedom, for various reasons. Key vectors for trolls are photos, videos, and flyers. Be more conscious than usual about personal boundaries and privacy.
* Public...(read more)

> The timeline continues with legal actions and arguments about what happened, but has no additional allegations.

You forgot me.

> August 13th, 2013

> Dallas J. Haugh

> Dallas posts a suicide note which includes allegations of rape against Shermer. It is taken down by a relative when he is secur...(read more)


I don't really feel the need to write that when I am aware of it from personal experience.

Keep in mind the fact that he is a serial rapist, which kind of undermines his thesis.

I actually calibrated my P(God) and P(Supernatural) based on P(Simulation), figuring that cases where (~Simulation & Supernatural) holds are basically noise, so an exact figure for them isn't worth computing.

I forgot what I actually defined "God" as for my probability estimation, as well as the actual estimation.

Your updates to your blog, as of this post, seem to replace "Less Wrong", "MIRI", or "Eliezer Yudkowsky" with the generic term "AI risk advocates".

This just sounds more insidiously disingenuous.

I've had to deal with the stress you are contributing to the broader perception of transhumanism for the weekend, and that is *on top of* preexisting mental problems. (Whether MIRI/LW is actually representative of this is entirely orthogonal to the point; public perception has and is shif...(read more)

Paperclip maximizer, obviously. Basilisks are typically static entities, and I'm not sure how you would go about making a credible anti-paperclip 'infohazard'.

I completed the survey. (Did not do the digit ratio questions due to lack of available precise tools.)