User Profile


Recent Posts


[Link] NYT: A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit


[Link] ICO to Build Next Generation AI Raises $36 Million in 60 Seconds


Pathological utilitometer thought experiment


Recent Comments

I agree it fits well here. However, it has a very different tone from other posts on the MIRI blog, where it has also been posted.

Laziness. Though I note Stuart_Armstrong expressed the same opinion as I did, offered even fewer suggestions for improvement, and got upvoted. I should also have said that I agree with all the points contained herein, and that the message is an important one; that would have softened the bite.

This article is heavy with Yudkowsky-isms and repeats of material he has posted before; it needs a good summary and editing to pare it down. I'm surprised they posted it to the MIRI blog in its current form.

Edit: As stated below, I agree with all the points of the article, and consider it an impo...

Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.

That interview is indeed worrying. I'm surprised by some of the answers.

More likely, he also "always thought that way," and the extreme story was written to provide additional drama.