User Profile

Recent Posts

No posts to display.

Recent Comments

I'm fairly interested but don't really want to be around children.

I feel like this comment belongs on the LessWrong 2.0 article (to the point that I assumed that's where it was when I was told about it), but it doesn't actually matter.

I'd be interested to read another take on it if there's some novel aspect to the explanation. Do you have a particular approach to explaining it that you think the world doesn't have enough of?

The tournament definitely made for an interesting game, but more as a conversation starter than as a game in itself. I suspect part of the issue was how short this particular game was, but the constant cooperation got old pretty quickly. Some ideas for next time someone wants to run a dilemma tournament...(read more)
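
(As a rough illustration of the dynamic described above, here is a minimal round-robin iterated-dilemma sketch in Python. The payoffs and strategies are made up for the example, not the actual tournament's rules or entrants; the point is just that a field of unconditional cooperators never separates in the standings.)

```python
# Minimal round-robin iterated prisoner's dilemma (hypothetical payoffs
# and strategies; not the actual tournament's setup).

from itertools import combinations

# Standard payoff ordering: temptation > reward > punishment > sucker.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_cooperate(history_self, history_other):
    return "C"

def tit_for_tat(history_self, history_other):
    # Cooperate first, then copy the opponent's previous move.
    return history_other[-1] if history_other else "C"

def play_match(strat_a, strat_b, rounds=10):
    """Play one iterated match and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def run_tournament(entrants, rounds=10):
    """Round-robin every pair once and return total score per entrant."""
    totals = {name: 0 for name in entrants}
    for (name_a, strat_a), (name_b, strat_b) in combinations(entrants.items(), 2):
        sa, sb = play_match(strat_a, strat_b, rounds)
        totals[name_a] += sa
        totals[name_b] += sb
    return totals

if __name__ == "__main__":
    # With nothing but cooperators (tit-for-tat included), every match is
    # identical and every score ties -- the "constant cooperation" problem.
    entrants = {
        "coop_1": always_cooperate,
        "coop_2": always_cooperate,
        "tft": tit_for_tat,
    }
    print(run_tournament(entrants))
```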

> I doubt there is a sharp distinction between them

Actually, let's taboo weak and strong AI for a moment.

By weak AI I mean things like video game AI, self-driving cars, WolframAlpha, etc.

By strong AI I think I mean something that can create weak AIs to solve problems. Something that does what...(read more)

What's your reason for thinking weak AI leads to strong AI? Generally, weak AI seems to take the form of domain-specific creations, which provide only very weak general abstractions.

One example that people previously thought would lead to general AI was chess playing. And sure, the design of che...(read more)

AGI takeoff is an event we as a culture have never seen before, except in popular culture. So, with that in mind, reporters draw on the only good reference point the population has: sci-fi.

What would sane AI reporting look like? Is there a way to talk about AI to people who have **only** been expose...(read more)

Perhaps people who are a step removed from the actual AI research process? When I say that, I'm thinking of people like Robin Hanson and Nick Bostrom, whose work depends on AI but isn't explicitly about it.

The answer to this question depends really heavily on my estimation of MIRI's capability as an organization and on how hard the control problem turns out to be. My current answer is "the moment the control problem is solved and not a moment sooner", but I don't have enough of a grip on the other di...(read more)

I'm not really seeing the importance of the separate construct `Theorem (A → B)` vs `(A → B)`. It seems to get manipulated in the exact same way in the code above. Is there some extra capability of functions that `Theorem` + postulates doesn't include?

Also, I think you're misunderstanding what a funct...(read more)
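
(For context on the distinction being asked about: a minimal Lean sketch, assuming a `Theorem` wrapper roughly like the one in the post being replied to, which is not shown here. All names below are hypothetical stand-ins, not the post's actual code.)

```lean
-- Hypothetical sketch; not the original post's code. `Theorem` here is a
-- wrapper that does nothing but hold a proof of the proposition `p`.
structure Theorem (p : Prop) : Prop where
  proof : p

variable {A B : Prop}

-- Building and using the wrapped implication.
def wrap (f : A → B) : Theorem (A → B) := ⟨f⟩

def applyWrapped (t : Theorem (A → B)) (a : A) : B :=
  t.proof a

-- The bare function type is manipulated in exactly the same way.
def applyBare (f : A → B) (a : A) : B :=
  f a
```

Because `t.proof` is just a function of type `A → B`, the wrapped and bare versions are applied identically, which is the sense in which the wrapper adds no capability that the plain function type lacks.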