User Profile

Karma: 51 · Posts: 7 · Comments: 623

Recent Posts


What Evidence Is AlphaGo Zero Re AGI Complexity? · 6mo · 2 min read · 47
What Program Are You? · 9y · 43
Least Signaling Activities? · 9y · 103
Rationality Toughness Tests · 9y · 17
Most Rationalists Are Elsewhere · 9y · 34
Rational Me or We? · 9y · 156
The Costs of Rationality · 9y · 81
Test Your Rationality · 9y · 87

Recent Comments

I disagree with the claim that "this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools."

Consider the example of whether a big terror attack indicates that there has been an increase in the average rate or harm of terror attacks. You could easily say "You can't possibly claim that the big terror attack yesterday is no evidence; and if it is evidence, it is surely in the direction of the ave ...(read more)

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

As I said, I'm treating it as the difference between learning N simple general tools and learning N+1 such tools. Do you think it stronger evidence than that, or do you think I'm not acknowledging how big that is?

Why assume AGI doesn't have problems analogous to agency problems? It will have parts of itself that it doesn't understand well, and which might go rogue.

If it is the possibility of large amounts of torture that bothers you, instead of large ratios of torture experience relative to other better experience, then any growing future should bother you, and you should just want to end civilization. But if it is ratios that concern you, then since torture ...(read more)

Even if you use truth-promoting norms, their effect can be weak enough that other effects overwhelm it. The "rationalist community" is different in a great many ways from other communities of thought.

I said I haven't seen this community as exceptionally accurate; you say that you have, and call my view "uncharitable". I then mentioned a track record as a way to remind us that we lack the sort of particularly clear evidence that we agree would be persuasive. I didn't mean that to ...(read more)

Yes, believe fewer things and believe them less strongly. On abstract beliefs, I'm not following you. The usual motive for most people is that they don't need most abstract beliefs to live their lives.

"charitable" seems an odd name for the tendency to assume that you and your friends are better than other people, because well it just sure seems that way to you and your friends. You don't have an accuracy track record of this group to refer to, right?