User Profile


Recent Posts


Attacking machine learning with adversarial examples · 1y · 0

Gates 2017 Annual letter · 1y · 4

Raymond Smullyan has died · 1y · 0

A Few Billionaires Are Turning Medical Philanthropy on Its Head · 1y · 0

Newcomb versus dust specks · 2y · 1 min read · 104

The guardian article on longevity research [link] · 3y · 1 min read · 27

Discussion of AI control over at worldbuilding.stackexchange [LINK] · 3y · 1 min read · 0

Rodney Brooks talks about Evil AI and mentions MIRI [LINK] · 4y · 1 min read · 7

Recent Comments

It's not just the one post, it's the whole sequence of related posts.

It's hard for me to summarize it all and do it justice, but it disagrees with the way you're framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of "should" notion...(read more)

Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts, and do you have some disagreement with them?

>If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless.

This was argued against in the Sequences and, in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically a...(read more)

What part of physics implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions?

>An evolved system is complex and dynamic, and can lose its stability. A created system is presumed to be static and always stable, so Christians don't consider LUC to be an issue with respect to the environment.

The distinction here would be that a created system's complexity is *designed* to be ...(read more)

The "directly relevant information" is the information you *know*, and not any information you don't know.

If you want to construct a bet, do it among all possibly existing people that, as far as they know, could be each other. So any information that one person knows at the time of the bet, everyo...(read more)

If you don't know the current time, you obviously can't reason as if you did. If we were in a simulation, we wouldn't know the time in the outside world.

Reasoning of the sort "X people exist in state A at time t, and Y people exist in state B at time t, therefore I have a X:Y odds ratio of being i...(read more)

Has anyone rolled the die more than once? If not, it's hard to see how it could converge on that outcome unless everybody that's betting saw a 3 (even a single person seeing differently should drive the price downward). Therefore, it depends on how many people saw rolls, and you should update as if ...(read more)

> But is this because of a fault of the Hollywood system, or is it because there are few significant movie story ideas left that have not been done?

Neither: revealed preferences of consumers are in favor of reboots, so that's what gets made. That's only a "fault" if your preferences differ from tha...(read more)

>The other tells you that it will be between $50 and $70 million, with an average of $10 million.

Typo?