User Profile


Recent Posts


No posts to display.

Recent Comments

> At which point the humans running this NN will notice that it likes to go around risk control measures and will... persuade it that it's a bad idea.

How? By instituting more complex control measures? Then you're back to the problem Kaj mentioned [above](http://lesswrong.com/r/discussion/lw/ne1/...).

Yes, but this means that a lot of very rich people are very incorrect as to what is important for their wealth.

They know about the factors they can control. After all, those are the ones they actually focus on.

So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.

Consider that if it had been the opposite (IQ being more of a personal benefit than a country-level benefit), we'd be explaining it as "obviously smart people benefit themselves at the expense of others".

Yes, it's called basing your beliefs on the evidence.