User Profile


Recent Posts

Curated Posts
Curated - Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community
(includes curated content and frontpage posts)
Personal Blogposts
Personal blogposts by LessWrong users (as well as curated and frontpage).

Cambridge LW: Post-mortem


New prisoner's dilemma and chicken tournament


Rationality Quotes: April 2011


I want to learn programming


Freaky unfairness


The Fallacy of Dressing Like a Winner


Recent Comments

Would it be possible to make those clearer in the post?

I had thought, from the way you phrased it, that the assumption was that for any game, I would be equally likely to encounter a game with the choices and power levels of the original game reversed. This struck me as plausible, or at least a g...(read more)

I don't therefore see strong evidence I should reject my informal proof at this point.

I think you and I have very different understandings of the word 'proof'.

> In the real world, agent's marginals vary a lot, and the gains from trade are huge, so this isn't likely to come up.

I doubt this claim, particularly the second part.

True, many interactions have gains from trade, but I suspect the weight of these interactions is overstated in most people's mind...(read more)

You're right, I made a false statement because I was in a rush. What I meant to say was that as long as Bob's utility is linear, then whatever utility function Alice has, there is no way to get all the money.

> Are you enforcing that choice? Because it's not a natural one.

It simplifies the scenario, a...(read more)

> It does not. See this post ( ): any player can lie about their utility to force their preferred outcome to be chosen (as long as it's admissible). The weaker player can thus lie to get the maximum possible out of the stronger pla...(read more)

If situation A is one where I am more powerful, then I will always face it at high-normalisation, and always face its complement at low normalisation. Since this system generally gives almost everything to the more powerful player, if I make the elementary error of adding the differently normalised ...(read more)

Your x+y > 2h proof is flawed, since my utility may be normalised differently in different scenarios, but this does not mean I will personally weight scenarios where it is normalised to a large number more heavily than those where it is normalised to a small number. I would give an example if I had more ti...(read more)
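The normalisation worry in this comment can be illustrated with a toy example (the scenario names and numbers here are mine, not from the thread): the same agent's preferences, expressed on two different scales, can make a naive cross-scenario sum of utilities favour whichever scenario happens to use the larger scale.

```python
# Toy sketch (assumed numbers): the same underlying preferences,
# normalised differently in two scenarios, can flip a naive sum.

# Scenario A: my utility happens to be normalised to a 0..100 scale.
# Scenario B: a preference of similar strength, normalised to 0..1.
outcomes_a = {"concede": 10.0, "fight": 60.0}   # scale: 0..100
outcomes_b = {"concede": 0.9,  "fight": 0.1}    # scale: 0..1

# Naive cross-scenario sum: "fight" wins purely because scenario A's
# numbers are larger, not because I care about scenario A more.
naive_fight   = outcomes_a["fight"]   + outcomes_b["fight"]    # 60.1
naive_concede = outcomes_a["concede"] + outcomes_b["concede"]  # 10.9

# Rescaling both scenarios to a common 0..1 range before adding
# removes the artefact of the arbitrary normalisation.
def rescale(d):
    lo, hi = min(d.values()), max(d.values())
    return {k: (v - lo) / (hi - lo) for k, v in d.items()}

a, b = rescale(outcomes_a), rescale(outcomes_b)
fair_fight   = a["fight"]   + b["fight"]    # 1.0 + 0.0 = 1.0
fair_concede = a["concede"] + b["concede"]  # 0.0 + 1.0 = 1.0

print(naive_fight > naive_concede)   # True: an artefact of the scales
print(fair_fight == fair_concede)    # True: indifferent once scales match
```

This is only the elementary-error half of the point: summing raw, differently-normalised numbers smuggles in an interpersonal (or here, inter-scenario) weighting that the agent never endorsed.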

I didn't interpret the quote as implying that it would actually work, but rather as implying that (the author thinks) Hanson's 'people don't actually care' arguments are often quite superficial.