Comments

On the topic of "utilities in the prisoner's dilemma coinciding with jail time", I'll quote one of my guest blog posts: http://phd.kt.pri.ee/2009/01/27/the-real-prisoner-dilemma/

Two hardened criminals are taken for interrogation in separate cells. They are offered the usual deal: if neither confesses, both get one year of probation. If both confess, both do 5 years in jail. If one confesses, he goes free but the other does 10 years of hard time.

Here's what actually goes through their minds: "Okay, if neither of us confesses, we have to go back to the real world. But it's so hard out there! But if I confess, he will kill me when he gets out... so that's bad... If both of us confess, then we can just go back to jail and continue our lives!"

Lateral thinking, people ;)

I'm just reading Thomas Schelling's The Strategy of Conflict, and one of his key tenets is that providing an identifiable point around which the discussion can be centered tends to draw the discussion toward that point (classic anchoring). However, he points out that in many cases having a "line in the sand" benefits all sides, by allowing intermediate deals to be struck where only extremes were possible before.

This article, however, clearly demonstrates that having a line in the sand can be just as bad as it can be good, as with all biases. Still, I really recommend Schelling for hitting on "what is good" (in the evolutionary sense) about this phenomenon.

But three people should already be enough. I'm fairly convinced that this game is unstable, in the sense that it would not make sense for any of them to agree to get 1/3, as each can always guarantee themselves more by defecting with someone (even by offering them 1/6 - epsilon, which is REALLY hard to turn down). It seems that a given majority getting 1/2 each would be a more probable solution, but you would really need to formalize the rules before this can be proven. I'm a cryptologist, so this is sadly not really my area...
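Still, to make the instability concrete, here is a minimal sketch under my own assumed rules ("three players split a pot of size 1, and any two of them can jointly agree to take the whole pot"; this is not a formalization of the original game): whatever allocation is on the table, the two players with the smallest shares together hold at most 2/3, so they can defect and split the pot between themselves with both strictly gaining.

```python
# Minimal sketch, assuming the rules are: three players split a pot of 1,
# and any two of them can jointly agree to take the whole pot for themselves.
# For any proposed allocation, the two players with the smallest shares hold
# at most 2/3 together, so cutting out the third player and sharing the
# surplus makes both of them strictly better off.

def blocking_pair(allocation, pot=1.0):
    """Return a pair (i, j) and a better split for them, if one exists."""
    # The two players with the smallest current shares are the natural defectors.
    order = sorted(range(len(allocation)), key=lambda k: allocation[k])
    i, j = order[0], order[1]
    combined = allocation[i] + allocation[j]
    if combined >= pot:
        return None  # the pair already holds everything; no profitable deviation
    gain = (pot - combined) / 2  # split the surplus from excluding the third player
    return (i, j), (allocation[i] + gain, allocation[j] + gain)

print(blocking_pair([1/3, 1/3, 1/3]))  # ((0, 1), (0.5, 0.5)) -- the pair takes 1/2 each
print(blocking_pair([0.5, 0.5, 0.0]))  # ((2, 0), (0.25, 0.75)) -- even this can be blocked
```

Note that even the 1/2-each split a majority might settle on can be blocked the same way by the excluded player, which is why I doubt there is any stable solution at all without extra rules.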

Sorry. I thought about things a little and realized that a few things about prospect theory definitely need to be scrapped as bad ideas... the probability weighting, for instance. But other quirks (such as loss aversion, or having different utilities for losses vs. gains) might be useful to retain...

It would really be good if I knew a bit more about the different decision theories at this point. Does anyone have good references from which one could get an overview of the field?

One thing that came to mind just this morning: why is expected utility maximization the most rational thing to do? As I understand it (and I'm a CS, not an Econ major), prospect theory and the utility-function weighting it uses are usually accepted as descriptions of how most "irrational" people make their decisions. But this might not be because they are irrational, but rather because our utility functions actually do behave that way, in which case we should abandon EU and just try to maximize well-being with all the quirks PT introduces (such as a loss being more costly than an equal gain, and so on)...

Or is this how most people here already do things? Any and all feedback on this idea would be really appreciated (especially links to relevant discussions, as I am sure I'm not the first to come up with it).
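Just to make the contrast I have in mind concrete, here is a rough sketch using the standard Tversky-Kahneman (1992) functional forms with their published parameter estimates. The gambles themselves are made up for illustration, and I apply the gain-side weighting parameter to losses as well and weight each outcome separately (a real cumulative prospect theory implementation would not do either).

```python
# Rough sketch: expected utility vs. a simplified prospect-theory valuation,
# using the Tversky-Kahneman (1992) functional forms and parameter estimates.

def eu(prospects, utility=lambda x: x):
    """Expected utility: sum of p * u(x) over (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in prospects)

def pt_value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, loss-averse for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pt_weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def pt(prospects):
    """Simplified prospect-theory value: decision weight times value, per outcome."""
    return sum(pt_weight(p) * pt_value(x) for p, x in prospects)

# A coin flip that risks a loss vs. a sure small gain (made-up numbers).
gamble = [(0.5, 120), (0.5, -100)]
sure_thing = [(1.0, 5)]

print(eu(gamble), eu(sure_thing))  # 10.0 vs 5.0 -> linear-utility EU prefers the gamble
print(pt(gamble), pt(sure_thing))  # negative vs positive -> loss aversion rejects the gamble
```

With a linear utility the EU maximizer takes the gamble, while the loss-averse PT valuation rejects it; that is exactly the kind of quirk I'm asking whether we should respect rather than correct.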

Hello,

My name is Margus Niitsoo and I'm a 22-year-old Computer Science doctoral student in Tartu, Estonia. I have wide interests that span religion and psychology as well (I am a pantheist, by the way... so somewhat religious, but unaffected by most of the classical theism bashing). I got here through OB, which I found while reading about AI and the thing that shall not be named.

I do not identify myself as a rationalist, for I only recently understood how emotional a person I really am, and I'd like to enjoy it before trying to get it under control again. However, I am interested in understanding human behaviour as best I can, and this blog has given me many new insights I doubt I could have gotten anywhere else.

Another thing that comes off the top of my head is that one might try to get groups already interested in this topic (in theory) to read LW and OB. One such group I can think of is LaVeyan Satanists. In theory, it is a religion of rationality (although in practice it is often rather far from that... I'm just lucky to know a specimen who embodies the theory)... Then again, this might not be an association we want (especially in the US; it would even be rather bad here in Estonia, where most of the country is atheistic)... but there should be other groups who hold rationality as one of their core values yet know relatively little about it. These people should be rather easy to win over, just by stressing that it is one of their own core values...

The game of "Paranoid Debating" ( http://lesswrong.com/lw/77/selecting_rationalist_groups/6lb ) would make for a great game show, and it would definitely increase the popularity of rationality. Someone should try pitching it to a TV station...

Just reminding everyone of one more sad thing: every good cause to rally people under generally needs an enemy. And if there isn't one, it usually develops or is found. People somehow just want to be against things rather than for them...

Also, atheism seems to be one of the few things most of us here have in common, so Matt Newport's post hits the nail on the head there. We have a tradition of bashing theism. Traditions go a long way towards cementing a sense of community, so they do have a positive side. But the fact is that once a tradition has developed, people who break it are usually viewed as outsiders in some sense, so it makes sense for people to stick to the traditions.

Mental energy actually is a limiting factor, and I believe it causes more failures than people care to admit. That is, we humans have a tendency to pick our battles: we have a limited amount of time and thinking resources, and so we invest large amounts of both in only a very small set of decisions. This means that most decisions do get made rather automatically... which (as has been argued in previous articles) is rather normal. However, I think a rationalist should be able to determine whether the thing he messed up was something he did without paying it much attention (and thus did as well as he could, given his very limited time) or whether he really did invest a lot of consideration in it and just messed up. In both cases, lessons of course need to be learned and priorities adjusted (the automatic handling might need to be corrected so that next time you WOULD actually think in that situation), but I still believe we cannot hold ourselves to the highest of rational standards for every single decision we make...

Maybe this is what the original author meant by saying that his mental energy budget is limited. Anyway, I thought this aspect required further discussion...
