All of pricetheoryeconomist's Comments + Replies

"These are people whose utility function does not place a higher utility on 'dying but not having to take my meds'."

Why are you making claims about their utility functions that the data does not back? Either people knowingly prefer less to more, or they are making rational decisions under ignorance and avoiding a violation of their "ugh" field, which would be costly for them.

How is that any different from a smoker being uncomfortable quitting smoking? (Here I recognize that smoking is obviously a rational behavior for people who choose to smoke.)

4JenniferRM14y
Your posts under this name have the potential for some hilarious and educational trolling, though you have some stiff competition if you want to be the best. You should probably refine your approach a little bit. Links to the literature would give you more points for style. Also, the parenthetical aside was a bit much - it made the trolling too obvious.
3Blueberry14y
It's pretty similar, actually: just as a smoker may prefer to quit but find doing so psychologically difficult, someone with a terminal illness may prefer to take their meds but also find it difficult. It's not clear how to assign utility in such a case, as the agent involved isn't a unified whole. There's the sub-agent who is addicted and the sub-agent who wants to quit.
6wedrifid14y
I get it. You define humans as rational agents with utility functions of whatever it is that they happen to do because it was convenient for the purposes of a model they taught you in Economics 101. You are still just wrong.

You have it all wrong. Your "ugh" field should go into their utility function! Whether or not they invest the resources to overcome that "ugh" field and save their life is endogenous to their situation!

You are making the case for rationality, it seems to me. Your suggestion may be that people are emotional, but not that they are irrational! Indeed, this is what the GMU crowd calls "rationally irrational." Which makes perfect sense--think about the perfectly rational decision to get drunk (and therefore be irrational). It... (read more)

8Psychohistorian14y
I appreciate the Devil's Advocacy. The simple issue, though, is that if you use a definition of "rational" that encompasses this behaviour, you've watered the word down to oblivion. If the behaviour I described is rational, then, "People who act always act rationally," is essentially indistinguishable from, "People who act always act." It's generally best to avoid having a core concept with a definition so vacuous it can be neatly excised by Occam's Razor.
4wedrifid14y
You are just wrong. These are people whose utility function does not place a higher utility on "dying but not having to take my meds". If your preferred theory takes a human and forces the self-contradictions into a simple rational agent with a coherent utility function, you must resolve the contradictions the way the agent would prefer them to be resolved if they were capable of resolving them intelligently. If your preferred theory does not do this then it is a crap theory. A map that does not describe the territory. A map that is better used as toilet paper.

Great idea. Very clever.

Perhaps someone has said this already, but it's worth noting that if you did this in the car dealer example, car dealers could sign similar contracts--your deal would not go through.

Then, negotiating with car dealers would have a game theoretic hawk/dove or snowdrift equilibrium. Similarly with potential wives. They could sign contracts that agree they will never sign prenups--another hawk/dove equilibrium.
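The hawk/dove structure claimed here can be checked numerically. A minimal sketch (Python) follows; the payoff values V (the surplus from a deal) and C (the cost of a standoff, with C > V) are illustrative assumptions, not from the original comment:

```python
# Hypothetical hawk/dove payoffs: V = value of the deal, C = cost of a standoff.
V, C = 4, 6
payoff = {
    ("hawk", "hawk"): ((V - C) / 2, (V - C) / 2),  # both commit: costly standoff
    ("hawk", "dove"): (V, 0),                      # committed side captures the surplus
    ("dove", "hawk"): (0, V),
    ("dove", "dove"): (V / 2, V / 2),              # neither commits: split the surplus
}

def other(s):
    return "dove" if s == "hawk" else "hawk"

def is_nash(row, col):
    # A pure-strategy Nash equilibrium: neither player gains by unilaterally deviating.
    r, c = payoff[(row, col)]
    return (r >= payoff[(other(row), col)][0] and
            c >= payoff[(row, other(col))][1])

# The pure equilibria are the asymmetric ones, as in hawk/dove:
# (hawk, dove) and (dove, hawk) are stable; (hawk, hawk) and (dove, dove) are not.
```

With C > V, exactly the two asymmetric profiles are equilibria, which is the anti-coordination structure the comment describes: whoever commits first (signs the contract) wins, and mutual commitment leaves everyone worse off.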

3pjeby14y
I'm baffled by this logic. If the two people want to be together, but have incompatible contracts, then they still get to be together. They just won't marry. This is still a total win for the pro-prenup contractor, who doesn't risk losing any money. But it's a loss for the anti-prenup contractor, since they don't get to gain any money. In contrast, the person without an anti-prenup contract gets to be married, but can't get their hands on the pro-prenup contractor's money. Which of these two sounds more socially acceptable? And which is more likely to seem desirable to the type of person the pro-prenup contractor would prefer to marry in the first place?

Is Samuel Johnson's quote a valid or true statement? I understand your central thrust--the inability to do something personally (such as control one's sexual urges) and the disposition to encourage others to overcome that inability are not necessarily contradictory--indeed, they may fall together naturally.

However, in Samuel Johnson's world, and the world in which this "issue" comes up the most, politics, we might imagine that there exist two types of people: sociopathic individuals hungry for power, and individuals who are sincere.

If sociopath... (read more)

4utilitymonster14y
Fair enough. Maybe it is typically reasonable to charge people with hypocrisy when they neglect to follow their professed ethical codes. I still like the quote, even if it is hyperbolic. It is useful to be reminded that there are important cases where failure to live up to one's professed code does not warrant this kind of criticism. Being overly concerned with hypocrisy can make you be unconcerned with living up to a meaningful ethical code. This is especially important in the context of consequentialist morality. This is just a hunch, but I think there are a fair number of intelligent people who shy away from a demanding code for fear of being charged with hypocrisy. But there need be no genuine hypocrisy, at least in any deeply regrettable sense, in professing a demanding ethical code and failing to live up to it. Better to try to live up to a demanding code and fail than meet the demands of an uninspiring and mundane one. (In this kind of case, of course, you aren't just professing the code to curry political favor.)

A reasonable idea for this and other problems that don't seem to suffer from ugly asymptotics would simply be to test it mechanically.

That is to say, it may be more efficient, requiring less brain power, to believe the results of repeated simulations. After going through the Monty Hall decision tree and the statistics with people who can't really understand either, only to see them end up believing the results of a simulation whose code is straightforward to read, I advocate this method: empirical verification over intuition or mathematics that are fallible (because you yourself are fallible in your understanding, not because they contain a contradiction).
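The kind of straightforward-to-read Monty Hall simulation being described might look like the following sketch (Python; the function names are mine, not from any particular implementation):

```python
import random

def monty_hall_trial(switch, rng):
    # Place the car behind one of three doors; the player picks one at random.
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # The host opens a door that is neither the player's pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        # The player switches to the one remaining unopened door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = sum(monty_hall_trial(switch, rng) for _ in range(trials))
    return wins / trials

# Switching wins roughly 2/3 of the time; staying wins roughly 1/3.
```

The code is short enough that a skeptic can audit it line by line, which is exactly the point: believing the tallied output requires trusting only the loop, not the conditional-probability argument.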

0casebash8y
Depending on what you're testing, and given a decent level of maths ability, empirics doesn't help you here.
3Morendil14y
This is an interesting idea, that appeals to me owing to my earlier angle of attack on intuitions about "subjective anticipation". The question then becomes, how would we program a robot to answer the kind of question that was asked of Sleeping Beauty? This comment suggests one concrete way of operationalizing the term "credence". It could be a wrong way, but at least it is a concrete suggestion, something I think is lacking in other parts of this discussion. What is our criterion for judging either answer a "wrong" answer? More specifically still, how do we distinguish between a robot correctly programmed to answer this kind of question, and one that is buggy? As in the robot-and-copying example, I suspect that which of 1/2 or 1/3 is the "correct" answer in fact depends on what (heretofore implicit) goals, epistemic or instrumental, we decide to program the robot to have.
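One concrete way to see how the "correct" answer depends on the scoring rule we program in is to simulate the experiment and tally heads two different ways. This is an illustrative sketch (Python, names my own), not a resolution of the philosophical question:

```python
import random

def sleeping_beauty(trials=100_000, seed=0):
    # Each trial: flip a fair coin. Heads -> Beauty is woken once;
    # tails -> she is woken twice (Monday and Tuesday).
    rng = random.Random(seed)
    heads_runs = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_runs += 1
            heads_awakenings += awakenings
    per_run = heads_runs / trials                        # score once per experiment: ~1/2
    per_awakening = heads_awakenings / total_awakenings  # score once per awakening: ~1/3
    return per_run, per_awakening
```

A robot scored once per experiment should report 1/2; a robot scored once per awakening should report 1/3. The simulation makes the dependence on the (heretofore implicit) goal explicit: neither tally is buggy, they just answer different questions.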

I don't see this as a valid criticism, if it is intended to be a dismissal. The addendum "beware this temptation" is worth highlighting. While this is a point worth making, the response "but someone would have noticed" is shorthand for "if your point was correct, others would likely believe it as well, and I do not see a subset of individuals who also are pointing this out."

Let's say there are ideas that are internally inconsistent or rational or good (and are thus not propounded) and ideas that are internally consistent or irr... (read more)

1dilaudid14y
Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialog (i.e. in a scientific paper, you will be laughed at if your evidence for a theory is another scientist's say-so) but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 900-pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things that I hear this argument used for turn out to be total tosh (e.g. Santilli's Roswell Alien Autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out".