Comparative Advantage is Not About Trade

There is a weaker condition for trading with a 10x better economy. If the other economy is 11 times more efficient at one good but only 9 times more efficient at another, then by taking you on as a trade partner they get closer to their better good's efficiency on their worse good. Essentially you need a resource they can exploit you for. They don't care if you work a buttload for a penny, but you agree to it if your own shambling methods are even worse than getting screwed over.
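The 11x/9x case can be made concrete with a quick sketch. The rates below are assumed for illustration: the weaker economy produces one unit of either good per hour, the stronger one 11 and 9 units respectively. Any trade price strictly between the two sides' opportunity costs leaves both better off:

```python
# Hypothetical production rates (units per hour) for the 11x/9x example.
my_rate = {"x": 1.0, "y": 1.0}       # the weaker economy
their_rate = {"x": 11.0, "y": 9.0}   # 11x better at x, only 9x better at y

# Opportunity cost of one unit of x, measured in units of y foregone.
my_cost_x = my_rate["y"] / my_rate["x"]            # 1.0 y per x
their_cost_x = their_rate["y"] / their_rate["x"]   # ~0.818 y per x

# Any price strictly between the two opportunity costs benefits both
# sides relative to producing everything themselves.
price = (my_cost_x + their_cost_x) / 2             # ~0.909 y per x
assert their_cost_x < price < my_cost_x

their_gain_per_x = price - their_cost_x  # y gained per unit of x they sell
my_gain_per_x = my_cost_x - price        # y saved per unit of x I buy
assert their_gain_per_x > 0 and my_gain_per_x > 0
```

So even a 10x-across-the-board economy gains from the lopsided partner, which is exactly why the "resource they can exploit you for" framing works.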

Comparative Advantage is Not About Trade

In a previous post about comparative advantage there was a real distinction between a trade concept and a non-trade concept.

I think there are two concepts, and it would be proper not to let their names collide. If you are a dictator and just want a thing done ASAP, you allocate all your best workers that can be allocated to it and leave idle the people whose specialization doesn't rise above the usability baseline. If you want to produce as much as possible, you might be tempted to overwork your people to have an abundance of items. But in the case where overworking everyone leads to revolt, sparing the people who would get most angry or suffer most from the work can be an easy source of "peace points". Thus an unfair work allocation, where skilled work is utilised more than an equalization of misery would arrive at, can be used. There is less total misery, but it is concentrated on fewer individuals.

I think the original criticism was that if we assume different conditions, then on top of that we can have "voluntary" or "free" trade. But in order to have those conditions we need differentiation, which is often upheld violently against the tendency to share and dissolve. Consider that if there were no toll on the river, people could trade salt at lower prices. Blocking access makes both sides artificially rely on local resources. So a deal that is worse than an uncontrolled river but better than complete separation makes sense to accept.

Comparing Utilities

"Contrarian" is a good adjective for it. I don't think it makes anyone suffer, so "monster" is only a reference to the utility monster; calling the general class of conceptual tripstones "monsters" doesn't seem the most handy.

If a particular member is ambivalent about something, then there might still be room to weakly Pareto-improve along that axis. Totally opposed ambivalence is still ambivalence.

There is a slight circularity in that if the definition of what the agent wants rests on what the social choice is going to be, it can seem a bit unfair. If it can be "fixed in advance", then allowing attempts to make a social choice function is fairer. It seems that if we can form a preference, then the preference in the other direction should be able to exist as well. If there are more state pairs to prefer over than there are agents, then a Diagonal Opposer could be constructed by pairing each state pair with an agent and taking the antipreference of that. One conception would be a Public Enemy: no matter who else you are, you are enemies with this agent, having at least one preference in the opposite direction. There are many ways to construct a public enemy. And it might be that there are public enemies that one-on-one are only slight enemies to each agent, but are in conflict over more points with what the other agents would have formed as the social choice. Say there are yes/no questions over A, B and C, and each of the other agents answers yes to two and no to one. Then answering all yes would leave every agent in 2/3 agreement. But a stance of all no is in 3/3 disagreement with the compromise, despite being in only 2/3 disagreement with each individual agent.
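The A/B/C arithmetic at the end checks out mechanically. A minimal sketch, assuming one particular assignment of which question each agent answers "no" to:

```python
# Three agents, three yes/no questions (A, B, C); each agent answers
# yes to two questions and no to one (assumed assignment).
agents = [
    (True, True, False),
    (True, False, True),
    (False, True, True),
]

# Majority vote on each question forms the compromise: all yes.
compromise = tuple(
    sum(a[i] for a in agents) > len(agents) / 2 for i in range(3)
)
assert compromise == (True, True, True)

def disagreement(p, q):
    """Number of questions on which two stances differ."""
    return sum(x != y for x, y in zip(p, q))

all_no = (False, False, False)
# All-no disagrees with each individual agent on only 2 of 3 questions...
assert all(disagreement(all_no, a) == 2 for a in agents)
# ...but with the compromise on all 3.
assert disagreement(all_no, compromise) == 3
```

So a stance can be maximally opposed to the compromise while being only partially opposed to every individual who formed it.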

I thought the end result is that since no change would be a Pareto improvement, the function can't recommend any change, so it must be completely ambivalent about everything and is thus the constant function assigning every option utility 0.

Pareto-optimality says that if there is a mass murderer who wants to kill as many people as possible, then you should not make a choice that lessens the number of people killed, i.e. you should not oppose the mass murderer.

Comparative advantage and when to blow up your island

Of course a good that you are not allowed to trade doesn't have a trade value. The value of leisure is in effect already in play in comparative advantage, in that the advantageous position is preferable because I get to the same end state with less time spent, i.e. more leisure. But while I care about my own leisure, I don't typically care about others'.

I guess the situation could be expanded so that there is a third good (say canoes) that, if we were separate, I would not produce for myself and you would not produce for yourself, but in contact I could produce it for your consumption. That is, if I am good at canoes but hate them, while you are bad at canoes but like them. It would just be a situation of asymmetries of production with more equal overall absolute production levels.

There is still the fact that one typically doesn't choose maximum leisure time and starving, but rather some work and some food. So if you get a fruit diet at 100 minutes of work, a caviar diet at 101 minutes, or a grass diet at 99 minutes, it is hard to pass objective judgement on that. So choices and needs might not be completely separate.

Comparative advantage and when to blow up your island

Thinking about this: if the difference in gathering rates is due to gathering skill, then learning the gathering technique of the more skilled gatherer would have the same impact on trade as blowing up your island.

If the difference is not learnable and is due to soil and such, one would think that the poor island's resident would be tempted to relocate to the rich island. Such relocation could go over peacefully, or it might cause tensions and border enforcement. But if it comes to enforcement, it could escalate to violence, and even the threat of violence can be a form of coercion. So already in "we are on our respective islands" we are in the realm of coercion.

Comparing Utilities

This jumps from mathematical consistency to a kind of opinion when Pareto improvement enters the picture. Sure, if we have a choice between two social policies and everyone prefers one over the other because their personal lot is better, there is no conflict over the order. This could be warranted if for some reason we needed consensus to get a thing passed. However, where there is true conflict, it seems to say that a "good" social policy can't be formed.

To be somewhat analogous with the "utility monster", construct a "consensus spoiler". He prefers exactly what everyone else anti-prefers, having a preference coefficient of -1 with everyone. If someone would gain something, he is of the opinion that he loses. So no Pareto improvements are possible. If you have a community of 100 agents that would agree to pick some states over others, and you construct a new community of 101 with the consensus spoiler, then they can't form any choice function. The consensus spoiler is in effect maximally antagonistic towards everything else. Whether it is warranted, allowed or forbidden for the coalition of 100 to just proceed with the policy choice that screws the spoiler over doesn't seem to be a mathematical kind of claim.
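The spoiler construction can be sketched mechanically. Below is a minimal illustration with assumed utilities: 100 agents share one utility function, and the spoiler gets its exact negation, which kills every Pareto improvement:

```python
# Hypothetical options and utilities for the consensus-spoiler sketch.
options = ["a", "b", "c"]
community_utility = {"a": 0.0, "b": 1.0, "c": 2.0}  # shared by all 100 agents
spoiler_utility = {o: -u for o, u in community_utility.items()}

def pareto_improves(x, y, utilities):
    """x Pareto-improves on y: nobody is worse off, somebody is better off."""
    diffs = [u[x] - u[y] for u in utilities]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

members = [community_utility] * 100
# Without the spoiler, the community can improve: "c" beats "a".
assert pareto_improves("c", "a", members)

# Adding the one perfectly antagonistic agent blocks every improvement.
with_spoiler = members + [spoiler_utility]
assert not any(
    pareto_improves(x, y, with_spoiler)
    for x in options for y in options if x != y
)
```

One agent with the negated utility is enough: any change that helps anyone in the community hurts the spoiler, so the Pareto criterion can never recommend anything.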

And even to a less extreme degree, I don't get how you could use this setup to judge values that are in conflict. And if you encounter an unknown agent, it seems ambiguous whether you should take heed of its values in a compromise or just treat it as a possible enemy and adhere to your personal choices.

The universality of computation and mind design space

That I can run or emulate a program usually doesn't imply that I understand it very much. If I have an exe, I need to decompile it or have its source provided to me, and even then I need to study it quite a bit. If I run it through pen and paper, I am not guaranteed to gain more insight than by running it on an external computer.

There is also the distinction between a specific program and what a program could be. For example, "programs will halt" is wrong, although the question of this or that program halting can be right or wrong. There are not many properties that you can deduce about a program simply from its being a program. "Programs have loops" can be a good inductive generalization about programs "found in the wild", but it is a terrible description of a general program.

Comparative advantage and when to blow up your island

What irks me a little is that work is being allocated to people who are bad at the job. If the people who were good at the picking did all the picking, collectively between the two of them there would be only 300 minutes of work. In the specialization arrangement there is a total of 700 minutes of work. I do wonder whether there is any way of finding an activity that takes between 300/2 = 150 minutes and 600 minutes to make up for the increased workload of the trading partner (say, massaging them).
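The aggregate-minutes complaint can be sketched numerically. The rates below are assumed for illustration and do not reproduce the original post's exact figures; they just show how trade can cut each party's own minutes while inflating the total:

```python
# Assumed rates: islander A picks either fruit in 1 minute,
# islander B takes 3 minutes per banana (3x as long).
A_MIN_PER_FRUIT = 1
B_MIN_PER_BANANA = 3
BANANAS, COCONUTS = 150, 150  # total fruit wanted across both islands

# If the efficient picker did all the picking:
all_by_A = A_MIN_PER_FRUIT * (BANANAS + COCONUTS)  # 300 minutes total

# Specialization-and-trade: A picks the coconuts, B slowly picks the bananas.
trade_total = A_MIN_PER_FRUIT * COCONUTS + B_MIN_PER_BANANA * BANANAS

extra_work = trade_total - all_by_A  # aggregate labor added by the arrangement
assert all_by_A == 300
assert trade_total == 600
assert extra_work == 300
```

Each islander may still prefer the trade arrangement individually, but the sum of minutes worked goes up precisely because the slow picker is doing the picking.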

In particular, it seems weird that the bananas not being picked on the efficient island are then picked on the inefficient island, taking three times as long. It is also strange that one benefits from poor conditions elsewhere to improve one's own situation. It also takes away the incentive to alleviate the other's poor conditions, and maybe even encourages seeking out more trade partners with poor conditions or comparative advantages dissimilar to your own.

The ethics of breeding to kill

Using this kind of logic, a slave owner could argue that since his slaves don't commit suicide, they live in an arrangement that is mutually beneficial.

The analogous question of whether it is good or bad to bring more people into slavery seems rather tricky. If you have one slave owner who breeds his slaves and one who doesn't, has one done a bigger wrong than the other? There, however, it seems that the comparison point isn't so much non-existence but rather existence "in the wild", or as free members of society. With animals it would mean that while a caged life could be positive in absolute terms, it would compare negatively to life in the wild. If you kidnap someone, they don't thank you for their upkeep but blame you for their loss of freedom of movement.

A Policy for Biting Bullets

Compare the transplant problem to a similar triage problem. You have 6 life-or-death patients, 1 of whom has a condition that takes a lot of resources to cure and 5 whose conditions take a little. You can save either the hard case or the easy cases. From the point of view of basic utilitarianism, the transplant problem and the triage problem seem almost identical. But it seems way easier to let the hard case die than to disassemble a living person. That an intuition puts a dent in the general pattern of a rule doesn't mean that the rule stops applying.

Also, if you bite the bullet that it's okay to repurpose healthy lives for transplants, then you could think of the "improvement" that instead of taking all 5 organs from one life, you could take the organs from 5 different healthy lives, which could give each of them an increased chance of staying alive (and even if you can't live without a liver, a life of dialysis might be preferable to death). The step from agreeing that the rule should be followed in that case to advocating killing people seems hasty and unnecessary.

For the most part, philosophy doesn't have a deadline; we will ponder things to our heart's content and even then a little bit more. So the contradictions should puzzle us, but we never need to commit to an answer that we know is partially wrong.

I have also found that if I understand how to apply a rule to a particular situation, it can override how the intuition initially seems, and theorizing an intuition gives it more legitimacy. For example, if people knew that doctors might actively harm patients, then people would be reluctant to seek medical attention. This kind of rule would strongly differentiate between the transplant problem and the triage problem. Understanding that this kind of rule could conflict with "help as much as you can" takes intricate and detailed application rather than a vague general case. If you would let the hard case die in triage and would keep the healthy person alive in transplant, does that mean you don't follow, or don't believe in, utilitarianism? If you would kill to save millions but wouldn't kill to save thousands, do you believe in the Hippocratic oath?
