abramdemski

Comments

Comparative Advantage is Not About Trade

That seems like a sensible way to set up the no-trade situation. Presumably the connection to trade is via some theorem that trade will result in Pareto-optimal situations, therefore making comparative advantage applicable.

But I still wonder what the exact theorem is.

Then if you want to describe the pareto frontier that maximizes the amount of goods produced, it involves each person producing a good where they have a favorable ratio of how much of that good they can produce vs. how much of other goods-being-produced they can produce.

What do you mean by "favorable"? Is there some threshold?

What do you mean by "involves each person producing"? Does it mean that they'll exclusively produce such goods? Or does it mean they'll produce at least some of such goods?

Don't Get Distracted by the Boilerplate

Correction: I now see that my formulation turns the question of completeness into a question of transitivity of indifference. An "incomplete" preference relation should not be understood as one which allows strict preferences to go in both directions (which is what I interpreted them as, above), but rather as a preference relation in which the ~ relation (and hence the ≤ relation) is not transitive.

In this case, we can distinguish between ~ and "gaps", IE, incomparable A and B. ~ might be transitive, but this doesn't bridge across the gaps. So we might have a preference chain A>B>C and a chain X>Y>Z, but not have any way to compare between the two chains.

In my formulation, which lumps together indifference and gaps, we can't have this two-chain situation. If A~X, then we must have A>Y, since X>Y, by transitivity of ≤: if we instead had A≤Y, then combining this with X≤A (from A~X) would give X≤Y, contradicting X>Y.

So what would be a completeness violation in the Wikipedia formulation becomes a transitivity violation in mine.

But notice that I never argued for the transitivity of ~ or ≤ in my comment; I only argued for the transitivity of >.

I don't think a money-pump argument can be offered for transitivity here.

However, I took a look at the paper by Aumann which you cited, and I'm fairly happy with the generalization of VNM therein! Dropping uniqueness does not seem like a big cost. This seems like more of an example of John Wentworth's "boilerplate" point, rather than a counterexample.

Comparative Advantage is Not About Trade

This was helpful, but I'm still somewhat confused. Conspicuously absent from your post is an outright statement of what comparative advantage is -- particularly, what the concept and theorem are supposed to be in the general case with more than two resources and more than two agents.

The question is: who and where do I order to grow bananas, and who and where do I order to build things? To maximize construction, I will want to order people with the largest comparative advantage in banana-growing to specialize in banana-growing, and I will want to order those bananas to be grown on the islands with the largest comparative advantage in banana-growing. (In fact, this is not just relevant to maximization of construction - it applies to pareto-optimal production in general.)

Could you elaborate on this by providing the general statement rather than only the example?

Before reading your post, I had in mind two different uses for the concept:

  • Comparative advantage is often used as an argument for free trade. Dynomight's post seems to provide a sufficient counterargument to this, in its example illustrating how with more than 2 players, opening up a trade route may not be a Pareto improvement (may not be a good thing for everyone).
  • Comparative advantage is sometimes used in career advice, EG, "find your comparative advantage". This is the case I focus on in the comment I linked to illustrating my confusion. What advice is actually offered? Are agents supposed to produce and sell things which they have a comparative advantage in? Not so much. It seems that advice coming from the concept is actually extremely weak in the case of a market with more than two goods.

Your post gave me a third potential application, namely, a criterion for when trade may occur at all. This expanded my understanding of the concept considerably. It's clear that where no comparative advantage exists, no trade makes sense. A country that's bad at producing everything might want to buy stuff from a country that's just 10x better, but to do so they'd at least need a comparative advantage in producing money (which doesn't really make sense; money isn't something you produce). (Or putting it a different way: their money would soon be used up.)

But then you apply the concept of comparative advantage to a case where there isn't any trade at all. What would you give as your general statement of the concept and the theorem you're applying?

Don't Get Distracted by the Boilerplate

I happened upon this old thread, and found the discussion intriguing. Thanks for posting these references! Unless I'm mistaken, it sounds like you've discussed this topic a lot on LW but have never made a big post detailing your whole perspective. Maybe that would be useful! At least I personally find discussions of applicability/generalizability of VNM and other rationality axioms quite interesting.

Indeed, I think I recently ran into another old comment of yours in which you made a remark about how Dutch Books only hold for repeated games? I don't recall the details now.

I have some comments on the preceding discussion. You said:

It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

For me, it seems that transitivity and completeness are on an equally justified footing, based on the classic money-pump argument.

Just to keep things clear, here is how I think about the details. There are outcomes. Then there are gambles, which we will define recursively. An outcome counts as a gamble for the sake of the base case of our recursion. For gambles A and B, pA+(1-p)B also counts as a gamble, where p is a real number in the range [0,1].

Now we have a preference relation > on our gambles. I understand its negation to be ≤; saying A≤B is the same thing as ¬(A>B). The indifference relation, A~B, is just the same thing as A≤B together with B≤A.

This is different from the development on Wikipedia, where ~ is defined separately. But I think it makes more sense to define > and then define ~ from that. A>B can be understood as "definitely choose A when given the choice between A and B". ~ then represents indifference as well as uncertainty like the kind you describe when you discuss bounded rationality.
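Here is a small sketch of that setup in code (my own rendering; the expected-utility evaluation at the bottom is just a stand-in so the example runs, not part of the definitions):

```python
# A small sketch of the setup above: outcomes are base-case gambles, mixtures
# pA + (1-p)B are gambles, and <= and ~ are derived from the primitive strict
# relation >. Names and numbers here are invented purely for illustration.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Outcome:
    name: str

@dataclass(frozen=True)
class Mix:
    p: float            # probability of `left`, a real number in [0, 1]
    left: "Gamble"
    right: "Gamble"

Gamble = Union[Outcome, Mix]

# For demonstration only, take > to come from an expected-utility evaluation;
# the definitions of <= and ~ below don't depend on this choice.
UTILITY = {"banana": 1.0, "coconut": 1.0, "nothing": 0.0}

def value(g: Gamble) -> float:
    if isinstance(g, Outcome):
        return UTILITY[g.name]
    return g.p * value(g.left) + (1 - g.p) * value(g.right)

def strictly_prefers(a: Gamble, b: Gamble) -> bool:   # a > b
    return value(a) > value(b)

def at_most(a: Gamble, b: Gamble) -> bool:            # a <= b  iff  not (a > b)
    return not strictly_prefers(a, b)

def indifferent(a: Gamble, b: Gamble) -> bool:        # a ~ b  iff  a <= b and b <= a
    return at_most(a, b) and at_most(b, a)

# Deriving ~ from > makes "either A>B, or B>A, or A~B" automatic:
A = Outcome("banana")
B = Mix(0.5, Outcome("coconut"), Outcome("nothing"))
print(strictly_prefers(A, B), strictly_prefers(B, A), indifferent(A, B))  # True False False
```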

From this starting point, it's clear that either A<B, or B<A, or A~B. This is just a way of saying "either A<B or B<A or neither". What's important about the completeness axiom is the assumption that exactly one of these holds; this tells us that we cannot have both A<B and B<A.

But this is practically the same as circular preferences A<B<C<A, which transitivity outlaws. It's just a circle of length 2.

The classic money-pump against circularity is that if we have circular preferences, someone can charge us for making a round trip around the circle, swapping A for B for C for A again. They leave us in the same position we started, less some money. They can then do this again and again, "pumping" all the money out of us.
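For concreteness, here's a toy rendering of that round trip in code (my own illustration; the fee and the circular preferences are invented):

```python
# A toy illustration of the round-trip money pump: an agent with circular
# strict preferences A < B < C < A pays a small fee for each "upgrade" and
# ends up holding what it started with, minus the fees.
prefers = {("B", "A"): True, ("C", "B"): True, ("A", "C"): True}  # circular!

def accepts_swap(offered, current, fee, money):
    # The agent takes any affordable swap to something it strictly prefers.
    return money >= fee and prefers.get((offered, current), False)

holding, money, fee = "A", 10.0, 1.0
for offered in ["B", "C", "A"] * 3:          # three trips around the circle
    if accepts_swap(offered, holding, fee, money):
        holding, money = offered, money - fee

print(holding, money)  # back to "A", but with 10 - 9 = 1.0 money left
```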

Personally I find this argument extremely metaphysically weird, for several reasons.

  • The money-pumper must be God, to be able to swap arbitrary A for B, and B for C, etc.
  • But furthermore, the agent must not understand the true nature of the money-pumper. When God asks about swapping A for B, the agent thinks it'll get B in the end, and makes the decision accordingly. Yet, God proceeds to then ask a new question, offering to swap B for C. So God doesn't actually put the agent in universe B; rather, God puts the agent in "B+God", a universe with the possibility of B, but also a new offer from God, namely, to move on to C. So God is actually fooling the agent, making an offer of B but really giving the agent something different from B. Bad decision-making should not count against the agent if the agent was misled in such a manner!
  • It's also pretty weird that we can end up "in the same situation, but with less money". If the outcomes A,B,C were capturing everything about the situation, they'd include how much money we had!

I have similar (but less severe) objections to Dutch-book arguments.

However, I also find the argument extremely practically applicable, so much so that I can excuse the metaphysical weirdness. I have come to think of Dutch-book and money-pump arguments as illustrative of important types of (in)consistency rather than literal arguments.

OK, why do I find money-pumps practical?

Simply put, if I have a loop in my preferences, then I will waste a lot of time deliberating. The real money-pump isn't someone taking advantage of me, but rather, time itself passing.

What I find is that I get stuck deliberating until I can find a way to get rid of the loop. Or, if I "just choose randomly", I'm stuck with a yucky dissatisfied feeling (I have regret, because I see another option as better than the one I chose).

This is equally true of three-choice loops and two-choice loops. So, transitivity and completeness seem equally well-justified to me.

Stuart Armstrong argues that there is a weak money pump for the independence axiom. I made a very technical post (not all of which seems to render correctly on LessWrong :/) justifying as much as I could with money-pump/Dutch-book arguments, and similarly got everything except continuity.

I regard continuity as not very theoretically important, but highly applicable in practice. IE, I think the pure theory of rationality should exclude continuity, but a realistic agent will usually have continuous values. The reason for this is again because of deliberation time.

If we drop continuity, we get a version of utility theory with infinite and infinitesimal values. This is perfectly fine, has the advantage of being more general, and is in some sense more elegant. To reference the OP, continuity is definitely just boilerplate; we get a nice generalization if we want to drop it.

However, a real agent will ignore its own infinitesimal preferences, because it's not worth spending time thinking about that. Indeed, it will almost always just think about the largest infinity in its preferences. This is especially true if we assume that the agent places positive probability on a really broad class of things, which again seems true of capable agents in practice. (IE, if you have infinities in your values, and a broad probability distribution, you'll be Pascal-mugged -- you'll only think of the infinite payoffs, neglecting finite payoffs.)
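To spell out that parenthetical in symbols (my own rendering, keeping just one infinite utility level ω for simplicity): if action a has probability ε_a > 0 of an infinitely valued outcome, then

```latex
\mathbb{E}[U(a)] \;=\; \varepsilon_a\,\omega \;+\; (\text{finite terms}),
```

and whenever ε_a ≠ ε_b, the comparison between actions a and b is settled entirely by the ω terms, regardless of how the finite payoffs compare.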

So all of the axioms except independence have what appear to me to be rather practical justifications, and independence has a weak money-pump justification (which may or may not translate to anything practical).

Comparative advantage and when to blow up your island

I once had a discussion with Scott G and Eli Tyre about this. We decided that the "real thing" was basically where you should end up in the complicated worker/job optimization problem, and there were more or less two ways to try and approximate it:

  1. Supposing everyone else has already chosen their optimal spot, what still needs doing? What can I best contribute? This is sorta easy, because you just look around at what needs doing, combine this with what you know about how capable you are at contributing, and you get an estimate of how much you'd contribute in each place. Then you go to the place with the highest number. [modulo gut feelings, intrinsic motivation, etc]
  2. Supposing you choose first, how could everyone else move around you to create an optimal configuration? You then go do the thing which implies the best configuration. This seems much harder, but might be necessary for people who provide a lot of value (and therefore what they do has a big influence on what other people should do), particularly in small teams where a near-optimal reaction to your choice is feasible.

Comparative advantage and when to blow up your island

OK. It seems there are results for more than 2 goods, but the results are quite weak:

Thus, if both relative prices are below the relative prices in autarky, we can rule out the possibility that both goods 1 and 2 will be imported—but we cannot rule out the possibility that one of them will be imported. In other words, once we leave the two-good case, we cannot establish detailed predictive relations saying that if the relative price of a traded good exceeds the relative price of that good in autarky, then that good will be exported by the country in question. It follows that any search for a strong theorem along the lines of our first proposition earlier is bound to fail. The most one can hope for is a correlation between the pattern of trade and differences in autarky prices. 

Dixit, Avinash; Norman, Victor (1980). Theory of International Trade: A Dual, General Equilibrium Approach. Cambridge: Cambridge University Press. p. 8

Comparative advantage and when to blow up your island

Here's something I don't get about comparative advantage.

The implied advice, as far as I understand it, is to check which good you have a comparative advantage in producing, and offer that good to the market.

But suppose that there are a lot more goods and a lot more participants in the market.

For any one individual, given fixed prices and supply of everyone else, it sounds like we can formulate the production and trade strategy as a linear programming problem:

  • We have some maximum amount of time. That's a linear constraint.
  • We can allocate time to different tasks.
  • The output of each task is assumed to be linear in the time allocated to it.
  • The tasks produce different goods.
  • These goods all have different prices on the market.
  • We might have some basic needs, like the 10 bananas and 10 coconuts. That's a constraint.
  • We might also have desires, like not working, or we might desire some goods. That's our linear programming objective.

OK. So we can solve this as a linear program.

But... linear programs don't have some nice closed-form solution. The simplex algorithm can solve them efficiently in practice, but that's very different from an easy formula like "produce the good with the highest comparative advantage".
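For concreteness, here's a minimal sketch of the one-player problem as a linear program, using scipy.optimize.linprog; the production rates, prices, and needs are made up purely for illustration, and the objective here is simply "market value of what I produce":

```python
# A minimal sketch of the one-player problem as a linear program. All numbers
# (rates, prices, needs) are invented for illustration only.
import numpy as np
from scipy.optimize import linprog

rate = np.array([3.0, 2.0])      # goods produced per hour: [bananas, coconuts]
price = np.array([1.0, 2.0])     # market price per unit:   [bananas, coconuts]
hours_available = 10.0
needs = np.array([10.0, 10.0])   # must produce at least 10 bananas and 10 coconuts

# Decision variables: t[i] = hours spent on task i.
# Maximize total market value of output  <=>  minimize its negative.
c = -(price * rate)

# Time budget: sum(t) <= hours_available.
A_ub = [np.ones(2)]
b_ub = [hours_available]

# Basic needs: rate[i] * t[i] >= needs[i], rewritten as -rate[i] * t[i] <= -needs[i].
A_ub += list(-np.diag(rate))
b_ub += list(-needs)

result = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                 bounds=[(0.0, None)] * 2)
print(result.x)        # hours on each task
print(-result.fun)     # total market value produced
```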

And that's just solving the problem for one player, assuming the other players have fixed strategies. More generally, we have to anticipate the rest of the market as well. I don't even know if that can be solved efficiently, via linear programming or some other technique.

Is "produce where you have comparative advantage" really very useful advice for more complex cases?

Wikipedia starts out describing comparative advantage as a law:

The law of comparative advantage describes how, under free trade, an agent will produce more of and consume less of a good for which they have a comparative advantage.[1]

But no precise mathematical law is ever stated, and the law is only justified with examples (specifically, two-player, two-commodity examples). Furthermore, I only ever recall seeing comparative advantage explained with examples, rather than being stated as a theorem. (Although this may be because I never got past econ 101.)

This makes it hard to know what the claimed law even is, precisely. "produce more and consume less"? In comparison to what?

One spot on Wikipedia says:

Skeptics of comparative advantage have underlined that its theoretical implications hardly hold when applied to individual commodities or pairs of commodities in a world of multiple commodities.

This is given without citation, though, so I don't know where to find the details of these critiques.

Comparing Utilities

I'm not sure I follow that it has to be linear - I suspect higher-order polynomials will work just as well. Even if linear, there are a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.

Well, I haven't actually given the argument that it has to be linear. I've just asserted that there is one, referencing Harsanyi and complete class arguments. There are a variety of related arguments. And these arguments have some assumptions which I haven't been emphasizing in our discussion.

Here's a pretty strong argument (with correspondingly strong assumptions).

  1. Suppose each individual is VNM-rational.
  2. Suppose the social choice function is VNM-rational.
  3. Suppose that we also can use mixed actions, randomizing in a way which is independent of everything else.
  4. Suppose that the social choice function has a strict preference for every Pareto improvement.
  5. Also suppose that the social choice function is indifferent between two different actions if every single individual is indifferent.
  6. Also suppose the situation gives a nontrivial choice with respect to every individual; that is, no one is indifferent between all the options.

By VNM, each individual's preferences can be represented by a utility function, as can the preferences of the social choice function.

Imagine actions as points in preference-space, an n-dimensional space where n is the number of individuals.

By assumption #5, actions which map to the same point in preference-space must be treated the same by the social choice function. So we can now imagine the social choice function as a map from R^n to R.

VNM on individuals implies that the mixed action p * a1 + (1-p) * a2 maps to the point lying a fraction p of the way along the line segment from a2 to a1.

VNM implies that the value the social choice function places on mixed actions is just a linear mixture of the values of pure actions. But this means the social choice function can be seen as an affine function from R^n to R. Of course since utility functions don't mind additive constants, we can subtract the value at the origin to get a linear function.

But remember that points in this space are just vectors of individuals' utilities for an action. So that means the social choice function can be represented as a linear function of individuals' utilities.

So now we've got a linear function. But I haven't used the Pareto assumption yet! That assumption, together with #6, implies that the linear function has to be increasing in every individual's utility function.
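In symbols (my own notation, not anything from Harsanyi's paper specifically): write u = (u_1, ..., u_n) for the point in preference-space corresponding to an action, i.e. the vector of individual utilities, and W for the social choice function's utility as a function of that point. The steps above amount to:

```latex
% VNM on the social choice function gives mixture-linearity:
W\bigl(p\,u + (1-p)\,v\bigr) \;=\; p\,W(u) + (1-p)\,W(v),
% which forces W to be affine; dropping the additive constant W(0):
W(u) \;=\; \sum_{i=1}^{n} c_i\,u_i,
% and Pareto (assumption 4) plus assumption 6 force every coefficient c_i > 0.
```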

Now I'm lost again. "you should have a preference over something where you have no preference" is nonsense, isn't it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents' preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that's already factored in and that's why it nets to neutral for the agent, and the argument is moot.

[...]

If you're just saying "people don't understand their own utility functions very well, and this is an intuition pump to help them see this aspect", that's fine, but "theorem" implies something deeper than that.

Indeed, that's what I'm saying. I'm trying to separately explain the formal argument, which assumes the social choice function (or individual) is already on board with Pareto improvements, and the informal argument meant to get someone to accept some form of preference utilitarianism, in which you might point out that Pareto improvements benefit others at no cost. The informal argument is contradictory and pointless if the person already has fully consistent preferences, but it might realistically sway somebody from believing that they can be indifferent about a Pareto improvement to believing that they have a strict preference in favor of it.

But the informal argument relies on the formal argument.

Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem

Maybe it's better phrased as "a CIRL agent has a positive incentive to allow shutdown iff it's uncertain [or the human has a positive term for it being shut off]", instead of "a machine" has a positive incentive iff.

I would further charitably rewrite it as:

"In chapter 16, we analyze an incentive which a CIRL agent has to allow itself to be switched off. This incentive is positive if and only if it is uncertain about the human objective."

A CIRL agent should be capable of believing that humans terminally value pressing buttons, in which case it might allow itself to be shut off despite being 100% sure about values. So it's just the particular incentive examined that's iff.
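For what it's worth, here's a toy numeric rendering of the incentive in question (my own construction, not anything from the book): the agent can act directly, switch itself off, or defer to a human who allows the action only if it is actually good. The extra value of deferring is positive exactly when the agent's belief puts weight both on the action being good and on it being bad:

```python
# A toy sketch of the off-switch-style incentive being discussed: the value of
# deferring to a human who allows the action iff it's actually good, versus
# just acting or shutting off. All numbers are made up for illustration.
import numpy as np

def incentive_to_defer(utility_samples):
    """Extra expected value from deferring, given the agent's belief over U_a."""
    u = np.asarray(utility_samples, dtype=float)
    value_of_deferring = np.mean(np.maximum(u, 0.0))   # human allows a iff U_a >= 0
    value_of_acting_alone = max(np.mean(u), 0.0)        # best of: act now, switch off
    return value_of_deferring - value_of_acting_alone

# Uncertain about the sign of U_a: positive incentive to allow oversight/shutdown.
print(incentive_to_defer([+2.0, -1.0]))   # 0.5 > 0

# Certain about U_a (all probability on one value): incentive is exactly zero.
print(incentive_to_defer([+2.0, +2.0]))   # 0.0
print(incentive_to_defer([-1.0, -1.0]))   # 0.0
```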

Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem

Sure, but the theorem he proves in the setting where he proves it probably is if and only if. (I have not read the new edition, so, not really sure.)

It also seems to me like Stuart Russell endorses the if-and-only-if result as what's desirable? I've heard him say things like "you want the AI to prevent its own shutdown when it's sufficiently sure that it's for the best".

Of course that's not technically the full if-and-only-if (it needs to both be certain about utility and think preventing shutdown is for the best), but it suggests to me that he doesn't think we should add more shutoff incentives such as AUP.

Keep in mind that I have fairly little interaction with him, and this is based off of only a few off-the-cuff comments during CHAI meetings.

My point here is just that it seems pretty plausible that he meant "if and only if".
