No, (un)fortunately it is not so.

I say this has nothing to do with ambiguity aversion, because we can replace (1/2, 1/2+-1/4, 1/10) with all sorts of things which don't involve uncertainty. We can make *anyone* "leave money on the table". In my previous message, using ($100, a rock, $10), I "proved" that a rock ought to be worth at least $90.

If this is still unclear, then I offer your example back to you with one minor change: the trading incentive is still 1/10, and one agent still has 1/2+-1/4, but instead the other agent has 1/4. The ...

I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one's knowledge about the possible outcomes.

To quote the article you linked: "Jaynes certainly believed very firmly that probability was in the mind ... there was only one correct prior distribution to use, given your state of partial information at the start of the problem."

I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct pr...

0 points · 11y

Well, you'd have to say how you choose the interval. Jaynes justified his prior
distributions with symmetry principles and maximum entropy. So far, your
proposals allow the interval to depend on a coin flip that has no effect on the
utility or on the process that does determine the utility. That is not what
predicting the results of actions looks like.
Given an interval, your preferences obey transitivity even though ambiguity
doesn't, right? I don't think that nontransitivity is the problem here; the
thing I don't like about your decision process is that it takes into account
things that have nothing to do with the consequences of your actions.
I only mean that middle paragraph, not the whole comment.

Right, except this doesn't seem to have anything to do with ambiguity aversion.

Imagine that one agent owns $100 and the other owns a rock. A government agency wishes to promote trade, and so will offer $10 to any agents that do trade (a one-off gift). If the two agents believe that a rock is worth more than $90, they will trade; if they don't, they won't, etc etc

2 points · 11y

But it has everything to do with ambiguity aversion: the trade only fails
because of it. If we reach into the system, and remove ambiguity aversion for
this one situation, then we end up unarguably better (because of the symmetry).
Yes, sometimes the subsidy will be so high that even the ambiguity averse will
trade, or sometimes so low that even Bayesians won't trade; but there will
always be a middle ground where Bayesians win.
As I said elsewhere, ambiguity aversion seems like the combination of an agent
who will always buy below the price a Bayesian would pay, and another who will
always sell above the price a Bayesian would pay. Seen like that, your case that
they cannot be arbitraged is plausible. But a rock cannot be arbitraged either,
so that's not sufficient.
This example hits the ambiguity averter exactly where it hurts, exploiting the
fact that there are deals they will not undertake either as buyer or seller.

But it still remains that in many circumstances (such as single draws in this setup), there exists information that a Bayesian will find useless and an ambiguity-averter will find valuable. If agents have the opportunity to sell this information, the Bayesian will get a free bonus.

How does this work, then? Can you justify that the bonus is free without circularity?

...From a more financial perspective, the ambiguity-averter gives up the opportunity to be a market-maker: a Bayesian can quote a price and be willing to either buy or sell at that price (p

0 points · 11y

I wonder if you can express your result in a simpler fashion... Model your agent
as a combination of a buying agent and a selling agent. The buying agent will
always pay less than a Bayesian, the selling agent will always sell for more.
Hence (a bit of hand waving here) the combined agent will never lose money to a
money pump. The problem is that it won't pick up 'free' money.

0 points · 11y

For two agents, I can.
Imagine a setup with two agents, otherwise identical, except that one owns a
1/2+-1/4 bet and the other owns 1/2. A government agency wishes to promote
trade, and so will offer 0.1 to any agents that do trade (a one-off gift).
If the two agents are Bayesian, they will trade; if they are ambiguity averse,
they won't. So the final setup is strictly identical to the start one (two
identical agents, one owning 1/2+- 1/4, one owning 1/2) except that the Bayesian
are each 0.1 richer.
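(A sketch of that arithmetic in Python — the 0.25/0.75 buy/sell values are my reading of how an ambiguity-averse agent prices the 1/2+-1/4 bet, i.e. buy at the low end, sell at the high end:)

```python
subsidy = 0.1

# How each agent type values the ambiguous 1/2 +- 1/4 bet:
bayesian_value = 0.5                  # a Bayesian collapses the interval to its midpoint
averse_buy, averse_sell = 0.25, 0.75  # assumed rule: buy at the low end, sell at the high end

# The swap gives the ambiguous bet's holder a sure 1/2 plus the subsidy.
offer = 0.5 + subsidy
print(offer > bayesian_value)  # True: the Bayesians trade, and each ends up 0.1 richer
print(offer > averse_sell)     # False: 0.6 is below the 0.75 the averse holder demands
```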

Would a single ball that is either green or blue work?

That still seems like a structureless event. No abstract example comes to mind, but there must be concrete cases where Bayesians disagree wildly about the prior probability of an event (95%). Some of these cases should be candidates for very high (but not complete) ambiguity.

I think that a bet that is about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.

I think you're really saying two things: *the correc*...

0 points · 11y

Okay.
Well, once you assign probabilities to everything, you're mostly a Bayesian
already. I think the best summary would be that when one must make a decision
under uncertainty, preference between actions should depend on and only on one's
knowledge about the possible outcomes.
Aren't you violating the axiom of independence but not the axiom of
transitivity?
I'm not really sure what a lot of this means. The virtual interval seems to me
to be subjectively objective
[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/] in the same
way probability is. Also, do you mean 'could have any effect' in the normative
sense of an effect on what the right choice is?

What if I told you that the balls were either all green or all blue?

Hmm. Well, with the interval prior I had in mind (footnote 7), this would result in very high (but not complete) ambiguity. My guess is that's a limitation of two dimensions -- it'll handle updating on draws from the urn but not "internals" like that. But I'm guessing. (1/2 +- 1/6) seems like a reasonable prior interval for a structureless event.

...So in the standard Ellsberg paradox, you wouldn't act non-Bayesianly if you were told "The reason I'm asking you to choose

0 points · 11y

Would a single ball that is either green or blue work?
I agree that your decision procedure is consistent, not susceptible to Dutch
books, etc.
I don't think this is true. Whether or not you flip the coin, you have the same
information about the number of green balls in the urn, so, while the total
information is different, the part about the green balls is the same. In order
to follow your decision algorithm while believing that probability is about
incomplete information, you have to always use all your knowledge in decisions,
even knowledge that, like the coin flip, is 'uncorrelated', if I can use that
word for something that isn't being assigned a probability, with what you are
betting on. This is consistent with the letter of what I wrote, but I think that
a bet that is about whether a green ball will be drawn next should use your
knowledge about the number of green balls in the urn, not your entire mental
state.

Suppose in the Ellsberg paradox that the proportion of blue and green balls was determined, initially, by a coin flip (or series of coin flips). In this view, there is no ambiguity at all, just classical probabilities

Correct.

Where do you draw the line

1) I have no reason to think A is more likely than B and I have no reason to think B is more likely than A

2) I have good reason to think A is as likely as B.

These are different of course. I argue the difference matters.

...The boots and the mother example can all be dealt with using standard Bayesian tec

0 points · 11y

But it still remains that in many circumstances (such as single draws in this
setup), there exists information that a Bayesian will find useless and an
ambiguity-averter will find valuable. If agents have the opportunity to sell
this information, the Bayesian will get a free bonus.
From a more financial perspective, the ambiguity-averter gives up the
opportunity to be a market-maker: a Bayesian can quote a price and be willing to
either buy or sell at that price (plus a small fee), whereas the
ambiguity-averter's required spread is pushed up by the ambiguity (so all other
agents will shop with the Bayesian).
Also, the ambiguity-averter has to keep track of more connected trades than a
Bayesian does. Yes, for shoes, whether other deals are offered becomes relevant;
but trades that are truly independent of each other (in utility terms) can be
treated so by a Bayesian but not by an ambiguity-averter.

If you mean repeated draws from the same urn, then they'd all have the same orientation. If you mean draws from different unrelated urns, then you'd need to add dimensions. It wouldn't converge the way I think you're suggesting.

0 points · 11y

The ratio of risk to return goes down with many independent draws (variances
add, but standard deviations don't). It's one of the reasons investors are keen
to diversify over uncorrelated investments (though again, risk-avoidance is
enough to explain that behaviour in a Bayesian framework).
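(A quick illustration of the variances-add point — the mean and standard deviation here are made-up numbers:)

```python
import math

# n independent bets, each with mean m and standard deviation s.
m, s = 1.0, 3.0
for n in (1, 4, 100):
    mean_total = n * m             # means add linearly
    sd_total = math.sqrt(n) * s    # variances add, so the sd only grows like sqrt(n)
    print(n, sd_total / mean_total)  # risk-to-return shrinks as 1/sqrt(n): 3.0, 1.5, 0.3
```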

Here's an alternate interpretation of this method:

If two events have probability intervals that don't overlap, or they overlap but they have the same orientation and neither contains the other, then I'll say that one event is *unambiguously more likely* than the other. If two events have the exact same probability intervals (including orientation), then I'll say that they are *equally likely*. Otherwise they are incomparable.

Under this interpretation, I claim that I do obey rule 2 (see prev post): if A is unambiguously more likely than B, then (A but not B) is unam...
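(One possible coding of that ordering — the `(lo, hi, orientation)` triple is my own representation, and the overlap clause is my reading of the rule, not an official spec:)

```python
def unambiguously_more_likely(a, b):
    """a, b are (lo, hi, orientation) probability intervals.
    True when a's interval lies entirely above b's, or the two overlap with the
    same orientation, neither containing the other, and a shifted upward."""
    (alo, ahi, aor), (blo, bhi, bor) = a, b
    if alo > bhi:                                # disjoint, a strictly above b
        return True
    if aor == bor and alo > blo and ahi > bhi:   # same orientation, a shifted up
        return True                              # (so neither contains the other)
    return False

up = +1
print(unambiguously_more_likely((0.5, 0.7, up), (0.1, 0.3, up)))  # True: disjoint
print(unambiguously_more_likely((0.3, 0.6, up), (0.2, 0.5, up)))  # True: same orientation, shifted up
print(unambiguously_more_likely((0.2, 0.6, up), (0.3, 0.5, up)))  # False: containment, so incomparable
```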

I don't think you can just uncritically say "surely the world is thus and so".

But it was a conditional statement. If the universe is discrete and finite, then obviously there are no immortal agents either.

Basically I don't see that aspect of P6 as more problematic than the unbounded resource assumption. And when we question that assumption, we'll be questioning a lot more than P6.

No, this doesn't sound like the Allais paradox. The Allais paradox has all probabilities given. The Ellsberg paradox is the one with the "undetermined balls". Or maybe you have something else entirely in mind.

0 points · 11y

What I mean is possible preference reversal if you just have a probability of a
gamble vs. a known gamble.

I do not run into the Allais paradox -- and in general, when all probabilities are given, I satisfy the expected utility hypothesis.

0 points · 11y

Not running into the Allais paradox means that if you dump an undetermined ball
into a pool of balls, you just add the bets together linearly. But, of course,
you do that enough times and you just have the normal result.
So yeah, I'm pretty sure Allais paradox.

How do you choose the interval? I have not been able to see any method other than choosing something that sounds good

Heh. *I'm* the one being accused of huffing priors? :-)

Okay, granted, there are methods like maximum entropy for Bayesian priors that can be applied in some situations, and the Ellsberg urn is such a situation.

Yes, you are correct about the discontinuity in the derivative.

0 points · 11y

Yes. Because you're huffing priors. Twice as much, in fact - we have to make up
one number, you have to make up two.

You mean, I *will* be offered a bet on green, but I *may or may not* be offered a bet on blue? Then that's not a Dutch book -- what if I'm not offered the bet on blue?

For example: suppose you think a pair of boots is worth $30. Someone offers you a left boot for $14.50. You probably won't find a right boot, so you refuse. The next day someone offers you a right boot for $14.50, but it's too late to go back and buy the left. So you refuse. Did you just leave $1 on the table?

0 points · 11y

Ah, I see what you mean now. So, through no fault of your own, I have conspired
to put the wrong boots in front of you. It's not about the probability depending
on whether you're buying or selling the bet, it's about assigning an extra value
to known proportions.
Of course, then you run into the Allais paradox... although I forget whether
there was a dutch book corresponding to the Allais paradox or not.

I wouldn't take any of them individually, but I would take green and blue together. Why would you take the red bet in this case?

0 points · 11y

I intentionally designed the bets so that your agent would take none of them
individually, but that together they would be free money. If it has a correct
belief, naturally a bet you won't take might look a little odd. But to an agent
that honestly thinks P(green | buying) = 2/9, the green and blue bets will look
just as odd.
And yes, your agent would take a bet about (green or blue). That is beside the
point, since I merely first offered a bet about green, and then a bet about
blue.

I wouldn't take any of them individually (except red), but I'd take all of them together. Why is that not allowed?

[This comment is no longer endorsed by its author]

I don't understand what you mean in the first paragraph. I've given an exact procedure for my decisions.

What kind of discontinuities do you have in mind?

0 points · 11y

How do you choose the interval? I have not been able to see any method other
than choosing something that sounds good (choosing the minimum and maximum
conceivable would lead to silly Pascal's Wager-type things, and probably total
paralysis.)
The discontinuity: Suppose you are asked to put a fair price f(N) on a bet that
returns N if A occurs and 1 if it does not. The function f will have a sharp
bend at 1, equivalent to a discontinuity in the derivative.
An alternative ambiguity aversion function, more complicated to define, would
give a smooth bend.
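(The kink can be exhibited directly. Here I use the interval [2/9, 4/9] that appears elsewhere in the thread, and assume the worst-case pricing rule — value a bet at the minimum of its expectation over the interval:)

```python
def fair_price(N, p_lo=2/9, p_hi=4/9):
    # Worst-case (buyer's) value of a bet paying N if A occurs and 1 otherwise:
    # min over p in the interval of p*N + (1 - p), attained at an endpoint.
    return min(p * N + (1 - p) for p in (p_lo, p_hi))

eps = 1e-6
slope_below = (fair_price(1) - fair_price(1 - eps)) / eps  # ~4/9: the pessimist uses p_hi when N < 1
slope_above = (fair_price(1 + eps) - fair_price(1)) / eps  # ~2/9: the pessimist uses p_lo when N > 1
print(slope_below, slope_above)  # the derivative jumps at N = 1
```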

I guess you mean: you offer me a bet on green for $2.50 and a bet on blue for $2.50, and I'd refuse either. But I'd take both, which would be a bet on green-or-blue for $5. So no, no dutch book here either.

Or do you have something else in mind?

0 points · 11y

I mean that I could offer you $9 on green for 2.50, $9 on blue for 2.50, and $9
on red for 3.01, and you wouldn't take any of those bets, despite, in total,
having a certainty of making 99 cents. This "type 2" dutch book argument (not
really a dutch book, but it's showing a similar thing for the same reasons) is
based on the principle that if you're passing up free money, you're doing
something wrong :P
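(The arithmetic, spelled out — assuming, as discussed elsewhere in the thread, P(red) = 1/3 exactly, P(green) = P(blue) = 1/3 +- 1/9, and the rule "buy only below the lower-end expected value":)

```python
from fractions import Fraction as F

bets = {  # name: (payout, price, lower-end probability)
    "green": (9, F(5, 2), F(2, 9)),
    "blue":  (9, F(5, 2), F(2, 9)),
    "red":   (9, F(301, 100), F(1, 3)),
}
for name, (payout, price, p_lo) in bets.items():
    # Each bet individually fails the test: its price exceeds the lower-end EV.
    print(name, "buy" if price < payout * p_lo else "pass")  # pass, pass, pass

total_price = sum(price for _, price, _ in bets.values())
print(F(9) - total_price)  # exactly one colour wins the $9, so refusing forgoes a certain 99/100
```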

If the bet pays $273 if I drew a red ball, I'd buy or sell that bet for $93. For green, I'd buy that bet for $60 and sell it for $120. For red-or-green, I would buy that for $153 and sell it for $213. Same for blue and red-or-blue. For green-or-blue, I'd buy or sell that for $180.

(Appendix A has an exact specification, and you may wish to (re-)read the boot dialogue.)

[ADDED: sorry, I missed "let's drop the asymmetry" .. then, if the bet pays $9 on red, buy or sell for $3; green, buy $2 sell $4; red-or-green, buy $5 sell $7; blue, red-or-blue same, green-or-blue, buy or sell $6. Assuming risk neutrality for $, etc etc no purchase necessary must be over 18 void in Quebec.]
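(The $273 prices above can be checked mechanically — a sketch, assuming the "buy at the lower end of the interval, sell at the upper end" rule and the 31-red, 60 green-or-blue urn:)

```python
from fractions import Fraction as F

payout = 273
p_red = F(31, 91)                       # 31 red balls out of 91
g_lo = (F(1, 2) - F(1, 6)) * F(60, 91)  # green, pessimistic end of the interval
g_hi = (F(1, 2) + F(1, 6)) * F(60, 91)  # green, optimistic end

print(payout * p_red)                                    # 93: red, buy = sell
print(payout * g_lo, payout * g_hi)                      # 60 and 120: green buy / sell
print(payout * (p_red + g_lo), payout * (p_red + g_hi))  # 153 and 213: red-or-green
print(payout * F(60, 91))                                # 180: green-or-blue, buy = sell
```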

0 points · 11y

Ah, I see. But now you'll get type 2 dutch booked - you'll pass up on certain
money if someone offers you a winning bet that requires you to buy.

I replied to Manfred with the Ellsberg example having 31 instead of 30 red balls. Does that count as different? If so, do I lose utility?

0 points · 11y

From Manfred's comments (with which I agree), it looks like yes, you lose
utility by failing to buy a bet that has positive EV. You lose half as much if
you flip a coin, because sometimes the coin is right...

Well, in terms of decisions, P(green) = 1/3 +- 1/9 means that I'd buy a bet on green for the price of a true randomised bet with probability 2/9, and sell for 4/9, with the caveats mentioned.

We might say that the price of a left boot is $15 +- $5 and the price of a right boot is $15 -+ $5.

0 points · 11y

Yes. So basically you are biting a certain bullet that most of us are unwilling
to bite, of not having a procedure to determine your decisions and just kind of
choosing a number in the middle of your range of choices that seems reasonable.
You're also biting a bullet where you have a certain kind of discontinuity in
your preferences with very small bets, I think.

Showing that it can't be pumped just means that it's consistent. It doesn't mean it's correct. Consistently wrong choices cost utility, and are not rational.

To be clear: you mean that my choices somehow cost utility, even if they're consistent?

I would greatly love an example that compares a plain Bayesian analysis with an interval analysis.

It's a good idea. But at the moment I think more basic questions are in dispute.

0 points · 11y

Intervals of probability seem to reduce to probability if you consider the
origin of the interval. Suppose in the Ellsberg paradox that the proportion of
blue and green balls was determined, initially, by a coin flip (or series of
coin flips). In this view, there is no ambiguity at all, just classical
probabilities - so you seem to posit some distinction based on how something was
set up. Where do you draw the line; when does something become genuinely
ambiguous?
The boots and the mother example can all be dealt with using standard Bayesian
techniques (you take utility over worlds, and worlds with one boot are not very
valuable, worlds with two are; and the memories of the kids are relevant to
their happiness), and you can re-express what is intuitively an "interval of
probability" as a Bayesian behaviour over multiple, non-independent bets.
You would pay to remove ambiguity. And ambiguity removal doesn't increase
expected utility, so Bayesian agents would outperform you in situations where
some agents had ambiguity-reducing knowledge.

0 points · 11y

Yes. I mean that, when your choice is different from what standard (or for some
cases, timeless) decision theory calculates for the same prior beliefs and
outcome->utility mapping, you're losing utility. I can't tell if you think that
this theory does have different outcomes, or if you think that this is "just" a
simplification that gives the same outcomes.

(This argument seems to suggest a "common-sense human" position between high ambiguity aversion and no ambiguity aversion, but most of us would find that untenable.)

Well then, P(green) = 1/3 +- 1/3 would be extreme ambiguity aversion (such as would match the adversary I think you are proposing), and P(green) = 1/3 exactly would be no ambiguity aversion, so something like P(green) = 1/3 +- 1/9 would be such a compromise, no? And why is that untenable?

To clarify: the adversary you have in mind, what powers does it have, exactly?

Generally speaki...

0 points · 11y

I don't get what this range signifies. There should be a data point about how
ambiguous it is, which you could use or not use to influence actions. (For
instance, if someone says they looked in the urn and it seemed about even, that
reduces ambiguity.) But then you want to convert that into a range, which does
not refer to the actual range of frequencies (which could be 1/3 +- 1/3) and is
dependent on your degree of aversion, and then convert that into a
decision?

Once they start paying for equivalent options, then they get money-pumped.

Okay. Suppose there is an urn with 31 red balls, and 60 balls that are either green or blue. I choose to bet on red over green, and green-or-blue over red-or-blue. These are no longer equivalent options, and this is definitely not consistent with the laws of probability. Agreed?

(My prior probability interval is P(red) = 31/91 exactly, P(green) = (1/2 +- 1/6)(60/91), P(blue) = (1/2 -+ 1/6)(60/91).)

It sounds like you expected (and continue to expect!) to be able to money-pump me.

0 points · 11y

I'm confused what your notation means. Let's drop the asymmetry for now and just
focus on the fact that you appear to be violating the laws of probability. Does
your (1/2 +- 1/6) notation mean that if I would give you a dollar if you drew a
green ball, you would be willing to pay 1/3 of a dollar for that bet (bet 1)?
Ditto for red (bet 2)? But then if you paid me a dollar if the ball came up
(green-or-red), you would be willing to accept 1/2 of a dollar for that bet (bet
3)?
In that case, the dutch book consists of bets like (bet 1) + (bet 2) + (bet 3):
you pay me 1/3, you pay me 1/3, I pay you 1/2 (so you paid me 1/6th of a dollar
total). Then if the ball's green I pay you a dollar, if it's red I pay you a
dollar, and if it's (green-or-red) you pay me a dollar.
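(If the bets really were priced that way — 1/3 paid for each single-colour bet, 1/2 received for the pair — the combined book does lose 1/6 in every state. Whether it applies is exactly what's in dispute, since the interval rule prices the single-colour buys at 2/9, not 1/3. A quick check:)

```python
from fractions import Fraction as F

stakes = -F(1, 3) - F(1, 3) + F(1, 2)  # pay for bets 1 and 2, receive for bet 3

def net(ball):
    payoff = F(int(ball == "green"))            # bet 1 pays you $1 on green
    payoff += F(int(ball == "red"))             # bet 2 pays you $1 on red
    payoff -= F(int(ball in ("green", "red")))  # bet 3: you pay $1 on green-or-red
    return stakes + payoff

for b in ("green", "red", "blue"):
    print(b, net(b))  # -1/6 every time: a sure loss
```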

you would think it's excessive to trade (20U,0U) for just 1U.

What bet did you have in mind that was worth (20U,0U)? One of the simplest examples, if P(green) = 1/3 +- 1/9, would be 70U if green, -20U if not green. Does it still seem excessive to be neutral to that bet, and to trade it for a certain 1U (with the caveats mentioned)?

What if they were in the care of her future self who already flipped the coin? Why is this different?

This I don't understand. She is her future self isn't she?

Bonus scenario:

Oh boy!

...There are two standard Elisberg-para

0 points · 11y

Huh, my explanations in that last post were really bad. I may have used a level
of detail calibrated for simpler points, or I may have just not given enough
thought to my level of detail in the first place.
What if I told you that the balls were either all green or all blue? Would you
regard that as (20U,0U) (that was basically the bet I was imagining but, on
reflection, it is not obvious that you would assign it that expected utility)?
Would you think it equivalent to the (20U,0U) bet you mentioned and not
preferable to 1U?
So in the standard Ellsberg paradox, you wouldn't act non-Bayesianly if you
were told "The reason I'm asking you to choose between red and green rather than
red and blue is because of a coin flip.", but you'd still prefer red if all
three options were allowed? I guess that is at least consistent.
This is getting at a similar idea as the last one. What seems like the same
option, like green or Irina, becomes more valuable when there is an interval due
to a random event, even though the random event has already occurred and the
result is now known with certainty. This seems to be going against the whole
idea of probability being about mental states; even though the uncertainty has
been resolved, its status as 'random' still matters.

I'm not sure what you mean. If it's because the situation was too symmetrical, I think I addressed that.

For example, you could add or remove a couple of red balls. I still choose red over green, and green-or-blue over red-or-blue. I think the fact that it still can't lead to being dutch booked is going to be a surprise to many LW readers.

0 points · 11y

I would not make this prediction. I would think anyone who would understand that
claim without having to look up "dutch book" should find that obvious.

2 points · 11y

I agree, I don't think there is any way to dutch-book someone for being wrong
but consistent with the laws of probability (that is, still assigning 1/3
probabilities to r,g,b even when that's wrong). They simply lose money on
average. But this is an extra fact, unrelated to the triviality that is not
being able to dutch-book someone based on an arbitrary choice between two
equivalent options. Once they start paying for equivalent options, then they get
money-pumped.

Well, it would push me away from ambiguity aversion, I would become indifferent between a bet on red and a bet on green, etc.

Put it another way: a frequentist could say to you: "Your Bayesian behaviour is a perfect frequentist model of a situation where:

You choose a bet

An urn is selected uniformly at random from the fictional population

An outcome occurs.

It seems totally unreasonable to apply it in the Ellsberg situation or similar ones. For instance, you would then not react if you were in fact told the distribution."

And actually, as it ...

0 points · 11y

There seems to be an issue of magnitude here. There are 3 possible ways the urn
can be filled:
1. It could be selected uniformly at random
2. It could be selected through some unknown process: uniformly at random,
biased against me, biased towards blue, biased towards green, always exactly
30/30, etc.
3. It could be selected so as to exactly minimize my profits
2 seems a lot more like 1 than it does like 3. Even without using any Bayesian
reasoning, a range is a lot more like the middle of the range than it is like
one end of the range.
(This argument seems to suggest a "common-sense human" position between high
ambiguity aversion and no ambiguity aversion, but most of us would find that
untenable.)
An alternative way of talking about it:
The point I am making is that it is much more clear which direction my new
information is supposed to influence you than your information is supposed to
influence me. If a variable x is in the range [0,1], finding out that it is
actually 0 is very strongly biasing information. For instance, almost every
value x could have been before is strictly higher than the new known value. But
finding out that it is 1/2 does not have a clear direction of bias. Maybe it
should make you switch to more confidently betting x is high, maybe it should
make you switch to more confidently betting x is low. I don't know, it depends
on details of the case, and is not very robust to slight changes in the
situation.

If money has logarithmic value to you, you are not risk neutral, the way I understand the term. How are you using the term?

2 points · 11y

Hmm. I think you're right: I've never connected the terms in that way, using
"risk-neutral" in terms of utility rather than money. Looking at it more closely,
it appears it's more commonly used for money, which would be risk-seeking in
terms of utility, and probably non-optimal. (note: I also recognize that most
people, including me, over-estimate the decline massively, and for small wagers
it should be very close to linear).

For example, you would choose 1U with certainty over something like 10U ± 10U. You said that you would still make the ambiguity-averse choice if a few red balls were taken out, but what if almost all of them were removed?

If I had set P(green) = 1/3 +- 1/3, then yes. But in this case I'm not ambiguity averse to the extreme, like I mentioned. P(green) = 1/3 +- 1/9 was what I had, i.e. (1/2 +- 1/6)(2/3). The tie point would be 20 red balls, i.e. 1/4 exactly versus (1/2 +- 1/6)(3/4).
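(A quick check of that tie-point arithmetic:)

```python
from fractions import Fraction as F

red, ambiguous = 20, 60
total = red + ambiguous
p_red = F(red, total)                                  # 1/4 exactly
green_low = (F(1, 2) - F(1, 6)) * F(ambiguous, total)  # (1/3)(3/4) = 1/4
print(p_red == green_low)  # True: with 20 red balls the two bets tie
```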

...On a more abstract note, your stated reasons for your decision seem

1 point · 11y

Well utility is invariant under positive affine transformations, so you could
have 30U +- 10U and shift the origin so you have 10U +- 10U. More intuitively,
if you have 30U +- 10U, you can regard this as 20U + (20U,0U) and you would be
willing to trade this for 21U, but you're guaranteed the first 20U and you would
think it's excessive to trade (20U,0U) for just 1U.
Interesting.
What if they were in the care of her future self who already flipped the coin?
Why is this different?
Bonus scenario: There are two standard Ellsberg-paradox urns, each paired with a
coin. You are asked to pick one to get a reward for iff ((green and heads) or
(blue and tails)). At first you are indifferent, as both are identical. However,
before you make your selection, one of the coins is flipped. Are you still
indifferent?

I see. My cunning reply is thus:

Suppose you were told that, rather than being from an unknown source, the urn was in fact selected uniformly at random from 61 urns. In the first urn, there are 30 red balls, and 60 green balls. In the second urn, there are 30 red balls, 1 blue ball, and 59 green balls, etc, and in the sixty-first urn, there are 30 red balls and 60 blue balls.

This seems like pretty significant information. The kind of information that should change your behavior.

Would it change your behavior?
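(For a single draw, a Bayesian who takes the 61 urns as equally likely just averages over them, and the average is the same 1/3 as the flat prior — a quick check, with the urn contents as specified above:)

```python
from fractions import Fraction as F

# Urn i (i = 0..60) holds 30 red, i blue, and 60 - i green balls; all 61 equally likely.
p_green = sum(F(60 - i, 90) for i in range(61)) / 61
print(p_green)  # 1/3: for one draw, the mixture is indistinguishable from the flat prior
```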

1 point · 11y

In which direction should it change my behavior? What does it push me towards?

It seems totally unreasonable to apply it in that situation or similar ones.

You mean:

My behaviour could be explained if I were actually Bayesian, and I believed X

But I have agreed that X is false

Therefore my behaviour is unreasonable.

(Where X is the existence of an opponent with certain properties.)

Am I fairly representing what you are saying?

For instance, you would then not react to the presence of an actual adversary.

Why's that then? If there was an adversary, I could apply game theory just like anyone else, no?

...Also, I think that to f

0 points · 11y

My cunning argument is thus:
Suppose you were told that, rather than being unknown, the frequencies had not
yet been decided, but would instead be chosen by a nefarious adversary after you
made your bet.
This seems like pretty significant information. The kind of information that
should change your behavior.
Would it change your behavior?

Yes, replacing the new one. I.e. given a choice between trading the bet on green for a new randomised bet, we prefer to keep the bet on green. And no, the virtual interval is not part of any bet, it is persistent.

0 points · 11y

OK, now I understand why this is a necessary part of the framework.
I do think there is a problem with strictly choosing the lesser of the two
utilities. For example, you would choose 1U with certainty over something like
10U ± 10U. You said that you would still make the ambiguity-averse choice if
a few red balls were taken out, but what if almost all of them were removed?
On a more abstract note, your stated reasons for your decision seem to be that
you actually care about what might have happened for reasons other than the
possibility of it actually happening (does this make sense and accurately
describe your position?). I don't think humans actually care about such things.
Probability is in the mind; a difference in what might have happened is a
difference in states of knowledge about states of knowledge. A sentence like "I
know now that my irresponsible actions could have resulted in injuries or
deaths" isn't actually true given determinism; it's about what you now believe
you should have known in the past. [1] [2]
Getting back to the topic, people's desires about counterfactuals are desires
about their own minds. What Irina and Joey's mother wants is to not intend to
favour either of her children. [3] In reality, the coin is just as
deterministic as her decision. Her preference for randomness is about her mind,
not reality.
[1] True randomness like that postulated by some interpretations of QM is
different, and I'm not saying that people absolutely couldn't have preferences
about truly random counterfactuals. Such a world would have to be pretty weird
though. It would have to be timeful, for instance, since the randomness would
have to be fundamentally indeterminate before it happens, rather than just not
known yet, and timeful physics doesn't even make sense to me.
[2] This is itself a counterfactual, but that's irrelevant for this context.
[3] Well, my model of her prefers flipping a coin to drawing green or blue balls
from an urn, but my model of h

Agreed, the structural component is not normative. But to me, it is the structural part that seems benign.

If we assume the agent lives forever, and there's always some uncertainty, then surely the world *is* thus and so. If the agent doesn't live forever, then we're into bounded rationality questions, and even transitivity is up in the air.

0 points · 11y

P6 entails that there are (uncountably) infinitely many events. It is at least
compatible with modern physics that the world is fundamentally discrete both
spatially and temporally. The visible universe is bounded. So it may be that
there are only finitely many possible configurations of the universe. It's a big
number, sure, but if it's finite, then Savage's theorem is irrelevant. It doesn't
tell us anything about what to believe in our world. This is perhaps a silly
point, and there's probably a nearby theorem that works for "appropriately large
finite worlds", but still. I don't think you can just uncritically say "surely
the world is thus and so".
If this is supposed to say something normative about how I should structure my
beliefs, then the structural premises should be true of the world I have beliefs
about.

P6 is really both. Structurally, it forces there to be something like a coin that we can flip as many times as we want. But normatively, we can say that if the agent has blah blah blah preference, it shall be able to name a partition such that blah blah blah. See e.g. [rule 4]. This of course doesn't address *why* we think such a thing is normative, but that's another issue.

0 · 11y

But why ought the world be such that such a partition exists for us to name?
That doesn't seem normative. I guess there's a minor normative element in that
it demands "If the world conspires to allow us to have partitions like the ones
needed in P6, then the agent must be able to know of them and reason about them"
but that still seems secondary to the demand that the world is thus and so.

Your definition of total pre-order:

A total preorder % satisfies the following properties: For all x, y, and z, if x % y and y % z then x % z (transitivity). For all x and y, x % y or y % x (totality). (I substituted "%" for their symbol, since markdown doesn't render their symbol.) Let "A % B" represent "I am indifferent between, or prefer, A to B".

Looks to me like it's equivalent to what I wrote for rule 1. In particular, you say:

...To wit, I am indifferent between A and B, and between B and C, but I prefer A to C. Thi

0 · 11y

Oops, good catch. My formulation of "A % B" as "I am indifferent between or
prefer A to B" won't work. I think my doubts center on the totality requirement.
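The quoted failure (indifferent between A and B and between B and C, yet strictly preferring A to C) can be checked mechanically. A sketch, where the utilities and the 0.5 "just-noticeable difference" threshold are made up for illustration: the relation is total, but its indifference is intransitive, so it is not a total preorder.

```python
from itertools import product

def is_total_preorder(items, weakly_prefers):
    """Check totality and transitivity of a binary relation."""
    total = all(weakly_prefers(x, y) or weakly_prefers(y, x)
                for x, y in product(items, repeat=2))
    transitive = all(weakly_prefers(x, z)
                     for x, y, z in product(items, repeat=3)
                     if weakly_prefers(x, y) and weakly_prefers(y, z))
    return total and transitive

# Hypothetical utilities with a "just-noticeable difference" of 0.5:
# x % y iff x is not noticeably worse than y.
utility = {'A': 1.0, 'B': 0.6, 'C': 0.2}
leq = lambda x, y: utility[x] >= utility[y] - 0.5

# A % B and B % A hold (indifference), B % C and C % B hold (indifference),
# but C % A fails, so A is strictly preferred to C: totality holds,
# transitivity does not.
print(is_total_preorder(list(utility), leq))  # False
```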

Hmm! I don't know if that's been tried. Speaking for myself, 31 red balls wouldn't reverse my preferences.

But you could also have said, "On the other hand, if people were willing to pay a premium to choose red over green *and* green-or-blue over red-or-blue..." I'm quite sure things along those lines have been tried.

Heh :-) I'm okay with people being more interested in the Ellsberg paradox than the Savage theorem. Section headers are there for skipping ahead. There's even colour :-)

I think it would be unfair to ask me to make the Savage theorem as readable as the Ellsberg paradox. For starters, the Ellsberg paradox can be described really quickly. The Savage theorem, even redux, can't. Second, just about everyone here agrees with the conclusion of the Savage theorem, and disagrees with the Ellsberg-paradoxical behaviour.

My goal was just to make it clearer than the p...

0 · 8y

This is not a reply to this comment. I wanted to comment on the article itself,
but I can't find the comment box under the article itself.
According to Robin Pope's article, "Attractions to and Repulsions from Chance,"
section VII, Savage's sure-thing principle is not his P2, although it is easily
confused with it. The sure-thing principle says that if you do prefer (A but not
B) over (B but not A), then you ought to prefer A over B. That is, in case of a
violation of P2, you should resolve it by revising the latter preference (the
one between bets with overlapping outcomes), not the former. This is apparently
how Savage revised his preferences on the Allais paradox to align them with EU
theory.
The article and section are in the book "Game Theory, Experience, and Rationality, Foundations of Social Sciences, Economics and Ethics, in honor of J.C. Harsanyi," pp. 102-103.

Meh. It should not really affect what I've said or what I intend to say later if you substitute "violation of the rules of probability" or "of utility" for "paradox" (Ellsberg and Allais resp.). However, "paradox" is what they're generally called. And it's shorter.

Thanks... Where do you see it? I can't see any. I tried logging in and out and all that, it doesn't seem to change anything (except the vote count is hidden when I logout?)

0 · 11y

It appears to be gone. It was there earlier, I swear! :P

FWIW, agreed, "not given in the problem". My bad.

2 · 11y

Betting generally includes an adversary who wants you to lose money so they win
it. Possibly in psychology experiments, betting against the experimenter, you
are more likely to have a betting partner who is happy to lose money on bets.
And there was a case of a bet happening on Less Wrong recently where the person
offering the bet had another motivation, demonstrating confidence in their
suspicion. But generally, ignoring the possibility of someone wanting to win
money off you when they offer you a bet is a bad idea.
Now betting is supposed to be a metaphor for options with possibly unknown
results. In which case sometimes you still need to account for the possibility
that the options were made available by an adversary who wants you to choose
badly, but less often. And you also should account for the possibility that they
were from other people who wanted you to choose well, or that the options were
not determined by any intelligent being or process trying to predict your
choices, so you don't need to account for an anticorrelation between your choice
and the best choice. Except for your own biases.

Very well done! I concede. Now that I see it, this is actually quite general.

My point wasn't just that I had a decision procedure, but an *explanation* for it. And it seems that, no matter what, I would have to explain

A) Why ((Green and Heads) or (Blue and Tails)) is not a known bet, equiprobable with Red, or

B) Why I change my mind about the urn after a coin flip.

Earlier, some others suggested non-causal/magical explanations. These are still intact. If the coin is subject to the Force, then (A), and if not, then (B). I rejected that sort of thing. I thought I had an intuitive non-magical explanation. But, it doesn't explain (B). So, FAIL.

How about:

Consider $6 iff ((Green and Heads) or (Blue and Tails)). This is a known bet (1/3) so worth $2. But if the coin is flipped first, and comes up Heads, it becomes $6 iff Green, and if it comes up tails, it becomes $6 iff Blue, in either case worth $1. And that's silly.

Is that the same as your objection?
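For concreteness, here is that computation under one way of cashing out the valuations in this thread (an assumption on my part): P(Red) = 1/3 exactly, P(Green) known only to lie in [1/6, 1/2] with P(Blue) = 2/3 − P(Green), a fair coin independent of the urn, and each bet priced at its worst-case expected payoff over the interval. That interval reproduces "$6 iff Green is worth $1".

```python
from fractions import Fraction as F

# Assumed ambiguity model (my reading of the valuations in this thread):
# P(Red) = 1/3 exactly, P(Green) known only to lie in [1/6, 1/2],
# P(Blue) = 2/3 - P(Green), and a fair coin independent of the urn.
# A bet is priced at its worst-case expected payoff over the interval;
# the expectation is linear in P(Green), so the endpoints suffice.
def value(payoff):
    def expect(p_green):
        p = {'red': F(1, 3), 'green': p_green, 'blue': F(2, 3) - p_green}
        return sum(p[c] * F(1, 2) * payoff(c, s)
                   for c in p for s in ('heads', 'tails'))
    return min(expect(F(1, 6)), expect(F(1, 2)))

# $6 iff ((Green and Heads) or (Blue and Tails)): the interval washes out,
# leaving a known 1/3 bet worth $2.
before = value(lambda c, s: 6 if (c, s) in {('green', 'heads'),
                                            ('blue', 'tails')} else 0)

# Once the coin lands Heads, the bet collapses to $6 iff Green, worth $1
# (symmetrically, $6 iff Blue after Tails, also $1).
after_heads = value(lambda c, s: 6 if c == 'green' else 0)

print(before, after_heads)  # 2 1
```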

0 · 11y

Yes, that is equivalent.

Indifferent. This is a known bet.

Earlier I said $-6 iff Green is identical to $-6 + $6 iff (not Green), then I decomposed (not Green) into (Red or Blue).

Similarly, I say this example is identical to $-1 + $2 iff (Green and Heads) + $1 iff (not Green), then I decompose (not Green) into (Red or (Blue and Heads) or (Blue and Tails)).

$1 iff ((Green and Heads) or (Blue and Heads)) is a known bet. So is $1 iff ((Green and Heads) or (Blue and Tails)). There are no leftover unknowns.

1 · 11y

Look at it another way.
Consider $6 iff (Green ∧ Heads) - $6 iff (Green ∧ Tails) + $4 iff Tails. This
bet is equivalent to $0 + $2 = $2, so you would be willing to pay $2 for this
bet.
If the coin comes out heads, the bet will become $6 iff Green, with a value of
$1. If the coin comes out tails, the bet will become $4 - $6 iff Green = $4 - $3
= $1. Therefore, assuming that the outcome of the coin is revealed first, you
will, with certainty, regret having paid any amount over $1 for this bet. This
is not a rational decision procedure.
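The numbers come out exactly as described under a worst-case reading of the thread's valuations (an assumption: P(Green) known only to lie in [1/6, 1/2], P(Red) = 1/3, P(Blue) = 2/3 − P(Green), fair independent coin, value = minimum expected payoff over the interval):

```python
from fractions import Fraction as F

# Assumed ambiguity model, consistent with the dollar values in this thread:
# P(Red) = 1/3 exactly, P(Green) known only to lie in [1/6, 1/2], and
# P(Blue) = 2/3 - P(Green).  A bet is priced at its worst-case expected
# payoff; the expectation is linear in P(Green), so checking the two
# interval endpoints suffices.
ENDPOINTS = (F(1, 6), F(1, 2))

def value(payoff, sides=('heads', 'tails')):
    """Worst-case expected payoff of payoff(colour, side),
    averaged over the listed (equiprobable) coin sides."""
    def expect(p_green):
        p = {'red': F(1, 3), 'green': p_green, 'blue': F(2, 3) - p_green}
        return sum(p[c] * payoff(c, s) for c in p for s in sides) / len(sides)
    return min(expect(e) for e in ENDPOINTS)

# $6 iff (Green and Heads) - $6 iff (Green and Tails) + $4 iff Tails
def bet(colour, side):
    return ((6 if (colour, side) == ('green', 'heads') else 0)
            + (-6 if (colour, side) == ('green', 'tails') else 0)
            + (4 if side == 'tails' else 0))

print(value(bet))                    # 2: worth $2 up front
print(value(bet, sides=('heads',)))  # 1: after Heads it is $6 iff Green
print(value(bet, sides=('tails',)))  # 1: after Tails it is $4 - $6 iff Green
```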

I pay you $1 for the waiver, not $3, so I am down $0.

In state A, I have $6 iff Green, that is worth $1.

In state B, I have no bet, that is worth $0.

In state C, I have $-6 iff Green, that is worth $-3.

To go from A to B I would want $1. I will go from B to B for free. To go from B to A I would pay $1. State C does not occur in this example.

0 · 11y

Wouldn't you then prefer $0 to $1 iff (Green ∧ Heads) - $1 iff (Green ∧ Tails)?

Ohh, I see. Well done! Yes, I lose.

If I had a do-over on my last answer, I would not agree that $-6 iff Green is worth $-1. It's $-3.

But, given that I can't seem to get it straight, I have to admit I haven't given LW readers much reason to believe that I do know what I'm talking about here, and at least one good reason to believe that I don't.

In case anyone's still humouring me, if an event has unknown probability, so does its negation; I prefer a bet on Red to a bet on Green, but I also prefer a bet against Red to a bet against Green. This is actually t...

1 · 11y

Hmm. Now we have that $6 iff Green is worth $1 and $-6 iff Green is worth $-3,
but $6-6 = $0 iff Green is not equivalent to $1-3 = $-2.
In particular, if you have $6 conditional on Green, you will trade that to me
for $1. Then, we agree that if Green occurs, I will give you $6 and you will
give me $6, since this adds up to no change. However, then I agree to waive your
having to pay me the $6 back if you give me $3. You now have your original $6
iff Green back, but are down an unconditional $2, an indisputable net loss.
Also, this made me realize that I could have just added an unconditional $6 in
my previous example rather than complicating things by making the $6 first
conditional on (die ≥ 3) and then on (Green ∨ Blue). That would be much clearer.
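That buy/sell asymmetry is the whole pump. A sketch, assuming (as the numbers above suggest) that P(Green) is known only to lie in [1/6, 1/2] and that a position is priced at its worst-case expected value:

```python
from fractions import Fraction as F

# Worst-case pricing over the assumed interval P(Green) in [1/6, 1/2].
# The expectation is linear in P(Green), so the minimum sits at an endpoint.
def value(payoff_if_green, payoff_otherwise):
    def expect(p_green):
        return p_green * payoff_if_green + (1 - p_green) * payoff_otherwise
    return min(expect(F(1, 6)), expect(F(1, 2)))

buy  = value(6, 0)    # $6 iff Green is worth $1 to buy
sell = value(-6, 0)   # -$6 iff Green is worth $-3 (sell only for $3 or more)
both = value(0, 0)    # holding both positions cancels exactly: $0

# buy + sell = -2, not 0: that $2 gap is exactly what the trades above extract.
print(buy, sell, both)  # 1 -3 0
```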

That's right.

I take it what is strange is that I could be indifferent between A and B, but not indifferent between A+C and B+C.

For a simpler example let's add a fair coin (and again let N=2). I think $1 iff Green is as good as $1 iff (Heads and Red), but $1 iff (Green or Blue) is better than $1 iff ((Heads and Red) or Blue). (All payoffs are the same, so we can actually forget the utility function.) So again: A is as good as B, but A+C is better than B+C. Is this the same strangeness?
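Under the same assumed worst-case model (P(Red) = 1/3, P(Green) known only to lie in [1/6, 1/2], P(Blue) = 2/3 − P(Green), fair coin independent of the urn), this checks out: the two bets get equal values, yet adding Blue to each breaks the tie.

```python
from fractions import Fraction as F

def value(event):
    """Worst-case probability of event(colour, side) -- i.e. the value of
    a $1 bet on it -- over the assumed interval P(Green) in [1/6, 1/2]."""
    def prob(p_green):
        p = {'red': F(1, 3), 'green': p_green, 'blue': F(2, 3) - p_green}
        return sum(p[c] * F(1, 2)
                   for c in p for s in ('heads', 'tails') if event(c, s))
    return min(prob(F(1, 6)), prob(F(1, 2)))

A  = value(lambda c, s: c == 'green')                                  # 1/6
B  = value(lambda c, s: c == 'red' and s == 'heads')                   # 1/6
AC = value(lambda c, s: c in ('green', 'blue'))                        # 2/3
BC = value(lambda c, s: (c == 'red' and s == 'heads') or c == 'blue')  # 1/3

print(A == B, AC > BC)  # True True: A ~ B, but A+C beats B+C
```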

1 · 11y

Not quite.
I think that the situation that you described is less strange than the one that
I described. In yours, you are combining two 'unknown probabilities' to produce
'known probabilities'.
I find my situation stranger because the only difference between a choice that
you are indifferent about and one that you do have a preference about is the
substitution of (Green ∨ Blue) for (die ≥ 3). Both of these have clear
probabilities and are equivalent in almost any situation. To put this another
way, you would be indifferent between $3 unconditionally and $6 iff (Green ∨
Blue) - $6 iff Green if the two bets on coloured balls were taken to refer to
different draws from the (same) urn. This looks a lot like risk aversion, and
mentally feels like risk aversion to me, but it is not risk aversion since you
would not make these bets if all probabilities were known to be 1/3.

I'm not really clear on the first question. But since the second question asks how much something is worth, I take it the first question is asking about a utility function. Do I behave as if I were maximising expected utility, ie. obey the VNM postulates as far as known probabilities go? A yes answer then makes the second question go something like this: given a bet on red whose payoff has utility 1, and a bet on green whose payoff has utility N, what is the critical N where I am indifferent between the two?

For every N>1, there are decision procedures f...

0 · 11y

Okay. Let N = 2 for simplicity and let $ denote utilons like you would use for
decisions involving just risk and no uncertainty.
P(Red) = 1/3, so you are indifferent between $-1 unconditionally and ($-3 if
Red, $0 otherwise). You are also indifferent between $-3 iff Red and $-3N (=
$-6) iff Green (or equivalently Blue). By transitivity, you are therefore
indifferent between $-1 unconditionally and $-6 iff Green. Also, you are
obviously indifferent between $4 unconditionally and $6 iff (die ≥ 3).
I would think that you would allow a 'pure risk' bet to be added to an
uncorrelated uncertainty bet - correct me if that is wrong. In that case, you
would be indifferent between $3 unconditionally and $6 iff (die ≥ 3) - $6 iff
Green, but you would not be indifferent between $3 unconditionally and $6 iff
(Green ∨ Blue) - $6 iff Green, which is the same as $6 iff Blue, which you value
at $1.
This seems like a strange set of preferences to have, especially since (die
≥ 3) and (Green ∨ Blue) are both pure risk, but it could be correct.

Good-Turing estimation, which was part of the Enigma project, should also go under the empirical heading.

I was looking a little bit into this claim that Poincaré used subjective priors to help acquit Dreyfus. In a word, FAIL.

Poincaré's use of subjective priors was not a betrayal of his own principles because he needed to win, as someone above put it. He was granting his opponent's own hypothesis in order to criticise him. Strange that this point was not clear to whoever was researching it, given that the granting of the hypothesis was prefaced with a strong protest.

The court intervention in question was a report on Bertillon's calculations, by Poincaré with ...

If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it "objective". It is "objective" in that it looks like the sort of thing that Bayesians call "objective" priors.

Eg. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had been green? You can't apply max entropy now. That's ok: apply max entropy "...