Interested in math, Game Theory, etc.


Shortform feeds




Where Experience Confuses Physicists

Did you ever end up posting those quotes?

Covid 9/17: It’s Worse

A vaccine will be available in October if Trump is able to override the CDC and FDA, and make it happen by fiat to help its reelection chances.

Its? (The Trump administration?)


I have a strong preference on outcomes, which readers can presumably guess – but saying it outright wouldn’t convince anyone.

As a utilitarian, or as a matter of "values"?

The Counterfactual Prisoner's Dilemma

I was pointing out a typo in the Original Post. That said, that's a great summary.


Perhaps an intermediate position could be created as follows:

Given a graph of 'the tree' (including the branch you're on), position E is

expected utility over branches

position B is

you only care about your particular branch.

Position B seems to care about the future tree (because it is ahead), but not the past tree. So it has a weight of 1 on the current node and its descendants, but a weight of 0 on past/averted nodes, while Position E has a weight of 1 on the "root node" (whatever that is). (Node weights are inherited, with the exception of the discontinuity in Position B.)

An intermediate position places some non-zero weight on 'past nodes', going back along the branch, and updates the inherited weights. Instead of a weight of 1/2 on all in-branch nodes, another series could be used, for example: r, r^2, r^3, ... for 0 < r < 1. (This series might allow for adopting an 'intermediate position' even when the branch history is infinitely long.)

There are probably some technical details to work out, like making all the weights sum to 1, but for a convergent series that's just a matter of applying an appropriate normalization factor. For r=1/2, the infinite sum is 1, so no additional scaling is required. However, this might not work on an infinite tree where the rewards grow too fast (the sum across all nodes of reward times weight might diverge)...
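The normalization step for a finite branch can be sketched in a few lines. This is a hypothetical illustration, not from the original discussion; `branch_weights` is an invented helper that assigns the geometric series r, r^2, ..., r^depth to past nodes and rescales so the weights sum to 1:

```python
def branch_weights(r, depth):
    """Geometric weights r, r^2, ..., r^depth over the `depth` past
    nodes of a branch, normalized to sum to 1 (illustrative sketch)."""
    raw = [r ** k for k in range(1, depth + 1)]
    total = sum(raw)  # finite partial sum, so normalization always works
    return [w / total for w in raw]

# For r = 1/2 and a branch of depth 3: raw weights 0.5, 0.25, 0.125
# rescale to 4/7, 2/7, 1/7.
weights = branch_weights(0.5, 3)
```

For an infinitely long branch the same idea works as long as the series converges: divide by the limit sum r/(1-r) instead of the partial sum.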


(This was an attempt at outlining an intermediate position, but it wasn't an argument for it.)

The Counterfactual Prisoner's Dilemma

So [why] do we care about what would have happened if we had?

Free Money at PredictIt: 2020 General Election

Thus, given how crazy this market could get later, and given I already tied up my funds, I’m [not] going to take the arbitrage here, at least not yet. I might take it later, but for now I want to reserve the right to make a better play.


Thanks for writing this. How prediction markets work in practice is interesting.

Radical Probabilism
  • I do not understand how Jeffrey updates lead to path dependence. Is the trick that my probabilities can change without evidence, therefore I can just update B without observing anything that also updates A, and then use that for hocus pocus? Writing that out, I think that's probably it, but as I was reading the essay I wasn't sure where the key step was happening.


Based on Radical Probabilism and Bayesian Conditioning (page 4 and page 5), the path depends on the order evidence is received in, but the destination does not.


From the text itself:

The "issue" is mentioned:

An attractive feature of Jeffrey’s kinematics is that it allows one to be a fallibilist about evidence and yet still make use of it. An apparent sighting of one’s friend across the street, for instance, can be revised subsequently when you are told that he is out of the country. A closely related feature is the order-dependence of Jeffrey conditioning: conditioning on a particular redistribution of probability over a partition {Ai} and then on a redistribution of probability over another partition {Bi} will not in general yield the same posterior probability as conditioning first on the redistribution over {Bi} and then on that over {Ai}. This property, in contrast to the first, has been a matter of concern rather than admiration; a concern for the most part based on a confusion between the experience or evidence and its effect on the mind of the agent.

[Footnote: See Howson [8] for a full development of this point. A Bayesian might however take this as an argument against full belief in any contingent proposition.]

And explained:

Suppose, for instance, that I expect an essay from a student. I arrive at work to find an unnamed essay in my pigeonhole with familiar writing. I am 90% sure that it is from the student in question. But then I find that he left me a message the day before saying that he thinks that he may well not be able to bring me the essay in the next couple of days. In the light of all that I have learnt, I now lower to 30% my probability that the essay was from him. Suppose now I got the message before the essay. The final outcome should be the same, but I will get there a different way: perhaps by my probabilities for the essay coming from him initially going to 10% and then rising to 30% on finding the essay. The important thing is this reversal of the order of experience does not produce a reversal of the order of the probabilities: I do not think it 30% likely that I will get the essay after hearing the message and then revise it to 90% after checking my pigeonhole. The same experiences have different effects on my probabilities depending on the order in which they occur. (This is, of course, just a particular application of the rule that my posteriors depend both on the priors and the inputs).
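The order-dependence of the *numerical* updates (as opposed to the experiences) can be checked directly. The sketch below is my own illustration, not from the paper: the prior over two correlated binary questions X and Y is made up, and `jeffrey_update` implements Jeffrey's rule by rescaling atoms cell-by-cell so a partition hits its specified new marginal. Applying the same two numerical shifts in opposite orders yields different posteriors:

```python
def jeffrey_update(p, cell_of, new_marginal):
    """Jeffrey's rule: rescale each atom of distribution `p` so the
    cells of the partition (given by `cell_of`) get the probabilities
    in `new_marginal`."""
    old = {}
    for atom, prob in p.items():
        old[cell_of(atom)] = old.get(cell_of(atom), 0.0) + prob
    return {atom: prob * new_marginal[cell_of(atom)] / old[cell_of(atom)]
            for atom, prob in p.items()}

# Hypothetical correlated prior over atoms (x, y).
prior = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
by_x = lambda atom: atom[0]
by_y = lambda atom: atom[1]

# Same two redistributions, opposite orders.
xy = jeffrey_update(jeffrey_update(prior, by_x, {1: 0.9, 0: 0.1}),
                    by_y, {1: 0.3, 0: 0.7})
yx = jeffrey_update(jeffrey_update(prior, by_y, {1: 0.3, 0: 0.7}),
                    by_x, {1: 0.9, 0: 0.1})

p_x1_xy = xy[(1, 0)] + xy[(1, 1)]  # X-then-Y order
p_x1_yx = yx[(1, 0)] + yx[(1, 1)]  # Y-then-X order: exactly 0.9
```

The last update always wins on its own partition (Y-then-X leaves P(X=1) pinned at 0.9), while the earlier one gets partially overwritten, which is exactly the asymmetry the quoted passage describes. The same *experiences* can still lead to the same destination, because in that case the numerical redistributions themselves differ with the order.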

Updates Thread

What games changed your mind?

[AN #115]: AI safety research problems in the AI-GA framework

Decision Points in AI Governance


(These actions should not have been predetermined by existing law and practice.)

Should not have been, or should not be?
