A while back I proposed some solutions to problems with Bayesianism on my blog and on the EA Forum. This is the last part about 'the problem of agnosticism' and something I call a 'foreacting agent'. If you want the other parts you can click on the links. It's written as a fictional dialogue between a Bayesian (B) and a Non-believer (N).

The problem of agnosticism

N: What are the Bayesian probabilities? The problem of logical omniscience suggests that we can't simply say they are degrees of belief, so what are they? Take a claim like "There are a billion planets outside the observable universe". How do you assign a probability to that? We can’t observe them, so we can’t rely on empiricism or mathematics, so... shouldn’t we be agnostic? How do we represent agnosticism in terms of probability assignments?

B: Prior probabilities can be anything you want. Just pick something at random between 0 and 1. It doesn’t really matter because our probabilities will converge over time given enough incoming data.

N: If I just pick a prior at random, that prior doesn’t represent my epistemic status. If I pick 0.7, I now have to pretend I’m 70% certain that there are a billion planets outside the observable universe, even though I feel totally agnostic. I’m not even sure we’ll ever find out whether there really are a billion planets outside the observable universe. Why can’t I just say that it’s somewhere between 0 and 1, but I don’t know where?

B: You need to be able to update. A rational thinker needs to have a definite value.

N: Why? There is no Dutch book argument against being agnostic. If someone offers me Dutch book bets based on the number of planets outside the observable universe, I can just decline.

B: What if you don’t have a choice? What if that person has a gun?

N: How would that person even resolve the bet? You’d have to know the number of planets outside the observable universe.

B: It’s God, and God has a gun.

N: Okay, fine, but even in that absurd scenario I don’t have to have a definite value to take on bets. I can, for example, use a random procedure, like rolling a die.

B: What if that procedure gives you a 0 or a 1? You would have a trapped prior, and you couldn’t update your beliefs no matter what evidence you observed.

N: I can’t update my beliefs if I follow Bayesianism. The axioms of probability theory allow me to assign a 0 or a 1 to a hypothesis. It’s Bayesianism that traps my priors.

B: You can’t assign a 0 or a 1 to an empirical hypothesis for that reason.

N: Isn’t that ad hoc? The probabilities were meant to represent an agent's degree of belief, and agents can certainly be certain about a belief. It seems the probabilities do not represent an agent's degree of belief after all. The Bayesian needs to add all sorts of extra rules, like that we can assign 0 and 1 to logical theorems but not empirical theories, which must actually be assigned a probability between 0 and 1. So... what are the probabilities exactly?

B: Hmmm… Let me get back to you on that one!

The problem of foreacting agents

N: Say there is an agent whose behavior I want to anticipate. However, I know two things about this agent:

it is extremely good at predicting what I’m going to guess (maybe it’s an AI or a neuroscientist with a brain scanner), and…

it wants me to make a successful prediction.

If I guess the agent has a 90% chance of pushing a button, they will have already predicted it and will afterwards push the button with 90% probability. The same goes for any other probability: they will predict it and set their probability of acting accordingly. It’s forecasting my guess and reacting before I predict, hence foreacting. After learning this information, what should my posterior be? What probability should I assign to them pushing the button?

B: Whatever you want to.

N: But ‘whatever you want to’ is not a number between 0 and 1.

B: Just pick a number at random then.

N: If I just pick a prior at random, that doesn’t represent my epistemic state.

B: Ah, this is the problem of agnosticism again. I think I’ve found a solution. Instead of Bayesianism being about discrete numbers, we make it about ranges of numbers. So instead of saying the probability is around 0.7 we say it’s 0.6–0.8. That way we can say in this scenario and in the case of agnosticism that the range is 0–1.^{[1]}

N: This would be an adequate solution to one of the problems, but can’t be a solution for both agnosticism and foreacting predictors.

B: Why not?

N: Because they don’t depict the same epistemic state. In fact, they represent almost opposite states. With agnosticism I have basically no confidence in any prediction, whereas with the foreacting predictor I have ultimate confidence in all predictions. Also, what if the agent is foreacting non-uniformly? Let’s say it makes its probability of acting 40% or 60% if I predict 40% or 60% respectively, but makes its probability not conform to my prediction when I predict anything else. So if I predict, say, 51%, it will act with a probability of, say, 30%. Let’s also assume I know this about the predictor. Now the range is not 0–1; it’s not even 0.4–0.6, since it will act with a different probability when I predict 51%.

B: Hmmm…

N: And what if I have non-epistemic reasons to prefer one credence over another? Let’s say I’m trying to predict whether the foreacting agent will kill babies. I have a prior probability of 99% that it will. The agent foreacts, and I observe that it does indeed kill a baby. Now I learn it’s a foreacting agent. With Bayesianism I keep my credence at 99%, but surely I ought to switch to 0%. 0% is the ‘moral credence’.

B: This is a farfetched scenario.

N: Similar things can happen in e.g. a prediction market. If the market participants think an agent has a 100% probability of killing a baby, they will bet on 100%. But if they then learn that the agent will definitely kill the baby if they bet on 1%–100%, but will not kill the baby if the market is at 0%, they have a problem. Each individual participant might want to switch to 0%, but if they act first, the other participants are financially incentivized not to switch. You have a coordination problem. The market causes the bad outcome. You don’t even need foreacting for this; a reacting market is enough. Also, there might be disagreement on what the ‘moral credence’ even is. In such a scenario the first buyers can set the equilibrium and thus cause an outcome that the majority might not want.^{[2]}

B: This talk about ‘moral credences’ is beside the point. Epistemology is not about morality. Bayesianism picks an accurate credence, and that’s all it needs to do.

N: But if two credences are equally good epistemically, but one is better morally, shouldn’t you have a system that picks the more moral one?

B: Alright, what if we make Bayesianism not about discrete numbers, nor about ranges, but instead about distributions? On the x-axis we put all the credences you could pick (so any number between 0 and 1) and on the y-axis what you think the probability will be based on which number you pick. So when you encounter a phenomenon that you think has a 60% chance of occurring (no matter what you predict/which credence you pick) the graph looks like this:

And when you encounter a uniformly foreacting agent who (you believe) makes the odds of something occurring conform to what you predict (either in your head or out loud), you have a uniform distribution:

With this you can just pick any number and be correct. However if you encounter the non-uniformly foreacting agent of your example the graph could look something like this: (green line included for the sake of comparison)

Picking 0.2 (B) will result in the predictor giving you a terrible track record (0.4). But picking 0.4 or 0.6 (C or E) will give you an incredible track record. Let’s call C and E ‘overlap points’. If this distribution is about whether the agent will kill a baby, C is the ‘moral credence’.

N: Wouldn’t A be the moral credence, since that has the lowest chance of killing a baby?

B: Humans can’t will themselves to believe A since they know that predicting a 0% chance will actually result in a 20% chance.

N: What about an agent that is especially good at self-deception?

B: Right, so if you have e.g. an AI that can tamper with its own memories, it might have a moral duty to delete the memory that 0% will result in 20% and instead forge a memory that 0% will lead to 0%, just so the baby only has a 20% chance of dying.

N: What if you have a range? What if you don’t know what the probability of something is but you know it’s somewhere between 0.5 and 0.7?

B: Then it wouldn’t be a thin line at 0.6, but a ‘thick’ line, a field:

N: What about total agnosticism?

B: Agnosticism would be a black box instead of a line:

The point could be anywhere within the box, but you don’t know where.

N: What if you’re partially agnostic with regards to a foreacting agent?

B: This method allows for that too. If you know what the probabilities are for the foreacting agent from A to E, but are completely clueless about E to F it looks like this:

N: What if I don’t know the probabilities of the agent between E and F, but I do know it's somewhere between 0.2 and 0.6?

B: It would look something like this:

N: What if it doesn’t foreact to your credences, but to the graph as a whole?

B: Then you add an axis; if it reacts to that, you add another axis, etc.

N: This is still rather abstract, can you be more mathematical?

B: Sure!

Applying Bayesian modeling and updating to foreacting agents

To apply the Bayesian method, the main thing we need is a world model, which we can then use to calculate posterior probability distributions for things we are interested in. The world model is a Bayesian network that has…

one node for each relevant variable X

one directed arrow Y→X for each direct dependency among such variables, leading from a parent node Y to a child node X

and for each node X a formula that calculates the probability distribution for that variable from the values of all its parents: P(X | parents(X))

For nodes X without parents, the latter formula specifies an unconditional probability distribution: P(X | parents(X)) = P(X | empty set) = P(X).
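As a minimal illustration of these three ingredients (nodes, arrows, conditional-probability formulas), here is a hedged sketch in Python; the Rain → WetGrass network and all numbers are invented purely for illustration, not part of this post’s model:

```python
# Tiny discrete Bayesian network sketch: each node stores its parents and a
# conditional probability table mapping parent-value tuples to distributions.
# The Rain -> WetGrass example and all numbers are illustrative only.

def make_node(parents, cpt):
    return {"parents": parents, "cpt": cpt}

network = {
    "Rain": make_node([], {(): {True: 0.2, False: 0.8}}),
    "WetGrass": make_node(["Rain"], {
        (True,):  {True: 0.9, False: 0.1},
        (False,): {True: 0.1, False: 0.9},
    }),
}

def joint(assignment):
    """P(assignment) = product over nodes X of P(X | parents(X))."""
    prob = 1.0
    for name, node in network.items():
        parent_values = tuple(assignment[p] for p in node["parents"])
        prob *= node["cpt"][parent_values][assignment[name]]
    return prob

print(joint({"Rain": True, "WetGrass": True}))   # 0.2 * 0.9 = 0.18
```

The parentless node "Rain" has an empty parent tuple as its only table key, matching the convention P(X | empty set) = P(X) above.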

In our case, I believe the proper model should be this:

Variables:

B: whether the agent will press the button. This is a boolean variable with possible values True and False.

p: the credence you assign to the event B=True. This is a real-valued variable with possible values 0…1

q: the probability that the agent uses to decide whether to press the button or not. This is also a real-valued variable with possible values 0…1

Dependencies:

B depends only on q: parents(B) = {q}

q depends only on p: parents(q) = {p}

Formulas for all variables’ (conditional) probability distributions:

P(B=True | q) = q, P(B=False | q) = 1 – q

P(q | p) is given by two functions f_low, f_high as follows:

If f_low(p) = f_high(p) = f(p), then q = f(p); in other words: P(q | p) = 1 iff q = f(p) and 0 otherwise

If f_low(p) < f_high(p), then P(q | p) has uniform density 1 / (f_high(p) – f_low(p)) for f_low(p) < q < f_high(p) and 0 otherwise.

For the uniformly foreacting agent we have f_low(p) = f_high(p) = f(p) = p

Note that we assume we know the response function upfront, so the functions f_low, f_high are not variables of the model but fixed parameters in this analysis. We might later study models in which you are told the nature of the agent only at some point in time and where we therefore also model f_low, f_high as variables, but that makes the notation harder.

P(p) = whatever you initially believe about what credence you assign to the event B=True
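A forward-sampling sketch of this model in Python, for the uniformly foreacting agent f_low = f_high = f(p) = p and an arbitrarily chosen Beta(3, 7) second-order credence P(p) with mean 0.3 — both of these concrete choices are mine, purely for illustration:

```python
import random

random.seed(0)

def f_low(p):  return p   # uniformly foreacting agent:
def f_high(p): return p   # f_low(p) = f_high(p) = f(p) = p

def sample_once():
    p = random.betavariate(3, 7)                 # p ~ P(p), mean 0.3 (assumed)
    if f_low(p) < f_high(p):                     # q ~ P(q | p): uniform on the
        q = random.uniform(f_low(p), f_high(p))  # interval, else the point mass
    else:
        q = f_low(p)
    return random.random() < q                   # B ~ Bernoulli(q), i.e. P(B=True | q) = q

# Monte Carlo estimate of P(B=True); with f(p) = p this approximates E[p] = 0.3.
n = 100_000
estimate = sum(sample_once() for _ in range(n)) / n
print(round(estimate, 3))
```

Sampling each node given its parents, in topological order (p, then q, then B), is exactly what the arrow structure parents(q) = {p}, parents(B) = {q} licenses.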

At this point, we might be surprised by this necessity of the Bayesian method and get a little wary: because our model of the situation contains statements about how our credence in some variable influences that variable, we needed to include both that variable (B) and our credence (p) as nodes in the Bayesian network. Since we have to specify probability distributions for each parentless node in the network, we need to specify one for p, i.e., a probability distribution over all possible values of p, i.e., a credence about our credence in B being 0.3, a credence about our credence in B being 0.7, etc. This is the P(p) in the last line above. In other words, we need to specify 2nd-order credences! Let us for now assume that P(p) is given by a probability density g(p) for some given function g.

The whole model thus has two parameters:

two functions f_low, f_high encoding what you know about how the agent will choose q depending on p,

and a function g encoding your beliefs about your credence p.

The Bayesian network can directly be used to make predictions. Making a prediction here is nothing more than calculating the probability of an event.

In our case, we can calculate

P(B=True) = integral of P(B=True | q) dP(q) over all possible values of q = integral of P(B=True | q) dP(q | p) dP(p) over all possible values of q and p = integral of f(p) g(p) dp over p = 0…1 (if f_low = f_high = f, otherwise a little more complicated)

For example:

If we consider the uniformly foreacting agent with f(p) = p and believe that we will assign credence p = 0.3 for sure, then P(B=True) = 0.3 and we are happy.

If we consider the uniformly foreacting agent with f(p) = p and believe that we will assign either credence p=0.3 or p=0.8, each with probability 50%, then P(B=True) = 0.55 and we are unhappy.

If we consider any f for which there is at least one possible value p* of p such that f(p*) = p*, and believe that we will assign credence p = p*, then P(B=True) = f(p*) = p* and we are happy.

If we consider an f for which there is no possible value p with f(p)=p, and believe that we will assign some particular credence p* for sure, then we get P(B=True) != p* and will be unhappy.

But: If we consider an f for which there is no possible value p with f(p)=p, and believe that we might assign any possible credence value p between 0 and 1 with some positive probability, then we indeed get some result P(B=True) between 0 and 1, and since we have attached positive probability to that value, we should be happy since the result does not contradict what we believed we would predict!
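These cases can be checked mechanically: with a point-mass (or finitely supported) g, the integral collapses to a weighted sum. A sketch, in which the helper name and the example response functions are mine:

```python
# P(B=True) = sum_i w_i * f(p_i) when g puts weight w_i on credence p_i
# (the case f_low = f_high = f). All example functions are illustrative.

def p_b_true(f, weighted_credences):
    return sum(w * f(p) for p, w in weighted_credences)

f_uniform = lambda p: p                    # uniformly foreacting agent

# Credence p = 0.3 for sure -> P(B=True) = 0.3: self-confirming, happy.
assert abs(p_b_true(f_uniform, [(0.3, 1.0)]) - 0.3) < 1e-12

# p = 0.3 or p = 0.8 with 50% each -> P(B=True) = 0.55: matches neither
# possible credence, so we are unhappy.
assert abs(p_b_true(f_uniform, [(0.3, 0.5), (0.8, 0.5)]) - 0.55) < 1e-12

# An f with fixed point f(0.5) = 0.5: predicting 0.5 for sure is confirmed.
f_fixed = lambda p: 0.5
assert abs(p_b_true(f_fixed, [(0.5, 1.0)]) - 0.5) < 1e-12
```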

Let’s assume we interpret the node p as a control variable of a rational us with some utility function u(B), let’s say u(B=True) = 1 and u(B=False) = 0. Then we can use the Bayesian model to calculate the expected utility given all possible values of p: E(u(B) | p) = E(q | p) = (f_low(p) + f_high(p)) / 2. So a rational agent would choose that p which maximizes (f_low(p) + f_high(p)) / 2. If this is all we want from the model, we don’t need g! So we only need an incomplete Bayesian network which does not specify the probability distributions of control variables, since we will choose them.

Things get more interesting if u depends on B but also on whether p = q, e.g. u(B,p,q) = 1_{B=True} – |p – q| . In that case, E(u | p) = f(p) – |p – f(p)|. If f(p) > p, this equals f(p) – |f(p) – p| = f(p) – (f(p) – p) = p. If f(p) < p, this equals 2f(p) – p.
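A grid-search sketch of both utility functions, for the case f_low = f_high = f, using the illustrative response function f(p) = 0.5p + 0.2 (fixed point at p = 0.4); this f and all names here are assumed examples, not the post’s model:

```python
import numpy as np

def f(p):
    return 0.5 * p + 0.2      # illustrative response function, fixed point 0.4

grid = np.linspace(0.0, 1.0, 1001)

# u(B) = 1_{B=True}:  E(u | p) = f(p), so just maximize f over the grid.
best_p_simple = grid[np.argmax(f(grid))]          # f is increasing -> p = 1

# u(B, p, q) = 1_{B=True} - |p - q|:  E(u | p) = f(p) - |p - f(p)|.
expected_u = f(grid) - np.abs(grid - f(grid))
# Per the piecewise analysis: this equals p where f(p) > p, and 2 f(p) - p
# where f(p) < p; for this f the maximum value 0.4 is attained from p = 0.4 on.
print(best_p_simple, expected_u.max())
```

Note how the accuracy penalty |p − q| changes the answer: the first utility function sends the agent to p = 1 regardless of f, while the second caps the achievable expected utility at the fixed point’s value.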

Let’s assume the rational us cannot choose a p for which f(p) != p.

Excursion: If you are uncertain about whether your utility function equals u1 or u2 and give credence c1 to u1 and c2 to u2, then you can simply use the function u = c1*u1 + c2*u2.

Bayesian updating is the following process:

We keep track of what you know (rather than just believe!) about which combinations of variable values are still possible given the data you have. We model this knowledge via a set D: the set of all possible variable value combinations that are still possible according to your data (Formally, D is a subset of the probability space Omega). If at first you have no data at all, D simply contains all possible variable combinations, i.e., D=Omega.

In our case, D and Omega equal the set of all possible value triples (B,p,q), i.e., they are the Cartesian product of the sets {True,False}, the interval [0,1] and another copy of the interval [0,1]:

D = Omega = {True,False} x [0,1] x [0,1]

Whenever we get more data:

We reflect this by throwing out those elements of D that are ruled out by the incoming data and are thus no longer considered possible. In other words, we replace D by some subset D’ of D.

Then we calculate the conditional probability distribution of those events E we are interested in, given D, using Bayes’ formula:

P(E | D) = P(E and D) / P(D)
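A discrete sketch of this updating scheme; the four-outcome sample space and all the numbers are invented for illustration:

```python
# Outcomes of a toy probability space Omega with their probabilities.
omega = {
    ("rain", "wet"): 0.3,
    ("rain", "dry"): 0.1,
    ("sun",  "wet"): 0.1,
    ("sun",  "dry"): 0.5,
}

def prob(event):
    """P(event) = sum of the probabilities of its outcomes."""
    return sum(p for outcome, p in omega.items() if outcome in event)

# Incoming data rules out all "dry" outcomes: we shrink D = Omega to D'.
D = {("rain", "wet"), ("sun", "wet")}
E = {("rain", "wet"), ("rain", "dry")}        # event of interest: rain

print(prob(E & D) / prob(D))                  # P(E | D) = 0.3 / 0.4 = 0.75
```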

At this point, we might be tempted to treat the value we derived for P(B=True) on the basis of some choice of f and g as data about p. Let’s consider the consequences of that. Let’s assume we start with some fixed f, g and with no knowledge about the actual values of the three variables, i.e., with D_{0} = Omega = {True,False} x [0,1] x [0,1]. We then calculate P(B=True) and get some value p_{1} between 0 and 1. We treat this as evidence for the fact that p = p_{1}, update our cumulative data to D_{1} = {True,False} x {p_{1}} x [0,1], and update our probabilities so that now P(B=True) = f(p_{1}). If the latter value, let’s call it p_{2}, equals p_{1}, we are happy. Otherwise, we wonder. We then have several alternative avenues to pursue:

We can treat the result P(B=True) = p_{2} as another piece of incoming data about p, which needs to be combined with our earlier data. But our earlier data and this new data contradict each other. Not both can be true at the same time, so either the statement S_{1}: p = p_{1} that was suggested by our earlier data is false, or the statement S_{2}: p = p_{2} that was suggested by our new data is false. If we consider that S_{1} is false, we must consider why it is false, since that might enable us to draw valuable conclusions. S_{1} was derived purely from our world model, parameterized by the functions f and g, so either at least one of those functions must have been incorrect or the whole model was incorrect.

The shakiest part of the model is g, so we should probably conclude that our choice of g was incorrect. We should then try to find a specification of g that does not lead to such a contradiction. We can only succeed in doing so if there is a value p* for which f(p*) = p*. If such a value exists, we can put g(p*) = infinity (remember, g specifies probability densities rather than probabilities) and g(p) = 0 for all p != p*, i.e., assume from the beginning that we will predict p* for sure. But if such a value p* does not exist, we cannot choose g so that the contradiction is avoided.

In that case, something else about the model must have been incorrect, and the next best candidate for what is wrong is the function f. Since no p with f(p) = p exists, f must be discontinuous. Does it make sense to assume a discontinuous f? Probably not. So we replace f by some continuous function. Et voilà: now there is some value p* with f(p*) = p*, and we can now choose a suitable g and avoid the contradiction.
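The existence claim here is the one-dimensional fixed-point (intermediate value) argument: for continuous f on [0, 1], f(0) − 0 ≥ 0 and f(1) − 1 ≤ 0, so f(p) − p has a zero. A bisection sketch, with an arbitrarily chosen continuous f for illustration:

```python
# Find a self-confirming credence p* with f(p*) = p* for continuous f on [0,1]
# by bisection on d(p) = f(p) - p, which satisfies d(0) >= 0 and d(1) <= 0.

def fixed_point(f, tol=1e-12):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) - mid >= 0.0:
            lo = mid             # a sign change of d remains in [mid, hi]
        else:
            hi = mid
    return (lo + hi) / 2.0

f = lambda p: 0.2 + 0.6 * p      # illustrative continuous response function
p_star = fixed_point(f)
print(round(p_star, 6))          # fixed point: 0.2 + 0.6 p = p  =>  p = 0.5
```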

If we desperately want to stick to a discontinuous f, then something else about the model must be wrong. I think it is the idea of the agent being able to know p with certainty, rather than just being able to measure p with some random measurement noise epsilon. I suggest adding two more variables, the noise epsilon and the measurement m, and modifying the formulae as follows:

epsilon ~ N(0,1), i.e., Gaussian noise

m = h(p, epsilon) for some continuous function h that represents the influence of the random noise epsilon on the agent’s measurement m of p.

For example: h(p, epsilon) = expit(logit(p) + sigma epsilon) for some magnitude parameter sigma > 0.

q = f(m) rather than q = f(p)

With this modified model, we will get

P(B=True) = integral of P(B=True | q) dP(q) over all possible values of q = integral of P(B=True | q) dP(q | m) dP(m | p, epsilon) dP(p) dP(epsilon) over all possible values of q, p, and epsilon = integral of E_{epsilon~N(0,1)} f(h(p, epsilon)) g(p) dp over p = 0…1, where E is the expectation operator w.r.t. epsilon

If our choice of g assigns 100% probability to a certain value p_{1} of p, the calculation results in

p_{2} = P(B=True) = E_{epsilon~N(0,1)} f(h(p_{1}, epsilon)),

which is a continuous function of p_{1} even if f is discontinuous, due to the “smearing out” performed by h! So there is some choice of p_{1} for which p_{2} = p_{1} without contradiction. This means that whatever continuous noise function h and possibly discontinuous reaction function f we assume, we can specify a function g encoding our certain belief that we will predict p_{1}, and the Bayesian network will spit out a prediction p_{2} that exactly matches our assumption p_{1}.^{[3]}
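A Monte Carlo sketch of this smoothing effect, using the logit-noise h from above with an arbitrarily chosen sigma and an illustrative discontinuous f that has no fixed point; all concrete choices here are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                              # assumed noise magnitude

def expit(x): return 1.0 / (1.0 + np.exp(-x))
def logit(p): return np.log(p / (1.0 - p))

def h(p, eps):                           # noisy measurement of p on logit scale
    return expit(logit(p) + sigma * eps)

def f(m):                                # discontinuous response with no fixed
    return np.where(m < 0.5, 0.7, 0.2)   # point: f(m) never equals m

def smoothed(p1, n=200_000):             # p2 = E_eps[f(h(p1, eps))], via MC
    return f(h(p1, rng.standard_normal(n))).mean()

# The smoothed map crosses the diagonal even though f itself does not,
# so a self-confirming p1 with p2 = p1 exists between 0.45 and 0.55:
print(smoothed(0.45) > 0.45, smoothed(0.55) < 0.55)   # True True
```

Because the smoothed map lies above the diagonal at 0.45 and below it at 0.55, continuity guarantees a crossing in between, which is exactly the self-confirming prediction the paragraph describes.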

Acknowledgment

A huge thanks to Jobst Heitzig for checking my writing and for writing the “Applying Bayesian modeling and updating to foreacting agents” section of the post. He says it's incomplete and there's more to be written, but I'm thankful for what's already there. And special thanks to the countless people who provide the free secondary literature on philosophy which makes me understand these problems better. You all deserve my tuition money.

A while back I proposed some solutions to problems with Bayesianism on my blog and on the EA Forum. This is the last part about 'the problem of agnosticism' and something I call a 'foreacting agent'. If you want the other parts you can click on the links. It's written as a fictional dialogue between a Bayesian (B) and a Non-believer (N).

The problem of agnosticismN: What are the Bayesian probabilities? The problem of logical omniscience suggests that we can't simply say they are degrees of belief, so what are they? Take a claim like "There are a billion planets outside the observable universe". How do you assign a probability to that? We can’t observe them, so we can’t rely on empiricism or mathematics, so... shouldn’t we be agnostic? How do we represent agnosticism in terms of probability assignments?

B: Prior probabilities can be anything you want. Just pick something at random between 0 and 1. It doesn’t really matter because our probabilities will converge over time given enough incoming data.

N: If I just pick a prior at random, that prior doesn’t represent my epistemic status. If I pick 0.7, I now have to pretend I’m 70% certain that there are a billion planets outside the observable universe, even though I feel totally agnostic. I’m not even sure we’ll ever find out whether there really are a billion planets outside the observable universe. Why can’t I just say that it’s somewhere between 0 and 1, but I don’t know where?

B: You need to be able to update. A rational thinker needs to have a definite value.

N: Why? There is no Dutch book argument against being agnostic. If someone offers me Dutch book bets based on the number of planets outside the observable universe, I can just decline.

B: What if you don’t have a choice? What if that person has a gun?

N: How would that person even resolve the bet? You’d have to know the amount of planets outside the observable universe.

B: It’s God, and God has a gun.

N: Okay, fine, but even in that absurd scenario I don’t have to have a definite value to take on bets. I can, for example, use a random procedure, like rolling a dice.

B: What if that procedure gives you a 0 or a 1? You would have a trapped prior, and you couldn’t update your beliefs no matter what evidence you observed.

N: I can’t update my beliefs

ifI follow Bayesianism. The axioms of probability theory allow me to assign a 0 or a 1 to a hypothesis. It’s Bayesianism that traps my priors.B: You can’t assign a 0 or a 1 to an empirical hypothesis for that reason.

N: Isn’t that ad hoc? The probabilities were meant to represent an agent's degree of belief, and agents can certainly be certain about a belief. It seems the probabilities do not represent an agent's degree of belief after all. The Bayesian needs to add all sorts of extra rules, like that we can assign 0 and 1 to logical theorems but not empirical theories, which

mustactually be assigned a probability between 0 and 1. So... what are the probabilities exactly?B: Hmmm… Let me get back to you on that one!

The problem of foreacting agentsN: Say there is an agent whose behavior I want to anticipate. However, I know that this agent is:

If I guess the agent has a 90% chance of pushing a button they will have already predicted it, and will afterwards push the button with 90% probability. Same with any other probability, they will predict it and set their probability for acting accordingly. It’s forecasting my guess and

reactingbeforeI predict, henceforeacting. After learning this information what should my posterior be? What probability should I assign to them pushing the button?B: Whatever you want to.

N: But ‘whatever you want to’ is not a number between 0 and 1.

B: Just pick a number at random then.

N: If I just pick a prior at random, that doesn’t represent my epistemic state.

B: Ah, this is the problem of agnosticism again. I think I’ve found a solution. Instead of Bayesianism being about discrete numbers, we make it about ranges of numbers. So instead of saying the probability is around 0.7 we say it’s 0.6–0.8. That way we can say in this scenario and in the case of agnosticism that the range is 0–1.

^{[1]}N: This would be an adequate solution to one of the problems, but can’t be a solution for both agnosticism and foreacting predictors.

B: Why not?

N: Because they don’t depict the same epistemic state. In fact, they represent an almost opposite state. With agnosticism I have basically no confidence in any prediction, whereas with the foreacting predictor I have ultimate confidence in all predictions. Also, what if the agent is foreacting

non-uniformly?Let’s say it makes it’s probability of acting 40% and 60% if I predict it will be 40% and 60% respectively, but makes it’s probability not conform with my prediction when I predict anything else. So if I predict, say, 51% it will act with a probability of, say, 30%. Let’s also assume I know this about the predictor. Now the range is not 0–1, it’s not even 0.4–0.6 since it will act with a different probability when I predict 51%.B: Hmmm…

N: And what if I have non-epistemic reasons to prefer one credence over another. Let’s say I’m trying to predict whether the foreacting agent will kill babies. I have a prior probability of 99% that it will. The agent foreacts, and I observe that it does indeed kill a baby. Now I learn it’s a foreacting agent. With Bayesianism I keep my credence at 99%, but surely I ought to switch to 0%. 0% is the ‘moral credence’.

B: This is a farfetched scenario.

N: Similar things can happen in e.g. a prediction market. If the market participants think an agent has a 100% probability of killing a baby they will bet on 100%. But if they then learn that the agent will 100% kill the baby if they bet on 1%-100%, but will not kill the baby if the market is 0% they have a problem. Each individual participant might want to switch to 0%, but if they act first the other participants are financially incentivized to not switch. You have a coordination problem. The market

causesthe bad outcome. You don’t even need foreacting for this, a reacting market is enough. Also, there might be disagreement on what the ‘moral credence’ even is. In such a scenario the first buyers can set the equilibrium and thus cause an outcome that the majority might not want.^{[2]}B: This talk about ‘moral credences’ is besides the point. Epistemology is not about morality. Bayesianism picks an accurate credence and that’s all it needs to do.

N: But if two credences are equally good epistemically, but one is better morally, shouldn’t you have a system that picks the more moral one?

B: Alright, what if we make Bayesianism not about discrete numbers, nor about ranges, but instead about distributions? On the x-axis we put all the credences you could pick (so any number between 0 and 1) and on the y-axis what you think the probability will be based on which number you pick.

So when you encounter a phenomenon that you think has a 60% chance of occurring (no matter what you predict/which credence you pick) the graph looks like this:

And when you encounter a uniformly foreacting agent who (you believe) makes the odds of something occurring conform to what you predict (either in your head or out loud), you have a uniform distribution:

With this you can just pick any number and be correct. However if you encounter the non-uniformly foreacting agent of your example the graph could look something like this: (green line included for the sake of comparison)

Picking 0.2 (B) will result in the predictor giving you a terrible track record (0.4). But picking 0.4 or 0.6 (C or E) will give you an incredible track record. Let’s call C and E ‘overlap points’. If this distribution is about whether the agent will kill a baby, C is the ‘moral credence’.

N: Wouldn’t A be the moral credence, since that has the lowest chance of killing a baby?

B: Humans can’t will themselves to believe A since they know that predicting a 0% chance will actually result in a 20% chance.

N: What about an agent that is especially good at self deception?

B: Right, so if you have e.g. an AI that can tamper with it’s own memories, it might have a moral duty to delete the memory that 0% will result in 20% and instead forge a memory that 0% will lead to 0%, just so the baby only has a 20% chance of dying.

N: What if you have a range? What if you don’t know what the probability of something is but you know it’s somewhere between 0.5 and 0.7?

B: Then it wouldn’t be thin line at 0.6, but a ‘thick’ line, a field:

N: What about total agnosticism?

B: Agnosticism would be a black box instead of a line:

The point could be anywhere between here, but you don’t know where.

N: What if you’re

partiallyagnostic with regards to a foreacting agent?B: This method allows for that too. If you know what the probabilities are for the foreacting agent from A to E, but are completely clueless about E to F it looks like this:

N: What if I don’t know the probabilities of the agent between E and F, but I do know it's somewhere between 0.2 and 0.6?

B: It would look something like this:

N: What if it doesn’t foreact to your credences, but the graph as a whole?

B: Then you add an axis, if it reacts to that you add another axis etc.

N: This is still rather abstract, can you be more mathematical?

B: Sure!

Applying Bayesian modeling and updating to foreacting agentsTo apply the Bayesian method, the main thing we need is a world model, which we can then use to calculate posterior probability distributions for things we are interested in. The world model is a

Bayesian networkthat has…nodefor each relevantvariable Xdirected arrow Y→Xfor eachdirect dependencyamong such variables, leading from aparentnodeYto achildnodeXXaformulathat calculates theprobability distributionfor that variable from thevaluesof all its parents: P(X| parents(X))For nodes X without parents, the latter formula specifies an unconditional probability distribution: P(

X| parents(X)) = P(X| empty set) = P(X).In our case, I believe the proper model should be this:

- B: whether the agent will press the button. This is a boolean variable with possible values True and False.
- p: the credence you assign to the event B=True. This is a real-valued variable with possible values 0…1.
- q: the probability that the agent uses to decide whether to press the button or not. This is also a real-valued variable with possible values 0…1.
- B depends only on q: parents(B) = {q}.
- q depends only on p: parents(q) = {p}.
- P(B=True | q) = q, P(B=False | q) = 1 - q.
- P(q | p) is given by two functions f_low, f_high as follows: if f_low = f_high, then P(q = f_low(p) | p) = 1 and the probability of all other values of q is 0; otherwise, the conditional probability density of q is 1 / (f_high(p) - f_low(p)) for f_low(p) < q < f_high(p) and 0 otherwise.
- f_low, f_high are not variables of the model but fixed parameters in this analysis. We might later study models in which you are told the nature of the agent only at some time point, and where we therefore also model f_low, f_high as variables, but that gets harder to denote.
- P(p) = whatever you initially believe about what credence you will assign to the event B=True.

At this point, we might be surprised by a peculiar necessity of the Bayesian method and get a little wary: because our model of the situation contains statements about how our credence in some variable influences that variable, we needed to include both that variable (B) and our credence (p) as nodes in the Bayesian network. Since we have to specify probability distributions for each parentless node in the network, we need to specify one for p, i.e., a probability distribution on all possible values of p: a credence about our credence in B being 0.3, a credence about our credence in B being 0.7, etc. This is the P(p) in the last line above. In other words, we need to specify 2nd-order credences! Let us for now assume that P(p) is given by a probability density g(p) for some given function g.

The whole model thus has two parameters:

- f_low, f_high, encoding what you know about how the agent will choose q depending on p, and
- g, encoding your beliefs about your credence p.

The Bayesian network can directly be used to

make predictions. Making a prediction here is nothing else than calculating the probability of an event:

P(B=True) = integral of P(B=True | q) dP(q) over all possible values of q
= integral of P(B=True | q) dP(q | p) dP(p) over all possible values of q and p
= integral of f(p) g(p) dp over p = 0…1 (if f_low = f_high = f, otherwise a little more complicated).

For example:

- If we have f(p) = p and believe that we will assign credence p = 0.3 for sure, then P(B=True) = 0.3 and we are happy.
- If we have f(p) = p and believe that we will assign either credence p = 0.3 or p = 0.8, each with probability 50%, then P(B=True) = 0.55 and we are unhappy.
- If we have some f for which there is at least one possible value p* of p such that f(p*) = p*, and believe that we will assign credence p = p*, then P(B=True) = f(p*) = p* and we are happy.
- If we have some f for which there is no possible value p with f(p) = p, and believe that we will assign some particular credence p* for sure, then we get P(B=True) != p* and will be unhappy.
- If we have some f for which there is no possible value p with f(p) = p, and believe that we might assign any possible credence value p between 0 and 1 with some positive probability, then we indeed get some result P(B=True) between 0 and 1, and since we have attached positive probability to that value, we should be happy, since the result does not contradict what we believed we would predict!

Let's assume we interpret the node

p as a control variable of a rational "us" with some utility function u(B), let's say u(B=True) = 1 and u(B=False) = 0. Then we can use the Bayesian model to calculate the expected utility given all possible values of p: E(u(B) | p) = E(q | p) = (f_low(p) + f_high(p)) / 2. So a rational agent would choose that p which maximizes (f_low(p) + f_high(p)) / 2. If this is all we want from the model, we don't need g! So we only need an incomplete Bayesian network which does not specify the probability distributions of control variables, since we will choose them.

Things get more interesting if u depends not only on B but also on whether p = q, e.g. u(B, p, q) = 1_{B=True} - |p - q|. In that case (taking f_low = f_high = f), E(u | p) = f(p) - |p - f(p)|. If f(p) > p, this equals f(p) - (f(p) - p) = p. If f(p) < p, this equals 2f(p) - p. Let's assume the rational "us" cannot choose a p for which f(p) != p.
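As a sanity check of the calculations above, here is a minimal Python sketch for the deterministic case f_low = f_high = f. It uses a discrete second-order credence g (a dict from possible values of p to probabilities) instead of a density; the function names and the discretization are my own illustration, not part of the model:

```python
# Sketch of the network B <- q <- p in the deterministic case
# f_low = f_high = f, so that q = f(p).

def predict_B_true(f, g):
    # P(B=True) = sum over p of f(p) * g(p)
    # (the discrete analogue of "integral of f(p) g(p) dp").
    return sum(f(p) * w for p, w in g.items())

f_identity = lambda p: p  # the agent just copies our credence

print(predict_B_true(f_identity, {0.3: 1.0}))            # 0.3 -- happy
print(predict_B_true(f_identity, {0.3: 0.5, 0.8: 0.5}))  # ~0.55 -- unhappy

# An f without a fixed point (note that it has to be discontinuous):
f_jump = lambda p: 0.8 if p < 0.5 else 0.2
# Believing p = p* for sure always yields P(B=True) != p*:
for p_star in (0.1, 0.5, 0.9):
    print(p_star, predict_B_true(f_jump, {p_star: 1.0}))

# Treating p as a control variable instead: a rational agent maximizes
# E(u(B) | p) = (f_low(p) + f_high(p)) / 2 over p -- no g is needed.
def best_p(f_low, f_high, steps=1000):
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=lambda p: (f_low(p) + f_high(p)) / 2)

print(best_p(f_identity, f_identity))  # 1.0
```

With a density g one would integrate numerically instead of summing, but the structure of the calculation stays the same.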

Excursion: If you are uncertain about whether your utility function equals u1 or u2 and give credence c1 to u1 and c2 to u2, then you can simply use the function u = c1*u1 + c2*u2.
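The excursion's recipe can be written down directly; a tiny sketch (the particular u1, u2 and credences are invented for illustration):

```python
# Combine two candidate utility functions with credences c1 and c2 = 1 - c1,
# as in the excursion: u = c1*u1 + c2*u2.
def mixed_utility(u1, u2, c1):
    return lambda outcome: c1 * u1(outcome) + (1 - c1) * u2(outcome)

u1 = lambda b: 1.0 if b else 0.0   # cares whether B=True
u2 = lambda b: 0.5                 # indifferent to B
u = mixed_utility(u1, u2, c1=0.8)  # 80% credence in u1, 20% in u2
print(u(True), u(False))           # u(True) -> 0.9, u(False) -> ~0.1
```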

Bayesian updating is the following process: as data comes in, you get to know (rather than just believe!) which combinations of variable values are still possible given the data you have. We model this knowledge via a set D: the set of all variable value combinations that are still possible according to your data. (Formally, D is a subset of the probability space Omega.) If at first you have no data at all, D simply contains all possible variable combinations, i.e., D = Omega. In our model, the variable value combinations are triples (B, p, q), i.e., Omega is the Cartesian product of the set {True, False}, the interval [0,1], and another copy of the interval [0,1]: Omega = {True, False} x [0,1] x [0,1].

At this point, we might be tempted to

treat the value we derived for P(B=True) on the basis of some choice of f and g as data about p. Let's consider the consequences of that. Assume we start with some fixed f, g and with no knowledge about the actual values of the three variables, i.e., with D_0 = Omega = {True, False} x [0,1] x [0,1]. We then calculate P(B=True) and get some value p_1 between 0 and 1. We treat this as evidence for the fact that p = p_1, update our cumulative data to D_1 = {True, False} x {p_1} x [0,1], and update our probabilities so that now P(B=True) = f(p_1). If the latter value, let's call it p_2, equals p_1, we are happy. Otherwise, we wonder. We then have several alternative avenues to pursue:

- We could treat P(B=True) = p_2 as another piece of incoming data about p, which needs to be combined with our earlier data. But our earlier data and this new data contradict each other. Not both can be true at the same time: either the statement S_1: p = p_1 that was suggested by our earlier data is false, or the statement S_2: p = p_2 that was suggested by our new data is false. If we consider that S_1 is false, we must ask why it is false, since that might enable us to draw valuable conclusions. S_1 was derived purely from our world model, parameterized by the functions f and g, so either at least one of those functions must have been incorrect or the whole model was incorrect.
- Of the two functions, we were free to choose g, so we should probably conclude that our choice of g was incorrect. We should then try to find a specification of g that does not lead to such a contradiction. We can only succeed in doing so if there is a value p* for which f(p*) = p*. If such a value exists, we can put g(p*) = infinity (remember, g specifies probability densities rather than probabilities) and g(p) = 0 for all p != p*, i.e., assume from the beginning that we will predict p* for sure. But if such a value p* does not exist, we cannot choose g so that the contradiction is avoided.
- We could instead question f. Since no p with f(p) = p exists, f must be discontinuous (a continuous function from [0,1] to [0,1] always has a fixed point, by the intermediate value theorem applied to f(p) - p). Does it make sense to assume a discontinuous f? Probably not. So we replace f by some continuous function. Et voilà: now there is some value p* with f(p*) = p*, and we can choose a suitable g and avoid the contradiction.
- If we are unwilling to change f, then something else about the model must be wrong. I think it is the idea of the agent being able to know p with certainty, rather than just being able to measure p with some random measurement noise epsilon. I suggest adding two more variables, the noise epsilon and the measurement m, and modifying the formulae as follows:
  - m = h(p, epsilon) for some continuous function h that represents the influence of the random noise epsilon on the agent's measurement m of p, e.g. h(p, epsilon) = expit(logit(p) + sigma·epsilon) for some magnitude parameter sigma > 0;
  - q = f(m) rather than q = f(p).

With this modified model, we will get

P(B=True) = integral of P(B=True | q) dP(q) over all possible values of q
= integral of P(B=True | q) dP(q | m) dP(m | p, epsilon) dP(p) dP(epsilon) over all possible values of q, m, p and epsilon
= integral of E_{epsilon~N(0,1)} f(h(p, epsilon)) g(p) dp over p = 0…1, where E is the expectation operator w.r.t. epsilon.

If our choice of g assigns 100% probability to a certain value p_1 of p, the calculation results in p_2 := P(B=True) = E_{epsilon~N(0,1)} f(h(p_1, epsilon)), which is a continuous function of p_1 even if f is discontinuous, due to the "smearing out" performed by h! So there is some choice of p_1 for which p_2 = p_1 without contradiction. This means that whatever continuous noise function h and possibly discontinuous reaction function f we assume, we can specify a function g encoding our certain belief that we will predict p_1, and the Bayesian network will spit out a prediction p_2 that exactly matches our assumption p_1.[3]

Acknowledgment

A huge thanks to Jobst Heitzig for checking my writing and for writing the "Applying Bayesian modeling and updating to foreacting agents" section of the post. He says it's incomplete and that there is more to be written, but I'm thankful for what's already there. And special thanks to the countless people who provide the free secondary literature on philosophy which makes me understand these problems better. You all deserve my tuition money.

[1] See Infra-Bayesianism by @Diffractor and @Vanessa Kosoy

[2] If the first people in a prediction market have different demographics/interests than the population at large, this becomes a problem.

[3] Alternatively, we could conclude that the output of the Bayesian network, P(B=True), should not be treated as data on p. But then what?