Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives



3

My UDT1, which makes use of the concept of "platonic computation", seems to work, at least on simple toy problems, so I don't see what's wrong with it. Are you arguing that "platonic computation" will cause difficulties in more complex problems, or what?

0

Can we just remove the word "platonic" and define "computation" in the usual way as an input for a UTM?

0

Where's the UTM that runs both in yours and Omega's head?

0

What do you mean by "where" in reference to a mathematical abstraction?

0

That's the thing: you are basically requiring "platonic" to be returned in the explanation (mathematical abstraction that doesn't reside anywhere specifically). "Computation" I can imagine: it's the process running on my PC, or in my head, or wherever. Mathematical abstraction of computation -- not so clear. It's for one thing something that happens in mathematicians' heads, but moving this process to the moon is suspect.
There is always a "where" to any abstract math that anyone ever observed, and the lawfulness with which this phenomenon persists among many instances doesn't need to be explained by something outside the physical laws (or "meta" to them): it's just another regularity in the world, a daunting one, but as always a curiosity-stopper is not the answer (assuming a hidden world of mathematical truths that is not even anywhere in our real world in which you can peek from anywhere with varying levels of success by theories of different strength, never to see it whole, etc. -- a kind of modern dualism, consciousness of reason).

4

There is no need for dualism, if we assume that mathematics is all there is, and that consciousness is a property of certain mathematical objects. "Never to see it whole" makes perfect sense, since why would a part be able to see the whole as a whole?
To put it another way, why do you infer a physical world, instead of a mathematical world, from your observations? Is there some reason why a pile of physical particles moving around in a certain pattern can cause a conscious experience, whereas that pattern as an abstract mathematical object can't be conscious?

3

What's the difference between these two? I think we are getting to the stage of philosophical abstraction where words lose all purchase. I have no idea what image "physical world, instead of a mathematical world" conjures up in Wei and Vladimir's minds, and the words "physical" and "mathematical" don't seem to help me.

3

My position is that "physical world" is meaningless, and the question was a rhetorical one that I asked because I thought Nesov was thinking in terms of a physical world.

0

I think it is reasonable to eliminate the phrase "physical world". "Hubble volume that we inhabit" seems to do most of the job that it did for me anyway.

0

I can hardly do more than sound my vote, with a few details -- it's a huge debate, with intricate arguments on both sides. My emphasis is on sidestepping the confusion by staying at the level of natural phenomena. Saying that "everything is math" is not an explanation of what math is, in the sense of lawful processes in mathematicians' heads, and more generally in decision-making. There is a danger of losing track of the question.

1

It seems fairly obvious that a mathematician's head is doing a physics computation, which is logically correlated with an abstract neural-network computation (representing its mind), which is logically correlated with whatever part of math that the mathematician is considering. "Everything is math" doesn't tell us the exact nature of those logical correlations, but neither does it hurt our attempt to find out, as far as I can tell.
Also, I don't understand what you mean by "staying at the level of natural phenomena", nor how that helps to "sidestep the confusion".

1

My point is that you don't need to make that last step, saying that process in the head is related to some abstract math. Instead, take two processes in two heads, and relate them directly, through the physics stuff.
To make an analogy, when you see two similar plants, it's confusing to talk about them being instances of the same platonic plant. Instead, by saying that they are similar, you mean that you formed certain representations of them in your own mind, and the representations considerably match: it's a concrete operation that is performed by one who recognizes the connection.
With math, relating processes (or formulas) through denotational semantics has a danger of losing track of the procedure that relates them, which can in some cases be unfeasible, and limitations on which may be important. Some things you can't even pinpoint to the semantics: are these two programs equal, in the sense of producing the same results for the same inputs? That is, what are those mathematical objects that correspond to each of them? You'll never know, and thus the question is effectively meaningless.
Interaction between specific details of implementation is part of decision-making as well as of the decisions themselves. Introducing abstract representation that abstracts away the details in unspecified fashion and gets communicated through the ether may confuse the situation.

4

Ok, that's much clearer, and while I don't know if I agree with you completely, there's nothing you said that I object to.
I think confusion arose in the first place because you interpreted "platonic computation" to mean the denotational semantics of a computation, whereas Eliezer (and others) were using it to refer to the "abstract neural-network computation" as opposed to the "physics computation" involving wavefunctions and such, or the "physical world" with physical particles/wavefunctions (whatever that means).

Thanks for this sequence. I wasn't really able to follow some of the other decision theory posts, but this is very clear, ties a lot of things together, and finally gives me a good handle on what a lot of people are talking about.

Stupid question time: Why are all the agents only "surgically setting" nodes with no parents? Is that a coincidence, or is it different in a significant way from the sunniness example?

3

Good question.
If we're surgically setting a decision node in a Bayes net to a given value (e.g., setting the "my action" node to "one-box"), we always imagine snipping any parents of that node, so that the node can just be arbitrarily set, and setting the node to a particular value does not affect our probability distribution on any siblings of that node.
So, we could add parents in the graphs if we want, but we'd basically be ignoring them in our decision procedure.
This means that in effect, an actions-CSA must (after conditionalizing on its information) view its action as uncorrelated with the state of any of the rest of the world, except for the children/effects of its action. (For example, it must view its action as uncorrelated with Omega's belief about its action.)
Similarly, an innards-CSA must (after conditionalizing on its information) view its innards as uncorrelated with everything that is not a physical effect of its innards, such as Clippy's innards.
Similarly, a timeless, aka algorithm-output, CSA must (after conditionalizing on its information) view its algorithm output as uncorrelated with everything that is not a child of the "its algorithm" node, such as perhaps (depending on its architecture) the output of similar algorithms.
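The contrast between surgically setting a node and ordinary conditioning can be sketched by brute-force enumeration on a toy Newcomb-style net. The node names, perfect-predictor assumption, and 50/50 prior below are my own illustration, not from the post:

```python
import itertools

# Toy model: a hidden "algorithm" node is a common parent of both
# Omega's prediction and the agent's action (made-up probabilities).
P_ALGO = {"one-box": 0.5, "two-box": 0.5}
VALS = ["one-box", "two-box"]

def joint(algo, pred, act):
    # Omega predicts the algorithm's output perfectly, and the
    # agent's action is likewise determined by the algorithm.
    p = P_ALGO[algo]
    p *= 1.0 if pred == algo else 0.0
    p *= 1.0 if act == algo else 0.0
    return p

def p_pred_given_act(act_obs):
    """Ordinary conditioning: P(prediction | action observed)."""
    num = {v: 0.0 for v in VALS}
    for algo, pred, act in itertools.product(VALS, repeat=3):
        if act == act_obs:
            num[pred] += joint(algo, pred, act)
    z = sum(num.values())
    return {v: n / z for v, n in num.items()}

def p_pred_given_do(act_set):
    """Surgery: cut the algorithm -> action edge, then set the action."""
    # act_set never appears below: once the parent edge is snipped,
    # the action carries no information about its former siblings.
    num = {v: 0.0 for v in VALS}
    for algo, pred in itertools.product(VALS, repeat=2):
        num[pred] += P_ALGO[algo] * (1.0 if pred == algo else 0.0)
    return num

print(p_pred_given_act("one-box"))  # {'one-box': 1.0, 'two-box': 0.0}
print(p_pred_given_do("one-box"))   # {'one-box': 0.5, 'two-box': 0.5}
```

Conditioning on the action makes Omega's prediction certain; surgery leaves it at the prior, which is exactly the "must view its action as uncorrelated with Omega's belief" behavior described above.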

2[anonymous]

Thanks, that makes sense. I was thinking that the diagrams represented all the nodes that the agents looked at, and that based on what nodes they saw they would pick one to surgically set. I didn't realize they represented the result of setting a node.
Follow-up stupid questions:
1. Do all the agents start with the same graph and just pick different surgery points, or is it a combination of starting with different nodes and picking different nodes?
2. If you put "innards" and "platonic" on the same graph (for any reason) what does that look like?

2

1. They have different graphs, but the one necessary difference is the node that they do surgery on.
2. Presumably you would remove the arrow from platonic algorithm to your action and add arrows from platonic algorithm to your innards and from your innards to your actions.

1

I believe that she simply neglects to draw the parents of the surgical nodes because they should be ignored anyway so drawing them is a waste of space.

I think a lot of the confusion about these types of decision theory problems has to do with not everyone thinking about the same problem even when it seems like they are.

For example, consider the problem I'll call 'pseudo-Newcomb's problem'. Omega still gives you the same options, and history has proven a strong correlation between people's choices and its predictions.

The difference is that instead of simulating the relevant part of your decision algorithm to make the prediction, Omega just looks to see whether you have a blue dot or a red dot on your f...

8

Do you think that the other guy is thinking the same thing, and reasoning the same way? Or do you think that the other will probably decide to cooperate or defect on the PD using some unrelated algorithm?
My main reason for potentially defecting on the true PD against another human - note the sheer difficulty of obtaining this unless the partner is Hannibal with an imminently lethal wound - would be my doubts that they were actually calculating using a timeless decision theory, even counting someone thinking about Hofstadterian superrationality as TDT. Most people who've studied the matter in college have been taught that the right thing to do is to defect, and those who cooperate on instinct are running a different algorithm, that of being honorable.
But it'd be pretty damn hard in real life to put me into a literally one-shot, uncoordinated, no-communication, true PD where I'm running TDT, the other person is running honor with no inkling that I'm TDT, and the utilities at stake outweigh that which constrains me not to betray honorable people. It deserves a disclaimer to the effect of "This hypothetical problem is sufficiently different from the basic conditions of real life that no ethical advice should be taken from my hypothetical answer."

2

The latter. I haven't thought about this enough to be comfortable knowing how similar his algorithm must be in order to cooperate, but if I ultimately decided to defect it'd be because I thought it qualified as sufficiently different.

3

So you fully expect in real life that you might defect and yet see the other person cooperate (with standard ethical disclaimers about how hard it is to true the PD such that you actually prefer to see that outcome).

2

Yes, that's correct. I also currently see a significant probability of choosing to cooperate and finding out that the other guy defected on me. Should I take your response as evidence to reconsider? As I said before, I don't claim to have this all sorted out.
As to your disclaimer, it seems like your impression says that it's much harder to true the PD than mine says. If you think you can make the thing truly one-shot without reputational consequences (which may be the hard part, but it seems like you think it's the other part), then it's just a question of setting up the payoff table.
If you don't have personal connections to the other party, it seems that you don't care any more about him than the other 6 billion people on earth. If you can meet those conditions, even a small contribution to fighting existential risks (funded by your prize money) should outweigh anything you care about him.

0[anonymous]

Mostly because of the "one-shot, uncoordinated, no-communication, true... utilities at stake outweigh" parts, I would think. The really relevant question conditions on those things.

2

Depending on the set-up, “innards-CSAs" may one-box here. Innards-CSAs go back to a particular moment in time (or to their creator’s probability distribution) and ask: “if I had been created at that time, with a (perhaps physically transparent) policy that would one-box, would I get more money than if I had been created with a (perhaps physically transparent) policy that would two-box?”
If your Omega came to use the colored dots in its prediction because one-boxing and two-boxing was correlated with dot-colors, and if the innards-CSA in question is programmed to do its counterfactual innards-swap back before Omega concluded that this was the correlation, and if your innards-CSA ended up copied (perhaps with variations) such that, if it had had different innards, Omega would have ended up with a different decision-rule... then it will one-box.
And “rightly so” in the view of the innards-CSA... because, by reasoning in this manner, the CSA can increase the odds that Omega has decision-rules that favor its own dot-color. At least according to its own notion of how to reckon counterfactuals.

2

Depending on your beliefs about what computation Omega did to choose its policy, the TDT counterfactual comes out as either "If things like me one-boxed, then Omega would put $1m into box B on seeing a blue dot" or "If things like me one-boxed, then Omega would still have decided to leave B empty when seeing a blue dot, and so if things like me one-boxed I would get nothing."

1

I see your point, which is why I made sure to write "In addition, all people with working brains have chosen two boxes in the past."
My point is that you can have situations where there is a strong correlation so that Omega nearly always predicts correctly, but that Omega's prediction isn't caused by the output of the algorithm you use to compute your decisions, so you should two box.
The lack of effort to distinguish between the two cases seems to have generated a lot of confusion (at least it got me for a while).

The description of Pearl's counterfactuals in this post isn't entirely correct, in my opinion.

Your description of the "sunny day beachgoers" statement is describing an interventional distribution (in other words something of the form P(y|do(x))).

The key difference between interventions and counterfactuals is that the former are concerned with a single possible world (after an intervention), while the latter are concerned with multiple hypothetical worlds simultaneously.

Example of a statement about an intervention (causal effect): "I am about...
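The distinction drawn here, interventions concern one post-surgery world while counterfactuals juggle the observed world and a hypothetical one, can be made concrete with the standard three-step recipe (abduction, action, prediction) on a one-equation structural model. The equation Y := X xor U and the 50/50 noise prior are my own illustration, not from the comment:

```python
P_U = {0: 0.5, 1: 0.5}  # prior over the exogenous noise U (illustrative)

def f_y(x, u):
    # structural equation: Y := X xor U
    return x ^ u

def interventional(x_set):
    """P(Y=1 | do(X=x_set)): intervene with no evidence about U."""
    return sum(p for u, p in P_U.items() if f_y(x_set, u) == 1)

def counterfactual(x_obs, y_obs, x_set):
    """P(Y_{X=x_set} = 1 | X=x_obs, Y=y_obs): abduction, action, prediction."""
    # abduction: update the noise prior on the actually observed world
    post = {u: p for u, p in P_U.items() if f_y(x_obs, u) == y_obs}
    z = sum(post.values())
    post = {u: p / z for u, p in post.items()}
    # action + prediction: rerun the surgically modified model on the posterior
    return sum(p for u, p in post.items() if f_y(x_set, u) == 1)

print(interventional(1))        # 0.5: intervening alone tells you nothing about U
print(counterfactual(0, 0, 1))  # 1.0: the observed world pins U down to 0
```

The two answers differ precisely because the counterfactual carries evidence from the actual world (fixing U) into the hypothetical one, which a plain do()-intervention never does.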

Can anyone explain why Goodman considers this statement to be true:

Hence `If that piece of butter had been heated to 150°F, it would not have melted.' would also hold.

6

Interpreted as truth-functional, "if A then B" is equivalent to "A→B" is equivalent to "~A ∨ B". Which is true whenever A is false, regardless of its relation to B or lack thereof.

3

"If that piece of butter had been heated to 150°F, it would not have melted" can be read as "that piece of butter has not been heated to 150°F, or it did not melt, or both," or "it is not the case that both that piece of butter has melted and that piece of butter has been heated to 150°F."
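The vacuous-truth reading above can be checked mechanically (a trivial sketch of the truth-functional conditional):

```python
def implies(a, b):
    # truth-functional "if a then b" is just (not a) or b
    return (not a) or b

heated = False  # the butter was never heated to 150°F
# Both conditionals come out true, because the antecedent is false:
print(implies(heated, True))   # "...it would have melted"     -> True
print(implies(heated, False))  # "...it would not have melted" -> True
```

This is exactly why Goodman's puzzle needs something stronger than material implication: on the truth-functional reading, incompatible counterfactuals about unheated butter are all true at once.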

How does one then take into account the fact that one's abstract platonic algorithm is implemented on physical hardware that may end up occasionally causing an accidental bitflip or other corruption, thus not actually computing the answer that the algorithm you think you're computing actually computes?

My *INITIAL* (sleep deprived) thought is a hybrid of options 2,3, and a form of EDT in which one would say "If I output X, then that would be evidence that abstract platonic computation outputs X, which would then also cause other implementation/model of t...

1

Omega's simulations of you show this happening rarely enough that its expected actions out of you are the same as the abstract platonic algorithm, presumably.

1

Sorry, I was sort of asking a general question and putting it in the terms of this particular problem at the same time. I should have been clearer.
What I meant was "I like TDT, but I think it's insufficient: it doesn't seem to easily deal with the fact that the physical implementation of the abstract computation can potentially end up having other things happen that result in something OTHER than what the ideal platonic computation would say should happen"
I think though that my initial suggestion might not have been the right solution. Instead, maybe invert it, say "actual initial state of hardware/software/etc" feeds into "selector that selects a platonic algorithm" which then feeds into "output"... then, depending on how you want to look at it, have other external stuff, radiation, damage to hardware occurring mid computation, etc etc etc have causal inputs into those last two nodes. My initial thought would be the second to last node.
The idea here being that such errors change which platonic computation actually occurred.
Then you can say stuff in terms in decisions being choosing "what does the abstract computation that I am at this moment output?", with the caveat of "but I'm not absolutely certain that I am computing the specific algorithm that I think I am"... so that is where one could place the uncertainty that arises from hardware bugaboos, etc etc. (Also, logical uncertainty perhaps about if your code actually implements algorithms that you think it does, if that's relevant.)

3

I'm still having trouble seeing what troubles you. Yes, the physical hardware might mess up the attempt to implement the Platonic algorithm. So, there's a probability of Omega guessing wrong, but if Omega picks your most likely action, it will still better approximate it by just using the platonic algorithm instead of the platonic algorithm plus noise.
Also, as Eliezer_Yudkowsky keeps pointing out, you don't want an agent that computes "what does the abstract computation that I am at this moment output?" because whatever it picks, it's correct.
AnnaSalamon didn't mention this, but under Pearl's model of causal networks, each node is implicitly assumed to have an "external unknown factor" parent (all of such factors assumed independent of each other), so this uncertainty is already in the model. So, like any other node, the agent takes this kind of uncertainty into account.

1

What I meant is that for TDT, the agent, for lack of a better word, decides what the outcome for a certain abstract algorithm is. (Specifically, the abstract algorithm that it is using to decide that.)
The agent can reason about other systems computing the related algorithms producing related output, so it knows that what it chooses will be reflected in those other systems.
But, I'd want it to be able to take into account the fact that the algorithm it's actually computing is not necessarily the algorithm it thinks it is computing. That is, due to hardware error or whatever, it may produce an output other than what the abstract calculation it thought it was doing would have produced... thus breaking the correlation it was assuming.
ie, I just want some way for the agent to be able to take into account in all this the possibility of errors in the hardware and so on, and in the raw TDT there didn't seem to be a convenient way to do that. Adding in an extra layer of indirection, setting up the causal net as saying "my innards" control a selector which determines which abstract algorithm is actually being computed would SEEM to fix that in a way that, to me, seems to actually fit what's actually going on.
If we assume a weaker "Omega", one that can't predict, say, a stray cosmic ray hitting you and causing you to make a 1-bit error or whatever in your decision algorithm, even though it has a copy of your exact algorithm, then that's where what I'm talking about comes in. In that case, your output would no longer derive from the same abstract computation as Omega's prediction for your output.
Imagine the set of all possible algorithms feeding into a "my selector node", and also into Omega's "prediction selector node"... then "my innards" are viewed as selecting which of those determines the output. But a stray cosmic ray comes in, influences the computation... that is, influences which algorithm the "selector" selects.
A stray cosmic ray can't actually alter an abstract platonic al

1

Okay, I think I see what you're saying: There is the possibility of something making your action diverge from the Platonic computation you think you're instantiating, and that would interfere with the relationship between the choice you make and the Platonic algorithm.
On top of that, you say that there should be a "My innards" node between the platonic algorithm node and the action node.
However, you claim Omega can't detect this kind of interference. Therefore, the interference is independent of the implicit interference with all the other nodes and does not need to be represented. (See my remark about how Pearl networks implicitly have an error-term parent for every node, which only needs to be explicitly represented when two or more of these error parents are not independent.)
Also, since it would still be an uninterrupted path from Platonic to choice, the model doesn't gain anything by this intermediate step; Pearl nets allow you to collapse these into one edge/node.
And, of course, it doesn't make much of a difference for Omega's accuracy anyway...

2

Yeah, I think you've got the point of the problem I'm trying to deal with, though I'm not sure I communicated my current view of the structure of what the solution should be. For one thing, I said that my initial plan, platonic algorithm pointing to innards pointing to output was wrong.
There may potentially be a platonic algorithm pointing to innards, representing the notion of "intent of the original programmer" or whatever, but I figured the more important structure is an inversion of that.
Ie, start with innards... the initial code/state/etc "selects" a computation from the platonic space of all possible computations. But, say, a stray cosmic ray may interfere with the computation. This would be analogous to an external factor poking the selector, shifting which abstract algorithm is the one being computed. So then "omega" (in quotes because I am assuming a slightly less omniscient being than usually implied by the name) would be computing the implications of one algorithm, while your output would effectively be the output of a different algorithm. So that weakens the correlation that justifies PD cooperation, Newcomb one-boxing, etc etc etc etc...
I figure the "innards -> selector from the space of algorithms" structure would seem to be the right way to represent this possibility. It's not exactly just logical uncertainty.
So, I don't quite follow how this is collapsible. Ie, it's not obvious to me that the usual error terms help with this specific issue without the extra node. Unless, maybe, we allow the "output" node to be separate from the "algorithm" node and let us interpret the extra uncertainty term from the output node as something that (weakly) decouples the output from the abstract algorithm...

1

Yes, but like I said the first time around, this would be a rare event, rare enough to be discounted if all that Omega cares about is maximizing the chance of guessing correctly. If Omega has some other preferences over the outcomes (a "safe side" it wants to err on), and if the chance is large enough, it may have to change its choice based on this possibility.
So, here's what I have your preferred representation as:
"Platonic space of algorithms" and "innards" both point to "selector" (the actual space of algorithms influences the selector, I assume); "innards" and "Platonic space" also together point to "Omega's prediction", but selector does not, because your omega can't see the things that can cause it to err. Then, "Omega's prediction" points to box content and selector points to your choice. Then, of course, box content and your choice point to payout.
Further, you say the choice the agent makes is at the innards node.
Is that about right?

1

Even if rare, the decision theory used should at least be able to THINK ABOUT THE IDEA of a hardware error or such. Even if it dismisses it as not worth considering, it should at least have some means of describing the situation. ie, I am capable of at least considering the possibility of me having brain damage or whatever. Our decision theory should be capable of no less.
Sorry if I'm unclear here, but my focus isn't so much on omega as trying to get a version of TDT that can at least represent that sort of situation.
You seem to more or less have it right. Except I'd place the choice more at the selector or at the "node that represents the specific abstract algorithm that actually gets used"
As per TDT, choose as if you get to decide what the output for the abstract algorithm should be. The catch is that here there's a bit of uncertainty as to which abstract algorithm is being computed. So if, due to cosmic ray striking and causing a bitflip at a certain point in the computation, you end up actually computing algorithm 1B while omega models you as being algorithm 1A, then that'd be potentially a weakening of the dependence. (Again, just using the Newcomb problem simply as a way of talking about this.)

1

Okay, so there'd be another node between "algorithm selector" and "your choice of box"; that would still be an uninterrupted path (chain) and so doesn't affect the result.
The problem, then, is that if you take the agent's choice as being at "algorithm selector", or any descendant through "your choice of box", you've d-separated "your choice of box" from "Omega's prediction", meaning that Omega's prediction is conditionally independent of "your choice of box", given the agent's choice. (Be careful to distinguish "your choice of box" from where we're saying the agent is making a choice.)
But then, we know that's not true, and it would reduce your model to the "innards CSA" that AnnaSalamon gave above. (The parent of "your choice of box" has no parents.)
So I don't think that's an accurate representation of the situation, or consistent with TDT. So the agent's choice must be occurring at the "innards node" in your graph.
(Note: this marks the first time I've drawn a causal Bayesian network and used the concept of d-separation to approach a new problem. w00t! And yes, this would be easier if I uploaded pictures as I went.)
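The d-separation checks being run by hand in this exchange can be automated with the standard moralized-ancestral-graph criterion: X is d-separated from Y given Z iff Z separates X from Y in the moralized ancestral subgraph over X, Y, and Z. The checker and the Newcomb-style graph below (node names included) are my own reconstruction, not code from the thread:

```python
from itertools import combinations

def ancestors(parents, nodes):
    """`nodes` plus all their ancestors; `parents` maps node -> parent list."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, x, y, given):
    """Moralized-ancestral-graph test for X independent of Y given Z."""
    keep = ancestors(parents, {x, y} | set(given))
    adj = {n: set() for n in keep}
    for n in keep:
        ps = [p for p in parents.get(n, []) if p in keep]
        for p in ps:                       # undirected version of parent edges
            adj[n].add(p); adj[p].add(n)
        for a, b in combinations(ps, 2):   # "marry" co-parents (moralize)
            adj[a].add(b); adj[b].add(a)
    blocked = set(given)
    seen, stack = {x}, [x]                 # search around the conditioning set
    while stack:
        for m in adj[stack.pop()]:
            if m == y:
                return False               # unblocked path found
            if m not in seen and m not in blocked:
                seen.add(m); stack.append(m)
    return True

# Newcomb-style graph from the discussion (parents of each node):
g = {
    "prediction":  ["algorithm"],
    "box_content": ["prediction"],
    "choice":      ["algorithm"],
    "payout":      ["box_content", "choice"],
}
print(d_separated(g, "choice", "box_content", []))             # False
print(d_separated(g, "choice", "box_content", ["algorithm"]))  # True
```

This reproduces the point above: surgically fixing (conditioning on) the node between "algorithm" and "choice" d-separates the box choice from Omega's side of the graph, which is exactly the objection raised against placing the agent's choice there.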

1

Not sure where you're getting that extra node from. The agent's choice is the output of the abstract algorithm they actually end up computing as a result of all the physical processes that occur.
Abstract algorithm space feeds into both your algorithm selector node and the algorithm selector node in "omega"'s model of you. That's where the dependence comes from.
So given logical uncertainty about the output of the algorithm, wouldn't they be d-connected? They'd be d-separated if the choice was already known... but if it was, there'd be nothing left to choose, right? No uncertainties to be dependent on each other in the first place.
Actually, maybe I ought to draw a diagram of what I have in mind and upload it to imgur or whatever.

1

Alright, after thinking about your points some more, and refining the graph, here's my best attempt to generate one that includes your concerns: Link.
Per AnnaSalamon's convention, the agent's would-node-surgery is in a square box, with the rest elliptical and the payoff octagonal. Some nodes included for clarity that would normally be left out. Dotted lines indicate edges that are cut for surgery when fixing "would" node. One link I wasn't sure about has a "?", but it's not that important.
Important points: The cutting of parents for the agent's decision preserves d-connection between box choice and box content. Omega observes innards and attempted selection of algorithm but retains uncertainty as to how the actual algorithm plays out. Innards contribute to hardware failures to accurately implement algorithm (as do [unshown] exogenous factors).
And I do hope you follow up, given my efforts to help you spell out your point.

1

Just placing this here now as sort of a promise to follow up. Just that I'm running on insufficient sleep, so can only do "easy stuff" at the moment. :) I certainly plan on following up on our conversation in more detail, once I get a good night's sleep.

1

Understood. Looking forward to hearing your thoughts when you're ready :-)

1

Having looked at your diagram now, that's not quite what I have in mind. For instance, "what I attempt to implement" is kinda an "innards" issue rather than deserving a separate box in this context.
Actually, I realized that what I want to do is kind of weird, sort of amounting to doing surgery on a node while being uncertain as to what node you're doing the surgery on. (Or, alternately, being uncertain about certain details of the causal structure). I'm going to have to come up with some other notation to represent this.
Before we continue... do you have any objection to me making a top level posting for this (drawing out an attempt to diagram what I have in mind and so on?) frankly, even if my solution is complete nonsense, I really do think that this problem is an issue that needs to be dealt with as a larger issue.
Begun working on the diagram, still thinking through the exact way to draw it. I'll probably have to use a crude hack of simply showing lots of surgery points and basically saying "do surgery at each of these one at a time, weighing the outcome by the probability that that's the one you're actually effectively operating on". (This will (hopefully) make more sense in the larger post)

1

Grr! That was my first suggestion!
Not that weird, actually. I think you can do that by building a probabilistic twin network. See the good Pearl summary, slide 26. Instead of using it for a counterfactual, surgically set a different node in each subnetwork, and also set the probabilities coming from the common parent (U in slide 26) to represent the probability of each subnetwork being the right one. Then use all terminal nodes across both subnetworks as the outcome set for calculating probability.
Though I guess that amounts to what you were planning anyway. Another way might be to use multiple dependent exogenous variables that capture the effect of cutting one edge when you thought you were cutting another.
No problem, just make sure to link this discussion.
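The "weigh each surgery point by the probability that it's the one you're actually operating on" plan amounts to a small mixture computation over subnetworks. The two hypotheses, the 1% error rate, and the pessimistic payoffs in the error branch are all made-up numbers for illustration:

```python
# One subnetwork per hypothesis about which algorithm the surgery
# actually lands on, with probabilities on the shared parent node.
P_SUBNET = {"intended_algorithm": 0.99, "bitflipped_algorithm": 0.01}

PAYOFF = {
    # You are running what Omega modeled: prediction tracks your choice.
    "intended_algorithm":   {"one-box": 1_000_000, "two-box": 1_000},
    # A bitflip decoupled you from Omega's model; pessimistically
    # assume the decoupled Omega left box B empty either way.
    "bitflipped_algorithm": {"one-box": 0, "two-box": 1_000},
}

def expected_utility(action):
    # Average the surgery's outcome across both subnetworks.
    return sum(p * PAYOFF[h][action] for h, p in P_SUBNET.items())

best = max(("one-box", "two-box"), key=expected_utility)
print(best)  # one-box
```

With the error this rare, the mixture still one-boxes, which matches the earlier point that hardware errors should be representable in the model even when they are too unlikely to change the decision.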

1

*clicks first link*
And I said that was more or less right, didn't I? ie, "what I attempt to implement" ~= "innards", which points to "selector"/"output", which selects what actually gets used.
Looking through the second link (ie, the slides) now

1

Okay, I think there are some terminological issues to sort out here, resulting from our divergence from AnnaSalamon's original terminology.
The discussion I thought we were having corresponds to the CSA's calculation of "woulds". And when you calculate a would, you surgically set the output of the node, which means cutting the links to its parents.
Is this where we are? Are you saying the "would" should be calculated from surgery on the "algorithm selector" node (which points to "choice of box")? Because in that case, the links to "algorithm selector" from "algorithm space" and "innards" are cut, which d-separates them. (ETA: to clarify: d-separates "box choice" from Omega and its descendants.)
OTOH, even if you follow my suggestion and do surgery on "innards", the connection between "box choice" and "omega's prediction" is only a weak link -- algorithm space is huge.
Perhaps you also want an arrow from "algorithm selector" to "omega's prediction" (you don't need a separate node for "Omega's model of your selector" because it chains). Then, the possible difference between the box choice and omega's prediction emerges from the independent error term pointing to box choice (which accounts for cosmic rays, hardware errors, etc.) There is a separate (implicit) "error parent" for the "Omega's prediction" node, which accounts for shortcomings of Omega's model.
This preserves d-connection (between box choice and box content) after a surgery on "algorithm selector". Is that what you're aiming for?
(Causal Bayes nets are kinda fun!)

0[anonymous]

I believe that she simply neglects to draw the parents of the surgical nodes because they should be ignored anyway, so drawing them is a waste of space.

Agents do *not* need to calculate what would have happened, if something impossible had happened.

They need to calculate the consequences of their possible actions.

These are all possible, by definition, from the point of view of the agent - who is genuinely uncertain about the action she is going to take. Thus, from her point of view at the time, these scenarios are not "counterfactual". They do not contradict any facts known to her at the time. Rather they all lie within her cone of uncertainty.

9

... but nevertheless, all but one are, in fact, logically impossible.

That's the difference between epistemic and metaphysical possibility. Something can be epistemically possible without being metaphysically possible if one doesn't know if it's metaphysically possible or not.

2

Thanks, that's exactly what I was trying to say.

-9

-7

3

Counterfactuals don't need to be about impossible things - and agents do calculate what would have happened, if something different had happened. And it is very hard to know whether it would have been possible for something different to happen.
The problem of counterfactuals is not actually a problem. Goodman's book is riddled with nonsensical claims.
What can Pearl's formalism accomplish, that earlier logics could not? As far as I can tell, "Bayes nets" just means that you're going to make as many conditional-independence assumptions as you can, use an acyclic graph, and ignore time (or use a synchronous clock). But nothing changes about the logic.

0

I am not sure. I haven't got much from Pearl so far. I did once try to go through The Art and Science of Cause and Effect - but it was pretty yawn-inducing.

0

I was replying to this bit in the post:
...and this bit:
It is true that agents do sometimes calculate what would have happened if something in the past had happened a different way - e.g. to help analyse the worth of their decision retrospectively. That is probably not too common, though.

My problem with these formalisms is that they don't apply to real-world problems, in that in real-world problems you can alter the network as well as the nodes.

For example, you could have an innards-altering program that makes innards that attempt to look at Omega's innards and decide whether to one-box based on that. This would form a loop in the network; what it would do to the correct strategy, I don't know.

In the real world this could not be prevented. Sure, you can say that it doesn't apply to the thought experiment, but then you are divorcing the thought experiment from reality.

(This is the third post in a planned sequence.)

My last post left us with the questions:

Today, I’ll take an initial swing at these questions. I’ll review Judea Pearl’s causal Bayes nets; show how Bayes nets offer a general methodology for computing counterfactual “would”s; and note three plausible alternatives for how to use Pearl’s Bayes nets to set up a CSA. One of these alternatives will be the “timeless” counterfactuals of Eliezer’s Timeless Decision Theory.

The problem of counterfactuals is the problem of what we do and should mean when we discuss what “would” have happened, “if” something impossible had happened. In its general form, this problem has proved to be quite gnarly. It has been bothering philosophers of science for at least 57 years, since the publication of Nelson Goodman’s book “Fact, Fiction, and Forecast” in 1952:

Recall that we seem to need counterfactuals in order to build agents that do useful decision theory -- we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with the best expected consequences. So we need to know how to compute those counterfactuals. As Goodman puts it, “[t]he analysis of counterfactual conditionals is no fussy little grammatical exercise.”

Judea Pearl’s Bayes nets offer a method for computing counterfactuals. As noted, it is hard to reduce human counterfactuals in general: it is hard to build an algorithm that explains what (humans will say) really “would” have happened, “if” an impossible event had occurred. But it is easier to construct specific formalisms within which counterfactuals have well-specified meanings. Judea Pearl’s causal Bayes nets offer perhaps the best such formalism.

Pearl’s idea is to model the world as based on some set of causal variables, which may be observed or unobserved. In Pearl’s model, each variable is determined by a conditional probability distribution on the state of its parents (or by a simple probability distribution, if it has no parents). For example, in the following Bayes net, the beach’s probability of being “Sunny” depends only on the “Season”, and the probability of each particular “Number of beach-goers” depends only on the “Day of the week” and on the “Sunniness”. Since the “Season” and the “Day of the week” have no parents, they simply have fixed probability distributions.

Once we have a Bayes net set up to model a given domain, computing counterfactuals is easy*. We just:

For example, suppose I want to evaluate the truth of: “If last Wednesday had been sunny, there would have been more beach-goers”. I leave the “Day of the week” node at “Wednesday”, set the “Sunny?” node to “Sunny”, ignore the “Season” node (since it is the parent of a surgically set node), and compute the probability distribution on beach-goers.

*Okay, not quite easy: I’m sweeping under the carpet the conversion from the English counterfactual to the list of variables to surgically alter, in step 2. Still, Pearl’s Bayes nets do much of the work.
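The Wednesday example can be sketched as a tiny sampled network. This is only an illustration: the conditional probabilities and attendance numbers below are invented, and surgery is implemented simply by clamping the “Sunny” node and ignoring its parent.

```python
import random

def sample_network(do_sunny=None):
    # Exogenous / parentless nodes: Season and Day of the week.
    season = random.choice(["summer", "winter"])
    day = "Wednesday"  # held fixed, as in the example
    if do_sunny is None:
        # Sunniness depends only on Season (invented probabilities).
        sunny = random.random() < (0.8 if season == "summer" else 0.3)
    else:
        # Surgery: cut the link from Season and clamp Sunny directly.
        sunny = do_sunny
    # Beach-goers depend only on Day and Sunniness (invented numbers).
    base = 50 if day == "Wednesday" else 200
    return base * (3 if sunny else 1)

def expected_goers(do_sunny, n=10_000):
    return sum(sample_network(do_sunny) for _ in range(n)) / n

# "If last Wednesday had been sunny, there would have been more beach-goers":
# compare do(Sunny=True) against do(Sunny=False).
assert expected_goers(True) > expected_goers(False)
```

Note that the surgery branch never consults `season` at all, which is the sampled-world analogue of ignoring the parent of a surgically set node.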

But, even if we decide to use Pearl’s method, we are left with the choice of how to represent the agent’s “possible choices” using a Bayes net. More specifically, we are left with the choice of what surgeries to execute when we represent the alternative actions the agent “could” take. There are at least three plausible alternatives:

Alternative One: “Actions CSAs”: Here, we model the outside world however we like, but have the agent’s own “action” -- its choice of a_1, a_2, or ... , a_n -- be the critical “choice” node in the causal graph. For example, we might show Newcomb’s problem as follows:

The assumption built into this set-up is that the agent’s action is uncorrelated with the other nodes in the network. For example, if we want to program an understanding of Newcomb’s problem into an Actions CSA, we are forced to choose a probability distribution over Omega’s prediction that is independent of the agent’s actual choice.

How Actions CSAs reckon their coulds and woulds: So, if causal decision theory is what I think it is, an “actions CSA” is simply a causal decision theorist. Also, Actions CSAs will two-box on Newcomb’s problem, since, in their network, the contents of box B are independent of their choice to take box A.
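A minimal sketch of that reckoning: the payoffs are the standard Newcomb ones, and the key assumption of the Actions CSA is that Omega's prediction is an independent prior (`p_predicts_one_box` below is hypothetical), unaffected by the surgically set action.

```python
# Sketch of an "Actions CSA" on Newcomb's problem: because the action node
# is surgically set, Omega's prediction is treated as an independent prior
# p, and two-boxing dominates for every value of p.

def actions_csa_utility(action, p_predicts_one_box):
    # Box A holds a visible $1,000; box B holds $1,000,000 iff Omega
    # predicted one-boxing.
    u_if_predicted = 1_000_000 if action == "one-box" else 1_001_000
    u_if_not = 0 if action == "one-box" else 1_000
    return (p_predicts_one_box * u_if_predicted
            + (1 - p_predicts_one_box) * u_if_not)

# Two-boxing comes out ahead regardless of the prior:
for p in (0.0, 0.5, 0.99):
    assert actions_csa_utility("two-box", p) > actions_csa_utility("one-box", p)
```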

Alternative Two: “Innards CSAs”: Here, we again model the outside world however we like, but this time we have the agent’s own “innards” -- the physical circuitry that interposes between the agent’s sense-inputs and its action-outputs -- be the critical “choice” node in the causal graph. For example, we might show Newcomb’s problem as follows:

Here, the agent’s innards are allowed to cause both the agent’s actions and outside events -- so that, for example, we can represent Omega’s prediction as correlated with the agent’s action.

How Innards CSAs reckon their coulds and woulds: Innards CSAs will one-box on Newcomb’s problem, because they reason that if their innards were such as to make them one-box, those same innards would cause Omega, after scanning their brain, to put the $1M in box B. And so they “choose” innards of a sort that one-boxes on Newcomb’s problem, and they one-box accordingly.
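The contrast with the Actions CSA can be sketched by making the surgically set node a common cause of both the action and Omega's prediction. This is a toy model: the deterministic innards-to-action map and the 0.99 accuracy figure are invented.

```python
# Sketch of an "Innards CSA" on Newcomb's problem: surgery is performed on
# the innards node, which causes both the agent's action and (via Omega's
# brain scan) Omega's prediction, so the correlation survives surgery.

def innards_csa_utility(innards, omega_accuracy=0.99):
    # In this toy model, innards determine the action deterministically.
    action = "one-box" if innards == "one-boxing-innards" else "two-box"
    # Omega predicts the action with probability omega_accuracy.
    p_box_b_full = omega_accuracy if action == "one-box" else 1 - omega_accuracy
    expected_b = p_box_b_full * 1_000_000
    visible_a = 1_000 if action == "two-box" else 0
    return expected_b + visible_a

# One-boxing innards come out far ahead, so the CSA "chooses" them:
assert innards_csa_utility("one-boxing-innards") > innards_csa_utility("two-boxing-innards")
```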

Alternative Three: “Timeless” or “Algorithm-Output” CSAs: In this alternative, as Eliezer suggested in Ingredients of Timeless Decision Theory, we have a “Platonic mathematical computation” as one of the nodes in our causal graph, which gives rise at once to our agent’s decision, to the beliefs of accurate predictors about our agent’s decision, and to the decisions of similar agents in similar circumstances. It is the output of this mathematical function that our CSA uses as the critical “choice” node in its causal graph. For example:

How Timeless CSAs reckon their coulds and woulds: Like innards CSAs, algorithm-output CSAs will one-box on Newcomb’s problem, because they reason that if the output of their algorithm were such as to make them one-box, that same algorithm-output would also cause Omega, simulating them, to believe they will one-box and so to put $1M in box B. They therefore “choose” to have their algorithm output “one-box on Newcomb’s problem!”, and they one-box accordingly.

Unlike innards CSAs, algorithm-output CSAs will also Cooperate in single-shot prisoner’s dilemmas against Clippy -- in cases where they think it sufficiently likely that Clippy’s actions are output by an instantiation of “their same algorithm” -- even in cases where Clippy cannot at all scan their brain, and where their innards play no physically causal role in Clippy’s decision. (An Innards CSA, by contrast, will Cooperate if having Cooperating-type innards will physically cause Clippy to cooperate, and not otherwise.)

Coming up: considerations as to the circumstances under which each of the above types of agents will be useful, under different senses of “useful”.

Thanks again to Z M Davis for the diagrams.