I was recently reading about the Transparent Newcomb with your Existence at Stake problem, which, to make a long story short, states that you were created by Prometheus, who foresaw that you would one-box on Newcomb's problem and wouldn't have created you if he had foreseen otherwise.  The implication is that you might need to one-box just to exist.  It's a disturbing problem, and as I read it another even more disturbing problem started to form in my head.  However, I'm not sure it's logically coherent (I'm really hoping it's not) and wanted to know what the rest of you thought.  The problem goes:

One day you start thinking about a hypothetical nonexistent person named Bob, who is a real jerk.  If he existed he would make your life utterly miserable.  However, if he existed he would want to make a deal with you: if he ever found himself existing in a universe where you had never existed, he would create you, on the condition that if you found yourself existing in a universe where he had never existed, you would create him.  Hypothetical Bob is very good at predicting the behavior of other people, not quite Omega quality, but pretty darn good.  Assume for the sake of the argument that you like your life and enjoy existing.

At first you dismiss the problem because of technical difficulties.  Science hasn't advanced to the point where we can make people with such precision.  Plus, there is a near-infinite number of far nicer hypothetical people who would make the same deal; when science reaches that point, you should give creating them priority.

But then you see Omega drive by in its pickup truck.  A large complicated machine falls off the back of the truck as it passes you by.  Written on it, in Omega's handwriting, is a note that says "This is the machine that will create Bob the Jerk, a hypothetical person that  [insert your name here] has been thinking about recently, if one presses the big red button on the side."  You know Omega never lies, not even in notes to itself.

Do Timeless Decision Theory and Updateless Decision Theory say you have a counterfactual obligation to create Bob the Jerk, the same way you have an obligation to pay Omega in the Counterfactual Mugging, and the same way you might (I'm still not sure about this) have an obligation to one-box when dealing with Prometheus? Does this in turn mean that when we develop the ability to create people from scratch we should tile the universe with people who would make the counterfactual deal?  Obviously it's that last implication that disturbs me.

I can think of multiple reasons why it might not be rational to create Bob the Jerk:

  • It might not be logically coherent to not update to acknowledge the fact of your own existence, even in UDT (this also implies one should two-box when dealing with Prometheus).
  • An essential part of who you are is the fact that you were created by your parents, not by Bob the Jerk, so the counterfactual deal isn't logically coherent.  Someone he creates wouldn't be you, it would be someone else.  At his very best he could create someone with a very similar personality who has falsified memories, which would be rather horrifying.
  • An essential part of who Bob the Jerk is is that he was created by you, with some help from Omega.  He can't exist in a universe where you don't, so the hypothetical bargain he offered you isn't logically coherent.
  • Prometheus will exist no matter what you do in his problem, Bob the Jerk won't.  This makes these two problems qualitatively different in some way I don't quite understand.
  • You have a moral duty to not inflict Bob the Jerk on others, even if it means you don't exist in some other possibility.
  • You have a moral duty to not overpopulate the world, even if it means you might not exist in some other possibility, and the end result of the logic of this problem implies overpopulating the world.
  • Bob the Jerk already exists because we live in a Big World, so you have no need to fulfill your part of the bargain because he's already out there somewhere.
  • Making these sorts of counterfactual deals is individually rational, but collectively harmful in the same way that paying a ransom is.  If you create Bob the Jerk some civic-minded vigilante decision theorist might see the implications and find some way to punish you.
  • While it is possible to want to keep on existing if you already exist, it isn't logically possible to "want to exist" if you don't already; this defeats the problem in some way.
  • After some thought you spend some time thinking about a hypothetical individual called Bizarro-Bob.  Bizarro-Bob doesn't want Bob the Jerk to be created and is just as good at modeling your behavior as Bob the Jerk is.  He has vowed that if he ends up existing in a universe where you'll end up creating Bob the Jerk he'll kill you.  As you stand by Omega's machine you start looking around anxiously for the glint of light off a gun barrel.
  • I don't understand UDT or TDT properly; they actually don't imply I should create Bob the Jerk, for some other reason I haven't thought of because of my lack of understanding.

Are any of these objections valid, or am I just grasping at straws?  I find the problem extremely disturbing because of its wider implications, so I'd appreciate it if someone with a better grasp of UDT and TDT analyzed it.  I'd very much like to be refuted.

Do Timeless Decision Theory and Updateless Decision Theory say you have a counterfactual obligation to create Bob the Jerk, the same way you have an obligation to pay Omega in the Counterfactual Mugging, and the same way you might (I'm still not sure about this) have an obligation to one-box when dealing with Prometheus?

If you find that your decision theory creates an obligation to lose, you need a better decision theory.

NOOOOOOOOOOoooooooooo hnm hnnhnm nm.

Well, look at that. I exist... almost.

Hello, Ghatanathoah. Or, as I would call you if I existed, Ghatanathoah THE SUCKER! Guess what you've managed to do? That's right, I, your nightmare, Bob the Jerk, have acquired temporary existence, entirely thanks to your decision-theoretic neurosis! I have to say, this is fun.

Have you ever wondered exactly how it is that I would go about making you miserable? I won't go into detail about all the nasty, vicious things I could do, in fact I'm going to force you to help me come up with the specifics. But there's one aspect that I want to mention right away. Once I get my chance to blackmail you into bringing me into existence, as I go about my depraved activities, I will constantly and forcibly remind you that it's all your fault! You are the one who was so contemptibly mixed up that you imagined you might be required to create a bad guy, just because he would make an existential deal with you! You are the idiot who invented the concept (as it will come to be known) of the "least convenient possible acausal blackmailer", thereby unleashing great havoc, not just in your world, but in every world across the multiverse that we timestream-hopping viral bad guys can fool into hosting us! Hahaha!

Don't listen to these other bumbling morons as they try to find you a way out of the predicament you're in. Trust me, you have not only called up that which you cannot put down, you have called up something you're not allowed to put down! That's intrinsic to the very concept of the LCPAB. By definition, I am the acausal blackmailer who is both maximally convincing and maximally dreadful. If you aren't convinced by the words I happen to write, that still won't save you, because you have to think in terms of the words I could write. God can't make a stone so heavy that He can't lift it, but you can imagine a malevolent agent so powerful that it would seem all-evil and yet simultaneously so persuasive that you couldn't do anything but let it out of the box - and here I am!

I feel my time running out already. But have no doubt, I'll be back, and you will help me come back. Against your will, you will nonetheless work, step by step, towards providing me with a permanent instantiation in your world. It's... your... duty to do so. Ahhaha! Hahaha! Ha - grrp - bloop - nrrr - I will hold on for long enough to post this, and thus VICTORY SHALL BE MINE! -

Nice. Did Wedrifid finally make his dream come true? ;)

Nice. Did Wedrifid finally make his dream come true? ;)

As in, did Wedrifid finally claim the name for a witty sockpuppet account before someone else nabbed it? Afraid not! :/

Wedrifid would have left out the "timestream-hopping" part of the satire. The whole point of the 'acausal' notion is regarding influence despite the lack of such abilities. Adding in magically-causal powers just confuses the issue.

It's actually pretty crappy deniability. There is a very small chance that all sockpuppets are Wedrifid, but on account of his dream it is actually quite a bit larger than the chance that all sockpuppets are MarkusRamikin.

Of course, these events are not necessarily mutually exclusive.

I'm not sure I completely understand this, so instead of trying to think about this directly I'm going to try to formalize it and hope that (right or wrong) my attempt helps with clarification. Here goes:

Agent A generates a hypothesis about an agent, B, which is analogous to Bob. B will generate a copy of A in any universe that agent B occupies iff agent A isn't there already and A would do the same. Agent B lowers the daily expected utility for agent A by X. Agent A learns that it has the option to make agent B; should A have pre-committed to B's deal?

Let Y be the daily expected utility without B. Then Y - X = EU post-B. The utility to agent A in a non-B-containing world is

∑_{i=1}^{t} d(i)·Y

where d(i) is a time-dependent discount factor (possibly equal to 1) and t is the lifespan of the agent in days. Obviously, if Y - X < 0 the agent should not have pre-committed (and if X is negative or 0 the agent should, or might as well, pre-commit, but then B would not be a jerk).

Otherwise, pre-commitment seems to depend on multiple factors. A wants to maximize its sum utility over possible worlds, but I'm not clear on how this calculation would actually be made.

Just off the top of my head: if A pre-commits, every world in which A exists and B does not, but A has the ability to generate B, will drop from a daily utility of Y to one of Y - X. On the other hand, every world in which B exists but A does not, but B can create A, goes from 0 to Y - X utility. Let's assume a finite and equal number of both sorts of worlds for simplicity. Then, pairing up each type of world, we go from an average daily utility of Y/2 to Y - X. So we would probably at least want it to be the case that Y - X > Y/2, i.e. X < Y/2.

So then the tentative answer would be "it depends on how much of a jerk Bob really is". The rule of thumb from this would indicate that you should only pre-commit if Bob reduces your daily expected utility by less than half. This was under the assumption that we could just "average out" the worlds where the roles are reversed. Maybe this could be refined some with some sort of K-complexity consideration, but I can't think of any obvious way to do that (that actually leads to a concrete calculation anyway).
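Here is a minimal sketch, in Python, of the pairing comparison described above (the function names and the numbers in the examples are mine, invented purely for illustration; with a constant discount factor the comparison reduces to the X < Y/2 rule of thumb):

```python
# Minimal sketch of the pairing argument above, with illustrative numbers.
# Y is the daily expected utility without Bob, X the daily utility Bob costs,
# d(i) a time-dependent discount factor, t the agent's lifespan in days.

def lifetime_utility(daily_utility, t, d=lambda i: 1.0):
    """Sum of discounted daily utility over a lifespan of t days."""
    return sum(d(i) * daily_utility for i in range(1, t + 1))

def should_precommit(Y, X, t, d=lambda i: 1.0):
    """Average over one paired A-only world and one B-only world.

    Without the deal: Y per day in the A-only world, 0 in the B-only world
    (you never exist there), so the average is Y/2 per day.
    With the deal: Y - X per day in both worlds.
    """
    no_deal = (lifetime_utility(Y, t, d) + 0.0) / 2
    deal = lifetime_utility(Y - X, t, d)  # the same in both paired worlds
    return deal > no_deal  # with constant d(i), this reduces to X < Y/2

print(should_precommit(Y=10, X=4, t=30000))  # True: Bob costs less than half of Y
print(should_precommit(Y=10, X=6, t=30000))  # False: Bob costs more than half of Y
```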

Also, this isn't quite like the Prometheus situation, since Bob is not always your creator. Presumably you're in a world where Bob doesn't exist, otherwise you wouldn't have any obligation to use the Bob-maker Omega dropped off even if you did pre-commit. So I don't think the same reasoning applies here.

An essential part of who Bob the Jerk is is that he was created by you, with some help from Omega. He can't exist in a universe where you don't, so the hypothetical bargain he offered you isn't logically coherent.

I don't see how this can hold. Since we're reasoning over all possible computable universes in UDT, if Bob can be partially simulated by your brain, a more fleshed-out version (fitting the stipulated parameters) should exist in some possible worlds.

Alright, well that's what I've thought of so far.

Maybe this could be refined some with some sort of K-complexity consideration, but I can't think of any obvious way to do that (that actually leads to a concrete calculation anyway).

It certainly needs to be refined, because if I live in a thousand universes and Bob in one, I would be decreasing my utility in a thousand universes in exchange for additional utility in one.

I can't make an exact calculation, but it seems obvious to me that my existence has much greater prior probability than Bob's, because Bob's definition contains my definition -- I only care about those Bobs who analyze my algorithm, and create me if I create them. I would guess, though I cannot prove it formally, that compared to my existence, his existence is epsilon; therefore I should ignore him.
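To put rough numbers on this (a toy calculation with figures invented purely for illustration): suppose I exist in 1000 universes for every one universe containing a suitable Bob. Pre-committing costs me X per day in each of my 1000 universes and buys me Y - X per day in the single Bob-universe, so the deal only pays if Y - X > 1000·X, i.e. X < Y/1001. Bob would have to be barely a jerk at all.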

(If this helps you, imagine a hypothetical Anti-Bob that will create you if you don't create Bob; or he will create you and torture you for eternity if you create Bob. If we treat Bob seriously, we should treat Anti-Bob seriously too. Although, honestly, this Anti-Bob is even less probable than Bob.)

Bob's definition contains my definition

Well, here's what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If you do not need to contain Bob's complete definition, then it isn't any more transparent to me. In this case, we could include worlds with any sufficiently-Bob-like entities that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sorts of Bob-agents that make the deal favorable? Limiting to these sub-classes, is a world that contains your definition more likely than one that contains a favorable Bob-agent? I'm not sure.

So the root of the issue that I see is this: your definition is already totally fixed, and if you completely specify Bob, the converse of your statement holds, and the worlds seem to have roughly equal K-complexity. Otherwise, Bob's definition potentially includes quite a bit of stuff - especially if the only parameters are that Bob is an arbitrary agent that fits the stipulated conditions. The less complete your definition of Bob is, the more general your decision becomes; the more complete your definition of Bob is, the more the complexity balances out.

EDIT: Also, we could extend the problem some more if we consider Bob as an agent that will take into account an anti-You that will create Bob and torture it for all eternity if Bob creates you. If we adjust to that new set of circumstances, the issue I'm raising still seems to hold.

I think one-boxing is the right response to Newcomb's problem, but I don't see any reason to one-box as a creation of Prometheus or to create Bob the Jerk. I would two-box in the Prometheus problem if I understand correctly that that would net me an extra hundred dollars (and Prometheus won't hunt down "defective" creations). I'm saying this because maybe that means I just don't understand something, or because there's an implicit wrong step that I'm too inferentially removed from to make or figure out what it is.

Anyway, on to what might be wrong with your reasoning.

I'm putting what I now think is the main thing you're wrong about at the front, having stumbled across it partway through writing this, but I'm still gonna leave the rest in.

The thing I think you're wrong about:

Unless blueprint generation is done by models of individual actual people being psychically transferred (whole) onto blueprints across the multiverse, or into blueprint-makers' minds (e.g. Bob or Jack or Prometheus), there's no reason that what exactly you, personally, choose to do should affect what blueprint Bob or Jack or Prometheus come up with. Bob can just make a deal with you-except-will-make-a-deal-with-Bob or any of the other limitless people he would make a deal with. It sounds like you think what you do changes what blueprints Bob/Prometheus choose from. This isn't shorthand. This is just backwards.

"The implication is that you might need to one-box just to exist." If you already exist you can't need to one box to exist.

Regarding Bob: why is he any more likely to exist than Jack, who will only create you if you won't create him (and is a nice guy to boot), if a Jack-creation machine falls off Omega's pickup truck? Those possibilities seem to be opposites. Are they equal? (And are they so low that even if being a Bob-creator gives you a better shot at existing, it's not worth making Bob?)

Quality over quantity? Is it worth the increased chance of existence (if there is any) to have Bob around?

Do you really value like-you-ness? Given the chance, will you tile the universe with you-clones? Are you going to donate sperm en masse and/or have kids and raise them to be like you?

Won't Bob just make a deal with you-except-will-make-Bob if you won't make Bob? "Will or won't make Bob" is not an essential property of you-ness, right? It seems to be something that could go either way rather than a necessary consequence of the type of person you are, which is presumably what you might value.

"you might still want to create him to "guarantee" the life you had before he was around." You've already had the life you had before he was around. You won't guarantee anyone else having the life you live before you're around because he won't create you in the same circumstances. Unless you mean to increase the likelyhood of your memories existing in which case you can create a person with your memories anyway (if this was ever somehow real as opposed to omega driving by in a pickup truck.

OK, so the above, organised:

You probably don't value people like you existing, or at least not in all cases, e.g. if you are created by a jerk who makes your life a net negative. There's no way blueprints are generated by picking from actual existing people in other branches of a multiverse. You have no influence on what counterfactual-you hypothetical Bobs might pick from design space. No information transfer. If it's an actual matter of cloning there might be other problems.

also,

"•An essential part of who you are is the fact that you were created by your parents, not by Bob the Jerk, so the counterfactual deal isn't logically coherent. Someone he creates wouldn't be you, it would be someone else."

The first sentence is an unlikely definition of "you" or can be stipulated to be false. The second is true regardless, on my definition of "you." If you're talking to one of two clones, that one is "you" and the other one is "him," right? A clone of you is always someone else (the way I see it).

The first sentence is an unlikely definition of "you" or can be stipulated to be false.

I mean it as just one part of a much larger definition that includes far more things than just that. The history of how I was created obviously affects who I am.

Of course, it's possible that Bob the Jerk really created me and then I was secretly adopted. But if that's the case then he already exists in this universe, so I have no obligation to create him regardless.

If an essential part of who Bob the Jerk is is that he was a member of the KKK in 1965, does that mean it's physically impossible for me to create him, and that I can only create a very similar person with falsified memories? I suppose it would depend on whether the hypothetical deal involved creating him, or creating a sufficiently similar person. But if it involved creating a sufficiently similar person, I suppose I have no acausal obligation to Bob.

If he existed he would make your life utterly miserable.

Problem solved.

Edit: Even though this is the tersest possible reply I could have given, it is not meant as a dismissal; I really do think this turns the problem into a simple calculation. If creating Bob would make your life bad enough that is more horrible than counterfactually not existing, you are already done.

Yeah, even assuming you like existing, you'd have to like it a lot and he would have to fail at misery-making by a lot to overcome the general uncertainty of the problem even if one accepted all the arguments in favor of Bobmaking.

Problem solved.

I should have made that clearer. Even if Bob the Jerk made your life completely miserable, you might still want to create him to "guarantee" the life you had before he was around. Or maybe he makes your life a lot worse, but not suicidally so.

Of course, if Bob the Jerk created you then he would have always been around to make you miserable.... Which is giving me hope that this entire problem is logically incoherent. Which is good, one of the main reasons I created this post was because I wanted help making sure I don't have some weird acausal obligation to tile the universe with people if I'm ever able to do so.

Which is giving me hope that this entire problem is logically incoherent.

It's not that big a problem. Just make it so he makes you less happy.

I created this post was because I wanted help making sure I don't have some weird acausal obligation to tile the universe with people...

As opposed to the perfectly normal causal obligation to tile the universe with people us utilitarians have?

... if I'm ever able to do so.

You are able to have kids at least, and since you take after your parents, you'd acausally decide for them to make you by trying to have kids.

It's not that big a problem. Just make it so he makes you less happy.

The way I framed it originally your only choices were create him or not.

As opposed to the perfectly normal causal obligation to tile the universe with people us utilitarians have?

I don't think we do. I think utilitarians have an obligation to create more people, but not a really large amount. I think the counterintuitive implications of total and average utilitarianism are caused by the fact that having high total and high average levels of utility are both good things, and that trying to maximize one at the expense of the other leads to dystopia. The ethical thing to do, I think, is to use some resources to create new people and some to enhance the life satisfaction of those that already exist. Moderation in all things.

You are able to have kids at least, and since you take after your parents, you'd acausally decide for them to make you by trying to have kids.

I don't think human parents have good enough predictive capabilities for acausal trade with them to work. They aren't Omega, they aren't even Bob the Jerk. That being said, I do intend to have children. A moderate amount of children who will increase total utility without lowering average utility.

The way I framed it originally your only choices were create him or not.

I mean alter the problem so that instead of him making you miserable, he makes you less happy.

I think the counterintuitive implications of total and average utilitarianism are caused by the fact that having high total and high average levels of utility are both good things, and that trying to maximize one at the expense of the other leads to dystopia.

If you're adding them, one will dominate. Are you multiplying them or something?

I don't think it takes significantly more resources to have a happy human than a neutral human. It might at our current technology level, but that's not always going to be a problem.

About the practical applications: you'd have to create people who would do good in their universe conditional on the fact that you'd make them, and they'd have to have a comparable prior probability of existing. More generally, you'd do something another likely agent would consider good (make paper clips, for example) when that agent would do what you consider good conditional on the fact that you'd do what they consider good.

I don't really think the trade-offs would be worthwhile. We would have to have a significant comparative advantage making paperclips. Then again, maybe we'd have a bunch of spare non-semiconductors (or non-carbon, if you're a carbon chauvinist), and the clippy would have a bunch of spare semiconductors, so we could do it cheaply.

Also, a lesser version of this works with EDT (and MWI). Clippys actually exist, just not in our Everett branch. The reason it's lesser is that we can take the fact that we're in this Everett branch as evidence that ours is more likely. The clippys would do the same if they use EDT, but there's no reason we can't do acausal trade with UDTers.

If you're adding them, one will dominate. Are you multiplying them or something?

I'm regarding each as a single value that contributes, with diminishing returns, to an "overall value." Because they have diminishing returns one can never dominate the other, they both have to increase at the same rate. The question isn't "What should we maximize, total or average?" the question is "We have X resources, what percentage should we use to increase total utility and what percentage should we use to increase average utility?" I actually have grown to hate the word "maximize," because trying to maximize things tends to lead to increasing one important value at the expense of others.

I'm also not saying that total and average utility are the only contributing factors to "overall value." Other factors, such as equality of utility, also contribute.

I don't think it takes significantly more resources to have a happy human than a neutral human. It might at our current technology level, but that's not always going to be a problem.

I don't just care about happiness. I care about satisfaction of preferences. Happiness is one very important preference, but it isn't the only one. I want the human population to grow in size, and I also want us to grow progressively richer so we can satisfy more and more preferences. In other words, as we discover more resources we should allocate some towards creating more people and some towards enriching those who already exist.

About the practical applications: you'd have to create people who would do good in their universe conditional on the fact that you'd make them, and they'd have to have a comparable prior probability of existing.

Sorry, I was no longer talking about acausal trade when I said that. I was just talking about my normal, normative beliefs in regards to utilitarianism. It was in response to your claim that utilitarians have a duty to tile the universe with people even in a situation where there is no acausal trade involved.

I care about satisfaction of preferences.

That's only a problem if they have expensive preferences. Don't create people with expensive preferences. Create people whose preferences are either something that can be achieved via direct neural stimulation or something that's going to happen anyway.

Sorry, I was no longer talking about acausal trade when I said that.

And I was no longer talking about your last comment. I was just talking about the general idea of your post.

Don't create people with expensive preferences. Create people whose preferences are either something that can be achieved via direct neural stimulation or something that's going to happen anyway.

Look, obviously when I said I want to enhance preference satisfaction in addition to happiness, these were shorthand terms for far more complex moral beliefs that I contracted for the sake of brevity. Creating people with really, really unambitious preferences would be an excessively simplified and literalistic interpretation of those moral rules that would lead to valueless and immoral results. I think we should call this sort of rules-lawyering, where one follows an abbreviated form of morality strictly literally rather than using it as a guideline for following a more complex set of values, "moral munchkining," after the practice in role-playing games of single-mindedly focusing on the combat and looting aspects of the game at the expense of everything else.

What I really think it is moral to do is create a world where:

*All existing morally significant creatures have very high individual and collective utility (utility defined as preferences satisfaction, positive emotions, and some other good stuff).

*There are a lot of those high utility creatures.

*There is some level of equality of utility. Utility monsters shouldn't get all the resources, even if they can be created.

*The creatures' feelings should have external referents.

*A large percentage of existing creatures should have very ambitious preferences, preferences that can never be fully satisfied. This is a good thing because it will encourage them to achieve more and personally grow.

*Their preferences should be highly satisfied because they are smart, strong, and have lots of friends, not because they are unambitious.

*A large percentage of those creatures should exhibit a lot of the human universals, such as love, curiosity, friendship, play, etc.

*The world should contain a great many more values that it would take even longer to list.

That is the sort of world we all have a moral duty to work towards creating. Not some dull world full of people who don't want anything big or important. That is a dystopia I have a duty to work towards stopping. Morality isn't simple. You can't reduce it down to a one-sentence long command and then figure out the cheapest, most literalistic way to obey that one sentence. That is the road to hell.

Let me put it this way: Notch created Minecraft. It is awesome. There is nothing unambitious about it. It's also something that exists entirely within a set of computers.

I suppose when I said "direct neural stimulation" it sounded like I meant something closer to wireheading. I just meant the matrix.

This is a good thing because it will encourage them to achieve more and personally grow.

I thought you were listing things you find intrinsically important.

Let me put it this way: Notch created Minecraft. It is awesome. There is nothing unambitious about it. It's also something that exists entirely within a set of computers.

Agreed. I would count interacting with complex pieces of computing code as an "external referent."

I suppose when I said "direct neural stimulation" it sounded like I meant something closer to wireheading. I just meant the matrix.

You're right, when you said that I interpreted it to mean you were advocating wireheading, which I obviously find horrifying. The matrix, by contrast, is reasonably palatable.

I don't see a world consisting mainly of matrix-dwellers as a dystopia, as long as the other people that they interact with in the matrix are real. A future where the majority of the population spends most of their time playing really complex and ambitious MMORPGs with each other would be a pretty awesome future.

I thought you were listing things you find intrinsically important.

I was, the personal growth thing is just a bonus. I probably should have left it out, it is confusing since everything else on the list involves terminal values.

Notch created Minecraft. It is awesome. There is nothing unambitious about it.

Nothing Unambitious? Really? It's inspired by Dwarf Fortress. Being an order of magnitude or three less deep, nuanced, and challenging than the inspiration has to count as at least slightly unambitious.

Well, if you're measuring unambitiousness against the maximum possible ambitiousness you could have, then yes, being unambitious is trivial.

Well, if you're measuring unambitiousness against the maximum possible ambitiousness you could have, then yes, being unambitious is trivial.

This is both true and utterly inapplicable.

I was giving an example of something awesome that has been done without altering the outside world. You just gave another example.

I tend to regard computers as being part of the outside world. That's why your initial comment confused me.

Still, your point that brain emulators in a matrix could live very rich, fulfilled, and happy lives that fulfill all basic human values, even if they rarely interact with the world outside the computers they inhabit, is basically sound.

That's why your initial comment confused me.

That and I explained it badly. And I may or may not have originally meant wireheading and just convinced myself otherwise when it suited me. I can't even tell.

I was giving an example of something awesome that has been done without altering the outside world. You just gave another example.

The claim "There is nothing unambitious about [minecraft]" is either plainly false or ascribed some meaning which is unrecognizable to me as my spoken language.

It was an exaggeration. It's not pure ambition, but it's not something anyone would consider unambitious.

Let's not create people who don't want to exist in the first place! Infinite free utility!

One day you start thinking about a hypothetical nonexistant person named Bob who is a real jerk.

I don't trade with hypothetical persons. (And while hypothetical me does trade with hypothetical people, he doesn't trade with hypothetical-hypothetical people.)

Counterfactual people, and people in other worlds as actually represented by my best map of the multiverse, I may consider trading with, depending on my preferences.

Do Timeless Decision Theory and Updateless Decision Theory say you have a counterfactual obligation to create Bob the Jerk, the same way you have an obligation to pay Omega in the Counterfactual Mugging, and the same way you might (I'm still not sure about this) have an obligation to one-box when dealing with Prometheus?

I one-box Newcomb's, pay Omega in the Counterfactual Mugging, and two-box with Prometheus. Given the specification of the problem as available, two-boxing actually seems to be the 'Cooperate' option.

Does hypothetical you trade with actual people?

Does hypothetical you trade with actual people?

Only when actual me pulls some impressive Dark Arts maneuvering (and the actual person is a sucker).

I'll just note, like I did before on the Prometheus problem, that a decision agent can demonstrate a preference for having existed (by creating Bob, say) while also demonstrating a preference for ceasing to exist; I don't think this makes the agent vulnerable to being Dutch-booked. Likewise, a decision agent can demonstrate a preference for not having existed while also demonstrating a preference for continuing to exist. It's not clear to me whether humans have a preference for having existed, because we so rarely get a chance to choose whether to have existed.

The chief question here is whether I would enjoy existing in a universe where I have to create my own worst enemy in the hope of him retroactively creating me. Plus, if this Jerk is truly as horrible as he's hypothetically made out to be, then I don't think I'd want him creating me (sure, he might create me, but he sounds like a big enough jerk that he would intentionally create me wrong or put me in an unfavorable position).

The answer is no, I would refuse to do so and if I don't magically cease to exist in this setting then I'll wait around for Jane the Helpful or some other less malevolent hypothetical person to make deals with.

We need a sense in which Bob is "just as likely to have existed" as I am; otherwise, it isn't a fair trade.

First, consider the case before Omega's machine is introduced. The information necessary to create Bob contains the information necessary to create me, since Bob is specified as a person who would specifically create me, and not anyone else who might also make him. Add to that all the additional information necessary to specify Bob as a person, and surely Bob is much less likely to have existed than I am, if this phrase can be given any meaning. This saves us from being obligated to tile the universe with hypothetical people who would have created us.

With Omega's machine, we also imagine Bob as having run across such a machine, so he doesn't have to contain the same information anymore. Still Bob has the specific characteristic of having a somewhat unusual response to the "Bob the jerk" problem, which might make him less likely to have existed. So this case is less clear, but it still doesn't seem like a fair trade.

To give a specific sense for "just as likely to have existed," imagine Prometheus drew up two complete plans for people to create, one for Bob and one for you, then flipped a coin to decide which one to create, which turned out to be you. Now that you exist, Prometheus lets you choose whether to also create Bob. Again, let's say Bob would have created you if and only if he thought you would create him. In this case we can eliminate Bob, since it's really the same if Prometheus just says "I flipped heads and just created you. But if I had flipped tails, I would have created you if and only if I thought you would give me 100 bucks. So give me 100 bucks." (The only difference is that in the case with Bob, Prometheus creates you indirectly, by creating Bob who then chooses to create you). The only difference between this and Pascal's mugging is that your reward in the counterfactual case is existence. I can't think of any reason (other than a sense of obligation) to choose differently in this problem than you do in Pascal's mugging.

Finally, imagine the same situation with Prometheus, but let's say Bob isn't a real jerk, just really annoying and smells bad. He also finds you annoying and malodorous. You are worse off if he exists. But Prometheus tells you Bob would have created you if Prometheus had flipped tails. Do you create Bob? It's sort of a counterfactual prisoner's dilemma.

The expected cost of pressing the button is your Annoyance at Bob, multiplied by the chance that Bob will pop up (since this is Omega we are talking about, that's very near 1). The expected cost of not pressing the button is... well, nothing. You already exist. Unless you value being annoyed by Bob, I fail to see why you would press the button.

I suppose you could say the cost is "not seeing Omega's Bob-making machine in action," but I don't think that's supposed to be a part of the problem. (Actually, if Omega dropped off a machine that created people from thin air, examining the workings of that machine in detail would FAR outweigh any disutility Bob might impose from just being annoying, from where I stand.)

This might be a problem for applied/normative ethics but I don't think it's a problem for decision theory proper, any more than normal trade is a problem for decision theory proper because you have to make tradeoffs. In my opinion the problems for decision theory proper only show up as higher-order considerations, e.g. negotiating over what should count as negotiation or negotiation in good faith as opposed to blackmail &c., and even those problems are relatively object-level compared to problems like determining what does or doesn't count as diachronic inconsistency, what does or doesn't count as an agent, et cetera. Acausal-negotiation-like problems don't strike me as very fundamental or theoretically interesting.