# Evan Ward

Obvs into economics, computer science, rationality, and the like. Pretty sold on AI alignment, but I tend to think my comparative advantage is in other areas. I rarely get to talk to rationalists in person because of where I live, so I'd love to call or Zoom with y'all about rationality & whatever. HMU at evanward97@gmail.com.

# Evan Ward's Posts


Expected utility and repeated choices

To maximize utility when you can play any number N of games, I believe you just need to calculate the EV (not EU) of playing every possible strategy. Then you pass all of those values through your utility function U and go with the strategy associated with the highest utility.
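The procedure described here can be sketched roughly as follows. The strategies, payoff distributions, and log utility function are all hypothetical stand-ins, not anything from the original comment:

```python
import math

# Hypothetical example: two strategies for playing N independent games.
# Each strategy is represented by its per-game payoff distribution,
# given as a list of (probability, payoff) pairs.

def expected_value(dist, n_games):
    """EV of the total payoff from playing the same game n_games times."""
    per_game = sum(p * x for p, x in dist)
    return n_games * per_game

def utility(wealth):
    """A hypothetical risk-averse (logarithmic) utility function."""
    return math.log(wealth)

strategies = {
    "safe":  [(1.0, 1.0)],              # certain payoff of 1 per game
    "risky": [(0.5, 0.0), (0.5, 2.5)],  # coin flip per game
}

n = 10
# The comment's procedure: compute the EV of each strategy,
# then compare U(EV) and pick the argmax.
best = max(strategies, key=lambda s: utility(expected_value(strategies[s], n)))
print(best)  # "risky": EV per game is 1.25 vs 1.0, and U is increasing
```

Note that this computes U(EV), which is what the comment proposes; it differs from true expected utility E[U(outcome)] whenever U is nonlinear, which may be why the comment was later retracted.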

[This comment is no longer endorsed by its author]
Expected utility and repeated choices

<Tried to retract this comment since I no longer agree with it, but it doesn't seem to be working>

[This comment is no longer endorsed by its author]
A Plausible Entropic Decision Procedure for Many Worlds Living, Round 2

There are trillions of quantum operations occurring in one's brain all the time. Comparatively, we make very few executive-level decisions. Further, these high-level decisions are often based on a relatively small set of information and are predictable given that information. I believe this implies that, in the majority of recently created worlds, a person makes the same high-level decisions. It's hard to imagine many different decisions we could make in any given circumstance, given the relatively ingrained decision procedures we seem to walk the Earth with.

I know that your Occam prior for Bob in a binary decision is 0.5. That is a separate matter from how many Bobs make the same decision given the same set of external information and the same high-level decision procedures, inherited from the Bob in a recent (perhaps just seconds old) common parent world.

This decision procedure does plausibly affect recently created Everett worlds and allow people in them to coordinate among 'copies' of themselves. I am not saying I can coordinate with far-past sister worlds. I am theorizing that I can coordinate with my selves in my soon-to-be-generated child worlds, because there is no reason to think quantum operations would randomly delete this decision procedure from my brain over a period of seconds or minutes.
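A minimal sketch of the kind of decision procedure being described: choose an action with probability proportional to one's credence in each of several irreconcilable moral frameworks. The framework names, credences, and actions are hypothetical; `random.random()` is a classical stand-in for the quantum source of randomness the idea assumes, under which different child worlds would see different draws:

```python
import random

# Hypothetical credences in two irreconcilable moral frameworks,
# and the action each framework would recommend.
credences = {"framework_A": 0.7, "framework_B": 0.3}
actions = {"framework_A": "act_A", "framework_B": "act_B"}

def entropic_choice(credences, actions, rng=random.random):
    """Pick an action with probability equal to the credence in the
    framework recommending it (assumes credences sum to 1)."""
    r = rng()
    cumulative = 0.0
    for framework, p in credences.items():
        cumulative += p
        if r < cumulative:
            return actions[framework]
    return actions[framework]  # guard against floating-point rounding
```

With a genuinely quantum `rng`, the intent is that roughly 70% of child worlds would take `act_A` and 30% `act_B`, rather than all worlds taking the same action.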

A Plausible Entropic Decision Procedure for Many Worlds Living, Round 2

Do you think making decisions with the aid of quantum-generated bits actually does increase the diversification of worlds?

A Possible Decision Theory for Many Worlds Living

I really appreciate this comment, and my idea definitely might come down to trying to avoid risk rather than maximizing expected utility. However, I still think there is something net positive about diversification. I wrote a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and, if you could spare the time, I would love your feedback.

A Possible Decision Theory for Many Worlds Living

I think you are right, but my idea applies more when one is uncertain about one's expected utility estimates. I wrote a better version of my idea here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and would love your feedback.

A Possible Decision Theory for Many Worlds Living

I am glad you appreciated this! I'm sorry I didn't respond sooner. I think you are right about the term "decision theory" and have opted for "decision procedure" in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/

A Possible Decision Theory for Many Worlds Living

I'm sorry, but I am not familiar with your notation. I am just interested in the idea: when an agent Amir is fundamentally uncertain about the ethical systems by which he evaluates his actions, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensure his immediate child worlds choose courses of action that optimize for irreconcilable moral frameworks, and increase the probability that in a subset of his child worlds, his actions realize value?

It seems that in a growing market (worlds splitting at an exponential rate), it pays in the long term to diversify your portfolio (optimize locally for irreconcilable moral frameworks).

I agree that QM already creates a wide spread of worlds, but I don't think that means it's safe to put all of one's eggs in one basket when one suspects that one's moral system may be fundamentally wrong.