Walker Vargas

Comments

Unification seems to be a way to avoid infinitarian paralysis in consequentialist aggregative ethics.

Under unification, wouldn't it make sense to consider ourselves to be every instance of our mind state? Then there's no fact of the matter about what your surroundings are like until they affect your mind state. Similarly, every past and future that is compatible with your current mind state happened and will happen, respectively.

Would a more deadly virus have induced greater compliance with US lockdown restrictions?

This isn't the flu. America has had 318,000 deaths so far. That's ~8.5 years' worth of flu deaths, and one of those years' worth came from the last 26 days alone. If the world had America's mortality rate of almost 1 death per 1,000 people, that would be about 7.8 million deaths; there have been 1.7 million deaths globally. That's about 6 million people spared! And frankly, America is in at least a half-baked lockdown.
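A quick sanity check of that arithmetic, using round 2020 figures (US population ~331 million, world population ~7.8 billion, ~37,000 US flu deaths in a typical year — those background numbers are my own round estimates, not figures from the comment):

```python
# Rough check of the death-rate comparison; population and flu-death
# figures below are round 2020 estimates, not exact values.
us_covid_deaths = 318_000
us_population = 331_000_000
world_population = 7_800_000_000
typical_us_flu_deaths = 37_000   # rough average for a typical flu season
global_covid_deaths = 1_700_000

print(us_covid_deaths / typical_us_flu_deaths)         # ~8.6 flu-years of deaths
per_thousand = 1000 * us_covid_deaths / us_population
print(per_thousand)                                    # ~0.96 deaths per 1,000 people
world_at_us_rate = world_population * per_thousand / 1000
print(world_at_us_rate / 1e6)                          # ~7.5 million deaths
print((world_at_us_rate - global_covid_deaths) / 1e6)  # ~5.8 million "spared"
```

Using a flat 1 death per 1,000 instead of the exact rate gives the 7.8 million and roughly 6 million figures quoted above.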

If your country has almost no cases, that isn't something to complain about. Mass graves would mean that your country had failed to the point of having difficulty managing all of the corpses. That point will vary from country to country, but it is a lot harder for a first-world country to hit it than you seem to think.

Not Even Evidence

This doesn't require faster-than-light signaling. Suppose you and the copy are sent away with identical letters, which you each open after crossing each other's event horizons. When you open your letter, you learn what was packed with your clone, which lets you predict what your clone will find.

Nothing here requires the event of your clone seeing the letter to affect you; you are affected by the initial setup. If the clone counterfactually saw something else, this wouldn't affect you according to SIA. It would require some assumption about the setup to be wrong for that to happen to your clone, though.

Another example: if you learn that a star which has crossed your cosmic event horizon was 100 solar masses, it's fair to infer that it will become a black hole and not a white dwarf.

What do the baby eaters tell us about ethics?

Sorry this is so late; I haven't been on the site for a while. My last post was in reply to the claim that no interference is always better than fighting it out. Most of the characters seem to think that stopping the baby eaters has more utility than letting the superhappies do the same thing to us would cost.

What do the baby eaters tell us about ethics?

The story brings up the possibility that the disutility of the babyeaters might outweigh the utility of humanity. There's certainly nothing logically impossible about that.

Counterfactual Mugging: Why should you pay?

Just ask which algorithm wins, then. At least in these kinds of situations, UDT does better. The only downside is that the algorithm has to check whether it's in this kind of situation; it might not be worth practicing.
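As a minimal sketch of the "which algorithm wins" comparison: in the standard counterfactual mugging, Omega flips a fair coin, asks you for $100 if it lands one way, and would have paid you $10,000 the other way if it predicted you would pay. (The $100/$10,000 stakes are the usual illustration, not something fixed by the comment above.)

```python
# Expected payoff of a policy in the standard counterfactual mugging,
# averaged over both outcomes of Omega's fair coin. The $100 / $10,000
# stakes are the conventional illustration, assumed here.
def expected_value(pays_when_asked: bool,
                   cost: float = 100.0,
                   reward: float = 10_000.0) -> float:
    losing_branch = -cost if pays_when_asked else 0.0    # coin went against you: Omega asks
    winning_branch = reward if pays_when_asked else 0.0  # Omega rewards predicted payers
    return 0.5 * losing_branch + 0.5 * winning_branch

print(expected_value(True))   # 4950.0 -- the paying (UDT-style) algorithm
print(expected_value(False))  # 0.0    -- the refusing algorithm
```

The paying algorithm wins on average, even though it loses in the particular branch where it actually gets asked.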

Bayesian Probability is for things that are Space-like Separated from You

It's a variant of the liar's paradox. If you say the statement is unlikely, you're agreeing with what it says. If you agree with it, you clearly don't think it's unlikely, so it's wrong.
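A toy way to see that no credence works, assuming the statement in question is something like S = "you assign S a probability below 1/2" (the exact wording is my assumption here): for a definite claim, a coherent assignment should be 1 if the claim is true and 0 if it's false, and no assignment satisfies that.

```python
# Toy check: S asserts "the probability you assign to S is below 1/2".
# If S is true you should assign it probability 1; if false, probability 0.
# No assignment p agrees with the truth value it implies.
def s_is_true(p: float) -> bool:
    return p < 0.5  # S asserts exactly this about your own credence

for p in [i / 10 for i in range(11)]:
    implied = 1.0 if s_is_true(p) else 0.0
    print(f"p={p:.1f}  S true: {s_is_true(p)}  consistent: {p == implied}")
```

Every row comes out inconsistent, which is the paradox.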

What do the baby eaters tell us about ethics?

Vigilantism has been found lacking. If I wanted to help with that problem in particular, I'd become a cop or vote for politicians who would put a higher priority on it. That seems directly comparable to what the humans in the story intended to do for most of it.

What the baby eaters are doing is worse by most people's standards than anything in our history, at least if scale counts for something. Humans don't even need a shared utility function; there just needs to be a cluster around what most people would reflectively endorse. Paperclip maximizers might fight each other over the exact definition of a paperclip, but a pencil maximizer is clearly not helping by any of their standards.

Also, the baby eaters aren't Spartans. If you gave the Spartans a cure-all for birth defects, they would stop killing their kids, and they certainly wouldn't complain about the cure.