Andaro

Should Effective Altruism be at war with North Korea?

I agree. I certainly didn't mean to imply that the Trump administration is trustworthy.

My point was that the analogy of AIs merging their utility functions doesn't apply to negotiations with the NK regime.

Should Effective Altruism be at war with North Korea?

It's not a question of timeframes, but of how likely you are to lose the war, how big the concessions would have to be to prevent the war, and how much the war would cost you even if you win (costs can have flow-through effects into the far future).

Not that any of this matters to the NK discussion.

Should Effective Altruism be at war with North Korea?

The idea is that isolationism and destruction aren't cheaper than compromise. Of course this doesn't work if there's no mechanism of verification between the entities, or no mechanism to credibly change the utility functions. It also doesn't work if the utility functions are exactly inverse, i.e. neither side can concede priorities that are less important to them but more important to the other side.

A human analogy, although an imperfect one, would be to design a law that fulfills the most important priorities of a parliamentary majority, even if each individual would prefer a somewhat different law.

I don't think something like this is possible with untrustworthy entities like the NK regime. They're torturing and murdering people as they go, of course they're going to lie and break agreements too.

Asymmetric Justice

>The symmetric system is in favor of action.

This post made me think about how much I value the actions of others, rather than just their omissions. And I have to conclude that the actions I value most in others are the ones that *thwart* actions of yet other people. When police and military take action to establish security against entities who would enslave or torture me, I value it. But on net, the activities of other humans are mostly bad for me. If I could snap my fingers and all other humans dropped dead (became inactive), I would instrumentally be better off than I am now. Sure, I'd lose their company and economic productivity, but it would remove all intelligent adversaries from my universe, including those who would torture me.

>The Good Place system...

I think it's worth noting that you have chosen an example of a system where people are not just tortured, but tortured *for all eternity, without the right to ever actually die*, and not even the moral philosopher character manages to formulate a coherent in-depth criticism of that philosophy. I know it's a comedy show, but it's still premised on accepting that there would be a system of eternal torture, that this system would be moralized as justice, and that it would of course be nonconsensual, without an exit option.

How do S-Risk scenarios impact the decision to get cryonics?

>as the world branches, my total measure should decline many orders of magnitude every second

I'm not sure why you think that. From any moment in time, it's consistent to count all future forks toward my personal identity without having to count all other copies that don't causally branch from my current self. Perhaps this depends on how we define personal identity.

>but it doesn't affect my decision making.

Perhaps it should - tempered by the possibilities that your assumptions are incorrect, of course.

Another accounting trick: count futures where you don't exist as neutral perspectives of your personal identity (empty consciousness). This should collapse the distinction between total and relative measure. Yes, it's a trick, but the alternative is even more counterintuitive to me.

Consider a classical analogy: you're in a hypothetical situation where your future consists entirely of negative utility. Say you suffer -5000 utils per unit time for the next 10 minutes, then you die with certainty. But you have the option of adding another 10 trillion years of life at -4999 utils per unit time. If we regard relative rather than total measure, this should be preferable, because your average will be ~-4999 utils per unit time rather than -5000. But it's clearly a much more horrible fate.
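The arithmetic in this thought experiment can be checked with a short sketch. The numbers are the ones above; the second fate's horizon is scaled down to a stand-in length so it runs quickly, since only the ratio matters, and all names are illustrative:

```python
# Fates from the thought experiment, in "utils per unit time" (here: minutes).
MINUTES = 10

# Fate A: -5000 utils/min for 10 minutes, then certain death.
fate_a = [-5000] * MINUTES

# Fate B: the same 10 minutes, plus a long extension at -4999 utils/min.
# (1,000,000 minutes stands in for 10 trillion years.)
EXTRA_MINUTES = 1_000_000
fate_b = fate_a + [-4999] * EXTRA_MINUTES

def total_utility(stream):
    return sum(stream)

def average_utility(stream):
    return sum(stream) / len(stream)

# Average (relative) measure prefers the extended fate...
assert average_utility(fate_b) > average_utility(fate_a)
# ...while total measure prefers the short one, matching the intuition
# that the extended fate is far more horrible.
assert total_utility(fate_a) > total_utility(fate_b)
```

This is exactly the mere-addition pattern: the extension raises the average while making the total vastly worse.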

I always found average utilitarianism unattractive because of mere-addition problems like this, in addition to all the other problems utilitarianisms have.

How do S-Risk scenarios impact the decision to get cryonics?

That's a clever accounting trick, but I only care what happens in my actual future(s), not elsewhere in the universe that I can't causally affect.

How do S-Risk scenarios impact the decision to get cryonics?

>Thus, by not signing for cryonics she increases the share of her futures where she will be hostily resurrected in total share of her futures.

But she decreases the share of her futures where she will be resurrected at all, some of which contain hostile resurrection; therefore she really does decrease the share of her futures where she is hostilely resurrected. She just won't consciously experience the futures where she doesn't exist, which is better than suffering from the perspective of anyone who assigns suffering negative utility.

A Case for Taking Over the World--Or Not.

>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.

They're almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn't naturally occur. And that life obviously will contain large amounts of suffering. People don't like hearing that, especially in the x-risk reduction demographic, but it's pretty clear the goals are at odds.

Since I'm a non-altruist, there's not really any reason to care about most of that future suffering (assuming I'll be dead by then), but there's not really any reason to care about saving humanity from extinction, either.

There are some reasons why the angle is not a full 180 degrees: there might be aliens who would also cause suffering and humanity might compete with them for resources, humanity might wipe itself out in ways that also cause suffering, such as AGI, or there might be practical correlations between political philosophies that cause high suffering and also a high extinction probability, e.g. torturers are less likely to care about humanity's survival. But none of these make the goals point in the same direction.

A Roadmap: How to Survive the End of the Universe

>Our life could be eternal and thus have meaning forever.

Or you could be tortured forever without consent and without even being allowed to die. You know, the thing organized religion has spent millennia moralizing through endless spin efforts, which is now a part of common culture, including popular culture.

Let's just look at our culture, as well as contemporary and historical global cultures. Do we have:

  • a consensus of consensualism (life and suffering should be voluntary)? Nope, we don't.
  • a consensus of anti-torture (torturing people being illegal and immoral universally)? Nope, we don't.
  • a consensus of proportionality (finite actions shouldn't lead to infinite punishments)? Nope, we don't.

You'd need at least one of these to just *reduce* the probability of eternal torture, and then it still wouldn't guarantee an acceptable outcome. And we have none of these.

They would if they could, and the only reason you're not already being tortured for all eternity is that they haven't found a way to implement it.

The probability of getting it done is small, but that is not an argument in favor of your suggestion: if it can't be done, you don't get eternal meaning either; if it can be done, you have effectively increased the risk of eternal torture for all of us by working in this direction.

Two Small Experiments on GPT-2

I’m confused about OpenAI’s agenda.

Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research that’s not specifically aimed at safety.

If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.
