
Value Deathism

by Vladimir_Nesov · 1 min read · 30th Oct 2010 · 121 comments

Ben Goertzel:

I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.

Robin Hanson:

Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.

We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.

Change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop the uncontrolled "evolution" of value (value drift) or to recover more of the astronomical waste.

Regardless of the difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but it's still not OK; the value judgment cares not for the feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.
