It's not a question of timeframes, but of how likely you are to lose the war, how big the concessions would have to be to prevent the war, and how much the war would cost you even if you win (costs can have flow-through effects into the far future).
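To make that comparison concrete, here is a toy expected-cost sketch; all numbers are hypothetical:

```python
# Toy comparison: concede vs. fight. All numbers are hypothetical.
p_lose = 0.3            # probability of losing the war
cost_lose = 100.0       # cost if the war is lost
cost_win = 20.0         # cost of fighting even when you win
cost_concession = 25.0  # cost of the concessions that would prevent the war

expected_war_cost = p_lose * cost_lose + (1 - p_lose) * cost_win  # 44.0
print("concede" if cost_concession < expected_war_cost else "fight")
```

With these numbers the concession is cheaper; flow-through effects into the far future would simply enter as additions to the cost terms.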
Not that any of this matters to the NK discussion.
The idea is that isolationism and destruction aren't cheaper than compromise. Of course this doesn't work if there is no mechanism of verification between the entities, or no mechanism to credibly change the utility functions. It also doesn't work if the utility functions are exactly inverse, i.e. neither side has priorities that matter less to itself but more to the other side, so there is nothing to trade.
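A minimal sketch of the gains-from-trade logic behind this, assuming two issues and hypothetical importance weights:

```python
# Sketch: each side concedes the issue that matters less to it.
# Weights are hypothetical; higher weight = more important to that side.
weights_1 = {"A": 0.8, "B": 0.2}   # side 1 cares mostly about issue A
weights_2 = {"A": 0.2, "B": 0.8}   # side 2 cares mostly about issue B

# Compromise: each side gets its high-priority issue settled its way.
u1_compromise = weights_1["A"]     # side 1 wins on A, concedes B
u2_compromise = weights_2["B"]     # side 2 wins on B, concedes A

# Conflict baseline: each side expects, say, half of each issue.
u1_conflict = 0.5 * (weights_1["A"] + weights_1["B"])
u2_conflict = 0.5 * (weights_2["A"] + weights_2["B"])

print(u1_compromise > u1_conflict, u2_compromise > u2_conflict)  # True True
# If both sides weight the issues identically (exactly inverse preferences
# over outcomes), no issue is cheap for one side and dear to the other, and
# the compromise yields no gain over the baseline.
```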
A human analogy, although an imperfect one, would be to design a law that fulfills the most important priorities of a parliamentary majorit...
>The symmetric system is in favor of action.
This post made me think how much I value the actions of others, rather than just their omissions. And I have to conclude that the actions I value most in others are the ones that *thwart* actions of yet other people. When police and military take action to establish security against entities who would enslave or torture me, I value it. But on net, the activities of other humans are mostly bad for me. If I could snap my fingers and all other humans dropped dead (became inactive), I would instrumentally be bette...
>as the world branches, my total measure should decline many orders of magnitude every second
I'm not sure why you think that. From any moment in time, it's consistent to count all future forks toward my personal identity without having to count all other copies that don't causally branch from my current self. Perhaps this depends on how we define personal identity.
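For what it's worth, the standard formalism agrees with the forward-counting view: unitary branching preserves total Born measure, i.e. if $\lvert\psi\rangle \to \sum_i c_i \lvert\psi_i\rangle$ with $\sum_i \lvert c_i\rvert^2 = 1$, then the summed measure of all branches descending from my current self equals my current measure. A decline by many orders of magnitude per second only appears if one tracks a single branch at a time.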
>but it doesn't affect my decision making.
Perhaps it should - tempered by the possibility that your assumptions are incorrect, of course.
Another accounting trick: Coun...
That's a clever accounting trick, but I only care what happens in my actual future(s), not elsewhere in the universe that I can't causally affect.
>Thus, by not signing for cryonics she increases the share of her futures where she will be hostily resurrected in total share of her futures.
But she decreases the share of her futures where she will be resurrected at all, and since only some of those contain hostile resurrection, she actually decreases the absolute share of her futures where she will be hostilely resurrected. She just won't consciously experience the futures where she doesn't exist, which, from the perspective of anyone who assigns suffering negative utility, is better than suffering.
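To put purely illustrative numbers on that:

```python
# P(hostile resurrection) = P(resurrected) * P(hostile | resurrected).
# All numbers are hypothetical.
p_res_signed, p_res_unsigned = 0.10, 0.01  # chance of any resurrection
p_hostile_given_res = 0.3                  # assumed the same either way

print(p_res_signed * p_hostile_given_res)    # 0.03  (signed up)
print(p_res_unsigned * p_hostile_given_res)  # 0.003 (not signed up)
# Not signing lowers the absolute probability of hostile resurrection,
# even if hostile cases make up a larger fraction of the remaining
# futures in which she is resurrected at all.
```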
>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.
They're almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn't naturally occur. And that life obviously will contain large amounts ...
>Our life could be eternal and thus have meaning forever.
Or you could be tortured forever without consent and without even being allowed to die. You know, the thing organized religion has spent millennia spinning into moral acceptability, to the point where it is now part of common culture, including popular culture.
Let's just look at our culture, as well as contemporary and historical global cultures. Do we have:
I’m confused about OpenAI’s agenda.
Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research that’s not specifically aimed at safety.
If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.
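As a toy illustration of that last claim (the numbers and the uniform-arrival assumption are hypothetical):

```python
# Toy model: arrival year of AI dystopia is uniform over a window; only
# arrivals before one's death year matter personally. Numbers hypothetical.
def p_dystopia_while_alive(arrival_start, arrival_end, death_year,
                           p_dystopia=0.1):
    overlap = max(0.0, min(death_year, arrival_end) - arrival_start)
    return p_dystopia * overlap / (arrival_end - arrival_start)

print(p_dystopia_while_alive(2060, 2160, death_year=2080))  # 0.02
print(p_dystopia_while_alive(2030, 2130, death_year=2080))  # 0.05 (sped up)
```

Shifting the same arrival window 30 years earlier more than doubles the personal exposure, even with the overall dystopia probability held fixed.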
Not all proposed solutions to x-risk fit this pattern: if government spends taxes to build survival shelters that will shelter only a chosen few, who will then go on to perpetuate humanity in case of a cataclysm, most taxpayers receive no personal benefit.
Similarly, if government-funded programs solve AI value loading problems and the ultimate values don't reflect my personal self-regarding preferences, I don't benefit from the forced funding and may in fact be harmed by it. This is also true for any scientific research whose effect can be harmful to me personally even if it reduces x-risk overall.
What have you read about it that has caused you to stop considering it, or to overlook it from the start?
I reject impartiality on the grounds that I'm a personal identity and therefore not impartial. The utility of others is not my utility, therefore I am not a utilitarian. I reject unconditional altruism in general for this reason. It amazes me in hindsight that I was ever dumb enough to think otherwise.
Can you teach me how to see positive states as terminally (and not just instrumentally) valuable, if I currently don’t?
Teach, no, but there are some ...
I observe that you are communicating in bad faith and with hostility, so I will use my right to exit for any further communication with you.
What? Why? No sane person would classify "he will murder me if I leave" as "the right to exit isn't blocked". I don't expect much steelmanning from the downvote-bots here, but if you're strawmanning on a rationalist board, good-faith communication becomes disincentivized. It's not like I have skin in the game; all my relationships are nonviolent and I don't give a shit about either feminism or anti-feminism.
Still, if "she's such a nice person but sometimes she explodes" isn't compatible with revealed ...
I didn't read the whole post, but most of that is just the right to exit being blocked by various mechanisms, including socioeconomic pressure and violence. And the socioeconomic ones aren't even necessarily incompatible with revealed preference; if the alternative is homelessness, this may suck, but the partner still has no obligation to continue the relationship and the socioeconomic advantages are obviously a part of the package.
>if we are able to wirehead in an effective manner it might be morally obligatory to force them into wireheading to maximize utility.
Not interested in this kind of "moral obligation". If you want to be a hedonistic utilitarian, use your own capacity and consent-based cooperation for it.
I think it's worth making the distinction between reward hacking, pleasure wireheading, and addiction more clearly. There's some overlap, but these are different concepts with different implications for our utility.
The whole ideological subtext reeks of puritan moralism. You imply that we exist to make humanity's future bigger, rather than to do whatever the hell we actually prefer.
As long as pleasure wireheading is consensual, you longtermists can simply forgo your own pleasure wireheading and instead work very hard on the whole growth an...
The demand for sexual violence in fiction is easy to explain. It allows us to fantasize about behavior that would be prohibitively disadvantageous in practice, and it allows us to reflect on hypothetical situations that are relevant to our interests, such as how to deal with violent people.
My default model for abusive relationships *where the right to exit is not blocked* is indeed revealed preference. Not necessarily revealed preference for the abuse, but for the total package of goods and bads in the relationship.
The sex and romance market is a market af...
>Indeed, as mentioned, without altruism, voting behaviour is fairly inexplicable.
I vote to reward or penalize politicians based on their previous choices, rather than to create better outcomes. That is, I look back, not forward.
There are some exceptions, e.g. when a candidate sends unusually credible signals before assuming office, such as glorifying torture. Other than that, I mostly ignore promises and instead implement reciprocity for past decisions.
Edited after more reflection:
Whereas the expected benefit of voting to you alone is the Br...
I agree. I certainly didn't mean to imply that the Trump administration is trustworthy.
My point was that the analogy of AIs merging their utility functions doesn't apply to negotiations with the NK regime.