Given two options:

1. Alter reality so it more closely aligns with human expectations.

2. Alter humans so their expectations more closely align with reality.

If it cost the same to implement either option, which would you prefer and why?


3. Alter reality so it more closely aligns with human desires.

4. Alter humans so their desires more closely align with reality.

1a-4a: Replace "humans" by "oneself".

1b-4b: Replace "humans" by "the people you deal with from day to day".

So of the expanded options, which would you choose?

In that case, how do you handle the problem of humans wanting the "wrong" things? (Meaning people wanting things that ultimately result in bad outcomes for themselves or others.)

Would altering reality so it more closely aligns with the humans' desires include avoiding negative consequences, side effects, and externalities?

You can always come up with exceptions to general rules. Yes, these issues would have to be handled. How they might be handled is not something that strikes me as useful to discuss in the present context, which is concerned with very general and abstract scenarios, and with which of them are, in general, preferable to others.

Generally speaking, I agree with Swift's likening of avoiding suffering by cutting off one's desires to cutting off one's feet because one lacks shoes.

The Stoical scheme of supplying our wants by lopping off our desires, is like cutting off our feet when we want shoes.

Lovely quote, thank you.

Definitely 2. If you start messing with reality, things get really boring really quickly.

I think that's my choice as well. Human expectations are much narrower than reality appears to be. If reality conformed to human expectations, then no one would ever be surprised, which I think would be sad.