I'm curious as to what, more specifically, The Path of Courage looks like.

If broken legs have not been eliminated... Would a person still learn, over time, how to completely avoid breaking a leg - and the difference lies in having to learn it, rather than starting out with unbreakable legs? Or do we remain forever in danger of breaking our legs (which is okay because we'll heal and because the rest of life is less brutal in general)?

If the latter... What happens to "optimizing literally everything"? Will we experience pain and then make a conscious decision not to prevent it next time, knowing that our life is actually richer for it? Or will we have mental states such that we bemoan and complain that pain happened, and hope it doesn't happen again, but just-not-think-about actually preventing it ourselves? Or do we, in fact, keep optimizing as hard as we can... and simply trust that we'll never succeed so greatly that we de-story our life and regret it?

> The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.

That may be the only reason we evolved happiness or pleasure, but we don't have to care about what evolution optimized for, when designing a utopia. We're allowed to value happiness for its own sake. See Adaptation-Executers, not Fitness-Maximizers.

> If we reached all possible goals, and ran out of possible goals to strive for, what do we do then?

Worthwhile goals are finite, so it's true we might run out of goals someday, and from then on be bored. But it doesn't frighten me too much because:

  1. We're not going to run out of goals as soon as we create an AI that can achieve them for us; we can always tell it to let us solve some things on our own, if it's more fun that way.

  2. The space of worthwhile goals is still ridiculously big. To live a life where I accomplish literally everything I want to accomplish is good enough for me, even if that life can't be literally infinite.* Plus, I'm somewhat open to the idea of deleting memories/experience in order to experience the same thing again.

  3. There are other fun things to do that don't involve achieving goals, and that aren't used up when you do them.

*Actually, I am a little worried about a situation where the stronger and more competent I get, the quicker I run out of life to live... but I'm sure we'll work that out somehow.

> I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn't make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it?

I guess technically the real goal is to be "close to perfection", as close as possible. We pretend that the goal is "perfection" for ease of communication, and because (as imperfect humans) we can sometimes trick ourselves into achieving more by setting our goals higher than what's really possible.

EY, I'm not sure I'm with you about needing to get smarter to integrate all new experiences. If we want to stay and slay every monster, couldn't we instead allow ourselves to forget some experiences, and to not learn at maximum capacity?

It does seem wrong to willfully not learn, but maybe as a compromise, I could learn all that my ordinary brain allows, then allow that to act as a cap and not augment my intelligence until that level of challenges fully bored me. I could maybe even learn new things while forgetting others to make space.

Or am I merely misunderstanding something about how brains work?

My motivation for taking this tack is that I find the fun of making art and telling stories more compelling than the fun of learning. I'm therefore not inclined to learn as fast as possible if that means skipping over other fun, and I'm also disinclined to become so competent that I'm alienated from the hardships and imperfections that give my life a story and let me enjoy stories.