Cole Killian

Wiki Contributions


Unlike a maximiser, which will attempt to squeeze every drop of utility out of the universe that it can, a satisficer will be content when it reaches a certain level of expected utility (a satisficer that is content with a certain level of utility is simply a maximiser with a bounded utility function).

Does it make sense to claim that a satisficer will be content when it reaches a certain level of expected utility, though? Some satisficers may work that way, but they don't all need to work that way. Expected utility is somewhat arbitrary.

Instead, you could have a satisficer which tries to maximize the probability that the utility is above a certain value. This leads to different dynamics than maximizing expected utility. What do you think?
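A toy illustration of the distinction (my own sketch, not from the thread): two lotteries with identical expected utility can rank differently under a satisficer that maximizes the probability of clearing a threshold. The lottery values and the threshold here are arbitrary choices for the example.

```python
# Each action is a lottery: a list of (probability, utility) pairs.
# "safe" and "risky" have the same expected utility (5.0), but a
# threshold-satisficer with threshold 6 strictly prefers "risky".
actions = {
    "safe":  [(1.0, 5.0)],               # utility 5 with certainty
    "risky": [(0.5, 10.0), (0.5, 0.0)],  # utility 10 or 0, 50/50
}

def expected_utility(lottery):
    """Standard expected-utility score."""
    return sum(p * u for p, u in lottery)

def prob_above(lottery, threshold):
    """Threshold-satisficer score: P(utility >= threshold)."""
    return sum(p for p, u in lottery if u >= threshold)

for name, lottery in actions.items():
    print(name, expected_utility(lottery), prob_above(lottery, 6.0))
# safe  -> EU 5.0, P(u >= 6) = 0.0
# risky -> EU 5.0, P(u >= 6) = 0.5
```

An expected-utility maximizer is indifferent between the two, while the threshold-satisficer picks the risky action, so the two objectives really do produce different dynamics.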

Related post on utility functions here:

Is there a reason for not link-posting all Overcoming Bias posts to LessWrong?

Could you elaborate on the reasoning behind the high bar for alignment forum membership?

I looked briefly into Ziz. My conclusion is that she had some interesting ideas I hadn't heard before, and some completely ridiculous ideas. I couldn't find her definition of "good" or "bad" or the idea of tiling the future lightcone with copies of herself.

Thanks for reminding me about that scene from the Matrix. Gave it a look on YouTube. Awesome movie.

I'm wondering, how do you look at the question of what we want to tile the future lightcone with?

Yeah, I like the way you describe it.

I'll check out his writings on the history of Buddhism and meditation, thanks.

I agree it can be seen as a destructive meme. At the same time, I wonder why it has spread so little. Maybe because it doesn't have a very evangelical property. People who become infected with it might not have much of a desire to pass it on to others.

Hey, thanks for the link Richard, that was an interesting read. There definitely seem to be some similarities.

I was actually thinking about what we want to tile the future lightcone with the other day. This was the progression I saw:

  • Conventional Morality :: Do what feels right without thinking much about it.
  • Utilitarianism I :: The atomic unit of "goodness" and "badness" is the valence of human experience. The valence of experience across all humans matters equally. The suffering of a child in Africa matters just as much as the suffering of my neighbor.
  • Utilitarianism II :: The valence of experience across all sentient things matters equally. i.e. The suffering of cows matters too.
  • Utilitarianism III :: The valence of experience across all sentient things across time matters equally. The suffering of sentient things in the future matters just as much as the suffering of my neighbor today. i.e. longtermism
  • Utilitarianism IV :: Understanding valence and consciousness takes a lexicographical preference over any attempt to improve the valence of sentient things as we understand it today because only with this better understanding can we efficiently maximize the valence of sentient things. i.e. veganism is only helpful in its ability to speed up our ability to understand consciousness and release a utilitron shockwave. Everything before the utilitron shockwave can be rounded to zero.
  • Utilitarianism V :: Upon understanding consciousness, we can expect to have our preferences significantly shaken in a way that we can't hope to properly anticipate (we can't expect to have properly understood our preferences with such a weak understanding of "reality"). The lexicographical preference then becomes understanding consciousness and making the "right" decision on what to do next upon understanding it. In this case, it would mean that all of our "moral" actions were only good in so far as their contribution to this revelation and making the "right" decision upon understanding consciousness.
  • Utilitarianism VI :: ?

Utilitarianism V has some similarities to tiling the future lightcone with copies of yourself which can then execute based on their updated preferences in the future.

But "yourself" is really just a collection of memes. It will be the memes that are propagating themselves like a virus. There's no real coherent persistent definition of "yourself".

What do you want to tile the future lightcone with?

I took a look at Meaningness a few months ago but couldn't really get into it. It felt a bit too far from rationality and very hand-wavy.

Did you find Meaningness valuable? I may take another look

Your assessment seems very accurate!

It didn't occur to me that there are probably many more people like him than I realize. I'm not sure I've met any. Have you?

My response is to say that sometimes it doesn't all add up to normality. Sometimes you learn something which renders your previous way of living obsolete.

It's similar to the idea of thinking of yourself as having free will even if it isn't the case: It can be comforting to think of yourself as having continuity of consciousness even if it isn't the case.

Wei Dai posts here suggesting that we "keep all of our (potential/apparent) values intact until we have a better handle on how we're supposed to deal with ontological crises in general". So basically, favor the status quo until you develop an alternative and understand its implications.

What do you think?
