I looked briefly into Ziz. My conclusion is that she had some interesting ideas I hadn't heard before, and some completely ridiculous ones. I couldn't find her definition of "good" or "bad", or the idea of tiling the future lightcone with copies of herself.
Thanks for reminding me about that scene from The Matrix. Gave it a look on YouTube. Awesome movie.
I'm wondering, how do you look at the question of what we want to tile the future lightcone with?
Yeah, I like the way you describe it.
I'll check out his writings on the history of Buddhism and meditation, thanks.
I agree it can be seen as a destructive meme. At the same time, I wonder why it has spread so little. Maybe because it doesn't have a very evangelical property. People who become infected with it might not have much of a desire to pass it on to others.
Hey, thanks for the link Richard, that was an interesting read. There definitely seem to be some similarities.
I was actually thinking about what we want to tile the future lightcone with the other day. This was the progression I saw:
Utilitarianism V has some similarities to tiling the future lightcone with copies of yourself, which can then act on their updated preferences in the future.
But "yourself" is really just a collection of memes. It will be the memes that are propagating themselves like a virus. There's no real coherent persistent definition of "yourself".
What do you want to tile the future lightcone with?
I took a look at Meaningness a few months ago but couldn't really get into it. It felt a bit too far from rationality and very hand-wavy.
Did you find Meaningness valuable? I may take another look
Your assessment seems very accurate!
It didn't occur to me that there are probably many more people like him than I realize. I'm not sure I've met any. Have you?
My response is to say that sometimes it doesn't all add up to normality. Sometimes you learn something which renders your previous way of living obsolete.
It's similar to the idea of thinking of yourself as having free will even if it isn't the case: it can be comforting to think of yourself as having continuity of consciousness even if it isn't the case.
Wei Dai posts here (https://www.lesswrong.com/posts/uXxoLPKAdunq6Lm3s/beware-selective-nihilism) suggesting that we "keep all of our (potential/apparent) values intact until we have a better handle on how we're supposed to deal with ontological crises in general". So basically, favor the status quo until you develop an alternative and understand its implications.
What do you think?
Does it make sense to claim that a satisficer will be content when it reaches a certain level of expected utility, though? Some satisficers may work that way, but they don't all need to. Expected utility is a somewhat arbitrary target.
Instead, you could have a satisficer that tries to maximize the probability that utility ends up above a certain value. This leads to different dynamics from maximizing expected utility. What do you think?
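To make that concrete, here's a minimal sketch of the difference (a toy example of my own; the lotteries, threshold, and function names are all made up for illustration):

```python
# Toy sketch, not from any particular agent framework:
# lotteries over utility outcomes, as (utility, probability) pairs.
LOTTERY_A = [(10, 1.0)]             # 10 utility, guaranteed
LOTTERY_B = [(100, 0.2), (0, 0.8)]  # 100 utility 20% of the time, else 0

def expected_utility(lottery):
    """Expected utility: sum of utility * probability over outcomes."""
    return sum(u * p for u, p in lottery)

def prob_at_least(lottery, threshold):
    """Probability that realized utility meets or exceeds the threshold."""
    return sum(p for u, p in lottery if u >= threshold)

# An expected-utility maximizer prefers B (EU of 20 vs 10)...
print(max([LOTTERY_A, LOTTERY_B], key=expected_utility))
# ...while a satisficer maximizing P(utility >= 5) prefers A (1.0 vs 0.2).
print(max([LOTTERY_A, LOTTERY_B], key=lambda lot: prob_at_least(lot, 5)))
```

The expected-utility maximizer takes the gamble while the threshold satisficer locks in the sure thing, so the two designs really do come apart.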
Related post on utility functions here: https://colekillian.com/posts/sbf-and-pascals-mugging/