I only have second-hand descriptions of suicidal thought processes, but I’ve heard from some who say they had become convinced that their existence was a net negative for the world and the people they care about, and they came to their decision to commit suicide through a sort of (misguided) utilitarian calculation. I tried to give the man this perspective rather than the apathetic perspective you suggest. There’s diversity in the psychology of suicidal people. Do no suicidal people (or sufficiently few) have this utilitarian type of psychology?
I’m glad you enjoyed it! I had heard of people making promises similar to your Trump-donation one. The idea for this story came from applying that idea to the context of suicide prevention. The part about models is my attempt to explain my (extremely incomplete) grasp of Functional Decision Theory in the context of a story. https://www.lesswrong.com/tag/functional-decision-theory
4/8 of Eliezer Yudkowsky's posts in this list have a score of minus 9. Compare this with 1/7 for duncan_sabien, 0/6 for paulfchristiano, 0/5 for Daniel Kokotajlo, or 0/3 for HoldenKarnofsky. I wonder why that is.
On one level, the post used a simple but emotionally and logically powerful argument to convince me that the creation of happy lives is good.
On a higher level, I feel like I switch positions on population ethics every time I read something about it, so I am reluctant to predict that I will hold the post's position for long. I remain unsettled that the field of population ethics, which is central to long-term visions of what the future should look like, has so little solid knowledge. My thinking, and therefore my actions, will remain split among the convincing population-ethics positions.
This sequence made me doubt the soundness of philosophical arguments founded on what is "intuitive" (which this post very much relies upon). I don't know how someone might go about doing population ethics from a psychological point of view, but the post's subtitles "Preciousness," "Gratitude," and "Reciprocity" give some clues.
A testable aspect of the post would be to find out if the responses to the Wilbur and Michael thought experiments are universal. Also, I'd be interested to know how many of the people who read this post in 2021 (and have interacted with population ethics since then) maintain their position.
Carlsmith should follow up with his take on the Repugnant Conclusion. The Repugnant Conclusion is the central question of population ethics, so excluding it from this post is a major oversight.
Notes: The "famously hard" link is broken.
I’m here with a few others in a booth near the door. We haven’t seen Uzair.
Yes, it is. I wanted to win, and there is no rule against “going against the spirit” of AI Boxing.
I think about AI Boxing in the frame of Shut Up and Do the Impossible, so I didn’t care that my solution doesn’t apply to AI Safety. Funnily, that makes me an example of misalignment.
I have spent many hours on this, and I have to make a decision within two days. There's always the possibility that there is more important information to find, but even if I stayed up all night and did nothing else, I would not be able to read the entirety of the websites, news articles, opinion pieces, and social media posts relating to the candidates. Research costs resources! I suppose what I'm asking for is a way of knowing when to stop looking for more information. Otherwise I'll keep trying possibility 2 over and over and end up missing the election deadline!
Thanks for the response. Those are fair reasons. I should have contributed more.
The LessWrong community is big, and some of its members are in Florida. If anyone had interesting things to share about the election, I wanted to encourage them to do so.
I hadn’t considered this. You point out a big flaw in the neighbor’s strategy. Is there a way to repair it?