I am not sure how to even respond to this. I do not know what drives you to hatefully twist my words, depicting my cry for help as some kind of contrived attempt at manipulation, but you are obviously not acting with anything close to altruistic intent.

Yes, I am entirely serious about this. Far more than you know. Perhaps if you had contacted me to have an intelligent discussion, instead of proceeding straight to accusations built on sweeping generalizations, you would have realized that.

I have had several people message me already, and we are currently having civil discussions about potential future scenarios. I am certain they would all attest that they are not being 'emotionally exploited', as you seem to think is my goal. I publicly mentioned suicide because genuine consideration of the possibility was the entire point of the post, and I (correctly, for the most part) assumed that this community was mature enough to handle it without any drama.

You clearly have zero experience dealing with suicidal individuals, and would do well to stay away from this discussion. I had a hard enough time working up the courage to make that post, and I really do not want any drama from this. I hope you will do the mature thing and just leave me alone.

I am considering ending my life because of fears related to AI risk. I am posting here because I want other people to review my reasoning process and help ensure I make the right decision.

First, this is not an emergency situation. I do not currently intend to commit suicide, nor have I made any plan for doing so. No matter what I decide, I will wait several years to be sure of my preference. I am not at all an impulsive person, and I know that ASI is very unlikely to be invented in less than a few decades.

I am not sure if it would be appropriate to talk about this here, and I prefer private conversations anyway, so the purpose of this post is to find people willing to talk with me through PMs. To summarize my issue: I only desire to live because of the possibility of utopia, but I have recently realized that ASI-provided immortal life carries a significant chance of being bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, please consider sending me a message with a brief explanation of your beliefs and your intent to help me. I especially urge you to contact me if you already have similar fears of AI, even if you are a lurker and are unsure whether you should. Because of the sensitive nature of this topic, I may not respond unless you provide an appropriately genuine introduction and/or have a legitimate posting history.

Please do not reply or PM if you just want to tell me to call a suicide prevention hotline, recite the standard objections to suicide, or give me depression treatment advice. I might take a long time to respond to PMs, especially if several people end up contacting me. If nobody contacts me, I will repost this in the next discussion thread or on another website.

Edit: The word limit on LW messages is problematic, so please email me at sad_dolphin@protonmail.com instead.