Comments

LessWrong Has Agree/Disagree Voting On All New Comment Threads

Giving a post's creator the option to enable/disable this secondary-axis voting seems valuable. A post creator will probably know whether their post will generally attract nuanced comments with differing opinions, or is more lightweight (e.g. "what's your favourite ice cream?") and would benefit from the lighter UI.

LessWrong Has Agree/Disagree Voting On All New Comment Threads

If you're really into manipulating public opinion, you should also be thinking about strong-upvoting posts you disagree with but that are weakly argued, so as to present an easily defeated strawman.

I'd say you're correct that this new feature doesn't change much about the pre-existing incentives to manipulate comment visibility, but that was never its point, so it's not a mark against the update.

Relationship Advice Repository

Though I expected it to be a joke, I'm still happy that the first comment on this (good, btw) post is a call-out of the astrology section. I didn't bother to click the link because I didn't imagine I'd find anything of value behind it, so I didn't get the chance to confirm it was a joke until arriving at this comment.

MIRI announces new "Death With Dignity" strategy

I at first also downvoted, because your first argument looks incredibly weak (this post has little relation to arguing for/against the difficulty of the alignment problem; what update are you getting on that from here?), as did the follow-up 'all we need is...', which is a formulation that hides problems instead of solving them. 
Yet your last point does have import, and stating it explicitly is useful in allowing everyone to address it, so I reverted to an upvote for honesty, though with a strong disagree.

To the point: I also want to avoid being in a doomist cult. I'm not a die-hard, long-time "we're doomed if we don't align AI" guy, but from my reading over the last year I am indeed becoming convinced of the urgency of the problem. Am I getting hoodwinked by a doomist cult with very persuasive rhetoric? Am I myself hoodwinking others when I talk about these problems and they too start transitioning to do alignment work?

I answer these questions not by reasoning on 'resemblance' (i.e. how much it looks like a doomist cult) but by going into finer detail. An implicit argument being made when you call [the people who endorse the top-level post] a doomist cult is that they share the properties of other doomist cults (being wrong, having bad epistemics/policy, preying on isolated/weird minds) and are thus bad. I understand having a low prior for doomist-cult look-alikes actually being right (since there is no known instance of a doomist cult predicting the end of the world and being right), but that's no reason to turn into a rock (as in https://astralcodexten.substack.com/p/heuristics-that-almost-always-work?s=r ) that believes "no doom prophecy is ever right". You can't prove that no doom prophecy is ever right, only that they're rarely right (and can be right at most once).

I thus advise changing your question from "do [the people who endorse the top-level post] look like a doomist cult?" to "what would be a sufficient level of argument and evidence for me to take this doomist-cult-looking group seriously?". It's not a bad thing to call doom when doom is on the way. Engage with the object-level arguments and not with your precached pattern recognition of "this looks like a doom cult, so it is bad/not serious". Personally, I had similar qualms to the ones you're expressing, but having looked into the arguments, "alignment is hard and by default AGI is an existential risk" feels much stronger and more real to believe than its negation. I hope your conversation with Ben will be productive and that I haven't only expressed points you already considered (fyi, they have already been discussed on LessWrong).

A fate worse than death?

This sounds like dogma specific to the culture you're currently in, not some kind of universal rule. Throughout history many humans lived in slavery (think Rome), and a non-zero percentage greatly enjoyed their lives and would definitely have preferred them to being dead. It is still an open question what causes positive or negative valence, but submission is probably not a fundamental part of it.

A fate worse than death?

I appreciate that you went through the effort of sharing your thoughts, and, as some commenters have noted, I also find the topic interesting. Still, you do not seem to have laid bare the assumptions that guide your models, and on examination most of your musings miss essential aspects of valence as experienced in our universe. I will be examining this question through the lens of total utilitarian consequentialism, where you sum the integral of the valences of all lives over the lifespan of the universe. Do specify if you were using another framework.
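To make that concrete, here is a minimal formalization (the notation $v_i$, $b_i$, $d_i$, $T$ is mine, not yours): the total utility of the universe is

$$U = \sum_{i \in \text{lives}} \int_{b_i}^{d_i} v_i(t)\,dt,$$

where $v_i(t)$ is the valence experienced by life $i$ at time $t$, $b_i$ and $d_i$ are its birth and death times, and everything happens before the universe's finite end time $T$.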

When you conclude "Bad feelings are vastly less important than saved lives.", it seems you imply that 
1) over time our lives will always get better (or be positive), and 
2) there is always enough time left in the universe to contribute more good than bad. 
(You could otherwise be implying that life is good in and of itself, but that seems too wrong to discuss much, and I don't expect you would count someone suffering 100 years and then dying as better than someone dying straight away.) 
In an S-risk scenario, most lives suffer until heat death, and keeping those lives alive is worse than not, so 1 is not always true. 2 also doesn't hold in scenarios where a life is tortured for half of the universe's lifespan (supposing positive valence is symmetrical to negative valence), as the arithmetic below shows. It is only by assuming there is always infinite time left that you could be so bold as to say keeping people alive through suffering is always worth it, and that is not the case in our universe.
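Concretely, in my assumed notation: take a single life tortured at constant valence $-v$ for the first half of the universe's lifespan $T$, then maximally happy at $+v$ (the symmetry assumption) afterwards. Its contribution is

$$\int_0^{T/2} (-v)\,dt + \int_{T/2}^{T} v\,dt = -v\frac{T}{2} + v\frac{T}{2} = 0,$$

so even in the best case, keeping that life alive merely breaks even with it having died at $t = 0$, and any happiness short of the maximum makes its total strictly negative.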

More fundamentally, you don't seem to be taking into account not-yet-existing people/lives, the limited nature of our universe in time and accessible space, or the fungibility of accumulated valence. Suppose A lives 100 happy years, dies, and then B lives 100 happy years: there is as much experienced positive valence in the universe as if A had been around, happy, for 200 years (spelled out below). You call it a great shame that someone should die, but once they're dead they are not contributing negative valence, and there is room for new lives that contribute positive valence. Thus, if someone were fated to suffer for 100 years, it would be better that they die now and someone else be born and live 100 happy years, than to keep the original life around and make them live 200 happy years after the fact to compensate. Why should we care that the positive valence is experienced by one specific life and not another?
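In the same assumed notation, with $h > 0$ the constant valence of a happy year:

$$\underbrace{100h}_{\text{A, years 0–100}} + \underbrace{100h}_{\text{B, years 100–200}} = 200h = \underbrace{200h}_{\text{A alone, years 0–200}}.$$

The sum $U$ is identical in both scenarios; no term in it tracks which life the valence accrues to.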
In our world, there are negative things associated with death, such as age-related ill health (generally with negative valence attached) and the negative feelings that come from knowing someone has died (because it changes our habits; we lose something we liked), so there would be less suffering if we solved ageing and death. But there is no specific term in the utility function marking death itself as bad.

With these explanations of the total-utility point of view, do you agree that a large amount of suffering (for example, suffering for over half the lifespan of the universe) IS worse than death?

Cryonics signup guide #1: Overview

Hi. I'm seeing this post because it's curated, and I assume the same will be true of quite a few other people who'll read it soon. Before rushing to sign up for cryonics, I'd be interested in discussion of the grievances brought up against Alcor by Michael G Darwin here: https://www.reddit.com/r/cryonics/comments/d6s41b/can_alcor_get_any_worse/ . For reference, Michael G Darwin (https://en.wikipedia.org/wiki/Mike_Darwin) worked at Alcor for a long while.

In the post I've linked he explains quite extensively the faults he finds in how Alcor has handled patients in recent years. Having read it, I'm not inclined to go forward with cryonics before having good evidence that standards of care have improved, or that Mike Darwin's claims have been solidly refuted. In general, I'd appreciate strong evidence that the level of care given to most patients (and that which anyone signing up should expect to receive) is 'the best we can do' and not 'just good enough that people keep paying and scandals don't break out too often'.

Are there any such discussions debating these points available elsewhere? I've currently only looked around for a couple of hours at most, so I'm not knowledgeable on the subject. I'm mostly bringing this up so other novices at least know there's been some debate, and that there's more to look into than just what the companies offering these services say.