Evan_Gaensbauer


Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it. 

The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response. 

Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime. 

I've edited the post, changing "resentment from rationalists elsewhere to the Bay Area community" to "resentment from rationalists elsewhere toward the Bay Area community," because that seems to reduce the ambiguity somewhat. My use of the word 'resentment' was intentional.

Thanks for catching those. The word 'is' was missing, and "idea" was meant to be "ideal." I've made the changes.

I'm thinking of asking as another question post, or at least a post seeking feedback, probably more than trying to stake a strong claim. Provoking debate for its own sake would hinder that goal, so I'd try to write any post in a way that avoids it. Applying those filters to any post I might write wouldn't hinder the kind of feedback I'd seek. That said, the social barriers to posting raised by others with the concerns you expressed seem high enough that I'm unsure I'll post it after all.

This is a concern I take seriously. While it is possible that increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good.

Another consideration is the risk long-termists take by not pursuing new ways of conveying the importance and challenge of ensuring human control of transformative AI. There is a general principle of caution in EA, yet we don't self-reflect enough to notice when caution by default is irrational on the margin.

Recognizing the risks of acts of omission is a habit William MacAskill has been trying to encourage and cultivate in the EA community during the last year, yet it's a principle we've acknowledged since the beginning. Consequentialism doesn't distinguish between action and inaction: failing to take an appropriate, crucial, or necessary action to prevent a negative outcome counts as much as causing one. Risk aversion also receives more attention in the LessWrong Sequences than most other cognitive biases.

It's now evident that past attempts at public communication about existential risks (x-risks) from AI have proven neither sufficient nor adequate. It may be less a matter of drawing more attention to the issue than of drawing more of the right kind of attention. In other words, carefully shaping how AI x-risks are perceived by various sections of the public is necessary.

The way we as a community can help ensure the book you write strikes the right balance may be to keep doing what MacAskill recommends:

  • Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can
  • Remember that, in the standard solution to the unilateralist’s dilemma, it’s the median view that’s the right one (rather than the most optimistic or most pessimistic view)
  • Be highly willing to course-correct in response to feedback

I'm aware it's a rather narrow range of ideas, but a small set of standard options that most people adhere to is how the issue is represented in popular discourse, which is what I'm going off of as a starting point. Other comments on my post have established that popular discourse isn't the right thing to go off of. I've also mentioned that being exposed to ideas I may not have thought of myself is part of why I want to have an open discussion on LW. My goal has been to gauge whether that's a discussion any significant portion of the LW user base is indeed open to having. The best answer I've been able to surmise thus far is: "yes, if it's done right."

As to the question of whether I can hold myself to those standards and maintain them, I'll interpret the question not as rhetorical but as literal. My answer is: yes, I expect I would be able to hold myself to those standards and maintain them. I wouldn't have asked the original question in the first place if I thought there wasn't at least a significant chance I could. I'm aware that how I'm writing this may seem to betray gross overconfidence on my part.

I'll try to convince you otherwise by providing context on my perceived strawmanning of korin43's comment. The reason it's not a strawman is that my position is the relatively extreme one, putting me in opposition to most people who broadly share my side of the issue (i.e., pro-choice). I expect it's much more plausible that I am the one who is evil, crazy, insane, etc., than almost everyone who might disagree with me. Part of what I want to do is a 'sanity check,' figuratively speaking.

1. My position on abortion is one that most might describe as 'radically pro-choice.' The kind of position most would consider more extreme than mine is one that goes further, to an outcome like banning anti-abortion/pro-life protests (an additional position I reject).

2. I embraced my current position on the basis of a rational appeal that contradicted the moral intuitions I had at the time. It still contradicts my moral intuitions. My prior moral intuition is also one I understand to be among the more common ones (moral consideration should be given to an unborn infant after the second trimester, or after the point when the infant could independently survive outside the womb). This has left me in a state of some confusion, and the prospect that others on LessWrong can help me deconfuse better than I can by myself is why I want to ask the question.

3. What I consider a relatively rational basis for my position is one I expect only holds among those who broadly share similar moral intuitions. By "assumptions diametrically opposite mine," I meant someone having an intuition that what would render a fetus worth moral consideration is not based on its capacity for sentience but on it having an immortal soul imbued by God. In that case, I don't know of any way I might start making a direct appeal as to why someone should accept my position. The only approach I can think of is to start indirectly by convincing someone much of their own religion is false. That's not something I'm confident I could do with enough competence to make such an attempt worthwhile. 

I meant to include the hyperlink to the original source in my post but I forgot to, so thanks for catching that. I've now added it to the OP. 

It seems like the kind of post I have in mind would be respected more if I'm willing and prepared to put in the effort of moderating the comments well too. I won't make such a post before I'm ready to commit the time and effort to doing so. Thank you for being so direct about why you suspect I'm wrong. Voluntary explanations for the crux of a disagreement or a perception of irrationality are not provided on LessWrong nearly often enough.

I am thinking of making a question post because I expect there may be others who are able to address an issue related to legal access to abortion in a way that is actually good. I expect I might be able to write a post that would be considered not to "suck," though it might be so-so as opposed to unusually good.

My concern was that even by only asking a question, even one asked well in a way that frames responses to be better, I would still be downvoted. It seems, though, that if I put serious effort into it, the question post would not be heavily downvoted.

I'm not as concerned about potential reputational harm to myself as I am about harm to others, and I also have a responsibility to communicate in ways that minimize undue reputational harm to others. Yet I'd want to talk about abortion in terms of either public policy or philosophical arguments, so it'd be a relatively jargon-filled and high-context discussion either way.

My impression has been that it's presumed, without much in the way of checking, that a presented position was adopted for bad epistemological reasons and has little to do with rationality. I'm not asking about subjects I want to or would frame as political. I'm asking whether there are some subjects that will be treated as though they are inherently political even when they are not.
