All of Evan_Gaensbauer's Comments + Replies

Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it. 

The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response. 

Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime. 

I've edited the post, changing "resentment from rationalists elsewhere to the Bay Area community" to "resentment from rationalists elsewhere toward the Bay Area community," because that seems to reduce the ambiguity some. My use of the word 'resentment' was intentional. 

Thanks for catching those. The word 'is' was missing. The word "idea" was meant to be "ideal." I've made the changes. 

localdeity (karma 3, 7mo)
Then should the last sentence in that paragraph perhaps say "less resettlement" instead of "less resentment"?

I'm thinking of asking it as another question post, or at least a post seeking feedback, probably more than trying to stake a strong claim. Provoking debate for the sake of it would hinder that goal, so I'd try to write any post in a way that avoids that. Those filters applied to any post I might write wouldn't hinder any kind of feedback I'd seek. The social barriers to posting raised by others with the concerns you expressed seem high enough that I'm unsure I'll post it after all.

This is a concern I take seriously. While it is possible increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good.

Another consideration is how it may be a risk for long-termists to not pursue new ways of conveying the importance and challenge of ensuring human control of transformative AI. There is a certain principle of being cautious in EA. Yet in general we don't self-reflect enough to notice when being cautious by default is irrational on the margin. 

Recognizing the ... (read more)

I'm aware it's a rather narrow range of ideas, but a set of a few standard options that most people adhere to is how it's represented in popular discourse, which is what I was going off of as a starting point. It has been established in other comments on my post that popular discourse isn't what to go off of. I've also mentioned that being exposed to ideas I may not have thought of myself is part of why I want to have an open discussion on LW. My goal has been to gauge whether that's a discussion any significant portion of the LW user-base is indeed open to having. The best answer I've been able to surmise thus far is: "yes, if it's done right."

As to the question of whether I can hold myself to those standards and maintain them, I'll interpret the question not as rhetorical but as literal. My answer is: yes, I expect I would be able to hold myself to those standards and maintain them. I wouldn't have asked the original question in the first place if I thought there wasn't at least a significant chance I could. I'm aware that how I'm writing this may seem to betray gross overconfidence on my part.

I'll try here to convince you otherwise by providing context in terms of the perceived strawmanning of ... (read more)

Richard_Kennaway (karma 4, 8mo)
That is a rather narrow range of ideas. A benefit of open discussion is that the participants will be exposed to ideas that they would never have thought of themselves.

I meant to include the hyperlink to the original source in my post but I forgot to, so thanks for catching that. I've now added it to the OP. 

It seems like the kind of post I have in mind would be respected more if I'm willing and prepared to put in the effort of moderating the comments well too. I won't make such a post before I'm ready to commit the time and effort to doing so. Thank you for being so direct about why you suspect I'm wrong. Voluntary explanations for the crux of a disagreement or a perception of irrationality are not provided on LessWrong nearly often enough.

I am thinking of making a question post to ask because I expect there may be others who are able to address an issue related to legal access to abortion in a way that is actually good. I expect I might be able to write a post that would be considered not to "suck," though it might be so-so as opposed to unusually good. 

My concern was that by even only asking a question, even asked well in a way that will frame responses to be better, I would still be downvoted. It's seeming like if I put serious effort into it, though, the question post would not be sup... (read more)

ChristianKl (karma 4, 8mo)
Part of the trouble with highly charged political discussions is that they usually don't draw people into discussing things in a way that's actually good. The issue is not just reputational harm to individuals but also reputational harm to LessWrong itself. If you write posts that carry a substantial risk of reputational harm to LessWrong but provide relatively little value to pay for that, it makes sense to downvote them. Charles A. Murray, who wrote The Bell Curve, made the point that while he's criticized a lot for the book, his critics seldom quote anything from it when criticizing it. He spent a lot of effort writing his book in a way where you can't easily attack it by taking passages out of context. When having highly political discussions it's worth striving to have the discourse happen on that kind of level.

My impression has been that it's presumed a position presented will have been adopted for bad epistemological reasons, and that it has little to do with rationality, without much in the way of checking. I'm not asking about subjects I want to frame, or would frame, as political. I'm asking if there are some subjects that will be treated as though they are inherently political even when they are not. 

To me it's not so much about moral intuitions as about rational arguments. That may not hold up if someone has assumptions diametrically opposite to mine, like the unborn being sacred or otherwise special in some way that assigns them a moral weight incomparably higher than the moral weight assigned to pregnant persons. That's something I'd be willing to write about if that itself is considered interesting. My intention is to ask what are the best compromises for various positions being offered by the side of the debate opposite mine, so that's very different from perspectives unfit for LW.

I'm not an active rationalist anymore, but I've 'been around' for a decade. I still occasionally post on LessWrong because it's interesting or valuable enough for some subjects. That the rationality community functions the way you describe, and the norms that entails, is an example of why I don't participate in it as much anymore. Thank you, though, for the feedback. 

Evan R. Murphy (karma 1, 9mo)
Indeed! :)

Thanks. Do you feel like you have a sense of what proportion of long-termists you know are forecasting that way? Or do you know of some way one might learn more about forecasts like this and the reasoning or models behind them?

I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities.

The same point was made on the Effective Altruism Forum and it's a considerable one. Yet I expected that. 

The problem frustrating me is that the relative number of individuals who have volunteered their own numbers is so low as to be an insignificant minority. One person doesn't disagree with themselves unless there is model uncertainty or whatever. Unless individ... (read more)

Upvoted. Thanks. 

I'll state that, in my opinion, it shouldn't necessarily be the responsibility of MIRI or even Eliezer to clarify what was meant by a position that is stated but taken out of context. I'm not sure, but it seems as though at least a significant minority of those who've been alarmed by some of Eliezer's statements haven't read the full post that puts them in a less dramatic context. 

Yet errant signals seem important to rectify, as they make it harder for MIRI to coordinate with other actors in the field of AI alignment based on exis... (read more)

I don't know what "this" is referring to in your sentence.

I was referring to the fact that there are meta-jokes in the post about which parts are or are not jokes. 

I want to push back a bit against a norm I think you're arguing for, along the lines of: we should impose much higher standards for sharing views that assert high p(doom), than for sharing views that assert low p(doom).

I'm sorry I didn't express myself more clearly. There shouldn't be a higher standard for sharing views that assert a high(er) probability of doom. That's not what I was argui... (read more)

The issue is that Eliezer appears to think, but without any follow-up, that most other approaches to AI alignment distinct from MIRI's, including ones that otherwise draw inspiration from the rationality community, will also fail to bear fruit. Like, the takeaway isn't that other alignment researchers should just give up, or just come work for MIRI... but then what is it? 

From the AGI interventions discussion we posted in November (note that "miracle" here means "surprising positive model violation", not "positive event of negligible probability"):

Anonym

... (read more)

Thank you for the detailed response. It helps significantly. 

The parts of the post that are an April Fool's Joke, AFAIK, are the title of the post, and the answer to Q6. The answer to Q6 is a joke because it's sort-of-pretending the rest of the post is an April Fool's joke.

It shouldn’t be surprising that others are confused if this is your best guess about what the post means altogether. 

believing p(doom) is high isn't a strategy, and adopting a specific mental framing device isn't really a "strategy" either). (I'm even more confused by how

... (read more)
Rob Bensinger (karma 5, 10mo)
I don't know what you mean by this -- I don't know what "this" is referring to in your sentence.

I mean, the big dump of chat logs is trying to make our background models clearer to people so that we can hopefully converge more. There's an inherent tension between 'say more stuff, in the hope that it clarifies something' versus 'say less stuff, so that it's an easier read'. Currently I think the best strategy is to err on the side of over-sharing and posting long things, and then rely on follow-up discussion, summaries, etc. to address the fact that not everyone has time to read everything.

E.g., the three points you highlighted don't seem like new information to me; I think we've said similar things publicly multiple times. But they can be new info to you, since you haven't necessarily read the same resources I have. Since everyone has read different things and will have different questions, I think the best solution is for you to just ask about the stuff that strikes you as the biggest holes in your MIRI-map.

I do think we're overdue for a MIRI strategy post that collects a bunch of the take-aways we think are important in one place. This will inevitably be incomplete (or very long), but hopefully we'll get something out in the not-distant future.

I want to push back a bit against a norm I think you're arguing for, along the lines of: we should impose much higher standards for sharing views that assert high p(doom), than for sharing views that assert low p(doom). High and low p(doom) are both just factual claims about the world; an ideal Bayesian reasoner wouldn't treat them super differently, and by default would apply just as much scrutiny, skepticism, and wariness to someone who seems optimistic about AI outcomes, as to someone who seems pessimistic.

In general, I want to be pretty cautious about proposed norms that might make people self-censor more if they have "concerning" views about object-level reality. There should be norms that hold here, but it

Summary: The ambiguity as to how much of the above is a joke appears to be there so that Eliezer or others can have plausible deniability about the seriousness of apparently extreme but little-backed claims being made. This comes after a lack of adequate handling, on the part of the relevant parties, of the impact of Eliezer’s output in recent months on various communities, such as rationality and effective altruism. Virtually none of this has indicated what real, meaningful changes can be expected in MIRI’s work. As MIRI’s work depends in large part on the commu... (read more)

Rob Bensinger (karma 7, 10mo)
Here's a reply! [https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=FounAZsg4kFxBDiXs] Hope that clears stuff up.

Here is an update on our efforts in Canada.

1. There are nearly five of us who would be willing to sponsor a refugee to settle in Canada (indefinitely or for however long the war might last). There is a requisite amount of money that must be committed beforehand to cover at least a few months' worth of the costs of settling and living in Canada. Determining whether 3 or more of us would be able to cover those costs appears to be the most significant remaining bottleneck before we decide whether to take this on.

2. There are two effective al... (read more)

That isn't something I had thought of, but it makes sense as the most significant reason, at least among those I hadn't considered yet. 

I notice this post has only received downvotes other than the strong upvote it received by default from me as the original poster. My guess would be this post has been downvoted because it's perceived as:

  1. an unnecessary and nitpicking question;
  2. maybe implying MIRI and the rationality community are not authoritative sources in the field of AI alignment.

That was not my intention. I'd like to know what other reasons there may be for why this post was downvoted, so please reply if you can think of any or you are one of the users who downvoted this post.

AI alignment is the term MIRI (among other actors in the field) ostensibly prefers over AI safety for referring to the control problem, to distinguish it from other AI-related ethics or security issues, because those other issues don't constitute x-risks. Of course, the extra jargon could be confusing for a large audience being exposed to AI safety and alignment concerns for the first time. In the case of introducing the field to prospective entrants or students, keeping it simpler as you do may very easily be the better way to go.

Strongly upvoted. Thanks for your comprehensive review. This might be the best answer I've ever received for any question I've asked on LW.

In my opinion, given that these other actors who've adopted the term are arguably leaders in the field more than MIRI, it's valid for someone in the rationality community to claim it's in fact the preferred term. A more accurate statement would be:

  1. There is a general or growing preference for the term AI alignment be used instead of AI safety to refer to the control problem.
  2. There isn't a complete consensus on this but th
... (read more)

Thanks for flagging this. 

  1. I presumed that "AI alignment" was being used as a shorthand for x-risks from AI but I didn't think of that. I'm not aware either that anyone from the rationality community I've seen express this kind of statement really meant for AI alignment to mean all x-risks from AI. That's my mistake. I'll presume they're referring to only the control problem and edit my post to clarify that.
     
  2. As I understand it, s-risks are a sub-class of x-risks, as an existential risk is not only an extinction risk but any risk of the future trajectory of Earth-originating intelligence being permanently and irreversibly altered for the worse. 

There are several signals the government might be trying to send that come to mind:

  • It may be only one government agency or department, or a small set of agencies/departments, that are currently focused on the control problem. They may also still need to work on other tasks with government agencies/departments that have national security as the greatest priority. Even if a department internally thinks about the control problem in terms of global security, they may want to publicly reinforce national security as a top priority to keep a good working relation
... (read more)

I previously hadn't been as aware that this is a pattern in how so many people have experienced responses to criticism from Geoff and Leverage in the past. 

Yeah, at this point, what I was getting at is everyone coming together to sort this out, building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be a courageous thing to do. I think your comment also represents that kind of thing. 

For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)

In the past, I've been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though the times I've spoken up I've done more than others. I would have done it more, but the cost was that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again, sometimes with what amounted to nothing more than peer pressure. 

That was a few years ago... (read more)

Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their reputation. I can't claim to know much about setting up effective norms for defending whistleblowers, though.

I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post".

Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization that has been percolating for the last couple years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander f... (read more)

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.

I think the important information here is how Geoff / Leverage Research handled similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.

Regarding problems related to pseudoscientific quacks and cranks, as a kind of example given: at this point it seems obvious that we need to take for granted that there will be causal factors that, absent effective interventions, will induce large sections of society to embrace pseudo-scientific conspiracy theories. In other words, we should assume that if there is another pandemic in a decade or two, there will be more conspiracy theories. 

At that point in time, people will be wary of science again because they'll recall the conspiracies they believed i... (read more)

The fact that many scientists are awful communicators who are lousy at telling stories is not a point against them. It means that they were more interested in figuring out the truth than in figuring out how to win popularity contests.

This implies to me that there is a market for science communicators who specialize in winning popularity contests, but do so to spread the message of scientific consensus in a way optimized to combat the most dangerous pseudoscience and misinformation/disinformation. It seemed like the Skeptics movement was trying... (read more)

First, don't trust any source that consistently sides with one political party or one political ideology, because Politics is the Mind Killer.

One challenge with this is that it's harder to tell what the ideology in question is. If anti-vaxxers are pulled from among the populations of wingnuts on both the left and the right, I'm inclined to take lots of people whose views consistently side with one political party much more seriously not only on vaccines but on many other issues as well. 

It's quantitatively difficult to meet one million people, e.g., in terms of the time it takes to accomplish that feat, but it's the qualitative difficulty that makes it seem almost undoable. To me, though, it's more imaginable. I've worked in customer service and sales jobs in multiple industries. 

I never kept count well enough to know if I ever met one hundred people in one day, but it could easily have been several dozen people every day. I wouldn't be surprised if someone working the till at a McDonald's in Manhattan met over one hundred people on some days. Mo... (read more)

One overlooked complication here is the extent to which honor is still socially constructed in particular circumstances. One helpful way to frame practical ethics is to distinguish between public and private morality. Almost nobody subscribes to a value system that exists in a vacuum independent of the at least somewhat subjective influence of their social environment. Having integrity can sometimes still mean subverting one's personal morality to live up to societal standards imposed upon oneself. 

To commit suicide after a sufficiently shameful act h... (read more)

PR is about managing how an antagonist could distort your words and actions to portray you in a negative light.

There are narrow contexts in which the overwhelming purpose of PR, to the exclusion of almost any other concern, is to manage how an antagonist could distort one's words and actions to depict one in a hostile way. That's not the only good reason for PR in general. 

Much of PR is about finding the right ways to communicate what an organization is trying to do in an accurate way. Miscommunication may trigger others into fearing what one rea... (read more)

I'm coming to this article by way of being linked from a Facebook group, though I am also an occasional LessWrong user. I would have asked this question in the comments of the FB post where this post was linked, but since the comments were closed there, I'll ask it here. What was (or were) the reason(s) behind:
 

  1. Posting this to a FB group with the comments open;
  2. Waiting until a few comments had been made, then closing them on FB and then asking for commenters to comment on this LW post instead?

I understand why someone would do this if they thought a plat... (read more)

Mati_Roy (karma 2, 2y)
1. was a mistake. Turning off comments serves as a coordination mechanism to discuss the topic in the same place.
Raphaëll (karma 4, 2y)
I wrote this blogpost, but I did not post this article on Facebook myself, so I have no idea what the motivations were behind those decisions. I would be curious to see what they said!

What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.

This seems like a thin concept of EA. I know there are organizations that choose to pursue interventions based on their being in an area they believe to be (among) the most impactful, and based on welfarist premises and rigorous reasoning. Yet they don't identify as EA organizations. That ... (read more)

Pablo (karma 2, 3y)
I said that the belief must be reached from welfarist premises and rigorous reasoning, not from what the organization believes are welfarist premises and rigorous reasoning. I'm not sure what you mean by this. And it seems clear to me that lots of nonprofit orgs would not be classified as EA orgs given my proposed criterion (note the clarification above).

Technical Aside: Upvoted for being a thoughtful albeit challenging response that impelled me to clarify why I'm asking this as part of a framework for a broader project of analysis I'm currently pursuing.

Ben Pace (karma 2, 3y)
Pardon for being so challenging, you know I’m always happy to talk with you and answer your questions Evan :) Am just a bit irritated, and let that out here.

I do think that “identity” and “brand” mustn’t become decoupled from what actually gets done - if you want to talk meaningfully about ‘EA’ and what’s true about it, it shouldn’t all be level 3/4 simulacra. Identity without substance or action is meaningless, and sort of not something you get to decide for yourself. If deciding to identify as ‘an EA’ causes no changes in your career or your donations, has the average EA donation suddenly gone down? Has EA actually grown?

It’s good to be clear on the object level and whether the proxy actually measures anything, and I’m not sure I should call that person ‘an EA’ despite their speech acts to the contrary. (Will go and read your longer comment now.)

Summary:

I'm working on a global comparative analysis of funding/granting orgs not only in EA, but also in those movements/communities that overlap with EA, including x-risk.

Many in EA may evaluate/assess the relative effectiveness of the orgs in question according to the standard normative framework(s) of EA, as opposed to the lens(es)/framework(s) through which such orgs evaluate/assess themselves, or would prefer to be evaluated/assessed by other principals and agencies.

I expect that the EA community will want to know to what extent various orgs... (read more)

Ben Pace (karma 2, 3y)
nods I think I understand your motivation better. I'll leave a different top-level answer.

Summary: I'm aware of a lot of examples of real debates that inspired this dialogue. It seems in those real cases, a lot of disagreement with, or criticism of, public claims or accusations of lying aimed at different professional organizations in effective altruism, or AI risk, has repeatedly been generically interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the different kinds of disputes over public accusations of lying, repeat accusations, and justifications for them, are made into l... (read more)

romeostevensit (karma 2, 3y)
In metaphysical conflicts people don't win by coming up with the best evidence, they win by controlling what gets counted as evidence. By default, memeplexes gain stability by creating an environment in which evidence against them can't be taken seriously. Arguments that EA has failed to actually measure the things it claims are worth measuring should be taken very seriously on their face, since that is core to the claims of moral obligation (which is itself a bad frame, but less serious.)
jessicata (karma 3, 4y)
This is a fictional dialogue demonstrating a meta-level point about how discourse works, and your comment is pretty off-topic. If you want to comment on my AI timelines post, do that (although you haven't read it so I don't even know which of my content you're trying to comment on).
BTW, it might be worth separating out the case where controversial topics are being discussed vs boring everyday stuff. If you say something on a controversial topic, you are likely to get downvotes regardless of your position. "strong, consistent, vocal support" for a position which is controversial in society at large typically only happens if the forum has become an echo chamber, in my observation.

On a society-wide scale, "boring everyday stuff" is uncontroversial by definition. Conversely, articles that have a high total number of... (read more)

You make a good point I forgot to add: the function karma serves on an article or comment in providing info to other users, as opposed to just the submitting user. That's something people should keep in mind.

What bugs me is when people who ostensibly aspire to understand reality better let their sensitivity get in the way, and let their feelings colour their perception of how their ideas are being received. It seems to me this should be a basic skill of debiasing that people would employ if they were as serious about being effective or rational thinkers as they claim to be. If there is anything that bugs me that you're suspicious of, it's that.

Typically, I agree with an OP who is upset about the low quality of negative comments, but I disagree with how upset t... (read more)

Some but not all academics also seek truth in terms of their own beliefs about the world, and their own processes (including hidden ones) for selecting the best model for any given decision. From a Hansonian perspective, that's at least what scientists and philosophers are telling themselves. Yet from a Hansonian perspective, that's what everyone is telling themselves about their ability to seek truth, especially if a lot of their ego is bound up in 'truth-seeking', including rationalists. So the Hansonian argument here would appear to ... (read more)

One thing about this comment that really sticks out to me is the fact I know several people who think LessWrong and/or the rationality community aren't that great at truth-seeking. There are a lot of specific domains where rationalists aren't reported to be particularly good at truth-seeking. Presumably, that could be excused by the fact rationalists are generalists. However, I still know people who think the rationality community is generally bad at truth-seeking.

Those people tend to hail from philosophy. To be fair, 'philosophy', as a... (read more)

Dagon (karma 2, 4y)
Can you give some pointers to "philosophy" as a community? It feels like a type mismatch to compare a bunch of message boards and blogs ('rationalist community') to an academic pursuit ('philosophy').

I wonder if this post and thread is conflating multiple meanings of "truth-seeking" in a way that causes confusion. My version of rationality is about truth-seeking in terms of my beliefs about the world, and my processes (including hidden ones) for selecting the best model for any given decision. Influence over future experiences is the truth I'm seeking. Academics (including scientists and philosophers) are "truth-seeking" in a much more theoretical sense, looking for consistent descriptions of parts of the world (or sometimes other imagined worlds), and in getting agreement (or at least publication references) on such.

Each observes and learns from the other, of course, but they're not really all that similar. I think of rationality as engineering more than science.