I just saw one recently on the EA Forum to the effect that EAs who shortened their timelines only after ChatGPT had the intelligence of a houseplant.
Somebody asked if people got credit for <30 year timelines posted in 2025. I replied that this only demonstrated more intelligence than a potted plant.
If you do not understand how this is drastically different from the thing you said I said, ask an LLM to explain it to you; they're now okay at LSAT-style questions if provided sufficient context.
In reply to your larger question, being very polite about the house burning down wasn't working. Possibly being less polite doesn't work either, of course, but it takes less time. In any case, as several commenters have noted, the main plan is to have people who aren't me do the talking to those sorts of audiences. As several other commenters have noted, there's a plausible benefit to having one person say it straight. As further commenters have noted, I'm tired, so you don't really have an option of continuing to hear from a polite Eliezer; I'd just stop talking instead.
It is a damn shame to hear of this tiredness, and I hope that your mood improves, somehow, somewhen, hopefully sooner than you expect.
This reply, though, I am forced to say, does not quite persuade, and to be totally frank even disappoints me a little. It was my understanding that one of MIRI's main goals as of now, and to some extent one of yours, was public outreach and communication (as a sub-goal for policy change) - this was at least how I understood the recent tweet describing what MIRI is doing and why people should donate to it, as well as other things you've been doing somewhat recently: going on a bunch of podcasts and interviews and things of that nature (as well as smaller things such as separating out a 'low-volume' public persona account for reach alongside a shitpost-y side one).
Therefore, to put it maybe somewhat bluntly, I thought that thinking deeply and being maximally deliberate about what you communicate and how, and in particular how well it might move policy or the public, was, if not quite the whole idea, maybe a main goal or job of your organization and indeed your public persona. So, though of course a great many allowances are to be made when it c...
I accept your correction that I misquoted you. I paraphrased from memory and did miss real nuance. My bad.
Looking at the comment now, I do see that it has a score of -43 currently, and is the only negative-karma comment on the post. So maybe a more interesting question is why I (and presumably several others) interpreted it as an insult when the logical content of "Intelligence(having <30y timeline in 2025) > Intelligence(potted plant)" doesn't contain any direct insult. My best guess is that people are running informal inference on ...
I'm sorry to hear about your health/fatigue. That's a very unfortunate turn of events, for everyone really.
It’s actually been this way the whole time. When I first met Eliezer 10 years ago at a decision theory workshop at Cambridge University, I asked him over lunch what his AI timelines were; he promptly blew a raspberry as his answer and then fell asleep.
My model of Eliezer thinks relatively carefully about most of his comms, but sometimes he gets triggered and says some things in ways that seem quite abrasive (like the linked EA Forum comment). I think this is a thing that somewhat inevitably happens when you are online a lot, and end up arguing with a lot of people who themselves are acting quite unreasonably.
Like, if you look at almost anyone who posts a lot online in contexts that aren't purely technical discussion, they almost all end up frequently snapping back at people. This is true of Gwern, Zvi, Buck, and to a lesser degree even Scott Alexander if you look at a bunch of his older writing, and most recently I can see even Kelsey Piper, who has historically been extremely measured, end up snapping back at people on Twitter in ways that suggest to me a lot of underlying agitation. I also do this not-too-infrequently.
I feel pretty confused about the degree to which this is just a necessary part of having conversations on the internet, or to what degree this is a predictable way people make mistakes. I am currently tending towards the former, but it seems like a hard question I would like to think more about.
I dispute that I frequently snap at people. I just read over my last hundred or so LessWrong comments and I don't think any of them are well characterized as snapping at someone. I definitely agree that I sometimes do this, but I think it's a pretty small minority of things I post. I think Eliezer's median level of obnoxious abrasive snappiness (in LessWrong comments over the last year) is about my 98th percentile.
I think your top-level answer on this very post is pretty well-characterized as snapping at someone, or at least part of the broader category of abrasiveness that this post is trying to point to (and I was also broadly pointing to in my comment).
I also think if you look at all of Eliezer's writing he will very rarely snap at people. The vast majority of his public writing this year is in If Anyone Builds It and the associated appendices, which as far as I can tell contain zero snapping/abrasiveness/etc. My sense is also that approximately zero of his media interviews on the book have contained this thing (though I am less confident of this, since I haven't seen them all).
I don't super want to litigate this, though happy to talk with you about this. I do think you are basically #2 in terms of people who do this in my mind who I am socially close to (substantially above everyone else except maybe Eliezer in that list, and I don't know where I would place you relative to him). You do this much less in public, and much more in person and semi-public.
I feel pretty confused about the degree to which this is just a necessary part of having conversations on the internet, or to what degree this is a predictable way people make mistakes.
My intuition is that if our in-person conversations left a trail of searchable documentation similar to our internet comments, it would be at least similarly unflattering, even for very mild-mannered people.
(Unlike in real life, it's more available to conscious choice to be mild-mannered all the time, if you set your offense-vs-say-something threshold in a sufficiently mild-man...
I think it’s important to distinguish irritation from insult. The internet is a stressful place to communicate. Being snappish and irritable is normal. And many people insult specific groups they disagree with, at least occasionally.
What sets Eliezer apart from Gwern, Scott Alexander and Zvi is that he insults his allies.
That is not a recipe for political success. I think it makes sense to question whether he’s well suited to the role of public communicator about AI safety issues, given this unusual personality trait of his.
Your conception of "allies" seems... flawed given the history here. I don't super want to litigate this, but this feels like a particularly weak analysis.
Eliezer definitely doesn't think of it as an ally (or at least, not a good ally who he is appreciative of and wants to be on good terms with).
A bunch of points that are kind of the same point:
Some other factors that are relevant:
To be clear I think he could do a better job of understanding people he's writing with via text format, and I am still confused about why he seems (to me) below average at this.
I'm not sure if his approach is actually productive for this, but for the longest time, the standard response to Eliezer's concerns was that they're crazy sci-fi. Now that they're not crazy sci-fi, the response is that they're obvious. Constantly reminding people that his crazy predictions were right (and everyone else was wrong in predictable ways) is a strategy to get people to actually take his future predictions seriously (even though they're obviously crazy sci-fi).
I think this makes sense as a model of where he is coming from. As a strategy, my understanding of social dynamics is that "I told you so" makes it harder, not easier, for people to change their minds and agree with you going forward.
(Again, not trying to excuse pointlessly being a dick. Plausibly Eliezer is not infrequently a big pointless dick, I do not know, no strong opinion.)
Another hypothesis: It's possible that he thinks some people should be treated with public contempt.
As an intuition pump for how it might be hypothetically possible that someone should be treated with public contempt, consider a car salesman who defrauds desperate people. He just straightforwardly lies about the quality of the cars he sells; he picks vulnerable people desperate for a cheap way to juggle too many transport needs; he charms them, burning goodwill. He has been confronted about this, and just pettily attacks accusers, or if necessary moves towns. He has no excuse, he's not desperate himself, he just likes making money.
How should you treat him? Plausibly contempt is correct--or rather, contempt in reference to anything to do with his car sales business. IDK. Maybe you can think of a better response, but contempt does seem to serve some kind of function here: a very strong signal of "this stuff is just awful; anyone who learns much about it will agree; join in on contempt for this stuff; this way people will know to avoid this stuff".
a very strong signal of "this stuff is just awful; anyone who learns much about it will agree; join in on contempt for this stuff; this way people will know to avoid this stuff".
When Eliezer says that something is just awful, I interpret that to mean "this is just awful." Period. I'm sure that from long experience he has observed that in fact very few people who learn much about it will agree, because they are not able to follow his reasoning, let alone derive his conclusions for themselves. I also doubt he would find it useful to have a crowd of potplants dogpiling on the target in imitation of his excoriation but without his understanding.
The world needs all types of activism: from the firebrands to the bridge-builders. I too find his tone to be abrasive at times. He can be self-aggrandizing, pompous, and downright insulting. In my experience this is not uncommon for people (most typically men) who believe they’re the smartest person in the room.
Personally, I try to live by, “first, be kind.” I’ve found the most success with leading with empathy, but it’s not effective in every circumstance. Some people you can reach better by showing them our commonalities. Some people need to be shocked into thinking about what the implications of their beliefs are. Sometimes a sit-in is effective in bringing about needed change, other times it takes a riot.
(This is not a defense of poor behavior; people are responsible for not pointlessly being dicks.) A hypothesis I keep in mind which might explain some instances of this would be The Bughouse Effect.
To give this hypothesis a bit more color, I think people get invested in hope. Often, hope is predicated on a guess / leap of faith. It takes the form: "Maybe we [some group of people] are on the same page enough that we could hunt the same stag; I will proceed as though this is the case."
By investing in hope that way, you are opening up ports in your mind/soul, and plugging other people into those ports. It hurts extra when they don't live up to the supposed shared hope.
An added wrinkle is that the decision to invest in hope like this is often not cleanly separated out mentally. You don't easily, cleanly separate out your guesses about other people, your wishes, your plans, your "just try things and find out" gambles, and so on. Instead, you do a more compressed thing, which often works well. It bundles up several of these elements (plans, hopes, expectations, action-stances, etc.) into one stance. (Compare: https://www.lesswrong.com/posts/isSBwfgRY6zD6mycc/eliezer-s-unteachable-methods-of-sanity?commentId=Hhti6oNe3uk8weiFL and https://tsvibt.blogspot.com/2025/11/constructing-and-coordinating-around.html#flattening-levels-of-recursive-knowledge-into-base-level-percepts ) It's not desirable, in the sense that any specific instance would probably be better to eventually factor out; but that can take a lot of effort, and it's often worth it to do the bundled thing compared to doing nothing at all (e.g. never taking a chance on any hopes), and it might be that, even in theory, you always have some of this "mixed-up-ness".
Because of this added wrinkle, doing better is not just a matter of easily learning to invest appropriately and not getting mad. In other words, you might literally not know how to both act in accordance with having hope in things you care about, while also not getting hurt when the hope-plans get messed up--such as by others being unreliable allies. It's not an available action. Maybe.
An author is not their viewpoint characters. Nevertheless, there is often some bleed-through. I suggest you read Harry Potter and the Methods of Rationality. [Warning: this is long and rather engrossing, so don't start it when you are short of time.] I think you may conclude that Eliezer has considered this question at some length, and then written a book that discusses it (among a variety of other things). Notably, his hero has rather fewer, and some different, close friends than J.K. Rowling's does.
This ACX post, Your Incentives Are Not The Same, explains it adequately. He is prominent enough that what might be obviously negative-value behavior for you might not be for him.
I'm not aware of a good reason to believe (1). (2) seems likely; MIRI has a number of different people working on its public communications, which I would expect to produce more conservative decisions than Eliezer alone would, and which means that some of its communications will likely be authored by people less inclined to abrasiveness. (Also, I have the feeling that Eliezer's abrasive comments are often made in his personal capacity rather than qua MIRI representative, which I think makes them weaker evidence about the org.)
As far as I understand, Eliezer is abrasive for these reasons:
As evidenced by him claiming that an approach is "Not obviously stupid on a very quick skim" and congratulating the author on eliciting a review THAT positive. Alas, I have also seen obviously stupid alignment-related ideas make their way at least to LessWrong.
However, it would be possible if the ASIs required OOMs more resources per token than humans do. In this case, applying the ASIs would be too expensive. Alas, this is unlikely.
IMO Eliezer also believes that the entire approach is totally useless. However, a case against this idea is found in comments mentioning Kokotajlo (e.g. mine).
I'm sticking this in comments (not answers) section, because this doesn't directly bear on the OP's (1) and (2), nor on Eliezer in particular. But: a different important aspect of public, and private, communication, is that they have direct effects on what the speaker learns, and on whether others can see how the speaker is seeing the world. I mean: communication is sometimes about communicating, rather than about having consequentialist effects on those one is talking to.
Leo Szilard is in the running for all-time best rationalist IMO, and one of the "ten commandments" he tried to live by was
Speak to all men as you do to yourself, with no concern for the effect you make, so that you do not shut them out from your world; lest in isolation the meaning of life slips out of sight and you lose the belief in the perfection of creation.
I think there's something to that.
I think this is an important perspective, especially for understanding Eliezer, who places a high value on truth/honesty, often directly over consequentialist concerns.
While this explains true but unpleasant statements like "[Individual] has substantially decreased humanity's odds of survival", it doesn't seem to explain statements like the potted plant one or other obviously-not-literally-true statements, unless one takes the position that full honesty also requires saying all the false and irrational things that pass through one's head. (And even then, I'd expect to see an immediate follow-up of "that's not true of course").
I appreciate the comment, and agree the case for venting feelings, allowing one's own status-beliefs to be visible, etc., is worth considering separately from the case for sharing facts accurately.
I do think the quote from Szilard, above, is discussing more than facts / [things with a truth value]. And I think there's real "virtue of having more actual contact with the world, and with other people" in sharing more of one's thoughts/feelings/attitudes/etc. Not, as you say, "all the false and irrational things that pass through one's head," because all kinds of unimportant nonsense passes through my head sometimes. But I do see some real "virtue of non-consequentialist communication" value to e.g. sharing those feelings, attitudes, viewpoints, ambitions, etc. that are the persistent causes of my other thoughts and actions, and to sometimes trying to convey these via direct/poetical images ("smarter than a potted plant") rather than clinical self-description ("I seem to be annoyed").
Main upside to doing this:
(I agree not all triggered sentences are a good idea to say, because sometimes everybody goes haywire in a useless+damaging way, and sometimes other people don't want to have to deal with my nonsense and shouldn't need to, and there's a whole art to this, but I don't think it's an art based in asking whether communication will have good effects.)
I like this perspective. I would agree that there is more to knowing and being known by others than simply Aumann Agreement on empirical fact. I also probably have a tendency to expect more explicit goal-seeking from others than myself.
I haven't thought this through before, but I notice two things that affect how open I am. The first is how much the communication is private, has non-verbal cues, and has an existing relationship. So right now, I'm not writing this with a desired consequence in mind, but I am filtering some things out subconsciously - like if we were in person talking right now, I might launch into a random anecdote, but while writing online I stay on a narrower path.
The second is that I generally only start running my "consequentialist program" once I anticipate that someone may be upset by what I say. The anticipation of offense is what triggers me to think either "but it still needs to be said" or "saying this won't help". So maybe my implicit question was less "why does Eliezer not aim all his communication at his goals" and more "why doesn't he seem to have the same guardrail I do about only causing offense if it will help", which is a more subjective standard.
Agreed with this, good point.
I'd note that Szilard also convinced Fermi not to publish results on nuclear chain reactions, to keep them out of Nazi hands. So presumably he also understood that this ideal was an approximation that must sometimes give way to consequentialist concerns.
Yes; a different one of the "ten commandments" Szilard tried to live by (they're really short and are worth reading IMO, will take you 3 min) was "never lie without need." (This ofc suggests there are times when one does need to; and Szilard helped many Jewish families get out of Nazi Germany at the last minute, in addition to convincing Fermi to not publish; so I would guess he navigated many actual such needs).
In terms of what's a "need" to lie: IMO the differentiator between "worth lying" and "worth telling the truth" isn't the stakes (AI existential risk is of course extremely high stakes); it's more like, how much one needs to avoid "isolation" / having "the meaning of life slip out of sight" vs how much one needs to get people to do something very specific and local that one already has a sufficient map of, e.g. to walk away from the attic containing Anne Frank. This claim is similar to the claim that AI risk is not well-served by some "emergencies"-suited heuristics, despite being hugely urgent and important.
When you start wondering if one of your heroes is playing 11D chess when they do things that run counter to your idealized notion of who they are... it probably means you've idealized them a bit too much. Eliezer is still "just a man", as the Merovingian might say.
You may also underestimate the kind of pressures to which he is subjected. Are you familiar with /r/sneerclub at all? This was a group of redditors united by contempt for Less Wrong rationalism, who collected dirt on anything to do with the rationalist movement, and even supplied some of it to a New York Times journalist. And that was before the current AI boom, in which even billionaires can feel threatened by him, to the point that pioneer MIRI donor Peter Thiel now calls him a legionnaire of Antichrist.
Add to that 10+ years of daily debate on social media, and it shouldn't be surprising that any idea of always being diplomatic has died a death of a thousand cuts.
I do think that this is probably part of my misprediction - that I simply idealize others too much and don't give enough credit to how inconsistent humans actually are. "Idealize" is probably just the Good version of "flatten", with "demonize" being the Bad version, both of which probably happen because it takes fewer neurons to model someone else that way.
I actually just recently had the displeasure of stumbling upon that subreddit, and it made me sad that people wanted to devote their energies to just being unkind without a goal. So I'm probably also not modeling how my own principle of avoiding offense unless helpful would erode over time. I've seen it happen to many public figures on Twitter - it seems to be part of the system.
I think Eliezer is just really rude and uninterested in behaving civilly, and has terrible intuitions about a wide variety of topics, especially topics related to how other people think or behave. And he substantially evaluates whether people are smart or reasonable based on how much they agree with him or respect him, and therefore writes off a lot of people and behaves contemptuously toward them. And he ends up surrounded by people who either hero worship him or understate their disagreements with him in order to get along with him—many of his co-workers would prefer he didn't act like an asshole on the internet, but they can't make that happen.
I think the core problem with Eliezer is that he spent his formative years arguing on the internet with people on listservs, most of whom were extremely unreasonable. And so he's used to the people around him being mostly idiots with incredibly stupid takes and very little value to add. So he is quite unused to changing his mind based on things other people say.
I don't think you should consider him to be rational with respect to this kind of decision. (I also don't think you should consider him to be rational when thinking about AI.)
I personally would not recommend financial support of MIRI, because I'm worried it will amplify net negative communications from him, and I'm worried that it will cause him to have more of an effect on discourse e.g. on LessWrong. I like and respect many MIRI staff, and I think they should work elsewhere and on projects other than amplifying Eliezer.
(Eliezer is pleasant and entertaining in person if you aren't talking about topics where he thinks your opinion is dumb. I've overall enjoyed interacting with him in person, and he's generally treated me kindly in person, and obviously I'm very grateful for the work he did putting the rationalist community together.)
I personally would not recommend financial support of MIRI, because I'm worried it will amplify net negative communications from him
Small note: Eliezer is largely backing off from direct comms and most of our comms in the next year will be less Eliezer's-direct-words-being-promoted than in the past (as opposed to more). Obviously still lots of Eliezer thoughts and Eliezer perspectives and goals, but more filtered as opposed to less so. Just FYI.
Oh alas, I think that is a major update downwards on MIRI's work here. Happy to chat about it if you want sometime, but it appears to me that almost every time Eliezer intentionally writes substantial public comms here, things get non-trivially better (e.g. I think the Time article was much better than other things MIRI had done for a while). I am not super confident here.
To be clear, it's not because we agree with Buck's model. It's more that Eliezer has persistent health and stamina issues and others (Nate, Malo, etc.) need to step up and receive the torch.
(Also "less" doesn't mean "zero".)
Tracking your attitudes here is pretty important to me, because I respect you a lot and also work for MIRI. Still, it's been kind of hard, because sometimes it looks like you're pleasantly surprised (e.g., about the first two sections of IABIED: "After reading the book, it feels like a shocking oversight that no one wrote it earlier" and "it's hard for me to imagine someone else writing a much better [book for general audiences on x-risk]"), and then other times it looks like that pleasant surprise hasn't propagated through your attitudes more broadly.
"The main thing Eliezer and MIRI have been doing since shifting focus to comms addressed a 'shocking oversight' that it's hard to imagine anyone else doing a better job addressing" (lmk if this doesn't feel like an accurate paraphrase) feels like it reflects a pretty strong positive update in the speaker! (especially having chatted about your views before that)
I guess I was just surprised / confused by the paragraph that starts "I personally would not...", given the trajectory over the past few months of your impressions of MIRI's recent work. Would you have said something much more strongly negative in August? Does IABIED not significantly inform your expectations of future MIRI outputs? Something else?
I can see why the different things I've said on this might seem inconsistent :P It's also very possible I'm wrong here, I'm not confident about this and have only spent a few hours in conversation about it. And if I wasn't recently personally angered by Eliezer's behavior, I wouldn't have mentioned this opinion publicly. But here's my current model.
My current sense is that IABIED hasn't had that much of an effect on public perception of AI risk, compared to things like AI 2027. My previous sense was that there are huge downsides of Eliezer (and co) being more influential on the topic of AI safety, but MIRI had some chance of succeeding at getting lots of attention, so I was overall positive on you and other MIRI people putting your time into promoting the book. Because the book didn't go as well as seemed plausible, promoting Eliezer's perspective seems less like an efficient way of popularizing concern about AI risk, and does less to outweigh the disadvantages of him having negative effects inside the AI safety community.
For example, my guess is that it's worse for the MIRI governance team to be at MIRI than elsewhere except in as much as they gain prominence due to Eliezer association; if that second factor is weaker, it looks less good for them to be there.
I think my impression of the book is somewhat more negative than it was when it first came out, based on various discussions I've had with people about it. But this isn't a big factor.
Does this make sense?
"The main thing Eliezer and MIRI have been doing since shifting focus to comms addressed a 'shocking oversight' that it's hard to imagine anyone else doing a better job addressing" (lmk if this doesn't feel like an accurate paraphrase)
This paraphrase doesn't quite preserve the meaning I intended. I think many people would have done a somewhat better job.
For example, my guess is that it's worse for the MIRI governance team to be at MIRI than elsewhere except in as much as they gain prominence due to Eliezer association
Or if they want to work from a frame that isn't really supported by other orgs (i.e., they're closer to Eliezer's views than to the views/filters enforced at AIFP, RAND, Redwood, and other alternatives). I think people at MIRI think halt/off-switch is a good idea, and want to work on it. Many (but not all) of us think it's Our Best Hope, and would be pretty dissatisfied working on something else.
I agree that visible impact so far for AI 2027 is greater than for IABIED, but I'm more optimistic than you about IABIED's impact into the future (both because I like IABIED more than you do, and because I'm keeping an eye on ongoing sales, readership, assignment in universities, etc.).
I think my impression of the book is somewhat more negative than it was when it first came out, based on various discussions I've had with people about it.
Consider leaving a comment on your review about this if you have the time and inclination in the future; I'm at least curious, and others may be, too.
(probably I bow out now; thanks Buck!)
I think you probably didn't read the moderation guidelines for this post:
Moderation Note: Please don't comment with "sides", eg. "Eliezer is [good]/[bad]", "people who find him abrasive are [right]/[wrong]".
This comment seems to me to straightforwardly violate them. To be clear, I am not saying the things you are saying here should not be said, it just seems like the author was trying to have a pretty different conversation (and my guess is the author is right that whatever macro conversation is going on here will go better if people follow these guidelines for now).
FWIW I almost missed the moderation guidelines for this post, it's rare that people actually edit them.
Fair enough! Agree it's not super widely used, but still seems like we should enforce it when people do use them.
Oh, you're right, I didn't read those. Feel free to remove the comment or whatever you think is the right move.
Makes sense. I think I'll move it out of the answers into the comments but leave it around, but might delete it if it ends up dominating the rest of the conversation.
I agree with this decision. You reference the comment in one of your answers. If it starts taking over, it should be removed, but can otherwise provide interesting meta-commentary.
it just seems like the author was trying to have a pretty different conversation
I think mostly in tone. If I imagine a somewhat less triggered intro sentence in Buck's comment, it seems to be straightforwardly motivating answers to the two questions at the end of the OP:
1. None of Eliezer's public communication is -EV for AI Safety
2. Financial support of MIRI is likely to produce more consistently +EV communication than historically seen from Eliezer individually.
ETA: I do think the OP was trying to avoid spawning demon threads, which is a good impulse to have (especially when it comes to questions like this).
Meta: any discussion or reaction you observe to abrasiveness and communication style (including the discussion here) is selected for people who are particularly sensitive to and / or feel strongly enough about these things one way or the other to speak up. I think if you don't account for this, you'll end up substantially overestimating the impact and EV in either direction.
To be clear, I think this selection effect is not simply "lots of people like to talk about Eliezer", which you tried to head off as best you could. If you made a completely generic post about discourse norms, strategic communication, the effects and (un)desirability of abrasiveness and snark, when and how much it is appropriate, etc., it might get less overall attention. But my guess is that it would still attract the usual suspects and object-level viewpoints, in a way that warps the discussion due to selection.
As a concrete example of the kind of effect this selection might have: I find the norms of discourse on the EA Forum somewhat off-putting, and in general I find that thinking strategically about communication (as opposed to simply communicating) feels somewhat icky and not particularly appealing as a conversational subject. From this, you can probably infer how I feel about some of Eliezer's comments and the responses. But I also don't usually feel strongly enough about it to remark on these things either way. I suspect I am not atypical, but that my views are underrepresented in discussions like this.
EAs who shortened their timelines only after ChatGPT had the intelligence of a houseplant
Nitpick: his actual comment only suggests that EAs whose current timelines are above 30 years are dumber than a potted plant. Furthermore, the intelligence threshold of non-potted houseplants is significantly lower, as they're generally too small to support recursive self-inflorescence
I experience cognitive dissonance, because my model of Eliezer is someone who is intelligent, rational, and aiming at using at least their public communications to increase the chance that AI goes well.
Consider that he is just as human and fallible as everyone else. "None of Eliezer's public communication is -EV for AI safety" is such an incredibly high bar it is almost certainly not true. We all say things that are poor.
I suspect that some of my dissonance does result from an illusion of consistency and a failure to appreciate how multi-faceted people can really be. I naturally think of people as agents and not as a collection of different cognitive circuits. I'm not ready to assume that this explains all of the gap between my expectations and reality, but it's probably part of it.
I don't want to ruffle any feathers, but this has been bugging me for a while and has now become relevant to a decision since MIRI is fundraising and is focused on communication instead of research.
I love Eliezer's writing - the insight, the wit, the subversion. Over the years though, I've seen many comments from him that I found off-putting. Some of them, I've since decided, are probably net positive and I just happen to be in a subgroup that they don't work for (for example, I found Dying with Dignity discouraging, but saw enough comments that it had been helpful for people that I've changed my mind to think it was a net positive).
However, other comments are really difficult for me to rationalize. I just saw one recently on the EA Forum to the effect that EAs who shortened their timelines only after ChatGPT had the intelligence of a houseplant. I don't have any model of social dynamics by which making that statement publicly is plausibly +EV.
When I see these public dunks/brags, I experience cognitive dissonance, because my model of Eliezer is someone who is intelligent, rational, and aiming at using at least their public communications to increase the chance that AI goes well. I'm confident that he must have considered this criticism before, and I'd expect him to arrive at a rational policy after consideration. And yet, I see that when I recommend "If Anyone Builds It", people's social opinions of Eliezer affect their willingness to read/consider it.
I searched LW, and if it has been discussed before it is buried in all the other mentions of Eliezer. My questions are:
1. Does anyone know if there is some strategy here, or some model for why these abrasive statements are actually +EV for AI Safety?
2. Does MIRI in its communication strategy consider affective impact?
Phrased differently, are there good reasons to believe that:
1. None of Eliezer's public communication is -EV for AI Safety
2. Financial support of MIRI is likely to produce more consistently +EV communication than historically seen from Eliezer individually.
Note: I've intentionally not cited many examples here. I know that "abrasive" is subjective and am confident that many people don't have the same reaction. None of this is intended to put down Eliezer, for whom I have great respect.