Anna reports that Habryka claims that the slapfight on "Zetetic Explanation" is a good chunk of why you don't use the website anymore. Is that true?
People, come on, what kind of multi-step game of telephone is going on here? The thing that I said to Anna, in a spoken and largely-assumed-to-be-private conversation, which was 4 hours long and had a general vibe of her trying to understand my models of LessWrong moderation (and vice versa) without much of an advocacy component, was, as far as I remember, something vaguely like "I know that Benquo got in a bunch of fights with Said, and my guess is Said played a non-trivial role in him leaving, and clearly if I had to choose between Said and Benquo on the site, I would choose Benquo".
I think I still stand behind that![1] It was a pretty big deal at the time, and that conflict together with the Duncan conflict seemed like roughly the thing that caused you to leave.
On the rest: I agree with 1) and 3) of your concerns, and have commented at least on the latter extensively. I don't... really know what this has to do with banning people who are otherwise causing problems on the site, though, since I don't really think those are things that could be somehow solved with banning (like, sure, it's plausible to me they invalidate a bunch of the premises of the community, and are important to address, but even if your government is engaging in unjustified wars, that doesn't mean you don't want normal everyday police doing normal police things).
Unrelatedly:
TurnTrout is not a co-author of the SolidGoldMagikarp post. Maybe he collaborated on it, but at least it doesn't say anywhere that I can easily find that he was involved in writing it.
But even if I missed something, I didn't make any statement with great confidence, and I would summarize similar epistemic states the same way in the future.
Thanks for closing the loop on the game of telephone. Oops, not sure how I misremembered about TurnTrout. Will delete that bit.
I mean more like MIRI apologists who didn’t notice that the Death with Dignity post really ought to be a halt, melt, catch fire moment.
I wasn't sure what you meant here, where two guesses are "the models/appeals in Death with Dignity are basically accurate, but should prompt a deeper 'what went wrong with LW's or MIRI's collective past thinking and decision-making?'" and "the models/appeals in Death with Dignity are suspicious or wrong, and we should be halt-melting-catching-fire about the fact that Eliezer is saying them".
The former. There should have been a lot of open-minded postmortem discussion, not just a pivot.
Hmm. I HMCFed after that, I think, but I don't remember why I didn't talk much about it publicly. (Also I think there was a CFAR postmortem that I don't recall getting written up and discussed online, tho there was lots of in-person discussion.)
I remember being told by an attendee that I hadn’t been invited to a CFAR postmortem weekend because I’m “not a STEM person.” Since I did statistical programming professionally for five years and have an MS in Mathematics and Statistics from an elite-ish university, I can only interpret that as meaning I’m unwilling to use technical math to dissociate from substantive humanities problems.
I have to rate all the time spent that didn’t result in improvements visible from the outside as nothing but costs paid to sustain internal narcissistic supply; I can’t credit it as an attempt to solve or even discuss a problem unless I receive further evidence. The uniformly positive things I’ve heard about “Don’t Create the Torment Nexus II: If Anyone Builds It, Everyone Dies” imply not much in the way of new perspective or even consensus that one is needed.
I can't comment on why you weren't invited [to the CFAR postmortem], because I was not involved with the decision-making for who would be invited; I just showed up to the event. Naively, I would've guessed it was because you didn't work at CFAR (unless you did and I missed it?); I think only one attendee wasn't in that category, for a broad definition of 'work at'.
I have to rate all the time spent that didn’t result in improvements visible from the outside as nothing but costs paid to sustain internal narcissistic supply
This seems fair to me.
The uniformly positive things I’ve heard about “Don’t Create the Torment Nexus II: If Anyone Builds It, Everyone Dies” imply not much in the way of new perspective or even consensus that one is needed.
I think the main difference between MIRI pre-2022 and post-2022 is that pre-2022 had much more willingness to play along with AI companies and EAs, and post-2022 is much more willing to be openly critical.
There are other differences, and also I think we might be focusing on totally different parts of MIRI. Would you care to say more about where you think there needs to be new perspective?
If the transition from less to more disagreeableness doesn't come along with an investigation of why agreeableness seemed like a plausible strategy and what was learned, then we're still stuck trying to treat an adversary as an environment.
I think I agree with your statement; I assume that this happened, though? Or, at least, in a mirror of the 'improvements visible from the outside' comment earlier, the question is whether MIRI is now operating in a way that leads to successfully opposing their adversaries, rather than whether they've exposed their reasoning about this to the public.
Naively, I would've guessed it was because you didn't work at CFAR (unless you did and I missed it?)
The attendee who told me about it never worked at CFAR, and neither did a couple other people I knew who went. Also I did guest-instruct at a CFAR workshop once.
FYI I am generally good at tracking inside baseball but I understand neither what specific failures[1] you would have wanted to see discussed in an open postmortem nor what things you'd consider to be "improvements" (and why the changes since 2022/04/01 don't qualify).
I'm sure there were many, but I have no idea what you consider to have been failures, and it seems like you must have an opinion because otherwise you wouldn't be confident that the changes over the last three years don't qualify as improvements.
Not sure what Benquo would say, but I think when a community of people fails at its goal after ~10 years, the natural question is why it failed and what could've been done differently. It's a good opportunity to learn, and expecting to have to answer it is a good incentive to make sensible choices during the initial period when you're doing the work (since you expect to have to justify them if you fail).
I think that the CFAR one is more natural, because it seems to me that MIRI set itself a great scientific challenge with an externally imposed deadline, whereas CFAR did not have an external deadline on developing an art of rationality (which is also a very difficult problem). So CFAR is more naturally a case where the locus of control was internal, and where a postmortem can be accurate.
(I’m interested in this topic because I have myself considered trying to put together a public retro on the past ten years of save-the-world efforts from this scene.)
The main effects of the sort of “AI Safety/Alignment” movement Eliezer was crucial in popularizing have been OpenAI, which Eliezer says was catastrophic, and funding for “AI Safety/Alignment” professionals, whom Eliezer believes to predominantly be dishonest grifters. This doesn't seem at all like what he or his sincere supporters thought they were trying to do.
I’ve written extensively on this sort of perverse optimization, but I don’t see either serious public engagement with my ideas here, or a serious alternative agenda.
For instance, Ben Pace is giving me approving vibes but it didn’t occur to him to respond to your message by talking about the obvious well-publicized catastrophic failures I mentioned in the OP. And it seems like you forgot about them too by the time you wrote your comment.
I am sympathetic to your takes here, but I am not that sympathetic to statements like this:
but I don’t see either serious public engagement with my ideas here, or a serious alternative agenda.
As it happens, I have also written many tens of thousands of words about this in many comments across LW and the EA Forum. I also haven't seen you engage with those things! (And my guess, from the way you are phrasing it, is that you are not aware of them.)
Like, man, I do feel like I resonate with the things that you are saying, but it just feels particularly weird to have you show up and complain that no one has engaged with your content on this, while having that exact relationship to approximately the people you are talking to. I, the head admin of LessWrong, have actually spent on the order of many hundreds of hours, maybe 1000+, doing postmortem-ish things in the space, or at least calling for them. I don't know whether you think what I did/do makes any sense, but I think there is a real attempt at the kind of thing you are hoping for (to be clear, mostly ending with a kind of disappointment and resulting distancing from much of the associated community, but it's not like you can claim a better track record here).
And in contrast to your relationship with my content, I have read your content and have engaged with it a good amount. You can read through my EA Forum comments and LW comments on the topic if you want to get a sense of how I think about these things.
I’m aware that you’ve complained about these problems, but I’m specifically calling for the development and evaluation of explanatory models, which is a different activity. If you’ve done much of that in your public writing I missed it - anything you’d like to point me to?
I have tried to do that, though it’s definitely more dispersed.
Most of it is still in comments and so a bit hard to extract, but one post I did write about this was My tentative best guess on how EAs and Rationalists sometimes turn crazy.
That does seem like it's overtly concerned with developing an explanation, but it seems concerned with deviance rather than corruption, so it's on a different topic than the ones I complain about in the OP. I was aware of that one already, as I replied with a comment at the time.
And it seems like you forgot about them too by the time you wrote your comment.
It was not clear from your comment which particular catastrophic failures you meant (and in fact it's still not clear to me which things from your post you consider to be in that particular class of "catastrophic failures", which of them you attribute at least partial responsibility for to MIRI/CFAR, by what mechanisms/causal pathways, etc).
ETA: "OpenAI existing at all" is an obvious one, granted. I do not think EY considers SBF to be his responsibility (reasonable, given SBF's intellectual inheritance from the parts of EA that were least downstream of EY's thoughts). You don't mention other grifters in your post.
"Death with dignity" was clearly intended to trigger the audience to HMCF right? He was doing exactly what you are asking for
"Trigger the audience into figuring out what went wrong with MIRI's collective past thinking and decision-making" would be a strange purpose from a post written by the founder of MIRI, its key decision-maker, and a long-time proponent of secrecy in how the organization should relate to outsiders (or even how members inside the organization should relate to other members of MIRI).
Not disagreeing with your point, just want to add the datapoint that for me it did lead me to something like "giving up faith in" MIRI. I no longer believed that they were working on a plan for getting the problem solved, and so I resigned myself to the world where I had to take responsibility for the problem getting solved.
Zack Davis emailed me[1] asking me to weigh in on the moderators' request for comment on their proposal to ban Said_Achmiz. I've had a conflict with Said in the past in this thread, and apparently they're claiming in private communications that this is a major reason I don't use the site anymore. That's not true (see email chain in footnote, and see also habryka’s comment), and if you want to look for personal reasons I'm off the site, the stupidity of the attempt at conflict mediation between me and Duncan Sabien in response to this comment, which mediation attempt effectively denied the importance of the object-level disagreement, is a better candidate. (I think Duncan also thought it was handled poorly, and I don't think he disagrees strongly with me as to how it was handled poorly.)
But I'm torn as to whether and how much to comment on this, because I think my position on this is too complex and conceptual to be properly made use of. While I wouldn't be surprised if people play nicely with the content of what I'm saying as a sort of verbal game, I don't at all expect this structural critique to be made use of to improve things. In other words, I think things have long since deteriorated to the point where for the most part the only intelligible function of this comment would be to express a pro-Said or anti-Said sentiment. And I don't want to play that drama.
I don't exactly think it's wrong to ban Said. But it's stupid.
Said is annoying, both because his demands for rigor don't seem prioritized reasonably, and because he's simultaneously insulting and rude, dismissive of others' feelings around being "insulted," and sensitive to insults himself. He's also disagreeable. I asked Zack for a list of Said's best comments (see email), and they're pretty much all procedural criticisms or calls for procedural rigor seemingly with no sense of proportion. In the spirit of his "show me the cake" principle, I don't see the cake there. On the other hand, he's a leading contributor to GreaterWrong, which makes this site more usable.
But someone who makes stupid and emotional comments that don't upset the typical reader or contributor - say, because they don't stick out, they side with existing power gradients, they cover up what people want covered up, and they only attack already-designated scapegoats, occupying a niche in the fashion cycle of ideas and approval - is much worse for the site's epistemics than Said is. And there are lots more of those. I don't mean people vacuously saying "great point!", I mean more like MIRI apologists who didn’t notice that the Death with Dignity post really ought to be a halt, melt, catch fire moment.
There are three burning crises I would’ve expected halt-melt-catch-fire responses to if this were a proper rationality community:
One is Zack Davis documenting endorsement of anti-epistemology (see Where to Draw the Boundaries? and A Hill of Validity in Defense of Meaning) to placate a trans ideology that even many important transgender Rationality community members overtly reject. (I don't remember whether she wrote anything to this effect at the time, but I remember Jessica Taylor explaining to Zack in person several years back that she didn't want her narcissistic fantasy of being a woman validated; she just wanted to be acknowledged as a woman iff she were a woman, and also she believed as a matter of fact that in a meaningful and important sense she was a woman.) Instead, after a long campaign of persuasion we got a halfhearted Twitter concession by Eliezer that he didn't endorse the anti-epistemology, which was largely ignored.
The second is the Death with Dignity April-Fools-only-serious post showing, in conjunction with the OpenAI fiasco, that the implied political strategy of AI safety had failed, and especially the culture of silencing complained about by jessicata, the researcher responsible for much of MIRI's published research in the 2010s (they largely stopped publishing once she left). Instead we got transparent derailing.
The third is less centrally connected to LessWrong's identity and social position, but it's not exactly marginal either. While I’m often able to get narrow concessions or “everybody knows” deflections in response to my criticisms of Effective Altruism, people continue to uncritically report naïve cost-benefit numbers as though they were literally true, without getting consistent pushback and downvoting for it. And even after Zvi Mowshowitz published an account of Sam Bankman-Fried asking my friends to publicly disassociate from me because I was making concrete criticisms of Effective Altruism, I’m not aware that anyone’s responded to that by trying to engage better and more publicly with my criticisms or take them to heart, rather than just disavowing Sam Bankman-Fried as a bad guy who has nothing to do with any structural problems in the community.
If anyone's seriously curious I hope this explains why I can't be bothered with a "pro" or "anti" opinion here.
Zetetic testimony
Zack M. Davis
Jul 11, 2025, 1:54 AM
to me, Said
Dear Ben (cc Said):
(First, sorry for erroneously including you on my previous email from tonight (Subj: "war planning status"); I was indecisive about what kind of email I wanted to send tonight and forgot you were still in the headers. This one is actually for you.)
The Less Wrong mods are making another push to ban Said. I think this is terrible and intend to fight it with every weapon at my disposal. I was rereading "On the Construction of Beacons" the other day, and it was very striking in how much detail your model predicts exactly this situation, where Said can very eloquently explain why he thinks his comment style is good for error-correction, and rather than disagreeing with that on the intellectual merits, all his persecutors say is that his comments allegedly make people feel bad and therefore less likely to use the website, and it's bad if fewer people want to use the website!! (As you wrote, "The sociopath need only know that some attention, money, volunteer hours, and social proof have been brought within reach of a competent marketing and sales effort. From the sociopaths' perspective, they are not introducing errors—they are correcting them.")
Anna reports that Habryka claims that the slapfight on "Zetetic Explanation" is a good chunk of why you don't use the website anymore. Is that true?
Separately from whether or not that's true, the fact that Habryka is citing you as evidence makes me think your public political support would be helpful, even though you're mostly checked out from the scene these days. (Even if you really didn't like that one comment thread from 2018, I pretty strongly expect you to understand why this is important if my paragraph about "Beacons" above wasn't sufficient.) Any thoughts?
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 11, 2025, 10:59 PM
to Zack, Said
The main reasons I don't use the site much anymore:
1. The mods' politicized behavior, e.g. their treatment of me vis-a-vis Duncan (in which they tried to mediate the conflict as though there were no object level that mattered), and their treatment of you.
2. Low-value comments.
The exchange with Said might have been the precipitating event - I don't strongly remember - but if so that was only because I was pretty much ready to leave.
I haven't gotten much out of Said's comments on LW - maybe you could link to cases where you think he helped clarify things on the object level rather than just executing procedures that seem like the sort of thing that ought to be helpful. GreaterWrong seems to me more like the sort of thing that ought to be helpful, and as I understand it he was a major contributor to that.
Ben H
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 11, 2025, 10:59 PM
to Zack, Said
I do think it's stupid to ban Said, and they should examine their own sins first.
Ben H
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 11, 2025, 11:02 PM
to Zack, Said
If I'd gotten a really good comment on Zetetic Explanations that changed my mind on something important that would have far outweighed any annoyance at Said's conduct on that thread. In hindsight I can kind of extrapolate what his objection might have been and might have a reasonable reply on the object level.
Ben H
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 11, 2025, 11:08 PM
to Zack, Said
On reviewing, Douglas Knight left a pretty good comment. So I think maybe Said's comment overshadowed that by drawing too much of my attention. Overall I'm pretty confused about how to deal with people asking questions or raising objections in ways that seem probably-but-not-definitely unhelpful. Said is of course correct that in principle, one can simply filter. But filtering is costly and requires prejudice. My recent exchange on Twitter with Goblinodds is an example where I don't know where I should have ended the conversation, but probably at some point before I actually did.
This seems worth *investigating* rather than suppressing.
Ben H
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 11, 2025, 11:18 PM
to Zack, Said
Overall I think there would be some value in compiling The Best of Said's Comments.
Ben H
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 11, 2025, 11:27 PM
to Zack, Said
Here's the transcript of my chat with Oliver at the time:
LESSWRONG
Re comment thread w/ Said
Participants: habryka, Benquo
habryka 7y
Just wanted to ping you here that I agree with you that Said has been consistently rude in that thread, and we are working on making a more public moderation response about that. I just also liked that specific comment (though I didn't like any of his other ones), and wanted to make sure that we give some kind of differential feedback, and it seemed good to encourage him to write up his ideas in a more top-level format. Sorry for that stuff being published in a bit of an asynchronous manner that creates the temporary perception of us thinking his actions are purely positive on that thread (I think on aggregate they are quite bad and on net I would have preferred him to not comment at all, over the things he has written so far).
Benquo 7y
Thanks. Good to hear. I decided to delete my comment because it felt like likely-unproductive whining & I should be more direct, but it's reassuring to hear that you're tracking this.
Benquo 7y
My overall sense is also "bad on net but with many specific good points," and that sort of pattern makes me wonder whether there's some underlying thing in the information environment that he's acting out a memetic immune response to. Not sure that would be productive to talk about openly, especially since I'd expect Said not to engage with the idea productively, but if we could get him to talk more explicitly about the thing that seems like it's bothering him instead of just indignantly insinuating it, that could be helpful.
habryka 7y
Yep, that's also roughly our read. I hope our response succeeds in that, but we will see. I think he's definitely defending some kind of epistemic standard, more than he is trying to respond to any object-level arguments.
Benquo 7y
FYI based on the way Said was engaging with other commenters on that thread, I now think that discouragement of the bad parts is much more important than encouragement of the good ones. Ikaxas' comment was the click moment for me. I've switched my moderation policy to "reign of terror" and intend to delete any comment by Said that seems annoying to me, because I think that the chance is very low that (a) he's saying something important that I'm missing, (b) he'll be willing to do the interpretive labor to successfully communicate it at all, or (c) the price of this won't be a ridiculous amount of time spent figuring out how to read him charitably while being repeatedly insulted.
habryka 7y
That also roughly matches with my read. So I think that's a reasonable response.
Benquo 7y
A mod tool that doesn't exist that I'm noticing myself wanting is "freeze all replies in this comment thread without deleting anything"
Benquo 7y
Or at least doesn't seem available to ordinary users
Benquo 7y
If the official mods have that power, might be good to use on the current Said thread, since I don't think things are gonna get clarified further
habryka 7y
Ah, yeah. We have that power. Seems maybe a good idea to make it available to mods.
Benquo 7y
I have a much clearer idea of the underlying pattern that I imagine Said to be perceiving and responding to now.
Benquo 7y
Explanation here: https://www.lesswrong.com/posts/45oMPv7cjp9FRkyKy/why-should-ea-care-about-rationality-and-vice-versa#gm7J3ZbAGMu3GtG6T
Benquo 7y
This seems like an important problem to get good at finessing. People getting irritable can be a sign that they perceive some sort of information threat, and I think something about Said's behavior pattern in particular that I'm still doing a bad job of describing seems characteristic of this.
Benquo 7y
Something like, latching onto strong standards against nonsense even if they're not a perfect fit for the specific attack vector, since they'll at least slow it down (at the price of also slowing related things down).
Benquo 7y
Countermeasure from A Fire Upon the Deep is an appealing metaphor here.
Ben H
Zack M. Davis
Jul 12, 2025, 12:30 AM
to me, Said
The exchange with Said might have been the precipitating event - I don’t strongly remember - but if so that was only because I was pretty much ready to leave.
I’m confused about the timing here. “Zetetic Explanation” was in August 2018. You made 21 posts between then and June 2020 (and then there’s a gap to April 2022). Were you “pretty much ready to leave” for almost two years (August 2018–June 2020)?
maybe you could link to cases where you think he helped clarify things on the object level and not just executed procedures that seem like the sort of thing that ought to be helpful. […] compiling The Best of Said’s Comments.
I imagine that by “procedures that seem like the sort of thing that ought to be helpful”, you mean the asking-for-definitions-and-examples thing?
Here are some longer, “non-procedural” comments I liked—
on why asking for examples is not insulting (April 2023, already linked in my initial message)
the Citadel of Truth (July 2019)
“An amazing new cake?!” (January 2018)
on the efficiency of not confirming interpretations before criticizing (in the “Zetetic Explanation” trainwreck)
on the corruption of “charity” (in the “Zetetic Explanation” trainwreck)
Two examples that were defending me personally (such that my recommendation comes with a conflict-of-interest flag)—
“You are […] attacking someone’s writings about abstract epistemology and Bayesian inference, on Less Wrong, of all places (!!), not because there is anything at all mistaken about them, but because of some alleged ‘sinister context’ that you are bringing in from somewhere else. […] shameful” (May 2019, it needed to be said and no one else had the guts)
various comments on offendedness and moderation on “Causal vs. Social Reality” (June 2019)
Benjamin Hoffman <benjaminrhoffman@gmail.com>
Jul 12, 2025, 1:29 AM
to Zack, Said
I guess it wasn't the precipitating event, then! Like I said, I don't distinctly remember the order. Looks like I switched from auto-crossposting to deciding when to linkpost slightly *before* the "zetetic explanations" post, and continued doing that for a while, so it wasn't obviously a turning point at all.
Ben H
Said Achmiz
Jul 12, 2025, 2:03 AM
to me, Zack
On Friday, July 11th, 2025 at 11:02 PM, Benjamin Hoffman <benjaminrhoffman@gmail.com> wrote:
If I'd gotten a really good comment on Zetetic Explanations that changed my mind on something important that would have far outweighed any annoyance at Said's conduct on that thread. In hindsight I can kind of extrapolate what his objection might have been and might have a reasonable reply on the object level.
Said Achmiz
Jul 12, 2025, 2:54 AM
to me, Zack
On Friday, July 11th, 2025 at 11:18 PM, Benjamin Hoffman <benjaminrhoffman@gmail.com> wrote:
Overall I think there would be some value in compiling The Best of Said's Comments.
I agree that there’s value in this, and I’d already begun a project along just those lines, before this latest kerfuffle. I confess to feeling somewhat discouraged by (a) said kerfuffle (after all, I was planning to post this “comment anthology” on LW—where else?—and being banned from LW would put a damper on those plans), and (b) the fact that some of my best comments (in my own estimation) are also some of the most heavily downvoted. The latter observation suggests that maybe LW users just don’t have very good epistemic/intellectual taste, in the aggregate. Shall I cast my pearls before such swine? What is the profit in that? But on the other hand, if my writing benefits the few, what does it matter if the many do not appreciate it? (Except that it would then be sad to realize that LW nowadays is “the many”, whereas it used to be “the few”, once…)
(But then we’re back to the whole “about to get banned” thing, which makes that concern moot. Alas.)
—Said
Zack M. Davis
Jul 12, 2025, 2:58 AM
to Said, me
You could put the anthology on your own domain, and I'd linkpost it to LW.
Said Achmiz
Jul 12, 2025, 3:03 AM
to Zack, me
Yeah, of course, I have plenty of places to post it (my blog, my wiki, DSL, etc.), but that’s not really the point, right? Like, LW is a discussion forum, i.e. a collective of people who are the natural audience for this sort of thing, and from whom one can expect the best (highest quantity/density of useful) feedback… or that’s what LW was, anyhow! Now what is it? I guess I don’t know anymore.
—Said
Zack M. Davis
Jul 13, 2025, 12:26 AM
to Said, me
I do think it’s stupid to ban Said, and they should examine their own sins first.
Ben, I think it would be good for shared maps for you to say this part in public where Habryka can see it. I guess the most convenient place would be to reply to the comment where he said it’s “more likely than not that [he] want[s] to give [Said] a site-wide ban and would like to […] hear counterarguments before I do it”.