LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
So, I don't know what actually happened here. But I at least want to convey support for:
"There are ways of communicating other than being blunt that can... unsettlingly affect you [or, at least, some people], which are hard to explain, and their being hard to explain makes it psychologically harder to deal with because when you try to explain it or complain about it people are kinda dismissive."
(I'm not expressing a strong opinion here about whether Nate should have done something different in this case, or about how Turntrout, Vivek's team, or others should best relate to it. I'm just trying to hold space for "I think there's a real thing people should be taking seriously as a possibility and not just rounding off to 'Turntrout should have thicker skin' or something.")
I have some guesses about the details but they're mostly informed by my interactions with people other than Nate, which give me sort of an existence proof, and I'm wary of speculating myself here without having actually had this sort of conversation with Nate.
This isn't quite how I'd frame the question.
[edit: My understanding is that] Eliezer and Nate believe this. I think it's quite reasonable for other people to be skeptical of it.
Nate and Eliezer can choose to only work closely/mentor people who opt into some kind of confidentiality clause about it. People who are skeptical or don't think it's worth the costs can choose not to opt into it.
I have heard a few people talk about MIRI confidentiality norms being harmful to them in various ways, so I do also think it's quite reasonable for people to be more cautious about opting into working with Nate or Eliezer if they don't think it's worth the cost.
Presumably, Nate/Eliezer aren't willing to talk much about this precisely because they think it'd leak capabilities. You might think they're wrong, or that they haven't justified that, but, like, the people who have a stake in this are the people who are deciding whether to work with them. (I think there's also a question of "should Eliezer/Nate have a reputation as people who have a mindset that's good for alignment and capabilities that'd be bad to leak?", and I'd say the answer should be "not any more so than you can detect from their public writings, and from whatever personal chains of trust you have with people who have worked closely with them.")
I do think this leaves some problems. I have heard about the MIRI confidentiality norms being fairly paralyzing for some people in important ways. But something about Muireall's comment felt like a wrong frame to me.
Nod.
In this case I don't think the claims you're ascribing to the experts and to Elizabeth are actually in conflict. You say:
vegan diets are generally healthy (in the vast majority of cases) unless you don't take the proper supplementation regularly.
And I think Elizabeth said several times "If you actually are taking the supplementation, it's healthy, but I know many people who aren't taking that supplementation. I think EA vegan activists should put more effort into providing good recommendations to people they convince to go vegan." So I'm not sure why you're thinking of the expert consensus here as saying a different thing.
I feel a bit confused about what the argument is about here. I think the local point of "hey, you should be quite skeptical of self-reports" is a good, important point (thanks for bringing it up; I don't think I agree with you on how much I should discount this data, but I wasn't modeling all the possible failure modes you're pointing out). But it feels from your phrasing like there's something else going on, or the thread is overall getting into a cycle of arguing-for-the-sake-of-arguing, or something. (Maybe it's just that Elizabeth's post is long and it's easy to lose track of the various disclaimers she made? Maybe it's more of a "how much are you supposed to even have an opinion if all your evidence is weak?" frame clash.)
Could you (or Natalie) say more about what this thread is about from your perspective?
The question I assumed Stephen was asking (and at least my question for myself) here is "okay, but what do we believe in the meanwhile?"
Natalia responded with a process that might find some good evidence (but, might not, and looks like at least several hours of skilled-labor search to find out). I agree someone should do that labor and find out if better evidence exists.
I also realize Vaniver did explicitly ask "what alternate framework you prefer?" and it makes sense that your framework is interested in different questions than mine or Elizabeth's or Stephen's or whatnot. But, for me, the question is "what should vegan activists' best guess be right now", not "what might it turn out to be after doing a bunch more research that maybe turns out to have good data and maybe doesn't."
Seems maybe true, although I also think there's generally a lot lower bandwidth over text, and often text is more time-intensive to write. (Not saying it's necessarily not worth the tradeoff overall, but I'd probably bet against it being obviously better)
I, personally, have been on the receiving end of (what felt to me like) a Nate-bulldozing, which killed my excitement for engaging with the MIRI-sphere
Was this via public comments or private communications? (Regardless, it sucks that this happened to you. But it changes my guesses about what sorts of solutions would help.)
LessWrong has a pretty established norm of not using unnecessarily political examples. (See Politics is the Mind-Killer). I don't object to you writing up a top level post arguing for the point you're trying to make here. But I do object to you injecting your pet topic into various other comment threads in particularly distracting ways (especially ones that are only tangentially about AI, let alone about your particular concern about AI and culture/politics/etc).
When you did it last week, it didn't seem like the sort of thing it felt right for the mods to intervene on heavy-handedly (some of us downvoted as individuals). But it sounds like you're going out of your way to use an inflammatory example repeatedly. I am now, as a moderator, concretely asking you not to do that.
I'm locking this thread since it's pretty off-topic. You can go discuss it more at the meta-level over in the Open Thread, if you want to argue about the overall LessWrong moderation policy.
Hey, uh, I don't wanna overly police people's language, but this is the second time in a week you've used the n-word specifically as your example here, and it seems like, at best, an unnecessarily distracting example.
I do quite like your overall idea and background theory here. Having more cohabitive games (sorry, I like that name better) seems probably important.
Something feels off about "just try to get as many points as you can". I don't know if this is just 37 years of competitive-game propaganda talking, but if you give everyone a score, the only way I really have to judge "how good is this abstract number?" is to compare myself to other players, and that naturally puts me into somewhat of a competitive mindset.
The thing that actually puts me in a collaborative mindset in multiplayer games is when we're trying to build something or accomplish something other than "get points". Minecraft is maybe the best example (although admittedly people often feel like it's "not a game, it's a toy/sandbox"). Maybe Magic: The Gathering, where people get invested in pulling off a "cool move", especially in multiplayer games.
That's useful additional information, thanks.
I made a slight edit to my previous comment to make my epistemic state more clear.
Fwiw, I feel like I have a pretty crisp sense of "Nate's and Eliezer's communication styles are actually pretty different" (I noticed myself writing out a similar comment about communication styles under the Turntrout thread that initially said "Nate and Eliezer" a lot, and then decided that comment didn't make sense to publish as-is), but I don't actually have much of a sense of the difference between Nate, Eliezer, and MIRI-as-a-whole with regards to "the mindset" and "confidentiality norms".