FWIW, "plausible" sounds to me basically the same as "possibly". So my guess is this is indeed a linguistic thing.
Yeah, let's also make it a link post then. Some people prefer more prominence, some prefer less, for their cross-posts.
I am also interested in this.
Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think it's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
One of my ideas for this (when thinking about voting systems in general) is to have a rating that is trivially inconvenient to access. Like, you have a ranking system from F to A, but then you can also hold the A button for 10 seconds, and then award an S rank, and then you can hold the S button for 30 seconds, and award a double S rank, and then hold it for a full minute, and then award a triple S rank.
The only instance I've seen of something like this implemented is Medium's clap system, which allows you to give up to 50 claps, but you do have to click 50 times to actually give those claps.
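The hold-to-unlock mechanic could be sketched roughly like this — a minimal illustration, where the thresholds and rank names are just placeholders from the example above, not a concrete proposal:

```python
# Hypothetical sketch of the "hold longer to unlock higher ranks" voting mechanic.
# Thresholds (in seconds of total hold time) are illustrative only.
HOLD_THRESHOLDS = [
    (60, "SSS"),  # hold for a full minute to award a triple S
    (30, "SS"),   # hold for 30 seconds to award a double S
    (10, "S"),    # hold the A button for 10 seconds to award an S
    (0, "A"),     # a plain click awards at most an A
]

def rank_for_hold(seconds_held: float) -> str:
    """Map how long the voter held the button to the rank awarded."""
    for threshold, rank in HOLD_THRESHOLDS:  # checked highest-first
        if seconds_held >= threshold:
            return rank
    return "A"
```

The point of the design is that the client makes the *cost* of a stronger vote grow with its strength, so only people who genuinely care bother to pay it.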
I think some of these are pretty reasonable points, but I am kind of confused by the following:
This Leverage drama is not important to anyone except a small group of people and does not belong on LW. Perhaps the relatively small group of Bay Area rationalists who are always at the center of these things need to create a separate forum for their own drama. Nobody outside of Berkeley needs to hear about this. This sort of thing gets upvoted because tribal instincts are being activated, not because this is good and ought to be here.
It seems to me that L... (read more)
I donate a meaningful amount to CFAR and MIRI (without being overly specific, >1% of my income to those two orgs), and check LW weekly-ish, and I had never even heard of Leverage until the recent kerfuffle. Anecdote isn't data but I sort of agree with this comment's grandparent here.
Given that there is lots of "let's comment on what things about a comment are good and which things are bad" going on in this thread, I will make more explicit a thing that I would have usually left implicit:
My current sense is that this comment maybe was better to write than no comment, given the dynamics of the situation, but I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch, and while I think that was better than just leaving things unresponded to, my sense is the discussion overall would have gone better if you had just written your longer comment.
Seems great! It's a bit on ice this week, but we've been thinking very actively about changes to the voting system, so right now is the time to strike while the iron is hot if you want to change the team's opinion on how we should change things, and what we should experiment with.
I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also think I expect it to overall have bad consequences on the discussion, mostly via something like "illusion of transparency" and "trying to force the discussion to happen that you want to happen, and making it hard for people to come in with a different frame", but am not confident.
I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don't actually really know what... (read more)
When counting down we are all savages dancing to the sun gods in a feeble attempt to change the course of history.
More seriously though, yeah, definitely when I count down, I see a ton of stuff that could be a lot better. A lot of important comments missing, not enough courage, not enough honesty, not enough vulnerability, not enough taking responsibility for the big picture.
I am also surprised by this! I think this sentence is kind of true, and am dissatisfied with the threads, but I don't feel like my take is particularly well-summarized with the above language, at least in the context of this post (like, I feel like this sentence implies a particular type of agreement with the OP that I don't think summarizes my current position very well, though I am also not totally confident I disagree with the OP).
I am in favor of experimenting more with some karma stuff, and have been encouraging people to work on that within the... (read more)
I like this idea. It has a lot of nice attributes.
I wrote some in the past about what all the different things are that a voting/karma system on LW is trying to produce, with some thoughts on some proposals that feel a bit similar to this: https://www.lesswrong.com/posts/EQJfdqSaMcJyR5k73/habryka-s-shortform-feed?commentId=8meuqgifXhksp42sg
We do offer an RSS feed with all posts, but no emails for all posts, sorry.
We do! Just subscribe to Zvi's post via the "Subscribe" button on their profile, and then set the "Posts by users I am subscribed to" setting in the notification settings to "email" and "immediately":
That thread (the subset of it that was happening on LW 1.0) was one of the things that convinced me to build LW 2.0 (I was already working on it, but wasn't sure how much I would commit to it). Because that thread was really quite bad, and a lot of it had to do with deep site architecture things that were hard to change.
I did indeed misunderstand that! I didn't downvote, but my misunderstanding did cause me to not upvote.
Yes, sorry! We realized the opportunity to run this over the winter only a few weeks ago, and have been moving as fast as we can since then. There is a decent chance there will be future cohorts, so if anyone is in this situation, they could still apply for future cohorts.
Thank you so much! This is great!
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
I think this number can be relatively straightforwardly taken at face value. Elizabeth's post was at the top of HN for a few hours, so a lot of people saw it. A small city's worth seems about right for the number of people who clicked through and at least skimmed it.
I don't think LW gets at all fewer comments than ACX. I think indeed LW has more comments than ACX, it's just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day.
https://www.similarweb.com/website/astralcodexten.substack.com/?competitors=lesswrong.com Currently shows ACX at something like 1.7x of LessWrong. At some points in the past LessWrong was slightly ahead.
LessWrong is a pretty big website. Here is a random snapshot of top-viewed pages from the last month from Google Analytics:
As you can see from the distribution, it's a long tail of many pages getting a few hundred pageviews each month, which adds up to a lot.
I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX)
LW's readership is about the same order of magnitude as SSC. Depending on the mood of the HN and SEO gods.
Not that I don't believe you, but that's also really hard for me to wrap my head around. Can you put numbers on that claim? I'm not sure if ACX has a much smaller readership than I'd imagined, or if LW has a much bigger one, but either way I'd like to know!
This is a very good point. IIRC Paul is working on some new blog posts that summarize his more up-to-date approach, though I don't know when they'll be done. I will ask Paul when I next run into him about what he thinks might be the best way to update the sequence.
The Babble and Prune sequence might also be relevant here.
After discussing the matter with some other (non-Leverage) EAs, we've decided to wire $15,000 to Zoe Curzi (within 35 days).
A number of ex-Leveragers seem to be worried about suffering (financial, reputational, etc.) harm if they come forward with information that makes Leverage look bad (and some also seem worried about suffering harm if they come forward with information that makes Leverage look good). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers, so that people in Zoe's reference class are more inc... (read more)
A few quick thoughts:
1) This seems great, and I'm impressed by the agency and speed.
2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine similar for accusations/whistleblowing for other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.
In light of this, if more money were availa... (read more)
Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to offset any incentives against sharing risky honest accounts like Zoe's.
I like the careful disambiguation here.
FWIW, I independently proposed something similar to a friend in the Lightcone office last week, with an intention that was related to offsetting harm. My reasoning:
There's often a problem in difficult "justice" situations, where people have only a single bucket for "make the sufferer feel better" and "address the wrong that was d... (read more)
I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".
"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"
That has the corollary: "We don't expect EAs to care enough about the truth/being transparent for this to be a huge reputational risk for us."
Pretty sure you should just be able to copy the structure of the query from the Chrome network tab, and reverse engineer it this way. IIRC the structure was pretty straightforward, and the response pretty well structured.
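In practice that might look something like the following — a minimal sketch where the endpoint URL and query body are placeholders standing in for whatever you copy out of the network tab, not the actual API:

```python
import json
import urllib.request

# Hypothetical sketch: replay a JSON query copied from the Chrome network tab.
# Both the endpoint and the query body below are placeholders.
ENDPOINT = "https://example.com/graphql"

def build_request(endpoint: str, body: dict) -> urllib.request.Request:
    """Package the copied query body as a JSON POST, mirroring what the browser sent."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def fetch(endpoint: str, body: dict) -> dict:
    """Send the replayed request and parse the (hopefully well-structured) JSON response."""
    with urllib.request.urlopen(build_request(endpoint, body)) as resp:
        return json.load(resp)
```

You may also need to copy over a cookie or auth header from the original request if the endpoint requires one.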
Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.
Yep, I think CEA has in the past straightforwardly misrepresented (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause o... (read more)
Yep, I think the situation is closer to what Jeff describes here, though I honestly don't actually know, since people tend to get cagey when the topic comes up.
I would be happy to contribute my part of this, with the memory I have. I think I could cover a decent amount of the questions above, though I would also likely get some things wrong, so I wouldn't be a totally dependable observer.
drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I really don't think this is an accurate description of what is going on in people's minds when they are experiencing drug dependencies. I've spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went to great lengths trying to tie himself to various masts to stop, and ge... (read more)
No, Eliezer's comment seems like a straightforward "I am making a non-anonymous upvote" which is indeed a functionality I also sometimes want, since sometimes the identity of the upvoter definitely matters. The comment above seems like it's doing something different, especially in combination with the other comment I linked to.
Outside of "these people probably talked to each other like once every few months" I think there is no major connection between Leverage and the Vassarites that I am aware of.
I mostly assumed this; I suppose in the opposite case someone probably would have already mentioned that. But I prefer to have it confirmed explicitly.
I noticed I had downvoted this comment, and kind of felt bad about it. I think this is a reasonable suggestion to make, but also think it is a bad suggestion for a variety of reasons. Generally I prefer voting systems to reward comments I think are good to have been made, and punish comments that seem to detract from the conversation, and despite my disagreement with the proposed policy, I do think this comment overall made things better. So I changed my downvote to an upvote, and am now leaving this comment to disambiguate.
I don't think the context in which I heard about this communication was very private. There was a period where Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was part of the things I heard from that time. The above did not strike me as a statement intended to be very private, and also my model of Michael has norms that encourage sharing this kind of thing, even if it happens in private communication.
Fixed it. Looks like it was going to the edit-form version of the post on the EA Forum, which of course nobody but Ozzie has permission to see.
Ah, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it.
I feel like the paragraph you cited just seems like the straightforward explanation of where my belief comes from?
Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18–0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxi
Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.
If you instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such, were they to seek diagnosis", then I think this would be a normal misreading from not jumping through enough links.
Put another way, if someone in mid-2020 told me that they had symptomatic covid and was formally diagnosed... (read more)
Thank you! I am glad you are doing this!
The UC Berkeley study was the one that I had cached in my mind as generating this number. I will reread it later today to make sure that it's right, but it sure seems like the most relevant reference class, given the same physical location.
Ah, sorry, yeah, I agree my mod notice wasn't specific enough. Most of my mod notice was actually about a mixture of this comment, and this other comment, that felt like it was written by the same generator, but feels more obviously bad to me (and probably to others too).
Like, the other comment that TAG left on this post felt like it was really trying to just be some kind of social flag that is common on the rest of the internet. Like, it felt like some kind of semi-ironic "Boo, outgroup" comment, and this comment felt like it was a parallel "Yay, in... (read more)
I do think most people are undervaluing their time, but I currently believe that my time now is much more valuable than my time in 10 years, and that I also believe this to be true about your time (like, specifically you, Mark Xu).
I think this is mostly because I think that in worlds where we have any chance of solving the AI Alignment problem, we are currently setting in place a field and network of people who will eventually solve the problem, and the founder effects here seem much larger than the experience effects. As a concrete example, I would ... (read more)
I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people's desire to be a morally good person in a way that they don't endorse (and plays into various guilt narratives), to get them to give him money, and to get them to dedicate their life towards Effective Altruism, and via that technique, preventing a substantial fraction of the world's top talent from dedicating themselves to actually important problems, and also causing them various forms of psychological harm.
Ah, yeah, my model is that the person had spent a lot of time with MV's clique, though I wasn't super confident they had talked to Michael in particular. Not sure whether I would still count this as being an effect of Michael's actions, seems murkier than I made it out to be in my comment.
I think that's not really what the OP said, at least not in the naive way you seem to express here. You might disagree with them, but the OP is very specifically saying that they thought what they experienced was better than what they would have experienced in most other places in the world, or most other paths they could have taken.
I think judging the author to be wrong about their preferences, given their experiences, is not a totally crazy thing to do, given the situation, but your comment seems to somewhat misrepresent the author. To be clear, I do thi... (read more)
Mod note: I don't think LessWrong is the right place for this kind of comment. Please don't leave more of these. I mean, you will get downvoted, but we might also ban you from this and similar threads if you do more of that.
Are you going to tell Eliezer the same thing? https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe#EJPSjPv7nNzsam947
It seems worthwhile to give a little more of the "why" here, lest people just walk away with the confusing feeling that there are invisible electric fences that they need to creep and cringe away from.
I'll try to lay out the why, and if I'm wrong or off, hopefully one of the mods or regular users will elaborate.
Some reasons why this type of comment doesn't fit the LW garden:
In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause).
I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)
I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, nor the linked study, nor the linked meta-analysis in the linked study of your link says this. Instead the abstract of the linked^3 meta-analysis says:
Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate
Huh, some chance I am just wrong here, but to me it didn't feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn't feel to me like it's very core to the community.
Datapoint: I understand neither Kegan levels nor land value taxes.
Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional belief that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn't hold the unconventional belief, which does frequently make the conversation kind of harder, and I still don't think is evidence of something going terribly wrong).