All of habryka's Comments + Replies

AI Safety Needs Great Engineers

FWIW, "plausible" sounds to me basically the same as "possibly". So my guess is this is indeed a linguistic thing.

Base Rates and Reference Classes

Yeah, let's also make it a link post then. Some people prefer more prominence, some prefer less, for their cross-posts.

AI Safety Needs Great Engineers

I am also interested in this.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think such contracts exist, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?

ChristianKl (6 points, 8d): What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don't think anything happened that releases ex-CEA people from NDAs. The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if they would have an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract. Public pressure on CEA seems to be necessary to get the information out in the open.
Improving on the Karma System

One of my ideas for this (when thinking about voting systems in general) is to have a rating that is trivially inconvenient to access. Like, you have a ranking system from F to A, but you can also hold the A button for 10 seconds to award an S rank, hold the S button for 30 seconds to award a double-S rank, and hold that for a full minute to award a triple-S rank.

The only instance I've seen of something like this implemented is Medium's clap system, which allows you to give up to 50 claps, but you do have to click 50 times to actually give those claps. 
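
A toy sketch of how the hold-to-escalate mechanic could work (names and thresholds are just the ones from the example above; nothing like this exists on LW):

```typescript
// A toy sketch of the "trivially inconvenient" rating mechanic described
// above: the rank you can award scales with how long you hold the button.
// Tier names and hold times are illustrative, not an existing LW feature.
type Rank = "A" | "S" | "SS" | "SSS";

// Hold-time thresholds (milliseconds), checked from most to least demanding.
const THRESHOLDS: Array<[number, Rank]> = [
  [60_000, "SSS"], // hold a full minute for triple S
  [30_000, "SS"],  // hold 30 seconds for double S
  [10_000, "S"],   // hold 10 seconds for S
  [0, "A"],        // a plain click awards an A
];

function rankForHold(holdMs: number): Rank {
  for (const [threshold, rank] of THRESHOLDS) {
    if (holdMs >= threshold) return rank;
  }
  return "A"; // unreachable given the 0ms floor, but keeps the compiler happy
}

// Example: a 31-second hold earns a double-S.
console.log(rankForHold(31_000)); // "SS"
```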

Yoav Ravid (3 points, 12d): If we were making just a small change to voting, then the one I would have liked to make is having something like the clap system instead of weak-votes and strong-votes, and having the cap decided by karma score (as it is now: if your strong-vote is X, your cap would be X).
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

I think some of these are pretty reasonable points, but I am kind of confused by the following: 

This Leverage drama is not important to anyone except a small group of people and does not belong on LW. Perhaps the relatively small group of Bay Area rationalists who are always at the center of these things need to create a separate forum for their own drama. Nobody outside of Berkeley needs to hear about this. This sort of thing gets upvoted because tribal instincts are being activated, not because this is good and ought to be here.

It seems to me that L... (read more)

I donate a meaningful amount to CFAR and MIRI (without being overly specific, >1% of my income to those two orgs), and check LW weekly-ish, and I had never even heard of Leverage until the recent kerfuffle. Anecdote isn't data but I sort of agree with this comment's grandparent here.

AnonymousCoward02 (9 points, 15d): For me personally this still rounds off to "not very important." Especially in the sense that there is nothing I, or the vast majority of people on this site, could possibly do with this information. I was already never going to join Leverage, or give any money to Geoff Anders. I have a lot of rationalist friends, both IRL and online, and none of us had ever heard about Geoff Anders prior to this recent drama. Think about it in terms of cost-benefit. The benefit of this kind of content to the vast majority of people on LW is zero. The cost is pretty high, because ~everybody who sees a big juicy drama fest is going to want to rubberneck and throw in their two cents. So posting content like this to the main LW feed is strongly net negative in aggregate. A post which is simply dumb/wrong but otherwise un-dramatic can at least be simply ignored. I think that if it were, say, Yudkowsky being accused of auditing people's thetans and holding seances, I would find that relevant, because it would have implications for my future decisions.
steven0461 (9 points, 15d): "Problematic dynamics happened at Leverage" and "Leverage influenced EA Summit/Global" don't imply "Problematic dynamics at Leverage influenced EA Summit/Global" if EA Summit/Global had their own filters against problematic influences. (If such filters failed, it should be possible to point out where.)
Speaking of Stag Hunts

Given that there is a lot of "let's comment on which things about a comment are good and which are bad" going on in this thread, I will make more explicit a thing that I would have usually left implicit:

My current sense is that this comment was maybe better to write than no comment, given the dynamics of the situation, but I think the outcome would have been better if you had waited and written your long comment instead. This comment felt like it kicked up the heat a bunch, and while I think that was better than leaving things unanswered, my sense is the discussion overall would have gone better with just the longer comment.

Duncan_Sabien (5 points, 19d): In response to this, I'll bow out (from this subthread) for a minimum period of 3 days. (This is in accordance with a generally wise policy I'm trying to adopt.) EDIT: I thought Oli was responding to a different thing (I replied to this from the sidebar). I was already planning not to add anything substantive here for a few days. I do note, though, that even if two people both unproductively turn up the heat, one after the other, in my culture it still makes a difference which one broke peace first.
Speaking of Stag Hunts

Seems great! It's a bit on ice this week, but we've been thinking very actively about changes to the voting system, so right now is the time to strike while the iron is hot if you want to change the team's opinion on how we should change things, and what we should experiment with.

Speaking of Stag Hunts

I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also expect it to have overall bad consequences for the discussion, mostly via something like "illusion of transparency" and "trying to force the discussion you want to happen, and making it hard for people to come in with a different frame", but am not confident.

I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don't actually really know what... (read more)

Speaking of Stag Hunts

When counting down, we are all savages dancing to the sun gods in a feeble attempt to change the course of history.

More seriously though, yeah, definitely when I count down, I see a ton of stuff that could be a lot better. A lot of important comments missing, not enough courage, not enough honesty, not enough vulnerability, not enough taking responsibility for the big picture.

Ruby (3 points, 19d): I did indeed mean "dissatisfied" in a "counting down" sense.
Speaking of Stag Hunts

I am also surprised by this! I think this sentence is kind of true, and I am dissatisfied with the threads, but I don't feel like my take is particularly well summarized by the above language, at least in the context of this post (like, I feel like this sentence implies a particular type of agreement with the OP that doesn't summarize my current position very well, though I am also not totally confident I disagree with the OP).

I am in favor of experimenting more with some karma stuff, and have been encouraging people to work on that within the... (read more)

Vladimir_Nesov (3 points, 19d): The most obvious/annoying issue with karma is the false-disagreement zero-equilibrium controversy tug-of-war that can't currently be split into more specific senses of voting to reveal that there is actually a consensus. This can't be solved by pre-splitting; it has to act as needed, maybe co-opting the tagging system, with the default tag being "Boostworthy" (but not "Relevant" or anything specific like that), the ability to see the tags if you click something, and the ability to tag your vote with anything (one tag per voter, so to give a specific tag you have to untag "Boostworthy", and all tags sum up into the usual karma score that is the only thing that shows by default until you click something). This has to be sufficiently inconvenient to only get used when necessary, but then somehow become convenient enough for everyone to use (for that specific comment). On the other hand, there is Steam, which only has approve/disapprove votes and gives vastly more useful quality ratings than most rating aggregators that are even a little bit more nuanced. So any good idea is likely to make things worse. (Though Steam doesn't have a zero-equilibrium problem, because the rating is the percentage of approve votes.)
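
A rough sketch of the tagged-vote mechanism Nesov describes (type and function names are mine; nothing like this is implemented — it just shows how a zero-karma tug of war can decompose into per-tag consensus):

```typescript
// Illustrative sketch: every vote optionally carries one sense tag, tags sum
// into the single headline karma score, and the per-tag breakdown is only
// visible on click-through.
type Vote = { voter: string; value: 1 | -1; tag?: string };

const DEFAULT_TAG = "Boostworthy"; // the generic tag a plain, untagged vote carries

function karma(votes: Vote[]): number {
  // The headline score ignores tags entirely, as karma does today.
  return votes.reduce((sum, v) => sum + v.value, 0);
}

function byTag(votes: Vote[]): Map<string, number> {
  // The per-tag breakdown can reveal a consensus hiding inside an apparent
  // zero-equilibrium tug of war.
  const totals = new Map<string, number>();
  for (const v of votes) {
    const tag = v.tag ?? DEFAULT_TAG;
    totals.set(tag, (totals.get(tag) ?? 0) + v.value);
  }
  return totals;
}

// Example: a comment stuck at 0 karma that actually has consensus in two senses.
const votes: Vote[] = [
  { voter: "a", value: 1, tag: "Relevant" },
  { voter: "b", value: 1, tag: "Relevant" },
  { voter: "c", value: -1, tag: "Rigorous" },
  { voter: "d", value: -1, tag: "Rigorous" },
];
console.log(karma(votes));  // 0
console.log(byTag(votes));  // Map(2) { "Relevant" => 2, "Rigorous" => -2 }
```
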
Duncan_Sabien (5 points, 20d): (As the author of the OP, I think my position is also consistent with "quite good, and mildly positively surprised." I think the difference is counting up vs. counting down? I'm curious whether you think it's quite good when counting down from your personal vision of the ideal LessWrong.)
Speaking of Stag Hunts

I like this idea. It has a lot of nice attributes. 

I wrote some stuff in the past about all the different things a voting/karma system on LW is trying to produce, with some thoughts on proposals that feel a bit similar to this: https://www.lesswrong.com/posts/EQJfdqSaMcJyR5k73/habryka-s-shortform-feed?commentId=8meuqgifXhksp42sg

Raelifin (6 points, 20d): Nice. Thank you. How would you feel about me writing a top-level post reconsidering alternative systems and brainstorming/discussing solutions to the problems you raised?
Substack Ho?

We do offer an RSS feed with all posts, but no email option for all posts, sorry.

Substack Ho?

We do! Just subscribe to Zvi's posts via the "Subscribe" button on their profile, and then set the "Posts by users I am subscribed to" option in the notification settings to "email" and "immediately":

[screenshot of the notification settings, omitted]

benjaminikuta (3 points, 21d): Is there a way to get all LW posts in my inbox? That's what I currently use Blogtrottr for, but I thought LessWrong only offered that for curated posts.
Speaking of Stag Hunts

That thread (the subset of it that was happening on LW 1.0) was one of the things that convinced me to build LW 2.0 (I was already working on it, but wasn't sure how much I would commit to it). Because that thread was really quite bad, and a lot of it had to do with deep site architecture things that were hard to change.

Zoe Curzi's Experience with Leverage Research

I did indeed misunderstand that! I didn't downvote, but my misunderstanding did cause me to not upvote. 

Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]

Yes, sorry! We realized the opportunity to run this over the winter only a few weeks ago, and have been moving as fast as we can since then. There is a decent chance there will be future cohorts, so if anyone is in this situation, they could still apply for future cohorts.

philip_b (2 points, 24d): Sure, I understand, no hard feelings. Do you mean that if one would like to go to such a bootcamp but thinks they won't be able to get a visa in time, they should apply now to get invited to a future cohort? Or should they apply later when you actually announce those future cohorts?
[Book Review] "The Bell Curve" by Charles Murray

That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.


I think this number can be relatively straightforwardly taken at face value. Elizabeth's post was at the top of HN for a few hours, so a lot of people saw it. A small city's worth seems about right for the number of people who clicked through and at least skimmed it.

[Book Review] "The Bell Curve" by Charles Murray

I don't think LW gets fewer comments than ACX at all. Indeed, I think LW has more comments than ACX; it's just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day.

[Book Review] "The Bell Curve" by Charles Murray

https://www.similarweb.com/website/astralcodexten.substack.com/?competitors=lesswrong.com currently shows ACX at something like 1.7x of LessWrong's traffic. At some points in the past LessWrong was slightly ahead.

LessWrong is a pretty big website. Here is a random snapshot of top-viewed pages from the last month from Google Analytics:

[Google Analytics screenshot: table of top-viewed pages, omitted]

As you can see from the distribution, it's a long tail of many pages getting a few hundred pageviews each month, which adds up a lot.

hath (3 points, 25d): I'm surprised to see how many people view the Roko's Basilisk tag. Is that a trend over more than just the last month?
AllAmericanBreakfast (7 points, 25d): That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city's worth of people read Elizabeth's latest post. But I assume that these numbers can't be taken at face value. It's very hard for me to square the idea that these websites get roughly comparable readership with my observation that ACX routinely attracts hundreds of comments on every post. LW gets 1-2 orders of magnitude fewer comments than ACX. So while I'm updating in favor of the site's readership being quite a bit bigger than I'd thought, I still think there's some disconnect between what I'm thinking of by "readership" and the magnitude of "readership" that is coming across in these stats.
[Book Review] "The Bell Curve" by Charles Murray

I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX)

LW's readership is about the same order of magnitude as SSC's, depending on the mood of the HN and SEO gods.

Not that I don't believe you, but that's also really hard for me to wrap my head around. Can you put numbers on that claim? I'm not sure if ACX has a much smaller readership than I'd imagined, or if LW has a much bigger one, but either way I'd like to know!

Preface to the sequence on iterated amplification

This is a very good point. IIRC Paul is working on some new blog posts that summarize his more up-to-date approach, though I don't know when they'll be done. I will ask Paul when I next run into him about what he thinks might be the best way to update the sequence.

Zoe Curzi's Experience with Leverage Research

After discussing the matter with some other (non-Leverage) EAs, we've decided to wire $15,000 to Zoe Curzi (within 35 days).

A number of ex-Leveragers seem to be worried about suffering (financial, reputational, etc.) harm if they come forward with information that makes Leverage look bad (and some also seem worried about suffering harm if they come forward with information that makes Leverage look good). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers, so that people in Zoe's reference class are more inc... (read more)

A few quick thoughts:

1) This seems great, and I'm impressed by the agency and speed.

2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine similar for accusations/whistleblowing for other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.

In light of this, if more money were availa... (read more)

Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's to offset any incentives against sharing risky honest accounts like Zoe's.

I like the careful disambiguation here.

FWIW, I independently proposed something similar to a friend in the Lightcone office last week, with an intention that was related to offsetting harm.  My reasoning:

There's often a problem in difficult "justice" situations, where people have only a single bucket for "make the sufferer feel better" and "address the wrong that was d... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something closer to "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".

"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"

That has the corollary: "We don't expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us."

LW Open Source – Overview of the Codebase

Pretty sure you should just be able to copy the structure of the query from the Chrome network tab and reverse-engineer it that way. IIRC the structure was pretty straightforward, and the response pretty well structured.
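
For what it's worth, here is a minimal sketch of that "copy the request from the network tab and replay it" approach. Every identifying value below (endpoint subdomain, index name, API key) is a placeholder to be copied from your own captured request, not a guaranteed-current value. And if the subdomain is the Algolia application ID, it should be stable across sessions rather than dynamically allocated, though that's worth verifying.

```typescript
// Illustrative only: replaying a search request captured from the Chrome
// network tab. Copy the real endpoint, headers, and body shape from the
// request the site itself makes; everything below is a placeholder.
const ENDPOINT = "https://XXXXXXXXXX-dsn.algolia.net/1/indexes/*/queries";

async function search(query: string): Promise<unknown> {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Algolia-style requests also carry app-id/api-key headers; copy the
      // (public, search-only) values the page sends rather than inventing them.
      "X-Algolia-Application-Id": "XXXXXXXXXX",
      "X-Algolia-API-Key": "<search-only key from the network tab>",
    },
    body: JSON.stringify({
      requests: [
        {
          indexName: "<index name from the network tab>",
          params: `query=${encodeURIComponent(query)}`,
        },
      ],
    }),
  });
  if (!response.ok) throw new Error(`Search failed: ${response.status}`);
  return response.json();
}
```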

Robert Miles (1 point, 1mo): Ah ok, thanks! My main concern with that is that it goes to "https://z0gr6exqhd-dsn.algolia.net", which feels like it could be a dynamically allocated address that might change under me?
Zoe Curzi's Experience with Leverage Research

Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.

My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.

Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause o... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Yep, I think the situation is closer to what Jeff describes here, though I honestly don't actually know, since people tend to get cagey when the topic comes up.

Zoe Curzi's Experience with Leverage Research

I would be happy to contribute my part of this, with the memory I have. I think I could cover a decent amount of the questions above, though I would also likely get some things wrong, so I wouldn't be a totally dependable observer.

Eli Tyre (8 points, 1mo): Same for me.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I really don't think this is an accurate description of what is going on in people's minds when they are experiencing drug dependencies. I spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went to great lengths trying to tie himself to various masts to stop, and ge... (read more)

NancyLebovitz (2 points, 8d): https://en.wikipedia.org/wiki/Olivier_Ameisen A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it cured compulsive spending when he didn't even realize he had a problem. He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.
Benquo (7 points, 1mo): This seems like some evidence that the principled liberal position is false - specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good. Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

No, Eliezer's comment seems like a straightforward "I am making a non-anonymous upvote" which is indeed a functionality I also sometimes want, since sometimes the identity of the upvoter definitely matters. The comment above seems like it's doing something different, especially in combination with the other comment I linked to.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Outside of "these people probably talked to each other like once every few months" I think there is no major connection between Leverage and the Vassarites that I am aware of.

Thanks.

I mostly assumed this; I suppose in the opposite case someone probably would have already mentioned that. But I prefer to have it confirmed explicitly.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I noticed I had downvoted this comment, and kind of felt bad about it. I think this is a reasonable suggestion to make, but I also think it is a bad suggestion, for a variety of reasons. Generally I prefer voting systems to reward comments I think are good to have been made, and punish comments that seem to detract from the conversation, and despite my disagreement with the proposed policy, I do think this comment overall made things better. So I changed my downvote to an upvote, and am now leaving this comment to explicitly disambiguate.

Rob Bensinger (4 points, 1mo): I noticed the comment was in the negatives and strong-upvoted it because it seemed fine, though I disagree with it. :P I'll leave the strong upvote so as not to mess up others' votes.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I don't think the context in which I heard about this communication was very private. There was a period where Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was part of the things I heard from that time. The above did not strike me as a statement intended to be very private, and my model of Michael has norms that encourage sharing this kind of thing, even if it happens in private communication.

Gunnar_Zarncke (5 points, 1mo): Thank you for the clarification. I think it is valuable to include this context in your comment. I will adjust my comment accordingly.
Prioritization Research for Advancing Wisdom and Intelligence

Fixed it. Looks like it was going to the edit-form version of the post on the EA Forum, which of course nobody but Ozzie has permission to see.

ozziegooen (4 points, 1mo): Ah, thanks!
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Ah, sorry, yes. My being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't manage it.

Linch (3 points, 1mo): Thanks, appreciate the update!
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

The paragraph you cited just seems like the straightforward explanation of where my belief comes from?

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18–0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxi

... (read more)

Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.

If you instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis," then I think this would be a normal misreading from not jumping through enough links.

Put another way, if someone in mid-2020 told me that they had symptomatic covid and was formally diagnosed... (read more)

[MLSN #1]: ICLR Safety Paper Roundup

Thank you! I am glad you are doing this!

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

The UC Berkeley study was the one that I had cached in my mind as generating this number. I will reread it later today to make sure that it's right, but it sure seems like the most relevant reference class, given the same physical location.

Gunnar_Zarncke (7 points, 1mo): I had a look at the situation in Germany and it doesn't look much better. 17% of students are diagnosed with at least one psychiatric disorder. This is based on the health records of all students insured by one of the largest public health insurers in Germany (about ten percent of the population): https://www.barmer.de/blob/144368/08f7b513fdb6f06703c6e9765ee9375f/data/dl-barmer-arztreport-2018.pdf
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Ah, sorry, yeah, I agree my mod notice wasn't specific enough. Most of my mod notice was actually about a mixture of this comment, and this other comment, that felt like it was written by the same generator, but feels more obviously bad to me (and probably to others too). 

Like, the other comment that TAG left on this post felt like it was really trying to just be some kind of social flag that is common on the rest of the internet. Like, it felt like some kind of semi-ironic "Boo, outgroup" comment, and this comment felt like it was a parallel "Yay, in... (read more)

Your Time Might Be More Valuable Than You Think

I do think most people are undervaluing their time, but I currently believe that my time now is much more valuable than my time in 10 years, and I also believe this to be true about your time (like, specifically you, Mark Xu).

I think this is mostly because I think that in worlds where we have any chance of solving the AI Alignment problem, we are currently setting in place a field and network of people who will eventually solve the problem, and the founder effects here seem much larger than the experience effects. As a concrete example, I would ... (read more)

2Samuel Shadrach1mo"his time doesn't seem as high leverage" because he chooses not to work on high leverage stuff or because he is incapable of it? For instance, if he provides advice and resources to 10 new startups (or 10 new exploratory divisions in his company), could that be equivalent to him doing a startup himself? Plus he has a ton of experience now. I don't have a strong opinion on it but maybe his ability to direct fledgling new divisions working on AI or self-driving cars or whatever could be higher impact than building a search engine 20 years, if he commits sufficient time to it. (And maybe he does, for all I know)
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people's desire to be morally good people, in a way they don't endorse (and that plays into various guilt narratives), to get them to give him money and dedicate their lives towards Effective Altruism, and via that technique preventing a substantial fraction of the world's top talent from dedicating themselves to actually important problems, as well as causing them various forms of psychological harm.

ChristianKl (3 points, 1mo): Do you have an idea of when those things were directed at Holden?
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Ah, yeah, my model is that the person had spent a lot of time with MV's clique, though I wasn't super confident they had talked to Michael in particular. I'm not sure whether I would still count this as an effect of Michael's actions; it seems murkier than I made it out to be in my comment.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I think that's not really what the OP said, at least not in the naive way you seem to express here. You might disagree with them, but the OP is very specifically saying that they thought what they experienced was better than what they would have experienced in most other places in the world, or most other paths they could have taken.

I think judging the author to be wrong about their preferences, given their experiences, is not a totally crazy thing to do, given the situation, but your comment seems to somewhat misrepresent the author. To be clear, I do thi... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Mod note: I don't think LessWrong is the right place for this kind of comment. Please don't leave more of these. I mean, you will get downvoted, but we might also ban you from this and similar threads if you do more of that.

It seems worthwhile to give a little more of the "why" here, lest people just walk away with the confusing feeling that there are invisible electric fences that they need to creep and cringe away from.

I'll try to lay out the why, and if I'm wrong or off, hopefully one of the mods or regular users will elaborate.

Some reasons why this type of comment doesn't fit the LW garden:

  • Low information density.  We want readers to be rewarded for each comment that strays across their visual field.
  • Cruxless/opaque/nonspecific.  While it's quite valid to leave a
... (read more)
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause).

I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)

I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, the linked study, nor the linked meta-analysis in the linked study of your link says this. Instead, the abstract of the linked^3 meta-analysis says:

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate

... (read more)
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Huh, some chance I am just wrong here, but to me it didn't feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn't feel to me like it's very core to the community.

Datapoint: I understand neither Kegan levels nor land value taxes.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional beliefs that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn't hold the unconventional belief, which does frequently make the conversation kind of harder, and I still don't think is evidence of something going terribly wrong)

jessicata (6 points, 1mo): Oh, something that might not have been clear is that talking with other people Michael knows made it clear that Michael was less insular than MIRI/CFAR people (who would have been less able to talk with such a diverse group of people, afaict), not just that he was less insular than people in cults.