Larks

Comments

I would typically aim for mid-December, in time for the American charitable giving season.

After having written an annual review of AI safety organisations for six years, I intend to stop this year. I'm sharing this in case someone else wants to take it on in my stead.

Reasons

  • It is very time-consuming and I am busy.
  • I have a lot of conflicts of interest now.
  • The space is much better funded by large donors than when I started. As a small donor, it seems like you either donate to:
    • A large org that OP/FTX/etc. support, in which case funging is ~ total and you can probably just support any of them.
    • A large org that OP/FTX/etc. reject, in which case there is a high chance you are wrong.
    • A small org OP/FTX/etc. haven't heard of, in which case I probably can't help you either.
  • Part of my motivation was to ensure I stayed involved in the community, but this is no longer a concern.

Hopefully it was helpful to people over the years. If you have any questions feel free to reach out.


Alignment research: 30

Could you share some breakdown for what these people work on? Does this include things like the 'anti-bias' prompt engineering?

I would expect that to be the case for staff who truly support faculty. But many of them seem to be there to support students directly, rather than via faculty. The number of student mental health coordinators (and so on) you need doesn't scale with the number of faculty you have. The largest increase in this category is 'student services', which certainly seems to be of this nature.

Thanks very much for writing this very diligent analysis.

I think you do a good job of analyzing the student/faculty ratio, but unless I have misread, this seems like only about half the answer. 'Support' expenses rose by even more than 'Instruction' expenses, and the former category seems less linked to the diversity of courses offered than to things like the proliferation of Deans, student welfare initiatives, fancy buildings, etc.

Is your argument about personnel overlap that one could do some sort of mixed-effects regression, with location as the primary independent variable and controls for individual productivity? If so, I'm somewhat skeptical about the tractability: the sample size is not that big, the data seem messy, and I'm not sure it would necessarily capture the fundamental thing we care about. I'd be interested in the results if you wanted to give it a go, though!
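For concreteness, here is a minimal sketch (in Python, using statsmodels, on toy data) of the kind of mixed-effects regression I have in mind. The column names ('output', 'bay_area', 'researcher') and the simulated effect sizes are purely hypothetical; a real analysis would need actual productivity data and controls.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data: repeated productivity observations per researcher,
# where each researcher belongs to an org that is or isn't in the Bay Area.
rng = np.random.default_rng(0)
n_researchers, n_obs = 40, 200
location = rng.integers(0, 2, size=n_researchers)             # 1 = Bay Area (assumed label)
researcher = rng.integers(0, n_researchers, size=n_obs)
researcher_skill = rng.normal(scale=0.5, size=n_researchers)  # individual productivity

df = pd.DataFrame({"researcher": researcher, "bay_area": location[researcher]})
df["output"] = (1.0                        # baseline
                + 0.3 * df["bay_area"]     # simulated location effect
                + researcher_skill[researcher]
                + rng.normal(size=n_obs))  # observation noise

# Fixed effect for location; a random intercept per researcher absorbs
# individual productivity, so the bay_area coefficient is the location effect.
model = smf.mixedlm("output ~ bay_area", df, groups=df["researcher"])
print(model.fit().summary())
```

The random intercept per researcher is what 'controls for individual productivity' would amount to in this framing; rerunning with fewer groups shows how quickly the standard error on the location coefficient grows, which is the tractability worry above.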

More importantly, I'm not sure this analysis would be that useful. Geography-based priors only really seem useful for factors we can't directly observe; for an organization like CHAI, our direct observations will almost entirely screen off this prior. The prior only really matters for factors where direct measurement is difficult, and hence where we can't update away from the prior; but for those same factors we can't run the regression. (Though I guess we could run the regression on known firms/researchers and extrapolate to new, unknown orgs/individuals.)

The way this plays out here is that we've already spent the vast majority of the article examining the research productivity of the organizations; geography-based priors only matter insofar as you think they proxy for something else that is not captured by this.

As befits this being a somewhat secondary factor, it's worth noting that I think (though I haven't explicitly checked) I have in the past supported Bay Area organisations more than non-Bay-Area ones.
