
It has now been just shy of three years since the first Alignment Newsletter was published. I figure it’s time for an update to the one-year retrospective, and another very short survey. Please take the survey! The mandatory questions take just 2 minutes!

This retrospective is a lot less interesting than the last one, because not that much has changed. You can tell because I don’t have a summary or key takeaways, and instead I’m going to launch straight into the nitty-gritty details.

Newsletter stats

We now have 2443 subscribers, and average around a 39% open rate and a 4% click-through rate (though the click rate has higher variance). In the one-year retrospective, I reported 889 subscribers, just over a 50% open rate, and a 10-15% click-through rate. This growth is all organic; there hasn’t been any push for publicity.

I’m not too worried about the decreases in open rate and click rate:

  1. I expect natural attrition over time as people’s interests change. Many of these people probably just stop opening emails, or filter them, or open and immediately close the emails.
  2. In absolute terms, the number of opens has gone way up (~450 to ~950); see the quick check after this list.
  3. My summaries have gotten more pedagogic (see below), so people might feel less need to click through to the original.
  4. I now summarize fewer items, so there are fewer chances to “catch people’s interest”.
  5. We haven’t done any publicity, which I would guess is a common way to boost open rates (since newer subscribers are probably more likely to open emails).
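For the curious, here’s a quick sanity check of point 2, using the subscriber counts and open rates quoted above. This is a minimal sketch; the stats are rounded, so the outputs are approximate:

```python
# Back-of-the-envelope check: opens per issue, then vs. now.
# Uses the rounded stats quoted above, so the results are approximate.

subscribers_2018, open_rate_2018 = 889, 0.50
subscribers_2021, open_rate_2021 = 2443, 0.39

opens_2018 = subscribers_2018 * open_rate_2018
opens_2021 = subscribers_2021 * open_rate_2021

print(f"Opens then: ~{opens_2018:.0f}, opens now: ~{opens_2021:.0f}")
# Opens then: ~445, opens now: ~953
```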

There was this weird thing where at the beginning of the pandemic, open rates would alternate between < 20% and > 40%, but would never be in between. I have no idea what was going on there.

I was also a bit confused about why we were only at #145 instead of #157, given that this is a weekly publication -- I knew I had skipped a couple of weeks, but twelve seemed like too many. It turns out the newsletter was published every fortnight during the summer of 2019. I have no memory of this, but it looks like I did take steps to fix it -- in the call for contributors, I said:

I’m not currently able to get a (normal length) newsletter out every week; you’d likely be causally responsible for getting back to weekly newsletters.

(This was probably true, since I did get back to weekly newsletters after getting new contributors!)
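For the curious, here’s the back-of-the-envelope arithmetic behind that twelve-issue gap. This is a minimal sketch; the length of the fortnightly stretch is my guess, picked to be consistent with the issue count:

```python
# Rough accounting for being at issue #145 instead of #157.
# All numbers are approximate; the point is just that a fortnightly
# stretch plus a couple of skipped weeks plausibly explains the gap.

weeks_elapsed = 157        # just shy of three years of weekly issues
skipped_weeks = 2          # the couple of weeks I knowingly skipped
fortnightly_weeks = 20     # guessed length of the every-other-week stretch

expected_issue = weeks_elapsed - skipped_weeks - fortnightly_weeks // 2
print(expected_issue)  # 145
```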

Changes

My overall sense is that the newsletter has been pretty stable and on some absolute scale has not changed much since the last retrospective two years ago.

Pedagogy

There are roughly two kinds of summaries:

  1. Advertisements: These summaries state what the problem is and what the results are, without really explaining what the authors did to get those results. The primary purpose of these is to inform readers whether or not they should read the full paper.
  2. Explanations: These summaries also explain the “key insights” within the article that allow the authors to get their results. The primary purpose is to let readers gain the insights of the article without having to read it; as such, there is more of a focus on pedagogy (explaining jargon, giving examples, etc.)

Over time I believe I’ve moved towards fewer advertisements and more explanations. Thus, the average length of a summary has probably gotten longer. (However, there are probably fewer summaries, so the total newsletter length is probably similar.)

Long-form content. Some topics are sufficiently detailed and important that I dedicate a full newsletter to them (e.g. Cartesian frames, bio anchors, safety by default, assistance games). This is basically the extreme version of an explanation. I’ve also done a lot more of these over time.

More selection, less overview

Two years ago, I worried that there would be too much content to summarize. Yet, somehow my summaries have become longer, not shorter. What gives?

Basically, I’ve become more opinionated about what is and isn’t important for AI alignment researchers to know, and I’ve been more selective about which papers to summarize as a result. This effectively means that I’m selecting articles in part based on how much they agree with my understanding of AI alignment.

As a result, despite the general increase in alignment-related content, I now summarize fewer articles per newsletter than I did two years ago. The articles I do summarize are selected for being interesting under my view of AI alignment. Other researchers would likely pick quite a different set, especially when choosing which academic articles to include.

I think this is mostly because my views about alignment stabilized shortly after the one-year retrospective. At the time of that retrospective, I had been working in AI safety for about 1.5 years, and I probably still felt like everything was confusing and my views were changing wildly every couple of months. Now, though, it feels like I have a relatively firm framework, and I’m investigating details within that framework. For example, I still feel pretty good about the things I said in this conversation from August 2019, though I might frame them differently now and could probably give better arguments for them. In contrast, if you’d had a similar conversation with me in August 2018, I doubt I would have endorsed it in August 2019.

This does mean that if you want to have an overview of what the field of AI alignment is up to, the newsletter is not as good a source as it used to be. (I still think it’s pretty good even for that purpose, though.)

Team

Georg Arndt (FHI) and Sawyer Bernath (BERI) are helping with the publishing and organization of the newsletter, freeing me to focus on content creation. After a call for contributors, I took on six additional contributors, bringing the total to nine people (not including me) who could in theory contribute to the newsletter. However, at this point over half don’t write summaries anymore, and the rest write them only occasionally, so I’m still writing most of the content. I think this is fine, given the shift toward a newsletter about my views and the decrease in the amount of content covered.

To be clear, I think the additional contributors worked out great and had the effect I was hoping for. We got back to a weekly newsletter schedule, I put less time into the newsletter, it was even easier to train the contributors than I thought, and most new contributors wrote a fair number of good summaries before effectively leaving. I had expected that, to continue this long term, I’d have to periodically find new contributors; the fact that I haven’t should be seen as a decision not to continue the program despite its success, because I ended up evolving toward a different style of newsletter.

(I’m still pretty happy to have additional contributors, as long as they can commit to ~20 summaries upfront. If you’d be interested, you can send me an email at rohinmshah@gmail.com.)

Appearance

In March 2020, the newsletter got an updated design that made it look much less like a giant wall of text.

Impact

I was pretty uncertain about the impact of the newsletter in the last retrospective. That hasn’t changed. I still endorse the discussion in that section.

Advice for readers

Since I’m making this “meta” post anyway, I figured I might as well take some time to tell readers how I think they should interact with the newsletter.

Don’t treat it as an evaluation of people’s work. As I mentioned above, I’m selecting articles based in part on how well they fit into my understanding of AI alignment. This is a poor method for evaluating other people’s work. Even if you defer to me completely and ignore everyone else’s views, it still would not be a good method, because I am often mistaken about how important the work is even on my own understanding of AI alignment. Almost always, my opinion of a paper I initially feel meh about goes up after I talk to the authors about the work.

I also select articles based on how useful I think it would be for other AI alignment researchers to learn about the ideas presented. (This is especially true for the choice of what to highlight.) This can be very different from how useful the ideas are to the world (which is what I’d want out of an evaluation): incremental progress on some known subproblem like learning from human feedback could be very important, but still not worth telling other AI alignment researchers about.

Consider reading just the highlights section. If you’re very busy, or you find yourself just not reading the newsletter each week because it’s too long, I recommend just reading the highlights section. I select pretty strongly for “does this seem good for researchers to know?” when choosing the highlight(s).

If you’re busy, consider using the spreadsheet database as your primary mode of interaction. Specifically, rather than reading the newsletter each week, you could instead keep the database open, and whenever you see a vaguely interesting new paper, you can check (via Ctrl+F) whether it has already been summarized, and if so you can read that summary. (Even I use the database in this way, though I usually know whether or not I’ve already summarized the paper before, rather than having to check.)
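If you’d rather script that workflow, here is a minimal sketch, assuming you’ve exported the database to a CSV file. The filename and the "Title" column name are hypothetical, so adjust them to match the real export:

```python
import csv

# Hypothetical export of the spreadsheet database; adjust the path and
# column name to match the actual file.
DATABASE_CSV = "alignment_newsletter_database.csv"

def find_summaries(query: str, path: str = DATABASE_CSV) -> list[dict]:
    """Return rows whose title contains the query (case-insensitive)."""
    query = query.lower()
    with open(path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if query in row.get("Title", "").lower()]

# Example: check whether a paper has already been summarized.
for row in find_summaries("reward modeling"):
    print(row["Title"])
```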

Also, there may be a nicer UI to interact with this database in the near future :)

Survey

Take it!
