I've lately been thinking I should put a bit more priority on keeping up with alignment-relevant progress outside of LessWrong/Alignment Forum.

I'm curious whether people have recommendations that stand out as reliably valuable, and/or have tips for finding "the good stuff" in places where the signal-to-noise ratio isn't great. (Seems fine to apply this to LW/AF as well.)

Some places I've looked into somewhat (though haven't built major habits around so far) include:

  • Blogs of OpenAI/DeepMind/Anthropic
  • reddit.com/r/ControlProblem 
  • Stampy Discord 
  • EleutherAI Discord

I generally struggle with figuring out how much to keep up with – it seems like there's more than one full-time job's worth of material to follow, and it's potentially over-anchoring to focus on "the stuff people have already worked on" as opposed to "stuff that hasn't been worked on yet."

I'm personally coming at this from a lens of "understand the field well enough to think about how to make useful infrastructural advances", but I'm interested in hearing about the various ways people keep up with things and how they get value from doing so.

maxnadeau:

"Follow the right people on twitter" is probably the best option. People will often post twitter threads explaining new papers they put out. There's also stuff like:

Can you and others please reply with lists of people you find notable for their high signal-to-noise ratio, especially given Twitter's sharp decline in quality lately?

Here are some Twitter accounts I've found useful to follow (in no particular order): Quintin Pope, Janus @repligate, Neel Nanda, Chris Olah, Jack Clark, Yo Shavit @yonashav, Oliver Habryka, Eliezer Yudkowsky, alex lawsen, David Krueger, Stella Rose Biderman, Michael Nielsen, Ajeya Cotra, Joshua Achiam, Séb Krier, Ian Hogarth, Alex Turner, Nora Belrose, Dan Hendrycks, Daniel Paleka, Lauro Langosco, Epoch AI Research, davidad, Zvi Mowshowitz, Rob Miles

For tracking ML theory progress I like @TheGregYang, @typedfemale, @SebastienBubeck, @deepcohen, @SuryaGanguli.

Podcasts are another possibility, with less of a time trade-off.

I listen to these podcasts which often have content related to AI alignment or AI risk. Any other suggestions?

Meiren:

https://theinsideview.ai/ is also quite good.

Other podcasts that have at least some relevant episodes: Hear This Idea, Towards Data Science, The Lunar Society, The Inside View, Machine Learning Street Talk

Here are some resources I use to keep track of technical research that might be alignment-relevant:

  • Podcasts: Machine Learning Street Talk, The Robot Brains Podcast
  • Substacks: Davis Summarizes Papers, AK's Substack

How I gain value: these resources help me notice where my understanding breaks down, i.e. what I might want to study, and they get thought-provoking research on my radar.
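
If you wanted to semi-automate this kind of keeping-up, here's a minimal sketch (not my actual setup) that polls a few RSS feeds and surfaces anything posted in the past week. Substack publications generally expose a feed at https://<name>.substack.com/feed; the URLs below are placeholders rather than the real feeds for the substacks above, and the sketch assumes the third-party feedparser package.

```python
"""Minimal sketch: poll a handful of RSS feeds and list posts from the past week.

The feed URLs are placeholders -- swap in the publications you actually follow.
"""
import calendar
import time

import feedparser  # third-party: pip install feedparser

# Placeholder feeds, not the actual substacks mentioned above.
FEEDS = [
    "https://example-one.substack.com/feed",
    "https://example-two.substack.com/feed",
]

ONE_WEEK = 7 * 24 * 60 * 60  # seconds


def recent_posts(feed_url, max_age_seconds=ONE_WEEK):
    """Yield (title, link) for entries newer than max_age_seconds."""
    parsed = feedparser.parse(feed_url)
    now = time.time()
    for entry in parsed.entries:
        published = entry.get("published_parsed")
        if published is None:
            continue  # some feeds omit timestamps; skip those entries
        age = now - calendar.timegm(published)  # published_parsed is UTC
        if age <= max_age_seconds:
            yield entry.get("title", "(untitled)"), entry.get("link", "")


if __name__ == "__main__":
    for url in FEEDS:
        for title, link in recent_posts(url):
            print(f"{title}\n  {link}")
```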

I haven't kept up with it, so I can't really vouch for it, but Rohin's alignment newsletter should also be on your radar: https://rohinshah.com/alignment-newsletter/

[This comment is no longer endorsed by its author]

This seems to have stopped in July 2022.

Whoops - thanks!

Writer:

This is probably not the most efficient way to keep up with new stuff, but aisafety.info is shaping up to be a good repository of alignment concepts.

Some people post about AI safety on the EA Forum without crossposting here.