CEO at Redwood Research.
AI safety is a highly collaborative field--almost all the points I make were either explained to me by someone else, or developed in conversation with other people. I'm saying this here because it would feel repetitive to say "these ideas were developed in collaboration with various people" in all my comments, but I want to have it on the record that the ideas I present were almost entirely not developed by me in isolation.
Please contact me via email (bshlegeris@gmail.com) instead of messaging me on LessWrong.
If we are ever arguing on LessWrong and you feel like it's kind of heated and would go better if we just talked about it verbally, please feel free to contact me and I'll probably be willing to call to discuss briefly.
This is a great list!
Here's some stuff that isn't in your list that I think comes up often enough that aspiring ML researchers should eventually know it (and most of this is indeed universally known). Everything in this comment is something that I've used multiple times in the last month.
And some stuff I'm personally very glad to know:
I think it's worth drilling your halfish-power-of-ten times tables, by which I mean memorizing the products of numbers like 1, 3, 10, 30, 100, 300, etc., while pretending that 3x3=10.
For example, 30*30=1k, 10k times 300k is 3B, etc.
I spent an hour drilling these on a plane a few years ago and am glad I did.
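In case the scheme is unclear, here's a minimal Python sketch of the idea (my own illustration; the function names are just for this example). It represents each number by its nearest "halfish power of ten" and multiplies under the 3x3=10 convention:

```python
import math

# Represent numbers by an integer index k: even k means 10**(k//2),
# odd k means 3 * 10**(k//2), pretending that 3 * 3 = 10.

def to_index(x):
    """Round a positive number to the nearest halfish-power-of-ten index."""
    return round(2 * math.log10(x))

def from_index(k):
    """Convert an index back to its halfish-power-of-ten value."""
    return (3 if k % 2 else 1) * 10 ** (k // 2)

def approx_multiply(a, b):
    """Multiply two numbers under the 3x3=10 convention."""
    return from_index(to_index(a) + to_index(b))

# The examples above:
assert approx_multiply(30, 30) == 1_000                    # 30 * 30 ≈ 1k
assert approx_multiply(10_000, 300_000) == 3_000_000_000   # 10k * 300k ≈ 3B
```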
(Note that this message and its parent are talking about different things: the parent talked about whether the current value is negative, and the child talked about whether the total value has been negative.)
For your point 2, are you thinking about founders in organizations that have theories of change other than doing research? Or are you thinking of founders at research orgs?
I don't think I quite understand the distinction you are trying to draw between "founders" and (not a literal quote) "people who do object-level work and make intellectual contributions by writing".
If you're the CEO of a company, it's your job to understand the space your company works in, develop extremely good takes about where the field is going and what your company should do, and use your expertise in leveraged ways to make the company go better.
In the context of AI safety, the key product that organizations are trying to produce is often itself research, and a key input is hiring talented people. So I think it makes a lot of sense that e.g. I spend a lot of my time thinking about the research that's happening at my org.
Analogously, I don't think it should be considered surprising or foolish if Elon Musk knows a lot about rockets and spends a lot of his time talking to engineers about rockets.
I do think that I am personally more motivated to do novel intellectual work than would be optimal for Redwood's interests.
I also think that the status gradients and social pressures inside the AI safety community have a variety of distorting effects on my motivations that probably cause me to take worse actions.
I think you personally feel the status gradient problems more than other AI safety executives do because a lot of AI safety people undervalue multiplier efforts. This has meant that working at MATS is less prestigious than I'd like, and that MATS therefore has more trouble hiring.
I’ve been loving this sequence; please keep these coming.
Do you have data on the distribution of lengths of the posts people are publishing?
Joe Bankman, better known for other reasons, had this idea:
Joe Bankman decided to make easy tax filing his personal mission, and he spent $30,000 to hire a lobbyist to counter lobbying by Intuit, the maker of TurboTax software.
“I can’t cure cancer,” says Bankman. “But I can help simplify tax filing.”
For example, loosening voter ID laws.
My understanding is that voter ID laws are probably net helpful for Democrats at this point.
Sure; I think extra speed from practicing it (and e.g. more instantly knowing that 100M is 1e8) is worth it.