CEO at Redwood Research.
AI safety is a highly collaborative field; almost all the points I make were either explained to me by someone else or developed in conversation with other people. I'm saying this here because it would feel repetitive to say "these ideas were developed in collaboration with various people" in all my comments, but I want it on the record that the ideas I present were almost entirely not developed by me in isolation.
Please contact me via email (bshlegeris@gmail.com) instead of messaging me on LessWrong.
If we are ever arguing on LessWrong and you feel like it's kind of heated and would go better if we just talked about it verbally, please feel free to contact me and I'll probably be willing to call to discuss briefly.
I don't think I quite understand the distinction you are trying to draw between "founders" and (not a literal quote) "people who do object-level work and make intellectual contributions by writing".
If you're the CEO of a company, it's your job to understand the space your company works in, develop extremely good takes about where the field is going and what your company should do, and use your expertise in leveraged ways to make the company go better.
In the context of AI safety, the key product that organizations are trying to produce is often itself research, and a key input is hiring talented people. So I think it makes a lot of sense that e.g. I spend a lot of my time thinking about the research that's happening at my org.
Analogously, I don't think it should be considered surprising or foolish if Elon Musk knows a lot about rockets and spends a lot of his time talking to engineers about rockets.
I do think that I am personally more motivated to do novel intellectual work than would be optimal for Redwood's interests.
I also think that the status gradients and social pressures inside the AI safety community have a variety of distorting effects on my motivations that probably cause me to take worse actions.
I think you personally feel the status gradient problems more than other AI safety executives do, because a lot of AI safety people undervalue multiplier efforts. And this has meant that working at MATS is less prestigious than I'd like, and that MATS therefore has more trouble hiring.
I’ve been loving this sequence; please keep these coming.
Do you have data on the distribution of lengths of the posts people are publishing?
Joe Bankman, better known for other reasons, had this idea:
Joe Bankman decided to make easy tax filing his personal mission, and he spent $30,000 to hire a lobbyist to counter lobbying by Intuit, the maker of TurboTax software.
“I can’t cure cancer,” says Bankman. “But I can help simplify tax filing.”
For example, loosening voter ID laws.
My understanding is that voter ID laws are probably net helpful for Democrats at this point.
Notably, I think I disagree with Eliezer on what his moat is! I think he thinks that he's much better at coming to correct conclusions or making substantial intellectual progress than I think he is.
This doesn't feel that surprising to me. I guess my model is that different skills are correlated, and then if you pick someone who's extremely capable at a couple of skills, it's not that surprising if no one Pareto dominates them.
I agree that my point isn't really responding to whether it's surprising that there's no one who Pareto dominates him.
(Hopefully it's not rude to state my personal impression of Eliezer as a thinker. I think he's enough of a public figure that it's acceptable for me to comment on it. I'd like to note that I have benefited in many important ways from Eliezer's writing and ideas, and I've generally enjoyed interacting with him in person, and I'm sad that as a result of some of our disagreements our interactions are tense.)
Yeah, I agree that there's no one who Pareto dominates Eliezer at his top four most exceptional traits. (Which I guess I'd say are: taking important weird ideas seriously, writing compelling/moving/insightful fiction (for a certain audience), writing compelling/evocative/inspiring stuff about how humans should relate to rationality (for a certain audience), being broadly knowledgeable and having clever insights about many different fields.)
(I don't think that he's particularly good at thinking about AI; at the very least he is nowhere near as exceptional as he is at those other things.)
I'm not trying to disagree with you. I'm just going to ruminate on this a little, in an unstructured way:
I know a reasonable number of exceptional people, and I am involved in a bunch of conversations about what fairly special people should do. In my experience, when you're considering two people who might try to achieve a particular goal, it's usually the case that each has some big advantages over the other in terms of personal capabilities. So they naturally approach the goal fairly differently. We can think about this in the case where you're hiring a CEO for a project, or when speculating about what will happen when companies headed by different CEOs compete.
For example, consider the differences between Sam Altman and Dario Amodei (I don't know either that well, nor do I understand the internal workings of OpenAI/Anthropic, so I'm sort of speculating here):
Both of them have done pretty well for themselves in similar roles.
As a CEO, it does feel pretty interesting how non-interchangeable most people are. And it's interesting how in a lot of cases, it's possible to compensate for one weakness with a strength that seems almost unrelated.
If Eliezer had never been around, my guess is that the situation around AI safety would be somewhat but not incredibly different (though probably overall substantially worse):
Maybe a relevant underlying belief of mine is that Eliezer is very good at coming up with terms for things and articulating why something is important, and he also had the important strength of realizing how important AI was before most other people did. But I don't think his thinking about AI is actually very good on the merits. Most of the ideas he's spread were originally substantially proposed by other people; his contribution was IMO mostly his reframings and popularizations. And I don't think his most original ideas actually look that good. (See here for an AI summary.)
I think Eliezer underestimates other people because he evaluates them substantially based on how much they agree with him, and, as a consequence of him having a variety of dumb takes, smart people usually disagree with him about a bunch of stuff.
For your point 2, are you thinking about founders in organizations that have theories of change other than doing research? Or are you thinking of founders at research orgs?