Summary

One way to become a better researcher is to analyze what works for existing researchers. This post lists attributes that many successful professors share, specifically:

  • Context switching 
  • “Hedgehog”-y (= committed to their specific field or point of view) 
  • Networking (both breadth and depth) 
  • Selling their work (to various audiences)
  • Switching between high-level and detail-oriented perspectives
  • Allocating their time to what’s important 

At the end, I also share a few thoughts on why AI safety researchers might want to adopt some of these attributes.

This post is based on anecdotal evidence from top STEM universities in the US, and may not generalize beyond that setting. 

Professors are incredibly good at context switching 

Many professors have schedules which are completely packed: 

  • 9am faculty meeting
  • 10am teaching
  • 11:30am research meeting with student
  • 12pm lunch seminar
  • 1pm research meeting with colleague on a completely different topic
  • 1:30pm research meeting with visitor on yet another different topic
  • ...

Professors are able to successfully keep up with all these diverse obligations in part because they are skilled at context switching. (Though, some professors find it difficult to accomplish focused work on days like these, and report reserving blocks of time on e.g. weekends for focused work only.) 

Professors are hedgehog-y

This refers to the hedgehog/fox dichotomy: hedgehogs “view the world through the lens of a single defining idea” and foxes “draw on a wide variety of experiences”. Academics vary along this spectrum, but compared to other researchers (e.g. industry researchers) or rationalists, they skew much more hedgehog-y. 

Here I interpret hedgehoggyness in a broad sense. Some examples:

  • “I apply X to Y”: professor cares about any projects at the intersection of X and Y. These can be quite broad, for example ML for social good.
  • “My X helps solve/explain Y”: professor has an expertise in X, and thinks it is a good way to view Y. For example: using tools from statistical physics to explain neural networks.
  • “I like X”: professor likes anything having to do with X, where X is one or more relatively narrow subfields.

Reasons why hedgehoggyness might evolve in successful professors:  

  • If you are narrowly focused on one particular point of view, it’s easy to check whether something is relevant, and it’s tractable to absorb all relevant information. 
  • If you have a well-defined perspective, it’s easier to come up with new research questions. For example, if your perspective is “I apply X to Y”, then whenever someone comes out with a new X, you can quickly check whether it can apply to Y. Or, whenever someone works on a new Y, you can check whether X does it better. 

Advantages and disadvantages of being hedgehog-y (some quotes from an interview with Jacob Steinhardt, an AI safety researcher who is also a professor):

  • “First of all, I think foxes are just generally more right about things. Do you want to have accurate beliefs? You should just be a fox [...]”
  • “[O]n the other hand, hedgehogs might be more likely to really change how people think about something. [...] One thing I've been thinking about is while you're working on a problem you want to be more of a hedgehog.”

Professors are good at networking

Two important types of networking professors often accomplish are: 

  • Depth networking: professors personally know many of the best researchers in their field. “Oh, X? Y has done work in X, I should talk to them.” 
  • Breadth networking: professors are acquainted with many other professors at their university, even those in mostly unrelated fields (e.g. CS professor knowing law professor). Whenever a professor needs to navigate an unfamiliar literature, they can say: “Oh, X? I can ask Y about this…”

Professors are good at selling their work

"Selling one's work" can take many forms: writing a paper, giving a talk, writing a grant proposal, giving an elevator pitch to a non-expert, and so on. Successful professors are good at this. 

Some elaborations: 

  • On writing a paper: this blog post by Jacob Steinhardt gives valuable advice, which successful professors have internalized. Some salient quotes:
    • (On writing an abstract) "The first sentence / phrase should be something that all readers will agree with. The second should be something that many readers would find surprising, or wouldn’t have thought about before; but it should follow from (or at least be supported by) the first sentence. The general idea is that you need to start by warming the reader up and putting them in the right context, before they can appreciate your brilliant insight."
    • (On describing the importance of your work) "Don’t beat around the bush; if the point is “A, therefore B” (where B is some good fact about your work), then say that, rather than being humble and just pointing out A."
  • On giving a talk: there are countless guides online, and they all give roughly the same good advice, which, again, successful professors have internalized. Some common points: 
    • Know your audience's background. 
    • Explain clearly the important ideas, without getting bogged down in details (both easy details, which everybody knows, and hard details, which nobody will remember). 
    • Prepare high-quality slides. 

Professors easily switch between high-level and detail-oriented perspectives

Many people primarily track either the high-level or the detail-oriented perspective in their head. Professors seem especially good at doing both high-level and detail-oriented thinking simultaneously. For example, if a professor attends a talk, they are likely doing many of the following: 

  • (detail-oriented) Looking at each graph or equation and sanity checking it 
  • (detail-oriented) Asking about the relevance of technical boilerplate 
  • (high-level) Thinking, “what story does this research tell?” 
  • (high-level) Thinking, “what are the limitations of this approach?” 
  • (high-level) Thinking, “how does this research fit in with the literature I am familiar with?”

Professors allocate their time to what’s important

Successful professors seem to both have no time and plenty of time. Their schedules are completely full, but also, if something important comes up, they make time for that. This is because professors have the freedom to allocate their time to what’s important. Some examples:

Prioritizing high-value research

  • Most professors have many more research ideas than they have time. 
  • Thus, professors are good at estimating the value of research ideas and picking the highest-value ones (based on how good they might turn out, what resources they might need, how likely they are to succeed, how long they might take, etc.); a toy sketch of this kind of prioritization follows below.
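
To make this concrete, here is a minimal, purely illustrative sketch (in Python) of ranking ideas by a crude expected-value-per-unit-time score. The fields, numbers, and example ideas are all hypothetical and not from the post; professors of course do this informally, not with a script.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    """A research idea with rough, subjective estimates attached."""
    name: str
    value_if_successful: float  # payoff if it works out, arbitrary units
    p_success: float            # rough probability it works out
    months_needed: float        # estimated time cost

def priority(idea: Idea) -> float:
    """One possible scoring rule: expected value per month of work."""
    return idea.value_if_successful * idea.p_success / idea.months_needed

ideas = [
    Idea("apply my X to a new Y", value_if_successful=10, p_success=0.5, months_needed=6),
    Idea("ambitious moonshot", value_if_successful=100, p_success=0.05, months_needed=24),
    Idea("quick follow-up paper", value_if_successful=3, p_success=0.8, months_needed=2),
]

# Work on the highest-scoring ideas first.
for idea in sorted(ideas, key=priority, reverse=True):
    print(f"{idea.name}: {priority(idea):.2f}")
```

The point is not the particular formula but the habit it gestures at: comparing ideas on value, probability of success, and time cost before committing to one.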

80/20ing teaching

  • Teaching, in particular, is something professors approach in very different ways. 
  • Some notable archetypes, which all share the common theme of budgeting time effectively:
    • Working hard initially to prep a good class, then teaching that same class every year with little to no changes 
    • Teaching a topic you don’t know, in order to give yourself an opportunity to learn it better
    • Teaching a topic close to your research so that little prep is required 
    • Teaching as a way of recruiting students (e.g. offer positions to those who do well in your course)
    • Putting very little effort into your teaching

Understanding important bureaucracy and ignoring everything else 

A common criticism of academia is that bureaucratic bloat in universities significantly reduces researcher productivity. This may be true for some professors, but it seems that the most successful professors have evolved the skill of filtering for the bureaucracy that matters. 

Bureaucracy that matters: 

  • Funding. Professors pay careful attention to the inner workings of the mechanisms that cause them to be paid, for obvious reasons. 
  • Publishing. Professors pay careful attention to journal/conference submissions, understand which papers are appropriate for which venues, understand who is organizing which conferences, and so on. 
  • University politics. For example, pre-tenure professors have to pay careful attention to whether they are on track for tenure.

And then, everything else is bureaucracy for which the professor is on a need-to-know basis: e.g. random extra committees, most things involving undergraduates, the endless deluge of unimportant emails, minute details of teaching, etc.

What non-academic AI safety can learn from professors

The main point of this post is to list various attributes that professors share, and let readers draw their own conclusions. However, I do want to list a few direct considerations for AI safety researchers: 

The value in being more hedgehog-y. Here I do not mean hedgehog in the sense of "Eliezer Yudkowsky has been warning us about AI risk for 200 years." Instead I mean to propose that some junior AI safety researchers might benefit from spending more time making progress on a specific technical approach ("What do X and Y say about AI Safety?"), rather than doing general fox-y thinking. 

The value in networking. For breadth networking: if you know people who are experts in ML engineering, neuroscience, mathematics, and so on, then you can bounce research ideas off them. For depth networking: the more people you know doing different kinds of AI safety, the more potential collaborators you have. 

The value in selling your work. Here I have two claims:

  1. Many AI safety researchers should write papers, not just blog posts. 
  2. AI safety researchers should put effort into making these papers actually well-written.

A well-written paper that clearly describes why the problem it considers is interesting can inspire academic researchers who would otherwise not have worked on safety to do so. Three good overview papers are The alignment problem from a deep learning perspective, Unsolved Problems in ML Safety, and Eight Things to Know about Large Language Models; two good technical papers are Constitutional AI: Harmlessness from AI Feedback and Discovering Latent Knowledge in Language Models Without Supervision.

Comments

I agree that all of these attributes are plausible attributes of successful professors. However, I'd still like to know where you're drawing these observations from? Is it personal observation? And if so, how have you determined whether a professor is successful or not? Is there a study that correlates academic impact across these traits?

However, I'd still like to know where you're drawing these observations from? Is it personal observation?

 

Yes, personal observation, across quite a few US institutions. 

And if so, how have you determined whether a professor is successful or not?

One crude way of doing it is saying that a professor is successful if they are a professor at a top-10-ish university. Academia is hypercompetitive, so this is a good filter. Additionally, my personal observations are skewed toward people who I think do good research, so "successful" here also means "does research which electroswing thinks is good". 

Is there a study that correlates academic impact across these traits?

I haven't looked for one. A lot of them seem tough to measure, hence my qualitative analysis here. 


In my experience, successful professors are often significantly better at the skills I've listed than similarly intelligent people who are not successful professors. My internal model is that this is because aptitude in these skills is necessary to survive academia, so anybody who doesn't make the cut never becomes a successful professor in the first place.

Specifically I think professors are at least +2σ at "hedgehog-y" and "selling work" compared to similarly intelligent people who are not successful professors, and more like +σ at the other skills. 

You can imagine a post "Attributes of successful athletes", where the author knows a bunch of top athletes and finds shared traits in which the athletes are +2σ or +σ, such as 1) good sleep hygiene, 2) always does warm-ups, 3) almost never eats junk food, 4) has a good sports doctor, and so on. Even in the absence of a proper causal study, the average person who wants to improve their fitness can look at this list and think: "Hmm, (4) seems only relevant for professionals, but (1) and (3) seem like they probably have a strong causal effect, and (2) seems plausible but hard to tell." 

One crude way of doing it is saying that a professor is successful if they are a professor at a top 10-ish university.

But why should that be the case? Academia is hypercompetitive, but the way it selects is not solely on the quality of one's research. Choosing the trendiest fields has a huge impact. Perhaps the professors that are chosen by prestigious universities are the ones that the prestigious universities think are the best at drawing in grant money and getting publications into high-impact journals, such as Nature, or Science.


Specifically I think professors are at least +2σ at “hedgehog-y” and “selling work” compared to similarly intelligent people who are not successful professors, and more like +σ at the other skills.

How does one determine this?


Overall, it seems like your argument is that AI safety researchers should behave more like traditional academia for a bunch of reasons that have mostly to do with social prestige. While I don't discount the role that social prestige has to play in drawing people into a field and legitimizing it, it seems like, overall, the pursuit of prestige has been a net negative for science as a whole, leading to, for example, the replication crisis in medicine and biology, or the nonstop pursuit of string theory over alternate hypotheses in physics. Therefore, I'm not convinced that importing these prestige-oriented traits from traditional science would be a net positive for AI safety research.

Furthermore, I would note that traditional academia has been moving away from these practices, to a certain extent. During the early days of the COVID pandemic, quite a lot of information was exchanged not as formal peer-reviewed research papers, but as blog posts, Twitter threads, and preprints. In AI capabilities research, many new advances are announced as blog posts first, even if they might be formalized in a research paper later. Looking further back in the history of science, James Gleick, in Chaos, relates how the early researchers into chaos and complexity theories did their research by informally exchanging letters and draft papers. They were outside the normal categories that the bureaucracy of academia had established, so no journal would publish them.

It seems to me that the foundational, paradigm-shifting research always takes place this way. It takes place away from the formal rigors of academia, in informal exchanges between self-selected individuals. Only later, once the core paradigms of the new field have been laid down, does the field become incorporated into the bureaucracy of science, becoming legible enough for journals to routinely publish findings from the new field. I think AI safety research is at this early stage of maturity, and therefore it doesn't make sense for it to import the practices that would help practitioners survive and thrive in the bureaucracy of "Big Science".

Overall, it seems like your argument is that AI safety researchers should behave more like traditional academia for a bunch of reasons that have mostly to do with social prestige.

 

That is not what I am saying. I am saying that successful professors are highly successful researchers, that they share many qualities (most of which by the way have nothing to do with social prestige), and that AI safety researchers might consider emulating these qualities. 

Furthermore, I would note that traditional academia has been moving away from these practices, to a certain extent. During the early days of the COVID pandemic, quite a lot of information was exchanged not as formal peer-reviewed research papers, but as blog posts, Twitter threads, and preprints. In AI capabilities research, many new advances are announced as blog posts first, even if they might be formalized in a research paper later. [...]

This is a non sequitur. I'm not saying stop the blog posts. In fact, I am claiming that "selling your work" is a good thing. Therefore I also think blog posts are fine. When I write about the importance of a good abstract/introduction, I mean not just literally in the context of a NeurIPS paper but also more broadly in the context of motivating one's work better, so that a broader scientific audience can read your work and want to build off it. (But also, separately, I do think people should eventually turn good blog posts into papers for wider reach.)

I think AI safety research is at this early stage of maturity

I disagree. Non-EA funding for safety is pouring in. Safety is being talked about in mainstream venues. Also more academic papers popping up, as linked in my post. In terms of progress on aligning AI I agree the field is in its early stages, but in terms of the size of the field and institutions built up around it, nothing about AI safety feels early stage to me anymore. 

How does one determine this?

I am confused by your repeated focus on empirics, when I have been very up front that this is a qualitative, anecdotal, personal analysis. 

I am saying that successful professors are highly successful researchers

Are they? That's why I'm focusing on empirics. How do you know that these people are highly successful researchers? What impressive research findings have they developed, and how did e.g. networking and selling their work enable them to get to these findings? Similarly, with regards to bureaucracy, how did successfully navigating the bureaucracy of academia enable these researchers to improve their work?

The way it stands right now, what you're doing is pointing at some traits that correlate with academic success, and are claiming that

  1. Aspiring to the standards of prestigious academic institutions will speed up AI safety research
  2. Researchers at prestigious academic institutions share certain traits
  3. Therefore adopting these traits will lead to better AI safety research

This reasoning is flawed. First, why should AI safety research aspire to the same standards of "publish or perish" and the emphasis on finding positive results that gave us the replication crisis? It seems to me that, to the greatest extent possible, AI safety research should reject these standards, and focus on finding results that are true, rather than results that are publishable.

Secondly, correlation is not causation. The fact that many researchers from an anecdotal sample share certain attributes doesn't mean that those attributes are causative of those researchers' success. There are lots of researchers who do all of the things that you describe, managing their time, networking aggressively, and focusing on understanding grantmaking, who do not end up at prestigious institutions. There are lots of researchers who do all of those things who don't end up with tenure at all.

This is why I'm so skeptical of your post. I'm not sure that the steps you describe are actually causative of academic success, rather than merely correlated with it, and furthermore, I'm not sure that the standards of academic success are even something that AI safety research should aspire to.

There are lots of ways a researcher can choose to adopt new productivity habits. They include:

  1. Inside view, reasoning from first principles 
  2. Outside view, copying what successful researchers do

The purpose of this post is to, from an outside view perspective, list what a class of researchers (professors) does, which happens to operate very differently from AI safety.

Once again, I am not claiming to have an inside view argument in favor of the adoption of each of these attributes. I do not have empirics. I am not claiming to have an airtight causal model. If you will refer back to the original post, you will notice that I was careful to call this a list of attributes coming from anecdotal evidence, and if you will refer back to the AI safety section, you will notice that I was careful to call my points considerations and not conclusions. 

You keep arguing against a claim which I've never put forward, which is something like "The bullshit in academia (publish or perish, positive results give better papers) causes better research to happen." Of course I disagree with this claim. There is no need to waste ink arguing against it. 

It seems like the actual crux we disagree on is: "How similar are the goals of succeeding in academia and doing good (AI safety) research?" If I had to guess the source of our disagreement, I might speculate that we've both heard the same stories about the replication crisis, the inefficiencies of grant proposals and peer review, and other bullshit in academia. But I've additionally encountered a great deal of anecdotal evidence indicating that, in spite of all this bullshit, the people at the top seem to overwhelmingly not be bogged down by it, and the first-order factor in them getting where they are was in fact research quality. The way to convince you of this might be to repeat the methodology used in Childhoods of exceptional people, but this would be incredibly time consuming. (I'll give you 1/20th of such a blog post for free: here's Terry Tao on time management.) 

This crux clears up our correlation vs causation disagreement: since I think the goals are very similar, correlation is evidence for causation, whereas since you think the goals are very different, it seems like you think many of the attributes I've listed are primarily relevant for the 'navigating academic bullshit' part of academia. 


I've addressed your comment in broad terms, but just to conclude I wanted to respond to one point you made which seems especially wrong. 

how did e.g. networking [...] enable them to get to these [impressive research] findings?

In the networking section, you will find that I defined "networking" as "knowing many people doing research in and outside your field, so that you can easily reach out to them to request a collaboration". People are more likely to respond to collaboration requests from acquaintances than from strangers. Thus for this particular attribute you actually do get a causal model: networking causes collaborations, which cause better research results. I guess you can dispute the claim "collaborations cause better research results", but I think this would be an odd hill to die on, considering most interdisciplinary work relies on collaborations. 

What I'm questioning is the implicit assumption in your post that AI safety research will inevitably take place in an academic environment, and therefore productivity practices derived from other academic settings will be helpful. Why should this be the case when, over the past few years, most of the AI capabilities research has occurred in corporate research labs?

Some of your suggestions, of course, work equally well in either environment. But not all, and even the ones which do work would require a shift in emphasis. For example, when you say professors should be acquainted with other professors, that's valid in academia, where roughly everyone who matters either has tenure or is on a tenure track. However, that is not true in a corporate environment, where many people may not even have PhDs. Furthermore, in a corporate environment, limiting one's networking to just researchers is probably ill advised, given that there are many other people who would have influence upon the research. Knowing a senior executive with influence over product roadmaps could be just as valuable, even if that executive has no academic pedigree at all.

Prioritizing high value research and ignoring everything else is a skill that works in both corporate and academic environments. But 80/20-ing teaching? In a corporate research lab, one has no teaching responsibilities. One would be far better served learning some basic software engineering practices, in order to better interface with product engineers. Similarly, with regards to publishing, for a corporate research lab, having a working product is worth dozens of research papers. Research papers bring prestige, but they don't pay the bills. Therefore, I would argue that AI safety researchers should be keeping an eye on how their findings can be applied to existing AI systems. This kind of product-focused development is something that academia is notoriously bad at.

I also question your claim that academic bureaucracy doesn't slow good researchers down very much. That's very much not in line with what anecdotes I've heard. From what I've seen, writing grant proposals, dealing with university bureaucracy, and teaching responsibilities are a significant time suck. Maybe with practice and experience, it's possible for a good researcher to complete these tasks on "autopilot", and therefore not notice the time that's being spent. But the tasks are still costing time and mental energy that, ideally, would be devoted to research or writing.

I don't think it's inevitable that academia will take over AI safety research, given the trend in AI capabilities research, and I certainly don't think that academia taking over AI safety research would be a good thing. For this reason I question whether it's valuable for AI safety researchers to develop skills valuable for academic research, specifically, as opposed to general time management, software engineering and product development skills.

I have 2 separate claims:

  1. Any researcher, inside or outside of academia, might consider emulating attributes successful professors have in order to boost personal research productivity. 
  2. AI safety researchers outside of academia should try harder to make their work legible to academics, as a cheap way to get more good researchers thinking about AI safety. 

What I'm questioning is the implicit assumption in your post that AI safety research will inevitably take place in an academic environment [...]

This assumption is not implicit, you're putting together (1) and (2) in a way which I did not intend. 

Furthermore, in a corporate environment, limiting one's networking to just researchers is probably ill advised, given that there are many other people who would have influence upon the research. Knowing a senior executive with influence over product roadmaps could be just as valuable, even if that executive has no academic pedigree at all.

I agree, but this is not a counterargument against my post. This is just an incredibly reasonable interpretation of what it means to be "good at networking" for an industry researcher. 

But 80/20-ing teaching? In a corporate research lab, one has no teaching responsibilities. One would be far better served learning some basic software engineering practices, in order to better interface with product engineers.

My post is not literally recommending that non-academics 80/20 their teaching. I am confused why you think that I would think this. 80/20ing teaching is an example of how professors allocate their time to what's important. Professors are being used as a case study in the post. When applied to an AI safety researcher who works independently or as part of an industry lab, perhaps "teaching" might be replaced with "responding to cold emails" or "supervising an intern". I acknowledge that professors spend more time teaching than non-academic researchers spend doing these tasks. But once again, the point of this post is just to list a bunch of things successful professors do, and then non-professors are meant to consider these points and adapt the advice to their own environment. 

Similarly, with regards to publishing, for a corporate research lab, having a working product is worth dozens of research papers. Research papers bring prestige, but they don't pay the bills. Therefore, I would argue that AI safety researchers should be keeping an eye on how their findings can be applied to existing AI systems. This kind of product-focused development is something that academia is notoriously bad at.

This seems like a crux. It seems like I am more optimistic about leveraging academic labor and expertise, and you are more optimistic about deploying AI safety solutions to existing systems. 

I also question your claim that academic bureaucracy doesn't slow good researchers down very much. That's very much not in line with what anecdotes I've heard. [...]

This is another crux. We both have heard different anecdotal evidence and are weighing it differently. 

I don't think it's inevitable that academia will take over AI safety research, given the trend in AI capabilities research, and I certainly don't think that academia taking over AI safety research would be a good thing.

I never said that academia would take over AI safety research, and I also never said this would be a good thing. I believe that there is a lot of untapped free skilled labor in academia, and AI safety researchers should put in more of an effort (e.g. by writing papers) to put that labor to use. 

For this reason I question whether it's valuable for AI safety researchers to develop skills valuable for academic research, specifically, as opposed to general time management, software engineering and product development skills.

One of the attributes I list is literally time management. As for the other two, I think it depends on the kind of AI safety researcher we are talking about -- going directly back to our "leveraging academia" versus "product development" crux. I agree that if what you're trying to do is product development, the skills you list are critical. But also, I think product development is not at all the only way to do AI safety, and other ways to do AI safety plug more easily into academia.