Some have argued that one should tend to act as if timelines are short, since that is the scenario in which one can have more expected impact. But I haven't seen a thorough analysis of this argument.
Question: Is this argument valid, and if so, how strong is it?
The basic argument seems to be: if timelines are short, the field (AI alignment) will be relatively smaller and will have made less progress by the time it matters. So there will be more low-hanging fruit, and you will have more impact.
The question affects career decisions. For example, if you optimize for long timelines, you can invest more time in yourself and delay your impact.
The question interacts with the following questions in somewhat unclear ways:
- How fast do returns to more work diminish (or increase)?
  - If returns don't diminish, the argument above fails. (The toy model sketched after this list makes this concrete.)
  - If the field will grow very quickly, returns will diminish faster.
- Is your work much more effective when it's early?
  - This may be the case because work can be hard to parallelize (‘9 women can't make a baby in 1 month’), and because field-building is more effective earlier, since the field can compound-grow over time. So someone has to start early.
  - If work is most effective earlier, you shouldn't lose too much time investing in yourself.
- Is work much more effective at crunch time?
  - If so, you should focus more on investing in yourself (or do field-building for crunch time) instead of doing preparatory research now.
- If timelines are long, is that evidence that we'll need a paradigm shift in ML, which could make alignment easier or harder?
  - (This question seems less tractable than the others.)
- Is your comparative advantage in optimizing for short or long timelines?
  - For example, young people can contribute more easily given long timelines, and older people given short ones.
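To make the role of diminishing returns concrete, here is a minimal toy expected-value sketch in Python. All numbers and the functional form are made-up assumptions for illustration, not estimates: total field output is modeled as `field_size ** alpha`, "optimizing for short timelines" is modeled as acting now at current effectiveness, and "optimizing for long timelines" as investing in yourself first, which pays off only if the long-timelines world arrives.

```python
# Toy expected-value model of "optimize for short vs. long timelines".
# All numbers below are made-up assumptions for illustration, not estimates.

def marginal_impact(field_size: float, alpha: float) -> float:
    """Marginal contribution of one extra researcher when total field output
    is modeled as field_size ** alpha (alpha < 1 means diminishing returns)."""
    return alpha * field_size ** (alpha - 1)

p_short = 0.3        # assumed probability that timelines are short
field_short = 300    # assumed field size (researchers) in the short-timelines world
field_long = 3000    # assumed field size in the long-timelines world
alpha = 0.5          # assumed strength of diminishing returns (1.0 = none)
growth = 2.0         # assumed effectiveness multiplier from investing in yourself,
                     # which only pays off if the long-timelines world materializes

# Strategy A: act now, at current effectiveness, in whichever world obtains.
ev_act_now = (p_short * marginal_impact(field_short, alpha)
              + (1 - p_short) * marginal_impact(field_long, alpha))

# Strategy B: invest in yourself first; this is assumed to contribute nothing
# if timelines are short, but multiplies your impact if they are long.
ev_invest_first = (1 - p_short) * growth * marginal_impact(field_long, alpha)

print(f"act now (optimize for short timelines):     {ev_act_now:.4f}")
print(f"invest first (optimize for long timelines): {ev_invest_first:.4f}")
```

With these particular numbers, acting now comes out slightly ahead (about 0.015 vs. 0.013), while setting `alpha = 1.0` (no diminishing returns) flips the result, matching the point above that the argument fails if returns don't diminish.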
If you would like to research the overall question seriously, please reach out. The right candidate can get funding.
Related question: What considerations determine whether I have more influence over short or long timelines?
I think the shorter the timeline, the more specific your plan and actions need to be. For short timelines (<10 years, maybe up to 40 years with very high confidence) to radical, singularity-like disruption, you aren't talking about "optimizing" but about "preparing for" or "reacting to" the likely scenarios.
It's the milder disruptions, or longer timelines to radical changes, that are problematic in this case. What have you given up, in working to make the short-timeline scenario more pleasant/survivable, that you will be sad about if the world doesn't end?
Having kids, and how much energy to invest in them (including, before you have them, energy spent earning money you don't donate and otherwise preparing your life) rather than in AIpocalypse preparedness, is probably the biggest single decision related to this prediction.