I am not in AI, but I work with highly analytical people at different career stages.
I enjoyed your approach and your calculation of the possible trajectories. However, I feel the human aspect falls short. While "I can probably learn anything" might be true (within bounds), it is a trap. The people I have observed who were truly memorable and had impact all had an intrinsic pull towards their field - not a strategized, optimized fit. And I am not talking only generally within a field but also within the sub-specializations. While you argue against the emphasis on "personal fit", I would argue it is meant not just in terms of suiting a career but also in terms of personal satisfaction and reduced friction for learning and adaptation. In a fast, high-stakes field, less friction and having things come to you naturally is mental hygiene, burn-out prevention, and the sweet spot where brilliance emerges.
Time lost is compensated by skill and interest. Sustained execution over years requires more than optimal strategy; it requires genuine engagement with the work itself.
(crossposted from my substack)
Being a freshman at university, I seem to have been bestowed the great privilege of infinite possibilities. There is this strange feeling of trying to plan a career in a world that might not exist in five years. Not in a doomer sense, but in the sense that the world of 2031 might be so radically different from today that most attempts at planning are incoherent. I contemplate machine-god superintelligence arriving before I graduate, intelligence explosions compressing 10,000 years of progress into one, the possibility of being among the last humans to die before death itself is solved; then I do my laundry and pick electives. I am surprised I am not entirely losing my mind.
Robin Hanson writes, in 2009, This is The Dream Time:
This post includes some of my thoughts for other people in a similar position, who take these possibilities seriously enough to let them reshape their decisions, but aren’t sure how. Specifically: how to translate a probability distribution over timelines into an actual plan for what to do with the next few years of your life, when the wrong choice might round your impact to zero. Even the right choice might. But doing nothing guarantees it.
I'm relatively uncertain about most of what follows. Tell me where I'm wrong.
Expected value calculation under timeline uncertainty
A lot of career decisions under AI timeline uncertainty seem to reduce to weird, complicated calculations that are running in the background. Here I try to write one down. Real career choices are certainly not binary, but let’s take a simplified version that illustrates the framework of aptitude-adjusted expected value.
Say you’re choosing between two paths. Path A: sprint into technical research right now, upskill full-time in ML, and try to contribute to alignment as fast as possible. Path B: stay in school, build foundations, maybe do field-building.
You think short and medium timelines are about equally likely. Under short timelines the sprint maybe gets you 5 units of impact: you are starting behind, competing with people with varying skills and dispositions, but you are contributing something. In this short-timeline world the slow investment gets you roughly 0; everything changed before anything paid off. Under medium timelines, the sprint gets you maybe 2: you contribute, but at the cost of burning through your runway without deep foundations. The slower investment gets you 50.
EV(sprint-heavy) = 0.5 × 5 + 0.5 × 2 = 3.5
EV(invest-heavy) = 0.5 × 0 + 0.5 × 50 = 25
The invest-heavy orientation wins, not because medium timelines are more likely, but because the magnitude of your impact there is so much larger. The ratio matters more than the probability. Even if you’re 80% confident in short timelines, the invest-heavy approach still wins in this example. You’d need to be around 91% confident before the sprint orientation becomes the better bet. And 91% confidence in anything this uncertain should be pretty concerning.
These numbers are made up and yours should differ. Maybe you think the sprint would get you more than 5 in a short-timeline world. Maybe you think field-building is worth less than 50 in a medium-timeline world. You should run it yourself. The point of the strategy is to bet on the world where you have the most leverage rather than the one you deem most likely: maximize asymmetric upside, where your potential gains far outweigh your potential losses, and prioritize expected value over raw probability.
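If it helps to see the arithmetic written out, here is a minimal Python sketch of the same calculation. The payoffs are the made-up numbers from above; the breakeven line just solves the two EV expressions for equality.

```python
# Toy expected-value calculation for the sprint vs. invest example above.
# All payoff numbers are the made-up ones from the text; substitute your own.

def expected_value(p_short: float, payoff_short: float, payoff_medium: float) -> float:
    """EV of a path given P(short timelines) and its payoff in each world."""
    return p_short * payoff_short + (1 - p_short) * payoff_medium

# Payoffs (impact units): (short-timeline world, medium-timeline world)
sprint = (5, 2)
invest = (0, 50)

p_short = 0.5
print(expected_value(p_short, *sprint))   # 3.5
print(expected_value(p_short, *invest))   # 25.0

# Breakeven: the P(short) at which the sprint path starts to win.
# Solve p*5 + (1-p)*2 = p*0 + (1-p)*50  ->  p = 48/53
breakeven = (invest[1] - sprint[1]) / ((sprint[0] - sprint[1]) + (invest[1] - invest[0]))
print(round(breakeven, 3))  # 0.906
```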
What should I do if everything is learnable?
EA career advice has historically emphasized “personal fit”: find what you’re naturally good at and apply it to important problems. I think this view is sometimes wrong and slightly in contradiction with the idea that most things are learnable.
Robert Musil writes in The Man Without Qualities:
Some evidence for malleability:
However, if you genuinely believe you can become and learn almost anything, the search space becomes infinite. That is not conducive to decision-making. Full malleability and infinite possibilities can be pretty paralyzing.
Personally, I seem to have a comparative advantage in being able to speak Chinese, which is plausibly useful for AI governance and international coordination. But I’ve also barely done anything and have little evidence for what I might be good at. Even if I’m not maximally gifted at math, I could probably learn a good amount if I really tried. I could probably also learn a good amount of policy analysis, or ML engineering, or institutional design, or whatever. Or can I just do all of these?
Here, timeline views provide constraints on the search space that disposition alone cannot. Short timelines suggest building the most immediately deployable skill; medium timelines favor investing in foundations that compound. For the same reason that the world as a whole should have people betting on different timelines, you can diversify your own skill profile. You don’t have to pick one timeline and go all in.
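As a toy sketch of what not going all in can buy you, here is a back-of-the-envelope version in Python. It reuses the made-up payoffs from the earlier example and assumes (unrealistically) that impact scales linearly with how you split your effort; the weights are arbitrary.

```python
# Toy illustration of diversifying across timelines instead of going all in.
# Payoffs are the made-up numbers from the sprint vs. invest example above;
# the linear-scaling assumption is mine, purely for illustration.

def payoffs(weight_sprint: float) -> tuple[float, float]:
    """(impact in short-timeline world, impact in medium-timeline world)
    for a given fraction of effort put into sprint-style skills."""
    w = weight_sprint
    short = w * 5 + (1 - w) * 0
    medium = w * 2 + (1 - w) * 50
    return short, medium

for w in (1.0, 0.0, 0.5):
    short, medium = payoffs(w)
    print(f"sprint weight {w:.1f}: short={short:.1f}, medium={medium:.1f}, worst case={min(short, medium):.1f}")

# Going all in on either path leaves you near zero impact in the world that
# didn't happen; a mixed profile keeps a floor in both.
```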
Should I have my own thoughts about timelines?
My thoughts about this are here.
Summary: The question of whether to defer or think for yourself relies on a false binary that ignores better options. You probably cannot out-predict Daniel Kokotajlo. But you should still actually understand why experts believe what they believe and what object-level considerations inform timeline forecasts. This is instrumentally important for producing impactful work, maybe developing research taste, and making informed decisions.
Doing something is probably better than doing nothing. Doing the best thing requires both luck and calculation. You can increase your surface area for luck, but you cannot control it. AGI is super scary, but don’t go cheap on actually doing object-level world modeling! Revisit your assumptions frequently. Defer, in a healthy way, when needed. There has never been a better time to get into AI safety. But before signing up for any of these things, maybe actually sit down and run your own numbers. Make decisions that are good enough across the range of plausible futures, and then act, because acting on an imperfect view beats perfecting a view you never act on.