The post seems to assume that after the storm, there will be a human elite that has control of the labor of AI and robots, and that the question is how to get into that elite, or at least how to have enough independent resources to support oneself. That's not how I think about things.
My expectation is that there will be superintelligence, for which all lesser beings may as well be puppets - automata whose workings are transparent and easily controlled. Superintelligence might be a completely nonhuman agent arising from AI, or maybe it would also contain humans with executive power somewhere in its workings, though they might not be very human by then.
Under this assumption, if you think that self-preservation requires being at the apex of the hierarchy, then you need to become or be part of a superintelligence. But that amounts to being on the winning team in the race to create superintelligence, as well as hoping that you get to be part of its executive component.
If that's too much, then you need to hope that superintelligence is benign or friendly towards you in some way. This is a version of the problem of alignment... Like you, I'm not an insider, and no one is paying me to do alignment research. However, forums like this one are read by some insiders. In principle, it is possible for a post here to be read by insiders and affect what they do. So, again in principle, it is possible for an outsider to have a (probably very small) effect on how things turn out.
I'd recommend putting some effort into strengthening the virtues, as these have a good track record over time, are flexible across many situations, and have application in multiple areas of life. Specific skills (e.g. "learn to code") are more brittle in times of rapid change and less broadly applicable. The virtues are also remarkably neglected in our society at the moment, which means that by developing them in yourself you can differentiate yourself from the crowd. They are also a way to be more self-sufficient in governing your life satisfaction: less dependent on unstable and stormy societal structures and unreliable external goods.
My general opinion about "staying afloat" is that trying to secure only your own immediate slice of the pie/capital allocation/lightcone is a bit like trying to install tsunami-proof fencing around your seaside property. Securing your own slice in such a chaotic, high-variance period is extraordinarily hard.
With that in mind, the choice is clear: either you pitch in to society's collective defences (i.e. seawalls in this metaphor) or you are one of the lucky ones who had the privilege or foresight to be shielded from the turbulence (i.e. being born rich and living inland). Either way, there's not much hope in trying to become a kingpin at the last minute.
As you note, opinions differ widely, on many axes, and while I would also like to see more people's viewpoints and advice made explicit, there is really no path you can be confident in. In that kind of scenario, there are, IMO, three factors to consider.
First, which predictions resonate with you, and best withstand scrutiny from you?
Second, which paths fail most gracefully? In the event you pick wrong (and in which there was a right thing to pick), what leaves you in an acceptable position anyway?
Third, by what criteria do you wish for your actions to be judged, and which paths best align with that?
I'm also a college student who has been wrestling with this question for my entire undergrad. In a short timelines world, I don't think there are very good solutions. In longer timelines worlds, human labor remains economically valuable for longer.
I have found comfort in the following ideas:
1) The vast majority of people (including the majority of wealthy white-collar college-educated people) are in the same boat as you. The distribution of how AGI unfolds is likely to be so absurd that it's hard to predict what holds value afterwards. Does money still matter after AGI/ASI? What kinds of capital matter after AGI/ASI? The answers are far from obvious to me. If you take these cruxes seriously, then even people at AGI labs could be making the wrong financial bets. You could imagine a scenario where AGI lab X builds AGI first and comes to dominate the global economy, so that everyone with stock options in AGI lab Y is left with worthless capital. You could even imagine owning stock in the lab that builds AGI, only for that capital to no longer be valuable.
2) For a period of time, I suspect that young people are likely to have an advantage in using "spiky" AI tools to do work. Being in the top few percentiles of competence at coding with LLMs, doing math with LLMs, or doing other economically valuable tasks with AI is likely to open up career opportunities.
3) You can expect some skills to be important up until the point of AGI. For example, I see coding and math in this boat. Not only will they be important, but the people doing the most crucial and civilization-altering research will likely be very good at these skills. Those people are likely to be the one-in-a-million Ilya Sutskevers of the world, but I still find it motivating to build up this skillset during what is really the golden age of computer science.
More generally, I have found it useful to think about outcomes as sampled from a distribution and working hard as pushing up the expected value of that distribution. I find this gives me much more motivation.
Help me settle this debate.
There was recently a post on here by a bright young guy about how it felt staring into the abyss, so to speak, and confusion about what next steps to take, knowing you really only get one shot. Quite a few others commented about how they're in a similar situation, but there was no consensus on how to proceed, given a shortened timeline (however long it may be). And given there are far more lurkers than posters, I suspect there are lots of people with these concerns but no concrete answers.
The canonical, impact-maximizing solutions are to "spread awareness" and "learn to code and work your way into a lab", which could have worked in the past, but seem to fall short today. With a non-target degree, proving your merit seems infeasible. Furthermore, it's not clear you can upskill or lobby or earn to give fast enough to contribute anything meaningful in time.
If the hour really has come, and contributing to the cause is unlikely, self-preservation becomes the goal. Western social safety nets (and culture in general) require immense future incomes that are far from guaranteed; "we used to be happy as farmers" is true, but avoids the problem. The jury's out on exactly how long we have, but I think whatever percentage you put on, say, AGI by 2027, it exceeds the threshold for a rational actor to make big changes. A new plan is needed.
There doesn't seem to be any conventional defense against shortened timelines. The advice given by the people who will benefit from the incoming tidal wave of automation - the managers, the team leads - has ranged from "work on what you're interested in" to "I'll be retired when that becomes a problem." In the old world, it was okay to spend on grad school or try things out, because you had your entire life to work for a salary, but we face the real possibility that there are only a few years (if that) to climb out of the bucket.
Frankly, it's a little thrilling to consider this, in a Wild West, "the action is the juice" way. But there's a needle that needs to be threaded.
What's the best path forward? Specifically, what can a young adult without target credentials (but the drive and ability to punch in that weight class) do to stay afloat? We go back and forth on this, our group of college seniors.
One faction asserts it's still possible to scramble up the career ladder faster than the rungs get sawn off; artificial reasoning and automation won't develop uniformly, and there's still space to get a foothold due to frictions like slow business adoption and regulations.
The other says, look, it's past time, that the only way out is to throw it all out and start building, that earning from labor rather than capital is tethering yourself to a sinking boat, and the few months of head start you get from being diligent won't make a difference.
The first group counters: competing in the online entrepreneurship space means exposing yourself to the most brutal arena in the free market, and even in the 99th-percentile best outcome, you'd work far harder for a wage equivalent to a white-collar worker's, with far more volatility.
Sure, the second group says, but it's worth it for the Hail Mary chance at making something great and escaping the permanent underclass. Anything else and you're guaranteeing your demise; whatever capital you'd squirrel away won't budge the needle of the utility function of the future. The premium on intelligence and knowledge is only going to fall, and it's better to harness that than be a victim of it.
There isn't an option other than a career, claims the first, because the startup market's already saturated. Without domain knowledge, connections, and experience, there's nowhere to even begin, and anything you could come up with will get wiped out once a serious institution enters the ring.
It's only going to get far worse as honest, hardworking people get let go, replies the second. We can find a way, a gap in the market, but we have to hunker down and go all-in now. We can cut our own slice of the growing pie.
And on it goes, with no apparent verdict. We debate not out of spite, but out of a mutual concern that the ground is giving way under our feet. We gladly welcome your perspectives, to help both us and those in similar situations keep fighting forward.