I like the sentiment and much of the advice in this post, but unfortunately I don’t think we can honestly and confidently say “You will be OK”.
I used to work with hospice patients, and typically the ones who were the least worried and most at peace were those who had most radically accepted the inevitable. The post you’re responding to read to me like healthy processing of grief, like someone trying to come to terms with a bleak outlook. To tell them essentially “it’s fine, the experts got this” feels disingenuous and like a recipe for denialism. When that paternalistic attitude dominates, business as usual reigns, often to catastrophic ends.

Even if we don’t have control over the broad AI outcome, we do have control over many aspects of our lives that AI affects, and it’s reasonable to make decisions one way or another in those areas contingent on one’s P[doom] (e.g., prioritizing family over career in the short term). There’s a reason that in medicine people should be told the good and the bad about all their options, and be given expectations before they decide on a course of treatment, instead of just leaving things to the experts.
P[doom] ... it makes sense for individuals to spend most of their time not worrying about it as long as it is bounded away from 1
That has no bearing on whether we'll be OK. Beliefs are for describing reality; whether they are useful or actionable doesn't matter to what they should say. "You will be OK" is a claim of fact, and the post mostly discusses things that are not about whether this fact is true or false. Perhaps "You shouldn't spend too much time worrying" or "You should feel OK" captures the intent of the post, but that is a plan of action, something entirely different from the claim of fact "You will be OK": different in content, in the kind of thing it is (plan vs. belief), and in the role it should play in clear reasoning.
The OP's point was a bit different:
However, I expect that, like the industrial revolution, even after this change there will be no consensus on whether it was good or bad. We human beings have an impressive dynamic range. We can live in the worst conditions and complain about the best conditions. It is possible we will cure diseases and poverty, and yet people will still long for the good old days of the 2020s, when young people had the thrill of fending for themselves, before guaranteed income and housing ruined it.
Most likely it means that mankind will end up adapting to ~any future except being genocided, but nostalgia wouldn't be that dependent on actual improvements in the quality of life.
"You will be OK", he says on the site started by the guy who was quite reasonably confident that nobody will be OK.
Seeing this post and its comments made me a bit concerned for young people around this community. I thought I would try to write down why I believe most folks who read and write here (and are generally smart, caring, and knowledgeable) will be OK.
I agree that our society is often underprepared for tail risks. When you are planning for a whole society, you should worry about potential catastrophes even if their probability is small. However, as an individual, if there is a certain probability X of doom that is beyond your control, it is best to focus on the 1-X fraction of the probability space that you do control rather than constantly worrying about it. A generation of Americans and Russians grew up under a non-trivial probability of total nuclear war, and they still went about their lives. Even when we do have some control over the possibility of very bad outcomes (e.g., traffic accidents), it is best to follow some common-sense best practices (wear a seatbelt, don't drive a motorcycle) and then put the matter out of your mind.
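To make this concrete, here is a minimal decision-theoretic sketch (the notation $U$, $a$, and $D$ is mine and purely illustrative). Let $D$ be the doom event with $\Pr[D] = X$, and suppose your choice of action $a$ cannot change what happens given $D$. Then

$$\mathbb{E}[U(a)] = X \cdot \mathbb{E}[U(a) \mid D] + (1 - X) \cdot \mathbb{E}[U(a) \mid \neg D].$$

If $\mathbb{E}[U(a) \mid D]$ is the same for every $a$, the first term is a constant, and so, provided $X < 1$,

$$\arg\max_a \mathbb{E}[U(a)] = \arg\max_a \mathbb{E}[U(a) \mid \neg D].$$

That is, the plan that is best overall is exactly the plan that is best conditional on no doom: worrying about $X$ changes nothing in this calculation, while what you do in the $1-X$ branch changes everything.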
I do not want to engage here in the usual debate of P[doom]. But just as it makes absolute sense for companies and societies to worry about it as long as this probability is bounded away from 0, so it makes sense for individuals to spend most of their time not worrying about it as long as it is bounded away from 1. Even if it is your job (as it is mine to some extent) to push this probability down, it is best not to spend all of your time worrying about it, both for your mental health and for doing it well.
I want to recognize that, doom or not, AI will bring about a lot of change very fast. It is quite possible that, by some metrics, we will see centuries of progress compressed into decades. My own expectation is that, as we have seen so far, progress will be both continuous and jagged. Both AI capabilities and their diffusion will continue to grow, but at different rates in different domains. (E.g., I would not be surprised if we cured cancer before we significantly cut the red tape needed to build in San Francisco.) I believe that because of this continuous progress, neither AGI nor ASI will be a discrete point in time. Rather, just as we call recessions only after we are already in them, we will probably identify the "AGI moment" retrospectively, six months or a year after it has already happened. I also believe that, because of this "jaggedness", humans, and especially smart and caring ones, will be needed for at least several decades, if not more. It is a marathon, not a sprint.
People have many justifiable fears about AI beyond literal doom. I cannot fully imagine the ways AI will change the world economically, socially, politically, and physically. However, I expect that, like the industrial revolution, even after this change there will be no consensus on whether it was good or bad. We human beings have an impressive dynamic range. We can live in the worst conditions and complain about the best conditions. It is possible we will cure diseases and poverty, and yet people will still long for the good old days of the 2020s, when young people had the thrill of fending for themselves, before guaranteed income and housing ruined it.
I do not want to underplay the risks. It is also possible that the future will be much worse, even to my cynical eyes. Perhaps the main reason I work on technical alignment is that I believe it is both important and, to a large extent, solvable. But we have not solved alignment yet, and while I am sure about its importance, I could be wrong in my optimism. Also, as I wrote before, there are multiple bad scenarios that can happen even if we do "solve alignment."
This note is not meant to encourage complacency. There is a reason that "may you live in interesting times" is (apocryphally) known as a curse. We are going into uncharted waters, and the decades ahead could well be some of the most important in human history. It is actually a great time to be young, smart, motivated, and well-intentioned.
You may disagree with my predictions. In fact, you should disagree with my predictions; I myself am deeply unsure of them. Also, the heuristic of not trusting the words of a middle-aged professor has never been more relevant. You can and should hold both governments and companies (including my own) to the task of preparing for the worst. But I hope you spend your time and mental energy on thinking positive and preparing for the weird.