While playing around with Stable Diffusion last week I had an epiphany. I realized that while I believe with high probability that the world will be radically different in 15 years, all my investments, living arrangements, and other long-term plans implied a belief that the world would be roughly similar to now. This shook me to my core.

More concretely, I have investments in suburban American property, a 401k plan that I maximize and invest in a broadly diversified low cost fund, many children whom I educate in roughly conventional subjects (from college age down to toddler age), and a job that I intend to keep for the long term. But I believe there is a less than 1% probability that my 401k will mean anything by the time I can withdraw it. I don't believe that my property (or any suburban American property) will be valuable in 15 years under most scenarios I consider likely. I don't think my children's education will serve them particularly well during their adulthood. 

What are my beliefs about the future? I believe that AI and biology at least (two subjects I know well and follow closely) are accelerating quickly, with logarithmic increases of capability and similar log decreases of cost. I believe that it is early days for both fields, and these log curves will be extended further for years to come.

Given those beliefs, the thing I expect most is radical change, in a direction that I can't predict well. I'm not biased towards a doom scenario (AI takeover, engineered unstoppable plague), nor towards a utopian scenario (beneficent AGI, biological immortality). But I am biased strongly away from the continuation of the status quo, away from any regime where I can sell my house, live off my 401k, watch my children take jobs that exist today and have their own families. 

I suspect others here hold similar beliefs about the future. How are you preparing now? How do you invest, how do you rear your young, what actions do you take that let you sleep well, knowing that whatever tomorrow brings your rationality has armed you in advance?


In terms of investment assuming doom: barbell strategy. "Eating the seedcorn" high-dividend stocks - oil and cigarette companies, for example - should be underpriced assuming near-term doom. Then something that captures the upside in the unlikely scenario of a property-rights-respecting AI takeoff - Google/Nvidia/Facebook stock, maybe. Perhaps someone has some thoughts on investing while under the anthropic shadow - puts on Apple and TSMC, maybe? Food, farmland, and ammunition?

None other than Peter Thiel wrote a huge essay about investing while under the anthropic shadow, and I wrote a post analyzing said essay! It is interesting, although pretty abstract in a way that probably makes it more relevant to organizations like Open Philanthropy than to most private individuals. Some quotes from Thiel's essay:

Apocalyptic thinking appears to have no place in the world of money. For if the doomsday predictions are fulfilled and the world does come to an end, then all the money in the world — even if it be in the form of gold coins or pieces of silver, stored in a locked chest in the most remote corner of the planet — would prove of no value, because there would be nothing left to buy or sell. Apocalyptic investors will miss great opportunities if there is no apocalypse, but ultimately they will end up with nothing when the apocalypse arrives. Heads or tails, they lose. ...A mutual fund manager might not benefit from reflecting about the danger of thermonuclear war, since in that future world there would be no mutual funds and no mutual fund managers left. Because it is not profitable to think about one's death, it is more useful to act as though one will live forever.

Since it is not profitable to contemplate the end of civilization, this distorts market prices. Instead of telling us about the objective probabilities of how things will play out, prices are based on probabilities adjusted by the anthropic logic of ignoring doomed scenarios:

Let us assume that, in the event of [the project of civilization being broadly successful], a given business would be worth $100/share, but that there is only an intermediate chance (say 1:10) of that successful outcome. The other case is too terrible to consider. Theoretically, the share should be worth $10, but in every world where investors survive, it will be worth $100. Would it make sense to pay more than $10, and indeed any price up to $100? Whether in hope or desperation, the perceived lack of alternatives may push valuations to much greater extremes than in nonapocalyptic times.
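
To make the arithmetic concrete, here is a minimal sketch of the pricing logic in that passage. The numbers come from the quote; the code itself is just my illustration, not anything from Thiel's essay:

```python
# Thiel's example: a 1-in-10 chance of a broadly successful civilization,
# in which case the share is worth $100; the other case is "too terrible
# to consider" (the share, like everything else, is worth nothing).
p_success = 0.10
value_if_success = 100.0
value_if_doom = 0.0

# Naive expected value over all worlds:
naive_price = p_success * value_if_success + (1 - p_success) * value_if_doom
print(f"Unconditional EV: ${naive_price:.0f}/share")  # $10/share

# Anthropic-shadow price: condition on the worlds where investors are
# still around to observe any price at all.
survivor_price = value_if_success
print(f"Price in every surviving world: ${survivor_price:.0f}/share")  # $100/share
```

The gap between the $10 and the $100 is the room Thiel describes for "hope or desperation" to push valuations toward the survival-conditional price.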

See my post for more.

I agree and disagree.  Optionality is undervalued today - getting a second citizenship if possible, and a more diverse social support network across some borders, is worth a lot.  

I doubt, in the case of "no-more-individual-property-rights" changes (AI takeover and AI-assisted human takeovers), that the current levels of welfare and private incomes will be at all relevant.  In the medium-term (during takeoff, as some places are collapsing and some are maintaining), the places most different from your primary locale are probably best.  But the changes will spread and catch up with you anywhere.  


I am donating half my net worth over the next few years towards efforts to make it more likely that we end up in one of the better futures (focusing on highly frugal people, as they are more neglected by major grantmakers and my funds will go further), alongside most of my productive energies.

If anyone wants to commit to joining the effort, please reach out. There is plenty of low-hanging fruit in the alignment ecosystem space.

Well, I don't know that I'm a particularly good example to emulate, but my responses to updating to a strong belief in a future of radical change in < ~15 years include:

  • quitting my comfy mainstream ML engineer job to do full-time AI safety research, despite this meaning a lot less money and security, and additional challenges like having to work alone instead of in a good-feeling team environment. Also having to put extra willpower into self-motivating despite the lack of clear, objective, rapid feedback loops.
  • stopping putting money into my 401k, and stopping imagining I'm going to have anything like the normal retirement my grandparents and parents had
  • adjusting some of my stock investments
  • having a tumultuous internal debate around whether to go forward with my plans to have a child (I lean yes, but it feels a lot trickier to be confident now than it did 5 years ago)
  • sleeping less well at night
  • becoming more anxious and acquiring corresponding tics and bad habits
  • spending more time and energy trying to actively combat this anxiety with better mental health practices: extra exercise, more time in nature, more frequent brief work breaks to play with my dog, active measures to improve my sleep (got a CPAP machine, which helps a lot), plus simple stuff like a better mattress and light-blocking curtains.
  • being prepared to move my family to a new country in an emergency (passports up to date and such)
  • trying to intentionally put energy into social connections I could depend on in an emergency situation.
  • buying a home in a rural area near my extended family, with backup stores of food & fuel, a wood stove for heat, a generator, solar panels, electric and non-electric cars, tools, various supplies, etc.
  • writing down plans and thoughts about what I currently think are the best ways to handle a wide variety of futures. I've already spent a lot of my life trying to imagine weird ways the future might go and trying to acquire a very broad base of skills, so that's not much of a change. One change that's more recent is more of a focus on planning and skill development around how I might work with a complex digital world, making good use of AI assistants. In a multi-polar world with a mix of aligned and unaligned proto-AGI that might or might not FOOM, what actions are beneficial to yourself and humanity for you to take? Can you come up with a set of questions that you could ask an agent over the internet, using a text-based medium, to try to determine whether that agent is human or AI, and how aligned with your interests it is? What sorts of questions would be harder to fake answers to?
  • trying to actively think and worry less about things which have a longer time horizon than 15 years. Like practicing getting rid of habits like feeling guilty for failing to recycle personal waste when it is inconvenient. The landfill near my home won't run out of space in the next 15 years no matter how little I recycle! This is a silly worry! But there are so many little silly habits in my brain, and getting rid of them isn't easy.

Anyone who thinks they have even remotely the sort of competence that could help build aligned AI should work on that, even if they think they aren't of the highest caliber or aren't seeing immediate positive benefits accrue from their initial attempts. I'm definitely of the opinion that the more people we can get working on it the better (provided that you don't hinder the best thinkers with the fumbling of the worst).

Even seemingly non-technical tasks can be quite useful, even if you can only afford to do them in your spare time. For instance, I think that writing down your moral intuitions on a wide variety of subjects could be quite useful. The more samples we have of human moral intuitions the better, for a variety of different alignment-related uses. Also, if we do hit a strong AGI singularity, it could be really useful for your personal morals and desires to be represented in the datasets used to train and align the AGI! It makes it much more likely that the resulting world will match your desires.

Here's a paper with some good examples of the sorts of questions it is useful for you to come up with and write down your personal answers to: https://arxiv.org/abs/2008.02275 

Another related paper: https://psyarxiv.com/tnf4e/

I can relate to so many of your points. I too am getting less sleep, planning a well-stocked rural estate, and stopping my 401k contributions.

The point about social connections makes a lot of sense, but that's the hardest one for me. I think it would be best to have connections with people who share my view of the future, and who want to prepare together. I have my large family, but I think it would be best to have a larger community.

I disagree with "sleeping less well at night".

I think if you're able to sleep well (if you can handle the logistics/motivation around it, or perhaps if sleeping well is a null action with no cost), it will be a win after a few days (or at most, weeks).

I don't think Nathan was suggesting that "sleeping less well at night" is a desirable response to the situation, merely that it's a response that they've developed, probably against their conscious will. Similarly with the next one on the list, "becoming more anxious and acquiring corresponding tics and bad habits".

Ah,

I thought it was "I'm going to sacrifice sleep time to get a few extra hours of work"

My bad

I find myself in a similar situation. The way I think about this is that there are a few things that are essential to survive in this radically uncertain scenario:

  1. Health. That's the absolute first priority. Take care of it. Exercise regularly. See friends and keep solid connections. Keep a regular sleep pattern. Eat well. Don't overwork. Have fun. Bad health can seriously impair your capacity to maneuver in this uncertain world. It does not depend exclusively on you, but there is a lot that you can certainly do.
  2. Finances. To me, it does not make a lot of sense to count on a 401k, because I don't really know if I will be alive by the time I can use it, or whether the state as we know it will be able to pay it out. So I prefer to diversify my investments across different currencies, shares, etc., and across different countries. It is very important to me not to have any debts (some people will disagree here). I prefer investments in ETFs or crypto over real estate because they are far more liquid and there is less bureaucracy involved. If you aren't rich, try at least to have a profession that allows you to find a job easily in any part of the world, and be willing to move. Go where you are treated best. There are many places better to live than the US.
  3. Knowledge. Build a broad general culture. It saddens me a bit that LW is so much about AI (which I consider extremely important, don't get me wrong). Other ideas are important for surviving in this world, and I wish they were discussed more often here. I am talking about economics, anthropology, psychology, political science, etc.

Debt is a good point. I think in most scenarios where a 401k or the stock market loses meaning, debt also loses meaning. So it might make sense to take on maximum debt to invest in physical goods, or to enable you to react suddenly to a change that requires a lot of liquid wealth.

That's why it's important to do the EV calculations, and to consider the pain of heavy debt loads if you're significantly wrong about timelines w.r.t. continuity of property rights and/or monetary value. Also, there really aren't that many cases where financial investments become irrelevant but physical goods remain secure and valuable.

If you put a very high probability on collapse of financial value and loss of property rights on a short-ish timeline, you might consider high debt, but invest in experiences and capabilities rather than physical goods.  Go back to school (in a topic you enjoy, not necessarily to get a job).  Travel the world (has added advantage of making future permanent changes easier).  Party and make friends.
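
For what it's worth, here is one way the EV calculation on debt could be set up. The scenarios, probabilities, and payoffs below are placeholders I made up for illustration, not recommendations:

```python
# Hypothetical scenarios: (name, probability, payoff if heavily leveraged,
# payoff if debt-free), in arbitrary utility units.
scenarios = [
    ("status quo persists",    0.50, -20.0,  0.0),  # debt enforced, repaid with interest
    ("financial collapse",     0.30,   0.0,  0.0),  # debt and savings both lose meaning
    ("liquidity buys an exit", 0.20,  40.0, 10.0),  # borrowed cash funds a decisive move
]

# Sanity check: probabilities should cover the whole scenario space.
assert abs(sum(p for _, p, _, _ in scenarios) - 1.0) < 1e-9

ev_leveraged = sum(p * leveraged for _, p, leveraged, _ in scenarios)
ev_debt_free = sum(p * debt_free for _, p, _, debt_free in scenarios)

print(f"EV, heavily leveraged: {ev_leveraged:+.1f}")  # -2.0 with these guesses
print(f"EV, debt-free:         {ev_debt_free:+.1f}")  # +2.0 with these guesses
```

Under these particular guesses the leverage loses; shift the status-quo probability down or the exit payoff up and it flips, which is exactly why being significantly wrong about timelines is the risk that matters.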

This entire post reminded me of this section from Human Compatible, especially the section I've put in bold: 

“There are some limits to what AI can provide. The pies of land and raw materials are not infinite, so there cannot be unlimited population growth and not everyone will have a mansion in a private park. (This will eventually necessitate mining elsewhere in the solar system and constructing artificial habitats in space; but I promised not to talk about science fiction.) The pie of pride is also finite: only 1 percent of people can be in the top 1 percent on any given metric. If human happiness requires being in the top 1 percent, then 99 percent of humans are going to be unhappy, even when the bottom 1 percent has an objectively splendid lifestyle. It will be important, then, for our cultures to gradually down-weight pride and envy as central elements of perceived self-worth.”

In scenarios where transformative AI can perform nearly all research or reasoning tasks for humanity, my pride will be hurt to some degree. I also believe that I will not be among the 1% of humans still in work, perhaps overseeing the AI, and I find this prospect somewhat bleak, though I imagine that the severity of this sentiment would wane with time, especially if my life and the circumstances of humanity were otherwise great as a result of the AI.

The first point of your response calms me somewhat. Focusing more in the near future on my body, health, friends, family, and similar baselines would probably be good preparation for a future where AI advances the state of human affairs to the point where humans are not needed for reasoning or research tasks.

These are very good points. For instance, I used to wonder: what is the point of learning to compose music if a machine will soon be able to do it 1000 times better than I can?

But I think that's a false problem. There are already a huge number of people who can make music better than I do, and I can still find it a pleasant activity.

I put some probabilistic weight on radical-short-term-change (15-40 years) scenarios, but I think you're making a mistake to put ALL your belief in that, with none (or almost none) on more gradual changes. Even if the status quo's probability is not over 50% (and I think it IS, but probably not over 70%), it's the single most likely kind of future, and the one in which your property, investments, and knowledge remain valuable.

For the radical changes, it's worth separating out "disasters I can't really do much about" and removing them from your calculations. "Disasters I can help avert" and "disasters I can prepare for" are worth an expected-value calculation: what do you give up in the most likely (status-quo-ish) worlds to have better experiences in those sets of worlds, multiplied by the probability of each? Likewise, "utopias where I win regardless of today's behavior" get ignored, and "utopias I can help cause" and "utopias I can improve my experience in" get evaluated in terms of cost-benefit.
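
A minimal sketch of that bucketing, with made-up numbers, just to show why the "can't do much about it" and "win regardless" scenarios drop out of the decision:

```python
# (name, probability, payoff if you prepare, payoff if you don't),
# in arbitrary utility units. All values are illustrative assumptions.
scenarios = [
    ("status-quo-ish",          0.55,  8.0, 10.0),  # preparation costs a little here
    ("disaster, nothing helps", 0.10,  0.0,  0.0),  # identical payoffs: drops out
    ("disaster, preparable",    0.15, 30.0,  5.0),
    ("utopia, win regardless",  0.10, 50.0, 50.0),  # identical payoffs: drops out
    ("utopia, improvable",      0.10, 60.0, 40.0),
]

def ev(prepare: bool) -> float:
    """Probability-weighted payoff under one of the two policies."""
    return sum(p * (a if prepare else b) for _, p, a, b in scenarios)

# Scenarios with identical payoffs contribute nothing to this difference,
# so only the actionable worlds drive the decision:
print(f"Expected gain from preparing: {ev(True) - ev(False):+.2f}")  # +4.65
```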

For me (and the children and younger people I'm involved with), the standard advice holds: live a good life, and prepare to continue it as long as possible, within the somewhat-predictable variations and complications of modern life. Monetary investments are a little more barbell than 25 years ago, and the risky/speculative portions need to be reviewed more often. Among my circle, education is mostly seen as desirable in its own right, more than as a required hurdle to pass, and a focus on breadth and problem-solving has always been a critical part of it (both in school and out), so that doesn't change much either.

My suggestions regarding the epistemics of the original post are fairly in line with your first paragraph. I think allocating decision weight in proportion to the expected impact different scenarios have on your life is the correct approach. Generating scenarios and forecasting their likelihood is difficult, and there is also a great deal of uncertainty about how you should change your behavior in light of them. Making peace with the outcomes of disastrous scenarios that you or humanity cannot avoid is a strong action-path for processing uncontrollable scenarios. As for scenarios that you can prepare for (the effects of climate change, shallow AI, embryo selection / gene-editing, and other forms of gradual technological progress), determining what you would value and want if you could only live, or live comfortably, for the next 5, 10, 15, 20, 30, etc. years might be a useful exercise, since each of these scenarios (e.g., only living 5 more years vs. only living 10 more years vs. only 5 more years of global business-as-usual) might lead you to take different actions.

I am in a similar decision-boat, as I believe that in the coming years the nature of human operations in the world will change significantly and on many fronts. I am in my early 20s, have been doing some remote work / research in forecasting and ML, want to make contributions to AI Safety, want to have children with my partner (in around 6 years), do not know where I would like to live, do not know what my investment behaviors should be, and do not know what proportion of my time should be spent reading, programming, exercising, etc. A useful heuristic for me has been to worry less.

I think moving away from people and living closer to the wilderness has benefited me as well; the location I am in currently seems robust to climate change and to mass exoduses from cities (should they ever occur), has few natural disasters, has good air quality, is generally peaceful and quiet, and is agriculturally robust, with sources of water. Perhaps finding some location or set of habits in line with "what I hoped to retire into / do in a few years, or what I've always desired for myself" might make for a strong remainder-of-life / remainder-of-business-as-usual, whichever you attach more weight to.

But if I assume doom, my safe withdrawal rate gets so high!
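
Half a joke, but the arithmetic behind it is real. A rough sketch, assuming a constant yearly doom probability after which money is useless; the 5% figure and the zero-real-return simplification are mine, purely for illustration:

```python
annual_doom_prob = 0.05  # assumed constant yearly probability of doom
max_years = 60           # conventional planning horizon

# Expected number of years the money actually needs to last, i.e. the
# survival-probability-weighted horizon:
expected_horizon = sum((1 - annual_doom_prob) ** year for year in range(max_years))

# Ignoring investment returns entirely, the sustainable draw-down rate:
swr = 1 / expected_horizon
print(f"Effective horizon: {expected_horizon:.1f} years")  # ~19.1 years
print(f"Doom-adjusted withdrawal rate: {swr:.1%}")         # ~5.2%
# versus 1/60 ≈ 1.7% over the full horizon under the same zero-return assumption
```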

I am not doing anything different from you, but then I don't see any major tactical shifts that make much sense. The problem is that a 401k and index funds are already the maximum-uncertain-future choices for any future where the stock market survives as an institution. Residential real estate is already the lowest-risk bet for any future where land is valued according to price rather than according to use.

So mostly what I am trying to do is:

  • Identify ways to make my property more useful. This means basic things, like growing a chunk of our food, increasing the amount of maintenance I can do myself by owning tools and practicing, etc.
  • Try to identify triggers, which is to say things that clearly indicate it is time to dispose of a particular asset (or at least to stop investing further in it). I have not been successful in this so far, and the fallback remains "accumulate cash."

The core of my intuition about this problem: the less certainty there is, the higher the premium on options. On the other hand, the only real options are the ones we can actually execute. This causes me to believe that the best investments have more to do with skills and knowledge - particularly of coherent approaches to problems that you expect to crop up, or at least where to find out about them. This is to save time and resources spent on search when the circumstances change.

There is an entirely different approach into which I probably put more thinking, though less money and physical effort (so far): opportunities to make a contribution. By this I mean pro-social business ideas. The most recent example followed the supply chain crunch: after reading A Brief Introduction to Container Logistics, I considered a business that went around buying up these containers and leases from the various participants, in the name of being able to agree to both sides of the impasse described in the article. I still might if we hit another major crunch.

I expect climate wars to happen in the 20-30 year time frame, and the beneficiaries will be those living further north and able to fend for themselves and fend off the rest of the horde. It is really hard to prepare for something like that, other than learning survival skills and people skills, since controlling people will be more important than anything else.

Also, I think you might have meant exponential when you said logarithmic.

Which countries will go to war with whom? It doesn't strike me as plausible that, e.g., individual random countries in the tropics would literally declare war on much richer countries far away.

I think you are confusing the interests of citizens in the tropics (who might be motivated to migrate from, e.g., the Middle East to Europe, or from Indonesia to New Zealand, or from Venezuela to Uruguay, just as the poor are always motivated to move to more prosperous lands) with diplomacy -- why would the leaders of places like Indonesia declare war on places like New Zealand? We don't see countries in Central America trying to declare war on the USA today.

I never said it would be countries declaring wars. The reality will be a lot messier. People will get desperate and just go where they can survive. Imagine Mexicans trying to cross the US border, multiplied a thousandfold or more, all over the world. Can Canada accept tens of millions of refugees from the US? Can Russia deal with untold millions of Chinese, Indians, and others trying to escape to the north?

You didn't say countries would declare wars, but you called them climate wars. Wouldn't it be easier and classier to admit a mistake with your word choice? Nevertheless, I do in fact agree with you about a scenario like this.

What terminology would you use instead?

Crisis. Upheaval

Special operation.

I am an Indian. Do you think migrating northwards should be a priority?

It would be for me 

Well, a way to prepare for that is moving to a northern country for starters!

Good points. I think of investing in skills, physical survival goods, or a resilient dwelling as possible choices. For climate, this might mean building an underground house in the north, in an area predicted to be wetter and warmer in the future. Yeah, exponential is what I meant.

There's some prior discussion here.

  • I try to both [be useful] and [have a good life / don't burn out]
  • I started thinking a few days ago about investments. Initial thoughts:
    • Given we're not all dead, what happens and how to get rich?
      • Guess 1: There was a world war or something similar that got to all AI labs worldwide
        • My brain: OMG that sounds really bad. Can I somehow avoid the crossfire?
      • Guess 2: One specific org can generate 100x more tech and science progress than the entire rest of the world combined
        • My brain: I hope they will be publicly tradable, still respect the stock market, and I can buy their stock in advance?
        • Problem: Everyone wants to invest in AI companies already. Do I have an advantage?
    • If there will be a few years of vast-strangeness before we'll probably all die, can I get very rich beforehand and maybe use that for something?
      • (Similar to Guess 2 above, and also doesn't seem promising)

This is just initial thinking. I'm happy for anyone to join the brainstorm; it's easier together.

TL;DR Watch this video ...

or read the list of summary points for the book here

https://medium.com/steveglaveski/book-summary-21-lessons-for-the-21st-century-by-yuval-noah-harari-73722006805a

If you don't know who this guy is: Yuval Noah Harari is a historian who writes about the future (among other things).

I'm 68 and retired. I've seen some changes. Investing in companies like Xerox and Kodak would have made sense early in my career; it would have been a bad idea in the long run. The companies it would have made sense to invest in didn't exist yet.

I started my IT career in 1980 running an IBM mainframe the size of a semi-trailer while its peripherals took up an entire floor. Almost no one but hobbyists owned a PC and the internet did not exist as far as the public was concerned. Cell phones only existed in science fiction. In less than twenty years, by 2000, it was a radically different world. 

All my life I've been interested in the human story, reading widely about evolution, civilization, the arts and sciences. Never before have I seen so many signs that things are about to change dramatically. 

It's only human to try, but I don't believe you are going to be able to "figure this out". I suggest an analogy: the difference between an expert system and AlphaGo. The latter wins by learning, not by knowing a bunch of facts and rules. That's why I suggest this video. He talks about how to think about the future.

When I retired, I thought about what to do. I had a lifetime's worth of knowledge in my head so I decided to write hard science fiction about the near future, 2025-2325. It's very difficult. Will the idea of cell phones be laughable in 2125? How long will it take for an AI to do a comparative analysis of two genomes in 2075? How will a population of eleven billion by 2100 change the world? Forget about 2100 - how will AI, climate change and geopolitics change the world by 2030?

Currently I'm writing a story about two students in a Master's in Futures Studies program. They get a wild idea and the story follows their escapades. Futures Studies is not a mature science (if it even is a science), but it is a methodology used by major corporations and governments to plan for the future. Organizations like Shell Oil (where Futures Studies, aka Foresight, was known as Scenario Planning), the US military, and the country of Finland, among others, use it, and the stakes are pretty high for them.

As I write hard science fiction, I have to do a ton of research on whatever I'm writing about, be it genetics, human values, AI, or what have you. So I am aware that, unfortunately, if you investigate Futures Studies you will encounter a lot of consultants who sound very woo-woo. But once you sort the wheat from the chaff, there is a solid methodology underlying the discipline. It's not perfect (who can predict the future?), but it's as close to rigorous as you'll get.

Here is an easily understandable explanation of the process by the person whom I have found to be the best communicator in the business.

Here's the Wikipedia page about Futures Studies

https://en.wikipedia.org/wiki/Futures_studies 

And here's a PDF explaining the methodology as it is generally applied

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/674209/futures-toolkit-edition-1.pdf

It's a lot, I admit, but is it worth your time? Think of it as an investment.

It is a highly collaborative process, so maybe get a group of like-minded friends together and try it. There's no peer-review process in Futures Studies; that issue is dealt with by the number of people you have involved.

Best of luck.

One way to plan for the future is to slow down the machinery taking us there to reduce the uncertainty about what is coming to some degree.

Another way to plan for the future is to do what I've done, which is to get old (70), so that you have far fewer chips on the table in the face of the uncertainty. Ok, sorry, not very helpful. But on the other hand, it's most likely going to happen whether you plan it or not, and some comfort might be taken from knowing that sooner or later we all earn a "get out of jail free" card.

For today, one of the things we have some hope of being able to control is our relationship with risk, living, dying, etc. In an era characterized by historic uncertainty, such a pursuit seems a good investment.

The world could be in the same state it is in now in 200 years (highly unlikely), but you could get run over by a bus tomorrow.

We could achieve biological immortality with the benefit of obsequious AI servants catering cheaply to our every whim. And, at the dawn of this new age, as you are lost in reverie contemplating the coming joys of your terrestrial eternity, an AI driver could direct its bus into you at 200 mph to avoid endangering the riders on the bus, according to a calculus of minimizing harm. You would never know what hit you.

People who lived in more “predictable” times (did they?) could be less certain of their medical future or, indeed, of subsistence. I enjoy walks through old cemeteries. The markers often tell the stories of young families whose children were wiped out within a couple of years, or of young husbands who, having already suffered the loss of several children, suddenly lost their wives and nearly newborn infants. And all kinds of other tragedy.

Those with an appreciation of the larger forces at play easily perceive the precarious situation our society is in. The less foresighted man on the street, born in the modern West into a relative Goldilocks zone of health, sustenance, and relatively peaceful predictability, is probably more confident of security and changelessness than people at any other time or place, with the exception of members of tribes living traditional lifestyles of immemorial provenance.

My wife recently received a somewhat devastating diagnosis at a very young age. It is an interesting study to observe how folks react to it — the average response is, “you’re too young for such-and-such,” or, “you shouldn’t have to go through that”. Are such responses even imaginable for someone in 1873? We are implicitly certain of boundless health, limitless bounty, livable incomes or subsidies, and so forth. 

In a way, you just have to take it as it comes. Your dreams will be crushed, your plans ruined, in all likelihood, even without major change in society. Embrace that a little bit. Set manageable goals, particularly about how you wish to be living each day. Regularly prune what no longer contributes. Make your desired daily life a goal and try to achieve stability in that, riding the wave of change like a surfer, making adjustments to stay “put”. If you have a clear idea of daily life, you can often weather storms with it — think like a monk carving out a way of life in 1100, 1500, 1700, 1900, and 2022, living by the very same monastic rule. They’re masters at adjusting whilst maintaining an equilibrium built around a stable rule of life and a fixed goal for their lives.

Know the warning signs you are looking for to make major changes in a timely and responsive way. Is this inflation the signal to pull out of your 401k? Will energy price points signal the need to relocate where a vehicle is less necessary? What supply chain issues will you need to see to plant a Victory Garden, or to invest in land? But have a bias towards status quo in your daily life and seek equilibrium on your terms through all changes.