Problem

“You’ll have a great time wherever you go to college!” I hear this constantly: from my parents, my friends’ parents, my guidance counselor, and my teachers. I don’t doubt it. I’m sure I’ll have a lot of fun wherever I go. Since I’m trying to be very intentional about my college decision process, I’ve interviewed close to twenty students. And for the most part, all of them are having a great time!

This scares me. A lot.

If I can go anywhere and have a great time, then what should I choose my college based on? Ranking? Prestige? Food? Campus? Job opportunities? Cost?

After thinking more about this problem, I realized that although I’ll have fun wherever I go, college will also change me as a person. More specifically, I think that each college will shift my values in a different way.

Value shift is when, through one event or another, a person changes what they value. Their past self, seeing their future self with different values, might think “oh no, I have become what I sought to destroy.” Milder versions also exist. The classic example is what happens to a parent after they have a child. If they had a value of “stay alive” before, it (usually) changes to “keep child alive then stay alive.” Yet if you had told them in the past that they would sacrifice themself for their child, they might not be okay with it (or they might be, in which case a value shift did not happen). The next question would be: why did you take an action (having children) that would shift your terminal values?

So when I look out over the imaginary cause-and-effect tree of my college choice, I see further. Not only do I see the effects of

  1. how I will spend the next four years
  2. what people I will meet whom I will know for the rest of my life
  3. what skills I will learn and use for the rest of my life
  4. what meta-skills (or mannerisms/habits of being) I will pick up

I also see the effect of

     5. what I will be motivated to do with my life

I worry that the last one is the most important, yet also the most overlooked. Naively, I could say that my future self will obviously be wiser than my present self in choosing what to do with my life, so I should not worry at all. After all, it has all of my experience plus four extra years! Jacob-in-four-years will literally have been the me who is thinking these thoughts right now. Of course he won’t do something stupid.

And if I were a perfectly rational agent, I’d say that this would be true. But I’m not; I’m a human. I have messy desires. Most (almost all?) of what I believe is not reasoned from first principles but instead picked up from my environment. Paul Graham says that cities have certain vibes that subtly change your values when you live in them. According to him, New York tells you to make money, Boston tells you to be smarter, and the Bay Area tells you to be more powerful. I suspect colleges have similar qualities. If all my friends are doing something, it will be insanely hard not to do it. And college is a hell of a strong optimizer! 41% of Harvard grads go into consulting or finance. Heck, I have an immediate family member who got sucked into consulting for ten years before realizing it wasn’t what he wanted to do. I don’t want to spend ten years figuring this out.

I hope you can now understand why “Don’t worry, you’ll have fun at college!” is so scary. Of course I’ll have fun (I’m lucky enough to have a pretty high hedonic set point), but will it corrupt my values in a way that I would currently regret? Maybe.

Solutions

How can I solve this problem?

An easy solution would be to make a list of all the things that my future self cannot do no matter what. I could put “NEVER do consulting or finance” on it. This strategy has a few nice benefits. If I always kept commitments to myself, then I could make even stronger commitments in the future. Austin Chen wrote an argument for this called “What We Owe the Past,” which I think is pretty funny. When he was 17, he made a commitment to go to church every Sunday, and he still does, even though he now has “...a more complicated relationship with the Catholic church.”

This method seems pretty good, right? I think there are at least three ways it breaks down, some philosophical and some practical.

When making an acausal trade, as this method calls for, or really a trade of any kind, both parties need to benefit. If you only do this method once, the past agent is the only one that benefits. It knows that its values will be preserved into the future. But the future agent does not benefit; it is just beholden to its past self.

The only way that the future agent benefits is if we do an iterated version of this method. That way, it becomes the past agent and it then gets to impose its values on a future agent again. Since a trade only works when both parties get something, I would have to do this a bunch to reap the rewards of consistency. But since this is so demanding of the future agent, I would not take it lightly and would probably only do it a few times, which would kind of defeat the point.

Enough with the acausal trade philosophy. Practically, this breaks down because my future self might have more experience and information than my past self. What if I learn that consulting is actually the greatest thing ever? And I get a really convincing argument? Should I still not do it? To understand this point, we need to be clearer about what a value actually is.

Up until now, I’ve been using “value” loosely to refer to anything that an iteration of an agent believes or wants. But there seems to be a hierarchy of types of value. At the bottom level are facts and beliefs about the world, which I would pretty much never want to hold static. On the second level are wants: things like “I want to make a lot of money,” “I want to have good relationships,” or “I want to help people.” On the third and highest level are things like “I want my future self to pick the best values according to a certain set of meta-values.” You still need to specify the meta-values, of course, but you get more flexibility. I think I’d probably not want to lock in any ‘values’ on the first and second levels, but only ones on the third. That is, I’d want my future self to choose his values based on a certain set of meta-values. (Of course, I’d still have to worry about the meta-values shifting.)

Finally, if you keep adding strict prohibitions onto your future self, you are limiting all your future options. With no way to amend them, it essentially amounts to adding more and more constraints to your future behavior. After not that long, some are bound to contradict and then the whole system breaks down.

Even if we bite all these bullets, there is still something weird to me about the contractual nature of it all. This is not some stranger I’m trying to make a deal with, it’s myself. There should be a gentler, nicer, way to achieve this same goal.

If agents have more mutual trust, they don’t need to form a rigid contract beforehand. They can negotiate a contract with each other by communicating causally. The reason this doesn’t work here is that if you tried to negotiate, your past self could not speak!

What if we could find a way to have a gentler interaction that is less contractual without silencing the past self?

When you read, you are essentially having a one-way conversation with the author. So I could just write up a long document explaining all the reasons why I think my future self should do something (whenever I predict it will face a big choice). Even better, I should write another document laying out all my current values. Then, when making any big decision, my future self should take a look at these documents and take them into account. But why would it be motivated to do this? Decision-theoretically, it would be motivated because it knows that it might want a self even further in the future to take its preferences into account. However, as stated above, I don’t think this will actually work in practice since I’m not planning to do this that many times. Going along with the “gentler” reasoning, it should want to do it because it has camaraderie with its past self. It should want its past self to be happy and it knows that to make it happy, it should take its preferences into account.

A document, however, is still not really a conversation since it only goes one way. To make this really good, I should make a chatbot with a custom voice trained on my voice (using ElevenLabs or something similar) and then feed it my documents and ask it to discuss the decision with my future self. Then, both sides can communicate causally and have an equal say.
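To make the chatbot idea a bit more concrete, here is a minimal Python sketch of the text side of that pipeline. Everything in it is a hypothetical stand-in: `past_self_reply` is a toy keyword-matcher standing in for a real LLM call, and the voice-clone step (ElevenLabs or similar) is omitted entirely.

```python
# Sketch of a "talk to your past self" chatbot.
# Hypothetical stand-ins throughout: a real version would send the
# system prompt to an LLM API and pipe its reply through a voice clone.

def build_system_prompt(value_docs):
    """Fold written value documents into a single system prompt."""
    joined = "\n\n---\n\n".join(value_docs)
    return (
        "You are speaking as my past self. Ground every answer in the "
        "value documents below; if they are silent on a question, say so.\n\n"
        + joined
    )

def past_self_reply(question, value_docs):
    """Toy stand-in for an LLM: quote the document line that shares
    the most words with the question."""
    words = set(question.lower().split())
    lines = [line for doc in value_docs
             for line in doc.splitlines() if line.strip()]
    best = max(lines, key=lambda line: len(words & set(line.lower().split())))
    return f"Past me wrote: {best!r}"

if __name__ == "__main__":
    docs = [
        "2024 values: I want to keep learning for its own sake.",
        "On careers: I do not want to default into consulting or finance.",
    ]
    print(build_system_prompt(docs))
    print(past_self_reply("Should I take the consulting offer?", docs))
```

The point of the structure, not the toy retrieval, is what matters: the past self's documents are the only source the bot is allowed to speak from, which is what keeps the "conversation" an honest proxy for the past self rather than a generic chatbot.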

While at WARP, I talked to some more people about this and came up with even more ideas for giving my future self more information about what past iterations of itself wanted:

A way that could be even more salient than text is taking a video of myself talking about my current values, what I want for the future, and what I would like my future self to do. I plan to do this at least once a year, with the first one happening in the next two weeks. I would then re-watch these videos yearly, and whenever I had a big decision to make.

Another method that I have started using is daily journaling. Besides the psychological benefits of journaling, it will also provide my future self with a plethora of information about what I currently value, to look back on and learn from.

Concrete Application

Lastly, I want to share another reason I’ve been thinking about this besides college: I’m considering taking a gap year and working at tech startup(s)/doing research/independently learning stuff. When I mentioned this to my dad, he said he worried that I would have such a good time and make enough money that I would get stuck in a local maximum and not want to go to college afterward. He then claimed that this would be bad for 'social reasons.' My response was that if I did this, I would take his concern into account at the end of the gap year when deciding whether to go to college. But to actually do that, I would have to make sure I was working from a clean(ish) headspace, without any value shift from the types of people I would be around at startups. So since I foresee the possibility that I will have to make this decision, I think it will be wise to use some of the above techniques to combat bad value drift here as well.

Concretely, I will use all of these methods now (maybe except the chatbot) to create a treasure trove of information about what my current self wants and values. Then at the end of the gap year (if I decide to take one), I will take into account all of this information before making any big decisions.

Benefits Right Now

As I’ve been implementing some of the above methods, I’ve realized that they have an unexpected benefit even for my current self: it is easier to make bolder decisions. I no longer have to worry that my future self will go astray on me, so I can make decisions that leave my future self in a position where it could do things that are totally against my current preferences, but won’t. For example, I feel more comfortable taking a gap year knowing that my future self will see all the media I’ve left it saying that it should probably try college after the year is up, even if it really thinks that is a bad idea. (I currently think trying college is a good idea.)


Thanks to many people for discussing this with me.

9 comments

if I were perfectly rational agent ...

Yeah, "perfectly rational" implies consistency over time, which is the whole question you're struggling with.

if you keep adding strict prohibitions onto your future self, you are limiting all your future options

Right, that's kind of the point, isn't it?  You don't trust your future self to be consistent with your present beliefs, so you constrain it.  Note that there are current-self tactics based on the same principle: you might not trust parts of your decision apparatus to make food choices, for instance, that other parts of you prefer, and therefore don't keep junk food at hand.

Most older people I know (including myself), recognize that their younger selves were jerks, or at least confused about a lot of things.  They often regret commitments made previously (and often agree with them, but in those cases the commitment isn't binding, as they like it anyway).

I'd recommend not framing this as a negotiation or trade (acausal trade is close, but is pretty suspect in itself). Your past self(ves) DO NOT EXIST anymore, and can't judge you. Your current self will be dead when your future self is making choices. Instead, frame it as love, respect, and understanding. You want your future self to be happy and satisfied, and your current choices impact that. You want your current choices to honor those parts of your past self(ves) you remember fondly. This can be extended to the expectation that your future self will want to act in accordance with a mostly-consistent self-image that aligns in big ways with its past (your current) self.

This framing is consistent with most of your concrete suggestions - those are reinforcing the importance (to you currently) of these values, and the memory of this exploration, documentation, and thinking about why you currently care about future actions will inform your future self's values.  It's not a contract, it's persuasion.

I'd recommend not framing this as a negotiation or trade (acausal trade is close, but is pretty suspect in itself). Your past self(ves) DO NOT EXIST anymore, and can't judge you. Your current self will be dead when your future self is making choices. Instead, frame it as love, respect, and understanding. You want your future self to be happy and satisfied, and your current choices impact that. You want your current choices to honor those parts of your past self(ves) you remember fondly. This can be extended to the expectation that your future self will want to act in accordance with a mostly-consistent self-image that aligns in big ways with its past (your current) self.

Yep, this is what I had in mind when I wrote this:

Even if we bite all these bullets, there is still something weird to me about the contractual nature of it all. This is not some stranger I’m trying to make a deal with, it’s myself. There should be a gentler, nicer, way to achieve this same goal.

and

Going along with the “gentler” reasoning, it should want to do it because it has camaraderie with its past self. It should want its past self to be happy and it knows that to make it happy, it should take its preferences into account.

Thanks for expanding on this :)

Unless you are going to one of the big prestige universities, I don’t think it matters which you choose all that much. Save money.

As for working with a startup, why not both? I worked through college. Yeah, you’ll be working part time, but frankly, you’re mostly just being introduced to the environment more than anything. Internships are a great start into many industries. Just make sure that you are doing a paid internship. In my experience the unpaid ones are more focused on how much value they can extract from you.

Unless you are going to one of the big prestige universities, I don’t think it matters which you choose all that much. Save money.

My experience is that this is right. The list of top-tier global institutions, in terms of prestige, is short: Oxford, Cambridge, Harvard, MIT, Caltech, maybe Berkeley, maybe Stanford and Waterloo if you want to work in tech, maybe another Ivy if you want to do something non-tech. The prestige bump falls off fast as you move further down the list. Lots of universities have local prestige, but it gets lost as you talk to people with less context.

Prestige mostly matters if you want to do something that requires it as the cost of entry. If you can get in, it doesn't hurt to have the prestige of a top-tier institution, but there's lots of things you might do where the prestige will be wasted.

Sadly, it's tough to know in advance whether you will need the prestige or not. You'll have to make an expected value calculation against the cost and make the best choice you can to minimize the risk of regret.

After 5 years, I think experience matters more.

It also matters what the experience is like. A high-prestige university allows you to get a job at a high-prestige company; a low-prestige university makes it a lot harder to get considered for jobs at high-prestige firms. You'll have to outperform high-prestige peers by, say, 50% to get noticed if you want access to the same sort of opportunities they get access to via prestige.

(To be clear, I'm not in favor of this sort of thing, I just want to be realistic about it and I wish someone had been real with me about it when I was 17 trying to decide where to go to college. Don't rely on your ability to outperform others. Take every advantage you can get and then leverage them to do even more!)

You should probably take reverse-causation into account here. I doubt the effect of the school is nearly as strong as you think, since people who want finance jobs are drawn to the schools known for getting people finance jobs. Add to that that the schools known for certain things are the outliers. If you go to a random state school, the students are going to have much more varying interests.

Thanks for this, it is a very important point that I hadn't considered.

I don’t want to spend ten years figuring this out.

A driving factor in my own philosophy around figuring out what to do with my life. Some people spend decades doing something or living with something they don't like, or even something more trivially correctable, like spending one weekend to clean up the basement vs. living with a cluttered mess for years on end.