I call the waypoints lighthouses. Getting you far enough in good enough shape that you can spot the next one.
Great post. Three comments:
If it were the case that events in the future mattered less than events now (as is the case with money, because money sooner can earn interest), one could discount far future events almost completely and thereby make the long-term effects of one’s actions more tractable. However, I understand time discounting doesn’t apply to ethics (though maybe this is disputed by some).
That said, I suspect discounting the future instead on the grounds of uncertainty (the further out you go, the harder it is to predict anything) - using, say, a discount rate per year (as with money) to model this - may be a useful heuristic. No doubt this is a topic discussed in the field.
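(To make that concrete, here is a toy Python sketch of what a per-year uncertainty discount does; the 3% rate is a number invented purely for illustration:)

```python
def discounted_value(value: float, years: float, annual_rate: float = 0.03) -> float:
    """Present value of a future outcome, discounted for predictive
    uncertainty at `annual_rate` per year, like compound interest in reverse."""
    return value / (1 + annual_rate) ** years

# At a 3% per-year "uncertainty rate" (an invented number), an outcome
# 100 years out counts for ~5% of its face value, and an outcome
# 1000 years out counts for effectively nothing:
print(discounted_value(1.0, 100))   # ~0.052
print(discounted_value(1.0, 1000))  # ~1.4e-13
```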
Secondly, no doubt there is much to be said about what the natural social and temporal boundaries of people’s moral and other influence & plans are, eg family, friends, work, retirement, death (and contents of their will); and how these can change - eg if you gain or exercise power/influence, say by getting an important job, having children, or doing things with wider influence (eg donating to charity), which can be for better or worse.
Thirdly, a minor observation: chess has an equivalent to the Go thing about a local sequence of moves ending in a stop sign, viz. an exchange of pieces - eg capturing a pawn in exchange for a pawn, or a much longer & more complicated sequence involving multiple pieces, but either way ending in a ‘quiet position’ where not very much is happening. Before AlphaZero, chess programs considering an exchange would look at all plausible ways it might play out, stopping each move sequence only when a quiet position was reached. And in the absence of an exchange or other instability, they would stop a sequence after a ‘horizon’ of say 10 moves (and evaluate the resulting situation on the basis of the board position, eg what pieces there are and their mobility).
The chess thing is cool. I never got strong enough at chess to learn about that and I appreciate the education! Regarding ethics vs finance...
My hunch is that all "experienced life rewards" are essentially "ethical" in the sense that a certain "all else equal hour" in a hot tub with my favorite person now, vs a week from now, vs 10 years from now shouldn't be discounted intrinsically. It might not just be "lives 1000 years from now vs lives this decade" that are roughly the same value if they are roughly the same internally... it might be everything that should actually be cared about, looked at in terms of its consumption value.
(Remember, you shouldn't buy something if the price is higher than the value. If there's no consumer surplus, don't buy!)
I think the reason to delay lounging in a hot tub is pragmatic... if you invest early, and let compound interest build up, then you can switch to leisure mode faster, and get more total leisure from living on interest.
But investing at Kelly is quite cognitively demanding, and investing in an index is slow and still often painful (you might die before you get enough to retire if you have a bad market decade or two at random). If you do a lot of leisure early in life and do NOT make money and put it into a compound interest setup, then you can do less total leisure.
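(A crude sketch of that tradeoff, with invented numbers and a flat 5% growth assumption, just to show the compounding arithmetic:)

```python
def years_to_retire(annual_savings: float, target: float, growth: float = 0.05) -> int:
    """Years of saving `annual_savings` per year, compounding at `growth`,
    until the pot reaches `target`. All numbers here are toy assumptions."""
    pot, years = 0.0, 0
    while pot < target:
        pot = pot * (1 + growth) + annual_savings
        years += 1
    return years

# Halving the savings rate costs ten extra years, not "twice as long",
# because the earliest contributions compound the longest:
print(years_to_retire(annual_savings=30_000, target=1_000_000))  # 21
print(years_to_retire(annual_savings=15_000, target=1_000_000))  # 31
```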
So basically, I think that money isn't an intrinsic good, it just gives you access to certain ethical/hedonic/humanistic goods, and it is just a fact about this timeline, and this physics, and this humanity, and this historical period in the Dream Time, and these Malthusian Limits not being very binding right now, and this "general situation we found ourselves Thrown into" that makes it even true that "markets go up on average" and "investment beta is generally positive" and "money grows by default" and thus that "money now is way more precious than money in the future".
Then money is the unit of caring. So that means that everything we care about that can be bought is something we can sort of have more of (overall, across the whole timeline, where maybe every happy moment with a certain character is equally "internally happy" no matter when it happens) by being frugal, and far seeing, and investing well early, and so on... at least until the structural macroeconomic situation changes in our Lightcone to a situation where markets stop growing by default and index funds stop working?
Death and aging, of course, change all this. And children change it again. Once transhumanism succeeds and involuntary death stops being a thing, A LOT of "axiological anthropology" will change as people adapt to the possibility of being 1000 years old and feeling and looking like you're 21, yet somehow also being incredibly wise, and the inheritor of 950 years of financial and emotional and ethical prudence, and also this being very normal, so society is run by and for people similar to you <3
Epistemic Status: I wrote the bones of this on August 1st, 2022. I re-read and edited it and added an (unnecessary?) section or three at the end very recently. Possibly useful as a reference. Funny to pair with "semantic stopsigns" (which are an old piece of LW jargon that people rarely use these days).
You might be able to get the idea just from the title of the post <3
I'll say this fast, and then offer extended examples, and then go on at length with pointers into deep literatures which I have not read completely because my life is finite. If that sounds valuable, keep reading. If not, not <3
The word "value" is a VERB, and no verb should be performed forever with all of your mind, in this finite world, full of finite beings, with finite brains, running on finite amounts of energy.
However, if any verb was tempting to try to perform infinitely, "valuing" is a good candidate!
The problem is that if time runs out a billion years into the future, and you want to be VNM rational about this timescale, you need to link the choice this morning of what to have for breakfast into a decision tree whose leaf nodes, billions of years in the future, are DIFFERENT based on your breakfast decision.
This would be intractable to calculate for real, so quick cheap "good enough" proxies for the shape of good or bad breakfasts are necessary. This has implications for practical planning algorithms, which has implications for the subjective psychology of preferences.
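To put rough (illustrative) numbers on it: even with only B = 2 relevant options per day, a rollout over a billion years covers N ≈ 3.65 × 10^11 daily choices, so the full decision tree has about 2^(3.65 × 10^11) leaf nodes... a number with over a hundred billion digits. No physically possible computer evaluates that tree node by node.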
In brief, many near future outcomes from any given action are reasonable candidates for a place to safely stop the predictive rollout, analyze things locally based on their merits and potentials, and then stop worrying about the larger consequences of the action.
There is a valid and useful temptation to stop the predictive rollout at that point and declare the action good (worthy of performance) or bad (unworthy of performance) on that basis. However: it is always potentially valid to take the predictive rollout further than that, so long as the calculating process is still essentially valid.
There is a sense in which this approach to axiology (ie "the study of value") makes the idea of "ultimate values" ultimately meaningless? You can get along in life quite well stepping from goal to goal to goal, with no big overall linkage to a final state of the entire universe that you endorse directly. Indeed, it may be cognitively impossible to have "pragmatically real and truly ultimate values" for reasons of physics and computational tractability.
That's kinda it. I've kinda said everything there is to say. Keep reading to hear it again, slower and with more examples.
Whenever you notice an "axiological stopsign", you can probably stop there if you don't have time to think more, or you can do a "California stop" and slow down and roll on through, and imagine subsequent steps and think about their value as well!
Sometimes in poker it is useful to mentally track the possible second steps that might occur, and to figure out how the choices you face could interact with them; this is sometimes called "counting your outs".
If you rolled past an axiological stopsign and have accurately counted your real outs, their real value, and their real likelihood, pulling your mind back to the choice right in front of you might lead to a different value estimate than you initially had.
This is good. If that never happens then counting your outs would be pointless, and a waste of your valuable thinking time.
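(Here is the standard outs arithmetic as a tiny Python sketch of my own, for readers who haven't seen it: after the flop in Texas hold'em there are 47 unseen cards, and the chance of hitting at least one out by the river falls straight out of counting.)

```python
def p_hit_by_river(outs: int, unseen: int = 47) -> float:
    """Chance of hitting at least one of `outs` helpful cards on the
    turn or river, with `unseen` cards unseen after the flop."""
    p_miss_turn = (unseen - outs) / unseen
    p_miss_river = (unseen - 1 - outs) / (unseen - 1)
    return 1 - p_miss_turn * p_miss_river

# A flush draw has 9 outs after the flop:
print(f"{p_hit_by_river(9):.1%}")  # 35.0%, near the "rule of 4" guess of 36%
```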
Outside of the context of poker, where things are discrete and simple and tidy, if you keep imagining things happening after a given goal or tragedy, and take such variations and additional outcomes into account as well, that probably also won't make the decision worse, so long as the extra imagining itself isn't particularly costly... and it could improve the way you navigate through a tragedy or through a success in subtle ways that unpack into large wins much later, because you saw farther, and set them up in advance... but it is hard to see very very far into the future.
That's it, that's the idea. But now we have added a metaphor for ignoring the axiological stopsign, and applied the concept to the toy example of poker.
Some readers, with very valuable time, will be able to stop reading right here, the rest of what I'll write mostly involves MORE explanation, application, and speculation that extends the pragmatics of the basic idea.
Where is it useful to put up an axiological stopsign?
The best places are points in imagined futures that are relatively stable and relatively amenable to having their value calculated. The stability of a situation makes that situation into a "natural endpoint" for a planning process to (temporarily?) treat as a far future terminal goal.
There's a temptation here to think of the mathematics of stability in terms of activation energies, or maybe actively maintained homeostatic circumstances (like a thermostat keeping the house toasty in the winter, or one's liver managing one's blood sugar at a good level)... and you might say that the pancreas has an enduring intrinsic preference for keeping the blood sugar in a certain range or the thermostat values the house being neither too hot nor too cold... but there's a pre-condition for even performing a stability analysis like this, which is that there is even any specific situation at all, for the values to be about... the thermostat doesn't care about the outside of the house... your pancreas doesn't care about other people's blood sugar... most practical values are about something specifically practical that has a defined time and place inside of boundaries that segment reality in a way that makes planning more tractable.
If there was a town in Antarctica, the empty unchanging tundra around the town would be a good candidate for a boundary line around the town, that helps define the town as a situation whose value could be estimated.
A boundary line that bisected the town would be less useful.
For example, suppose there was one murderer in the town overall. If you decide that only half the town is "worthy of consideration as a potentially stable situation" and the half you choose to measure/model/ponder lacks the murderer, then you'll probably fail to imagine a future murder happening in the half of the town you chose to look at, even though such a thing would be likely, because the murderer is likely to rove around without regard for your imaginary boundary.
A similar analysis could be done for time. If there were moments in time with few people moving around or changing things (like when everyone is asleep?) then that would be a good candidate for a place to put a time boundary, to analyze "a situation" as a coherent chunk.
The ancient Greeks sometimes said: "Call no man happy until he's dead."
Here I suggest: "Call no day good until bedtime."
"Call no place safe until mapped and cleared to the edges of adversarial traversability" doesn't quite roll off the tongue in the same way, but it has the seeds of the same idea.
We could just stop now. The next section is about chess. You might not even need to read it! Maybe just jump over this section? Or not. Notice how doing the essay in à la carte sections like this "shows" even as it "tells"! <3
(There's no section titles to give a table of contents in advance on purpose. That's often just how life is. One thing after another, with you having to define structure for yourself and then judge pieces of the structure independently.)
Stopsigns shouldn't be read as "dead end" signs.
If your thinking stops permanently at axiological stop signs, as if they were metaphysically absolute, then your thinking is limited, and you are some kind of fool.
(I mean... of course you can choose to be a fool if you want and it's no skin off my nose, but I personally try to avoid it for myself.)
You might manage to be a clever fool, who plays the game pretty well indeed, but your growth in skill can be ultimately bounded by the axiological stopsigns that you cannot think beyond... that you cannot FEEL beyond.
This is not absolutely hard and fast and there is play in the details.
In general, precisely calibrated values about nearmode things can often substitute for the ability to see deeply into the future, but the place these values properly come from, a lot of the time, is just: from having looked at variation in the long term consequences of different ways the near future could go, that are correlated with the "sense of value" about the near future.
A nice property of precomputed FIXED values (or carefully calibrated methods for quickly computing values in certain situations) is that you can pre-compute in moments when things are quiet, so that these values can be used quickly during an emergency.
"In case of emergency, apply valuation methods fast and hard".
A useful analogy might be to think of chess playing algorithms, that make much use of a "board evaluation function". Such functions often consider where the pieces are, as well as the total material still on the board.
Lasker, in 1947, suggests 3.5 is right for Knights and Bishops both, puts the Queen at only 8.5, and treats pawns differently based on their centrality (only 0.5 for the two on the edge and 1.5 for the two in the middle).
Kasparov, in 1986, suggests thinking of Knights worth 3, Bishops worth 3.15, and the Queen as 9 points.
AlphaZero estimated in 2020 that Knights were 3.05, Bishops were 3.33, and the Queen was 9.5.
Another useful component in a board evaluation function in chess is often to look at the mobility of pieces on the board. Knights at the center of an empty board can jump to 8 locations, but in the absolute corner can access at most 2 locations. Tallying up such optionality can be very fast and cheap, and this is especially useful when you've rolled out a planning tree N steps into the future and have looked at B branches coming out of each possible move, to analyze the goodness of B^N places where an axiological stopsign exists.
Once you do that calculation, the B^(N-1) board positions are approximately as valuable as each one's B outs would imply. (If more than B moves are technically possible but you only considered B moves at each point for reasons of intellectual parsimony then you're not being comprehensive and might miss weird and extremely valuable lines of play.)
The B^(N-2) board positions are probably better estimates because they take deeper search into account, and so on.
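(To make the chess analogy runnable, here is a minimal sketch using the python-chess library. The material weights follow Kasparov's numbers quoted above, plus the conventional 1 and 5 for pawns and rooks; the 0.1 mobility weight is invented, and a real engine's evaluation is of course far richer:)

```python
import chess  # pip install python-chess

PIECE_VALUES = {chess.PAWN: 1.0, chess.KNIGHT: 3.0, chess.BISHOP: 3.15,
                chess.ROOK: 5.0, chess.QUEEN: 9.0}

def evaluate(board: chess.Board) -> float:
    """Cheap proxy value for a position, from the side-to-move's view:
    material plus a crude mobility (optionality) bonus. This is what
    gets computed at each axiological stopsign."""
    material = sum(PIECE_VALUES.get(p.piece_type, 0.0)
                   * (1 if p.color == board.turn else -1)
                   for p in board.piece_map().values())
    return material + 0.1 * board.legal_moves.count()

def negamax(board: chess.Board, depth: int) -> float:
    """Roll the tree out `depth` plies, then stop and evaluate locally."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)  # the stopsign: no further rollout
    best = float("-inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

print(negamax(chess.Board(), depth=2))  # B^N leaves, with B=20 and N=2 here
```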
An extremely reasonable thing to do is to follow hunches about what "the best move" would be a couple steps out, and then branch more off of situations closer to the present, in order to be especially comprehensive about moves you're close to taking and won't be able to undo. One of the algorithms that tries to explore optimally (with annealing temperatures that vary based on contextual factors) is the parallel terraced scan.
It is very reasonable to experience subjective fluctuations in the estimated value of events as the events get closer and closer to being actualized, and get more and more evaluative attention.
(They say that playing speed chess too much will ruin your chess game. A plausible mechanistic hypothesis, then, would be that speed chess trains some low level part of your brain to stop-and-instantly-act at certain axiological stopsigns that are useful in a game constrained mostly by thinking time, and this "tendency to stop" might bias your thinking in deep ways once the time constraints go away.)
A simplified model of human planning is that human reflexes always perform the pre-computed best fast action, conditioning on urgency and uncertainty, with the pre-computation having occurred during evolution and/or arisen under an adaptively normal growth and childhood. Many reflexive behaviors are mostly suppressed by default, so that the reflexive actions are ready to go, but fire only in the absence of a choice to suppress them.
So in some sense, it is useful for your impulses to focus on rare, crazy events and choices related to extreme values that have to be decided quickly.
Most of the frontal cortex in humans has the job of suppressing action. The search term you want, if you want to study this, is "inhibitory control", which can be unpacked into papers attempting to "elucidate" the neurological details of the mechanical implementation of inhibitory control.
If you just think about it from first principles, you'll notice that impulsive valuations should be relatively precise (have the value they really have, especially for extremal elements so you get the max(action) separated from the other options very often) while inhibitory valuations should have solid recall.
In case your statistics are fuzzy, recall is true_positives / (false_negatives + true_positives)... that is to say... to measure recall for real you have to have a second much much higher quality classifier that can tell every time that a false_negative (on the sloppy classifier) should have been a true_positive (on the sloppy classifier)...
You want to look at all the actual_positives, and inhibit all but the best of them, and certainly never fail to inhibit actions that are much lower than the average value of actions that are already being inhibited.
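(For readers who want the other half of that formula too, a minimal sketch, with invented counts:)

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """precision = of the actions you approved, how many were truly good;
    recall = of the truly good actions, how many you actually approved."""
    return tp / (tp + fp), tp / (tp + fn)

# A trigger-happy impulse: what it fires on is usually right (high
# precision), but it misses most good opportunities (low recall).
# Counts are invented for illustration.
print(precision_recall(tp=8, fp=2, fn=40))  # (0.8, 0.1666...)
```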
Compare Babble and Prune. Babble generates ideas that come packaged up with axiological stopsigns that make it possible to even estimate the value of an idea coherently. Prune is doing inhibitory control based on those value estimates.
I composed the first draft of this essay on August 1, 2022 and inhibited publication of it, but later, looking at all my drafts, of all the essays I could finish, it seemed like maybe it was worth cleaning this one up and publishing it, here and now in 2026.
This would be the place I would have stopped even bothering to write the essay if I had published it earlier. Imagine this essay stopped here. Should it have? Or is the rest worth having written?
Something that is missing from the material above is the concept of "game temperature" in combinatorial game theory.
I will simply quote from the 2026 version of wikipedia to explain "hot" and "cold" games, and the way gamestate can interact with one's sense of time and urgency, in a relatively rigorous toy model (bold and italics not in original)...
In general, you will need to be able to execute actions based on pre-computed or easily-computed evaluation functions (and the liberal use of axiological stopsigns to make rollouts smaller and more tractable) precisely when time is limited and moves are urgent.
That is to say, it makes sense to have "values" about "hot" situations, ready to go, as preparation for handling these "hot" situations in relatively more skilled ways.
It won't always be a turned based game, and the moves might not always be helpfully labeled with their actual literal "combinatorial game theoretic temperature".
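(A toy model of "hot" in that combinatorial sense, my construction rather than anything from the quoted material: treat each independent subgame as a switch {a | b}, worth a to you if you move there first and b if your opponent moves there first. Its mean value is (a+b)/2 and its temperature is (a-b)/2, and the greedy strategy of playing in the hottest subgame first is a decent heuristic:)

```python
def mean(a: float, b: float) -> float:
    return (a + b) / 2

def temperature(a: float, b: float) -> float:
    return (a - b) / 2  # how URGENT the subgame is, not how big it is

switches = [(10, 2), (6, 5), (9, -3)]  # invented subgames (a, b), a >= b
hottest_first = sorted(switches, key=lambda s: temperature(*s), reverse=True)
print(hottest_first)  # [(9, -3), (10, 2), (6, 5)]: urgency, not mean value, leads
```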
Compare and contrast the beginning of the essay, where the subject of boundaries came up: big empty spaces that were hard to traverse made nice natural spatial boundaries, and moments of relative calm (like "when most people are asleep") made nice natural temporal boundaries.
There is a game called Baduk in Korean, Weiqi in Chinese, and Go in Japanese and English, and the players of this game, especially in Japan, invented a lot of technical language for getting good at the game, including terms like sente, gote, miai, and aji that relate quite strongly to these ideas of "game (and subgame) temperature".
Often, when a player plays in one part of the board, the move locally raises the temperature (in the game theoretic sense) and forces the other player's best move to be a local response, but then the best response to that might also be local, and so on, until the precarious local position finally stabilizes and someone moves away.
Playing away is a sign, to an outsider of low skill who can't understand what happened, that the skilled players estimated that the local play in that sequence was "a situation with an axiological stopsign at the end", which makes the full sequence of moves a good candidate for being considered something that can be valued independently in a single motion.
Human go players of high skill do, in fact, try to estimate moves in terms of their "value" (in gained or lost territory), but they also try to predict the full sequence of moves until someone plays away.
If they predict that the other player will locally respond with a move or moves that does not force them to make a third or fifth or seventh move in response but lets the initiator play elsewhere instead... then that whole sequence leaves the first player with "initiative" to play elsewhere again...
That is, the line of play that leaves you with wide latitude at the end of the action "ends in sente". It "ends with initiative retained".
Sente is worth points because it lets you choose the next place on the board to have another small local flurry of moves once the temperature falls back to the baseline for the game (which, in general, goes down over time).
Here is a nice simple essay on calculating the value of having sente based on board positions.
Miai is a special term for when a strategic necessity can occur in either of two ways. The strategic purpose is almost certain to be achieved...
...but if either of the ways of accomplishing the strategic certainty is attacked, it raises the temperature there, in a way that is under the control of the attacker (maybe putting you one mishandled move away from losing a strategic connection that might be worth 30 points) and forces a local response based on a temperature that reflects the totality of what was strategically at risk.
Miai have a very simple and legible kind of aji.
Aji is a bigger idea. It is a fuzzy, loose, aesthetic term that literally means "taste" (and supposedly "de gustibus non est disputandum": "in matters of taste there is no valid dispute"), but having a sense of good and bad aji turns out to be essential to getting stronger at go.
Aji can be "good" or "bad". Bad aji has defects. To fix bad aji at the end of a sequence of play, one often has to lose sente, restoring good aji for the sake of being able to ignore that part of the board for a while.
Bad aji is a location where a future "situation where urgent action will be required to fix something" is likely, and if you are reading a position out in your head, and see bad aji arising in a potential future, the bad aji can EITHER be treated as "a situation that would need more thought (because maybe it all turns out for the best for locally weird or unique reasons)" OR as "a badness, in itself (to be avoided in play and in the mind)".
Cognitively speaking, good aji is a boundary that serves you, and functions as a sign that you've reached an axiological stopsign (if you're trying to read a position out and wondering when you can stop). In some sense, by playing with good aji you are filling the go board with more axiological stopsigns that serve you, and by playing with bad aji you are reducing the number of axiological stopsigns and you'll have to think about everything all the time. (And mostly you'll be thinking about paying debts and fixing disasters.)
Chess and go and poker have relatively defined endings. Does everything?
At a certain point in a conversation about "rolling out a value estimate" by adding more nodes to a decision tree, nodes that model the farther and farther future, after more and more contingencies might have occurred, someone might want to point out that time might be infinite (or effectively so for us).
Certain notions of philosophically ultimate axiology focus very intently on the final state of every sequence of behaviors (in game theory the total rollout for an entire game is sometimes called the "total strategy" of the game, and partial strategies are often harder for people to reason about)...
But what if there is literally no end to "the game" that is our physical universe?
If there is no end to the game then maybe the universe lacks a "literally ultimate" value?
On Twitter one time I ran a poll where I felt like both answers were kinda scary if you think about it long and hard...
I ran this poll a full year after I started writing this essay, so this essay's drafting probably had something to do with me even deciding to run the poll? But if I had published right away then the poll wouldn't be included in the essay!
The idea of "enjoying the journey and not worrying about the destination" has a LOT of appeal in the popular zeitgeist.
It is the standard normie response to many axiological puzzles related to utilitarianism, hedonism, deontology, and so on. And sometimes the normies are right.
However, I suspect that if one wanted to, it wouldn't be hard to point to concepts like "aji" from go, and how enjoyable it is to have slack, such that one could frame "good vibes from hour to hour, and day to day, and month to month" as being strongly related to an orientation towards time that is very mindful of time, and careful not to create emergencies where making lots of snap valuations is essential for hoping to survive or thrive after a chaotically urgent situation.
Just to say: this essay is only going to get weirder and more speculative as it goes. You could totally stop reading here if you want.
I've been an immortalist longer than I've been a transhumanist. I think life is awesome, and with a little work and a little luck it is mostly full of delightful moments, and learning, and nice surprises, and delicious meals, and neat games to play with nice people, and worthwhile puzzles, and fun things to think about or do.
I became an immortalist before becoming a transhumanist... when I read some vampire novels as a young teenager and was annoyed at how the vampires got all these cool powers, and all this time to do all these fun things, and instead they just moped and wallowed in ennui. Fuck that noise! Just don't violate ethics when you meet your weird new dietary requirements, and... enjoy life! It's not hard, right? Right???
(Later I learned that it might be hard for some people. Apparently happiness set points are a thing, and they might be largely genetic, and mine might be high? The things I've heard that make durable changes to someone's "apparent set point" (that presumably aren't strongly caused by genetics themselves) include: (1) adopt gratitude practices to push the set point up, (2) don't marry a neurotic, (3) if you're a woman in an unhappy marriage getting divorced will often make you poorer and also happier, (4) if you're a man then losing a job will hammer your happiness for a long time, and (5) in general, don't let your children die.)
Contrary to the set point idea... when they do studies to sample momentary happiness on a scale from 1-10 via text messages, the minute to minute scores don't strongly connect to how satisfied you are with your life looking backwards (which sorta relies on outcomes and big things more than hard-to-remember minutes of medium happiness in your daily life), but the hour to hour scores are higher on average when you spend time happily socializing, such as in a happy family or with good friends.
Something I found, as an adult, is that I seem to get and also to give a lot of happiness from visiting with people (either having them as house-guests or going to their homes) for roughly 10 days (more than a week, but less than 3 weeks for sure).
For roughly the first 10 days (two weekends and a bit?) you keep that glow of a rare and special visit, and don't have time for things like "resentment of the way they put the dishes on the wrong shelf over and over even after you asked them not to nicely five times" to build up ;-)
I feel like this house guesting practice of mine, in life, is consistent with the idea of managing and being aware of axiological stopsigns.
It is easier to plan 10 really good days of visiting "as a unit with a beginning and middle and end" than to plan 10 really good years with the same amount of control. The end of the visit is the axiological stopsign.
Whether there is another visit (and how fun that next visit might be) can be a nice and distinct second unit of analysis. If you're trying to "count the outs" for things to do (or not do) on a near term visit, wondering which things on this visit might be worth repeating on the next visit might change how you approach what is near in time.
Anyway... if you got all the way to the end of this essay (despite the essay telling you that all the ideas will just be repetitions on the theme, and that reading further might very well be a waste of time) and you have never yet read Finite And Infinite Games by James Carse then you might like that book.
I looked in a few places just now, and if you want to buy a copy, maybe try Abe Books?
But if their inventory has changed since I looked, and they want to charge you more than $9.50 for a used one then search elsewhere. (Unless you're reading this in the far future, and inflation has changed the price levels since January of 2026, in which case maybe that "price at which to search elsewhere" will have gone stale.)
James Carse's book applies the idea to ongoingly valuable social processes; he doesn't go into the math or planning or cognitive aspects much, but instead focuses on the sociological and emotional and spiritual differences in the vibe around games that are intended to never end, where people have fun by finding ways to continue to have fun... as distinct from a game being driven towards a definite end state that is personally preferred by a player or team that wants the game to end... with themselves crowned as victor.