ONE: I love how "should I learn to drive for this trip right here?" cascades into this vast set of questions about possible future history, and AGI, and so on <3
Another great place for linking "right now practical" questions with "long term civilizational" questions is retirement. If you have no cached thoughts on retirement, you might profitably apply the same techniques used for car stuff to "being rich if or when the singularity happens" and see if either thought changes the other?
TWO: I used to think "I want to live this year" and "if I want to live in year Y then I will also want to live in year Y+1", and then, by induction, "I will want to live forever".
However, then I noticed that this model wasn't probabilistic, and was flinching away from possibly the deepest practical question in philosophy, which is suicide. Figuring out the causes and probabilities of people changing from "I do NOT want to kill myself in year Y" to "I DO want to kill myself in year Y+1" suggests a target for modeling? Which would end up probabilistic?
Occam (applied to modeling) says that the simplest possible model is univariate, so like maybe there is some value P which is the annual probability of "decaying into suicidalness that year"? I do mean decay here, sadly. Tragically, it looks to me like suicide goes up late in life... and also suicides might be hiding in "accidental car deaths" for insurance reasons? So maybe the right thing is not just a univariate model but a model where the probability goes up the older you get?
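To make the two candidate models concrete, here is a minimal sketch of the arithmetic. Every number in it is made up purely for illustration (the 0.1% flat hazard, the rising-hazard curve, the 60-year horizon are all assumptions, not calibrated to any real data); the point is just the shape of the calculation, not the values.

```python
def survival_constant(p, years):
    """P(never 'decaying') over `years` under a constant annual hazard p
    (the univariate model)."""
    return (1 - p) ** years

def survival_age_dependent(hazard, start_age, years):
    """Same survival probability, but with an annual hazard that depends
    on age (the 'goes up the older you get' model)."""
    prob = 1.0
    for age in range(start_age, start_age + years):
        prob *= 1 - hazard(age)
    return prob

# Univariate model: a flat 0.1% annual hazard (made-up number).
flat = survival_constant(0.001, 60)

# Age-dependent model: hazard rising ~5%/year from age 20 (made-up curve).
def rising(age):
    return min(1.0, 0.0005 * 1.05 ** (age - 20))

aged = survival_age_dependent(rising, 20, 60)

print(f"Flat hazard, 60 years:   {flat:.3f}")
print(f"Rising hazard, 60 years: {aged:.3f}")
```

Even with tiny annual probabilities, the compounding over decades is what puts nontrivial bounds on lifetime expectations, and a rising hazard front-loads less of the risk but ends up strictly worse over a long horizon.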
This approach, for me, put bounds on the value of my life (lowering the expected value of cryonics, for example) and got me interested in authentic durable happiness (in general, in humans) and also in a subject I invented for myself that I call "gerontopsychology" (it then turned out other people had hit on the same coinage, but they aren't focused on the generalizable causes of suicidal ideation among the elderly the way I am).
Ok three things...
THREE: I drive <3