I thought about some stuff and have written down notes for my future reference. I'm pasting them below in case others find them useful or thought-provoking. Apologies for inferential distance problems caused by my language/notation, etc. Posting my notes seemed superior to not doing so, especially given that the voting system allows for filtering content if enough people don't want to see it.
Mind-states are unstable with respect to attributes valued by some agents. This holds not only for death and similar events, but also for the biological/chemical changes that occur perpetually, causing behavior in the presence of identical stimuli/provocations to differ substantially over time. English (and many other human languages) seems to obscure this through its use of pronouns and names (handles) for humans and other objects deemed sentient, which do not change from the moment the human/animal/etc is born or otherwise appears to come into existence.
As a result, efforts to preserve mind-states fail insofar as they allow mind-states to change (replacing one state with another without retaining the pre-change state). Even given life-extension technology that prevents biological death, this phenomenon would likely continue - technology to preserve every mind-state as it came into existence would likely be harder to engineer than that required to attain mere immortality. Yet agents may also value the existence (and continued existence) of mind-states that have never existed, necessitating a division of resources between preserving existing mind-states and causing new ones to exist (perhaps variants of existing ones after they "have a (certain) experience"). Agents with such values face an engineering/(resource allocation) problem, not just a "value realization" problem.
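The preservation-vs-creation tradeoff can be made concrete with a toy model. Everything here is my own illustrative assumption, not something in the notes: a fixed budget, square-root (diminishing) returns on each use of resources, and weights expressing how much the agent values preservation relative to creation.

```python
import math

def total_value(preserve_share, budget=100.0, w_preserve=1.0, w_create=0.8):
    """Value from splitting `budget` between preserving existing mind-states
    and creating new ones. Weights and sqrt returns are illustrative
    assumptions, not claims about real agents' value functions."""
    p = preserve_share * budget          # resources spent on preservation
    c = (1 - preserve_share) * budget    # resources spent on creation
    return w_preserve * math.sqrt(p) + w_create * math.sqrt(c)

# Crude grid search over the split: (best_value, best_preserve_share).
best = max((total_value(s / 100), s / 100) for s in range(101))
```

Under these assumptions the optimum lands at an interior split (here roughly 61% to preservation), which is the point: such values force an allocation decision rather than an all-or-nothing choice.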
Also consider that humans do not appear to exist in a state of perpetual optimization/strategizing; they execute, and the balance between varying methods of search and execution does not itself appear to be the result of such a process - to the extent such a process occurs, its recursive depth is likely minimal. Mental processes are often triggered by sensory cues or provocations (c/p). The vast majority of the c/p a human encounters consistently trigger only a small subset of the mental processes implementable by human brains, even after excluding the large space of processes that do not optimize along held values. Human brains are limited in the number of processes they can run simultaneously, so c/p-triggered processes reduce the extent to which current processes continue to run. Furthermore, there appears to be an upper limit on the total number of simultaneous processes (regardless of the resources allocated to each), so a c/p sometimes triggers a process that extinguishes an existing one. Encountering certain c/p may therefore significantly affect a human agent's values, since the c/p encountered shape which mental/thought processes run. If the processes most likely to maximize the human's values are not those triggered by the majority of c/p encountered, efforts to optimize which c/p are encountered may have significantly positive expected value.
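The capacity-limit dynamic above can be sketched as a toy simulation. The capacity of three and the oldest-process-evicted (FIFO) rule are illustrative assumptions of mine, not claims about actual brains; the point is only that under any such limit, the recent c/p stream determines what ends up running.

```python
from collections import deque

def run_mind(cues, capacity=3):
    """Return the processes still running after a stream of cues,
    where each cue triggers one process and the mind can run at most
    `capacity` processes at once (assumed FIFO eviction)."""
    running = deque()
    for cue in cues:
        if len(running) == capacity:
            running.popleft()            # new process extinguishes the oldest
        running.append(f"process_for_{cue}")
    return list(running)

# The last `capacity` cues dominate what the mind ends up running:
print(run_mind(["news", "hunger", "email", "music", "argument"]))
# → ['process_for_email', 'process_for_music', 'process_for_argument']
```

Note that the early cues ("news", "hunger") leave no trace at all, which is the sense in which optimizing the c/p stream could matter.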
Related to the above, humans fail to manage the search/execute decision with significant recursive depth. Behaviors (actions following, or in anticipation of, certain c/p) are often not the result of conscious strategy/optimization - such behaviors often form through subconscious/emotional (s/e) processes that do not necessarily optimize effectively along the human's values, so the human may exhibit behaviors "inconsistent" with its values. This can occur even if the s/e processes hold a perfect representation of the human's values: the success of a behavior in optimizing along values depends on predicting physical phenomena (including the mental phenomena of other agents), and inaccuracies in the model used by the s/e processes may produce such "inconsistent" behavior, even if that model is not consciously accessible.
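This perfect-values/imperfect-model distinction can be shown in a few lines. The scenario, action names, and probabilities below are all hypothetical illustrations of mine: the s/e process shares the agent's true value function exactly, but mispredicts outcome probabilities, and so selects a behavior that scores worse under the true model.

```python
def expected_value(action_outcomes, value):
    """Expected value of an action given outcome probabilities."""
    return sum(p * value[o] for o, p in action_outcomes.items())

# Values: held perfectly by both the agent and its s/e process.
value = {"goal_met": 1.0, "goal_missed": 0.0}

# True outcome probabilities (hypothetical numbers).
true_model = {"speak_up":   {"goal_met": 0.7, "goal_missed": 0.3},
              "stay_quiet": {"goal_met": 0.4, "goal_missed": 0.6}}

# The s/e process mispredicts others' reactions to speaking up.
se_model   = {"speak_up":   {"goal_met": 0.2, "goal_missed": 0.8},
              "stay_quiet": {"goal_met": 0.4, "goal_missed": 0.6}}

chosen  = max(se_model, key=lambda a: expected_value(se_model[a], value))
optimal = max(true_model, key=lambda a: expected_value(true_model[a], value))
print(chosen, optimal)  # → stay_quiet speak_up
```

The behavior is "inconsistent" with the agent's values despite those values being represented perfectly; the inconsistency comes entirely from the model of the world.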