I've always taken "existential" risk to cover more than just extinction. Bostrom defines it as something like the permanent destruction of humanity's potential. So a scenario where humans survive, but only in a zoo, is "existential" but not "extinction."
I can think of two different ways property rights might disappear:

1. An AI (or whoever controls it) ends up allocating everything according to some principle of its own, overriding existing property claims.
2. Material abundance makes ownership moot: everything money can currently buy is effectively free for everyone.
If you're preparing for #2, then you probably just want to invest in all the "things money can't buy" because you'll have the rest.
If you're preparing for #1, it's hard to predict what the principle might be. Conditional on not dying, either we're dealing with a benevolent-ish AI overlord (and you're probably fine; doing things like living justly is probably a good idea if that's going to be rewarded), or we're dealing with an AI overlord that is subject to some kind of human control (maybe the future is really being run by Anthropic's corporate leadership or something). In the latter case, responsible retirement planning probably means finding a way to get close to that in-group.