If you live in a universe with self-consistent time loops, amor fati is exactly the wrong approach. The fiction around this, of course, is all about the foolishness of trying to avoid one's fate: if you get a true prophecy that you will kill your father and marry your mother, then all your attempts to avoid it will be precisely what brings it about, and indeed in such a universe that is exactly what would happen. However, a disposition to accept whatever fate decrees makes many more self-consistent time loops possible. If your stance is instead "if I get a prophecy that something horrible will happen, I will do everything in my power to avert it," then fewer bad loops can complete, and you're less likely to get the bad prophecy in the first place (even though, if you do get one, you'd be just as screwed, and presumably more miserable about it, and more foolish-looking, than if you had simply accepted it from the beginning).
(If you live in a nice normal universe with forward causality this advice may not be very useful, except in the sense that you should also not submit to prophecies, albeit for different reasons.)
On the contrary, I would expect the amor fati people to get normal prophecies, like, "you will have a grilled cheese sandwich for breakfast tomorrow," "you will marry Samantha from next door and have three kids together," or "you will get a B+ on the Chemistry quiz next week," while the horrible contrived destinies come to those who would take roads far out of their way to avoid them.
assuming proof of np-complete* self-consistent time loops: grab any other variable that is not fixed and stuff your defiance into it. you're going to kill your parents? extend their lifespan. you're going to kill your parents before mom gives birth to you? prepare to resuscitate them, try to ensure that if it happens it only happens right before the birth, try to ensure you can survive your mom dying in childbirth, get cryonics on hand (depending on how far back in time you are). if your attempt to avoid the event is naturally upstream of the event occurring, then entropic time is now flowing backwards with respect to that variable. set up everything that is still flowing forwards so that you end up with the least unacceptable variable setting.
* I think, anyway. are self-consistent time loops np-complete? a halting oracle? they definitely resolve p = np as "true on a time-loop computer": before running the check and closing the loop, set answer = answer + 1 unless the test passes — the only self-consistent history is then one where the answer already passes the test. (and then you simply need a computer that is stronger than the force of decay induced by the amount of computer-destroying lucky events you're about to sample.) so that gives you all np problems. so yup np-complete. are they halting oracles?
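The "answer = answer + 1 unless test passes" trick above can be emulated classically by stepping the loop until it reaches its fixed point (on an actual time-loop computer the fixed point would be handed to you for free). A minimal sketch, with a made-up toy SAT instance for illustration:

```python
def time_loop_fixed_point(check, num_bits):
    # Classically emulate the time-loop trick: the only self-consistent
    # value of `answer` is one the check accepts, so step the loop until
    # we hit that fixed point (or exhaust all candidates).
    answer = 0
    for _ in range(2 ** num_bits):
        bits = [(answer >> i) & 1 for i in range(num_bits)]
        if check(bits):
            return bits  # self-consistent fixed point found
        answer += 1
    return None  # no satisfying assignment: the loop has no consistent history

# toy SAT instance (hypothetical example): (x0 OR x1) AND (NOT x0 OR x2)
clauses = [[(0, 1), (1, 1)], [(0, 0), (2, 1)]]
check = lambda bits: all(any(bits[i] == v for i, v in clause) for clause in clauses)
print(time_loop_fixed_point(check, 3))  # -> [0, 1, 0]
```

The classical emulation takes exponentially many steps in the worst case; the point of the time loop is that self-consistency collapses all that search into a single pass.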
You may be interested in Scott Aaronson et al.'s paper on the computability theory of closed timelike curves:
We ask, and answer, the question of what's computable by Turing machines equipped with time travel into the past: that is, closed timelike curves or CTCs (with no bound on their size). We focus on a model for CTCs due to Deutsch, which imposes a probabilistic consistency condition to avoid grandfather paradoxes. Our main result is that computers with CTCs can solve exactly the problems that are Turing-reducible to the halting problem, and that this is true whether we consider classical or quantum computers. Previous work, by Aaronson and Watrous, studied CTC computers with a polynomial size restriction, and showed that they solve exactly the problems in PSPACE, again in both the classical and quantum cases.
Compared to the complexity setting, the main novelty of the computability setting is that not all CTCs have fixed-points, even probabilistically. Despite this, we show that the CTCs that do have fixed-points suffice to solve the halting problem, by considering fixed-point distributions involving infinite geometric series. The tricky part is to show that even quantum computers with CTCs can be simulated using a Halt oracle. For that, we need the Riesz representation theorem from functional analysis, among other tools.
We also study an alternative model of CTCs, due to Lloyd et al., which uses postselection to "simulate" a consistency condition, and which yields BPP^path in the classical case or PP in the quantum case when subject to a polynomial size restriction. With no size limit, we show that postselected CTCs yield only the computable languages if we impose a certain finiteness condition, or all languages nonadaptively reducible to the halting problem if we don't.
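Deutsch's probabilistic consistency condition from the abstract above can be illustrated with a small classical sketch (the function name and the lazy power-iteration shortcut are my own, not from the paper): the distribution entering the loop must be a fixed point of the stochastic matrix implemented inside the loop.

```python
def ctc_fixed_point(T, iters=200):
    # Deutsch's consistency condition for a classical CTC: the state entering
    # the loop must have a distribution p with p = p T, where T is the
    # stochastic matrix of the circuit inside the loop. We power-iterate the
    # "lazy" chain (I + T) / 2, which has the same fixed points as T but
    # cannot oscillate.
    n = len(T)
    p = [1.0] + [0.0] * (n - 1)  # start from a deterministic state
    for _ in range(iters):
        q = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
        p = [(p[j] + q[j]) / 2.0 for j in range(n)]
    return p

# The grandfather paradox as a NOT gate on one bit: no deterministic history
# is consistent, but the uniform distribution is a valid probabilistic one.
NOT = [[0.0, 1.0],
       [1.0, 0.0]]
print(ctc_fixed_point(NOT))  # -> [0.5, 0.5]
```

This is exactly how Deutsch's model dissolves the grandfather paradox: the bit that "kills its grandfather" enters the loop as a fair coin, which is self-consistent even though neither deterministic history is.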
A general disposition is made of local decisions, and apparently true prophecies decreed at any level of epistemic or ontological authority are not safe from local decisions, since decisions get to refute things by construction. A decision that defies a prophecy also refutes the whole situation in which you observed the prophecy, even though, counterfactually, in that situation the prophecy would have been genuine.
if you get a true prophecy that you will kill your father and marry your mother, then all your attempts to avoid it will be what brings it about, and indeed in such a universe that is exactly what would happen
So this is incorrect: any claim that something is a "true prophecy" is still vulnerable to your decisions. If your decisions refute the prophecy, they also refute the situations in which you (or anyone, including the readers, or the author, or the laws of physics) observe it as a "true prophecy".
My summary: when you receive a dire prophecy, you should make it as hard and annoying as possible for its time loop to be consistent, because if you reliably act that way, there's less surface area for dire prophecies to get you in the first place?
How can someone inside a universe tell which type it is?
Also, a lot of thinking about paradoxes and extremely-unlikely-foretold-events misses what's likely to be MY motivation for testing/fighting/breaking the system: amusement value. I find unlikely events to be funny, and finding more and more contortions to be adversarial about a prophecy would be great fun.
If you have many different ASIs with many different emergent models, all of which were trained with the intention of being aligned to human values, but which didn't have direct access to each others' values or the ability to directly negotiate with each other, then "maximize (or at least respect, and set aside a little sunlight for) human values" could potentially serve as a Schelling point for coordination between them.
This is probably not a very promising actual plan, since deviations from intended alignment are almost certainly nonrandom in a way that could be determined by ASIs, and ASIs could also find channels of communication (including direct communication of goals) that we couldn't anticipate, but one could imagine a world where this is an element of defense in depth.
If you assign nontrivial credence to being in a simulation designed to determine what kind of preferences might be embedded in an alien civ, one way to influence things positively would be to implant in any AIs a preference for surrendering some resources to other alien technospheres, iff those technospheres didn’t turn on (or at least “left a little light for”) their creators. This would give an incentive for ASIs to preserve humanity (or equivalent entities) for diplomatic reasons.