I think you're making a major false generalization from Newcomb's problem, which is not acausal. Information flows from Omega to your future directly, and you know, by definition of the scenario, that Omega can perfectly model you in particular.
In acausal reasoning there are no such information flows.
From later paragraphs it appears that you are not actually talking about an acausal scenario at all, and should not use the term "acausal" for this. A future superintelligence in the same universe is linked causally to you.
There are utility arguments around being a highly productive individual forever, regardless of whether you want to attach the label "emotional appeal" to them or not. Or even around being a not-very-productive individual forever.
I haven't seen anyone arguing that users giving generous permissions to Claude Code are going to doom humanity.
It can and probably will mean that whatever system they're granting those permissions on is going to be trashed. They should also expect that whatever information is within that system will be misused at some point, including being made available to the Internet in general and every bad actor on it in particular.
It's reasonable to run it on a system that you don't care about, and to control exactly what personal information you put into that system regardless of any permission settings. I don't just mean a VM inside a system that you depend upon, either, because high-level coding agents are already nearly as good as human security experts at finding exploits, and that will only get worse.
"because by definition you have no actual information about what entities might be engaging in acausal trade with things somewhat vaguely like you." Please can you elaborate? Which definition are you using?
Acausal means that no information can pass in either direction.
"you and it are utterly insignificant specks in each other's hypothesis spaces, and even entertaining the notion is privileging the hypothesis to such a ridiculous degree that it makes practically every other case of privileging the hypothesis in history look like a sure and safe foundation for reasoning by comparison." Why? I would really rather not believe this particular hypothesis!
That part isn't a hypothesis; it's a fact that follows from the premise. Acausality means that the simulation-god you're thinking of can't know anything about you. They have only their own prior over all possible thinking beings that can consider acausal trade. Why do you expect that you occupy more than the most utterly insignificant speck within the space of all possible such beings? You do not even occupy 10^-100 of that space, and more likely less than 10^-10^20 of it.
Are you envisaging a system with 100m tracking resolution that aims to make satellites miss by exactly 10m if they appear to be on a collision course? Sure, some of those maneuvers will cause collisions. Which is why you make them all miss by 100m (or more as a safety margin) instead. This ensures, as a side effect, that they also avoid coming within 10m of each other.
"It is pointless to try to avoid two satellites coming within 10 metres of each other, if your tracking process cannot measure their positions better than to 100 metres (the green trace in my figure)."
This seems straightforwardly false. If you can keep them from approaching within 100 metres of each other, then that necessarily also keeps them from approaching within 10 metres of each other.
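To make the margin logic concrete, here is a minimal sketch in Python; the safety factor and the specific numbers are illustrative assumptions, not anything from real conjunction-assessment practice:

```python
# Sketch of the margin logic above: with ~100 m tracking uncertainty you don't
# aim for a 10 m miss; you command a miss distance dominated by the uncertainty
# (times a safety factor), which as a side effect also rules out 10 m approaches.

def commanded_miss_distance(desired_separation_m: float,
                            tracking_uncertainty_m: float,
                            safety_factor: float = 2.0) -> float:
    """Miss distance to aim for: never less than the desired separation,
    and never less than a multiple of the tracking uncertainty."""
    return max(desired_separation_m, safety_factor * tracking_uncertainty_m)

# Desired separation 10 m, tracking resolution 100 m -> command a ~200 m miss.
print(commanded_miss_distance(10.0, 100.0))  # 200.0
```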
If you are inclined to acausally trade (or extort) with anything, then you need to acausally trade across the entire hypothesis space of literally everything that you are capable of conceiving of, because by definition you have no actual information about what entities might be engaging in acausal trade with things somewhat vaguely like you.
If you do a fairly simple expected-value calculation of the gains-of-trade here even with modest numbers like 10^100 for the size of the hypothesis spaces on both sides (more realistic values are more like 10^10^20), you get results that are so close to zero that even spending one attojoule of thought on it has already lost you more than you can possibly gain in expected value.
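Here is a back-of-the-envelope version of that calculation, working in log10 space; the payoff figure and the joule-equivalence of "one attojoule of thought" are illustrative assumptions, while the 10^100 hypothesis-space sizes are the "modest" figures above:

```python
# Expected value of the "trade" versus the cost of even thinking about it.
log_payoff          = 40     # assume an absurdly generous payoff: 10^40 units
log_p_you_in_theirs = -100   # you occupy ~10^-100 of their hypothesis space
log_p_them_in_yours = -100   # they occupy ~10^-100 of yours

log_expected_gain = log_payoff + log_p_you_in_theirs + log_p_them_in_yours
print(f"expected gain ~ 10^{log_expected_gain} units")        # 10^-160

log_thought_cost = -18       # one attojoule of computation, in joules
print(f"cost of one attojoule of thought ~ 10^{log_thought_cost} J")

# Even if one unit of payoff were worth a whole joule, the thought costs
# ~10^142 times more than the trade could ever be expected to return.
print(f"cost / expected gain ~ 10^{log_thought_cost - log_expected_gain}")
```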
Thought experiments like "imagine that there's a paperclip maximizer that perfectly simulates you" are worthless, because both you and it are utterly insignificant specks in each other's hypothesis spaces, and even entertaining the notion is privileging the hypothesis to such a ridiculous degree that it makes practically every other case of privileging the hypothesis in history look like a sure and safe foundation for reasoning by comparison.
It seems likely to me that "driving a car", used as a core example, actually took something like billions of person-years, including tens of millions of fatalities, to get to the stage it is at today.
Some specific human, after being raised for more than a dozen years in a social background that includes frequent exposure to car-driving behaviour both in person and in media, is already somewhat primed to learn how to safely drive a vehicle designed for human use within a system of road rules and infrastructure customized over more than a century to human skills, sensory modalities, and background culture. All the vehicles, roads, markings, signs, and rules have been designed and redesigned so that humans aren't as terrible at learning how to navigate them as they were in the first few decades.
Many early operators of motor vehicles (adult, experienced humans) frequently did things that were frankly insane by modern standards, things that would send an AI driving research division back to the drawing board if their software did them even once.
It's definitely not a coherent logic, as coherent logics are defined to be first-order, while this is explicitly a second-order logic.
Would it be useful to consider how much of current progress has been due to algorithmic improvements rather than compute capacity? It seems that the trend so far has been that a significant proportion of capability improvement has been software-based, though whether that can continue for many more orders of magnitude is certainly debatable.